id int64 110 1.16M | edu_score float64 3.5 5.1 | url stringlengths 21 286 | text stringlengths 507 485k | timestamp stringdate 2026-01-18 07:33:45 2026-02-05 07:22:54 |
|---|---|---|---|---|
299,742 | 4.380565 | http://www.biologyreference.com/Ar-Bi/Bacterial-Genetics.html | There are hundreds of thousands of bacterial species in existence on Earth. They grow relatively quickly, and most reproduce by binary fission, the production of two identical daughter cells from one mother cell. Therefore, each replication cycle doubles the number of cells in a bacterial population. The bacterial chromosome is a long circle of deoxyribonucleic acid (DNA) that is attached to the membrane of the cell. During replication, the chromosome is copied, and the two copies are divided into the two daughter cells. Transfer of genetic information from the mother cell to offspring is called vertical transmission.
Beneficial mutations that develop in one bacterial cell can also be passed to related bacteria of different lineages through the process of horizontal transmission. There are three main forms of horizontal transmission used to spread genes between members of the same or different species: conjugation (bacteria-to-bacteria transfer), transduction (viral-mediated transfer), and transformation (free DNA transfer). These forms of genetic transfer can move plasmid, bacteriophage, or genomic DNA sequences. A plasmid is a small circle of DNA separate from the chromosome; a bacteriophage is a virus that reproduces in bacteria by injecting its DNA; the genome is the total DNA of the bacterial organism.
After transfer, the DNA molecules can exist in two forms: either as DNA molecules separate from the bacterial chromosome (episomes), or as part of the bacterial chromosome. The study of the basic mechanisms used by bacteria to exchange genes allowed scientists to develop many of the essential tools of modern molecular biology.
Bacterial conjugation refers to the transfer of DNA between bacterial cells that requires cell-to-cell contact. Joshua Lederberg and Edward Tatum first demonstrated conjugation in E. coli in 1946.
The steps of bacterial conjugation are: mating pair formation, conjugal DNA synthesis, DNA transfer, and maturation. The main structure of the F factor that allows mating pair formation is the F pilus or sex pilus (a long thin fiber that extends from the bacterial cell surface). There are one to three pili expressed on an E. coli cell that carries the F factor, and one pilus will specifically interact with several molecules on the recipient cell surface (attachment). About twenty genes on the F factor are required to produce a functional pilus, but the structure is mainly made up of one protein, pilin. To bring the donor and recipient cell into close proximity, the F pilus retracts into the donor cell by removing pilin protein monomers from the base of the pilus, drawing the bacterial cells together.
Once a stable mating pair is formed, a specialized form of DNA replication starts. Conjugal DNA synthesis produces a single-stranded copy of the F factor DNA (as opposed to a double-stranded DNA that is formed by normal replication). This DNA strand is transferred into the recipient cell. Once in the recipient cell, the single-stranded copy of the F plasmid DNA is copied to make a double-stranded DNA molecule, which then forms a mature circular plasmid. At the end of conjugation the mating pair is broken and both the donor and the recipient cells carry an identical episomal copy of the F factor. All of the approximately one hundred genes carried on the F factor can now be expressed by the recipient cell and will be inherited by its offspring.
In addition to transferring itself, the F factor can also transfer chromosomal genes between a donor and recipient cell. The F factor can be found inserted (integrated) into the bacterial chromosome at many locations in a small fraction of bacterial cells. An integrated F factor is replicated along with the rest of the chromosome and inherited by offspring along with the rest of the chromosome. When a mating pair is formed between the donor cell carrying an integrated F factor and a recipient cell, DNA transfer occurs as it does for the episomal F factor, but now the chromosomal sequences adjacent to the integrated F factor are transferred into the recipient. Since these DNA sequences encode bacterial genes, they can recombine with the same genes in the recipient. If the donor gene has minor changes in DNA sequence from the recipient gene, the different sequence can be incorporated into the recipient gene and inherited by the recipient cell's offspring. Donor cells that have an integrated copy of the F factor are called Hfr strains (High frequency of recombination).
The second way that DNA is transferred between bacterial cells is through a phage particle, in the process of transduction. Joshua Lederberg and Norton Zinder first discovered transduction in 1952. When phage inject their DNA into a recipient cell, a process usually follows that produces new bacteriophage particles and kills the host cell (lytic growth). Some phage do not always kill the host cell (temperate phage), but instead can be inherited by daughter host cells. Therefore, acquisition of a so-called temperate "prophage" by a recipient cell is a form of transduction. Many phage also have the ability to transfer chromosomal or plasmid genes between bacterial cells. During generalized transduction, any gene can be transferred from a donor cell to a recipient cell. Generalized transducing phage are produced when a phage packages bacterial genes into its capsid (protein envelope) instead of its own DNA. When a phage particle carrying bacterial chromosomal genes attaches to a recipient cell, the DNA is injected into the cytoplasm, where it can recombine with homologous DNA sequences.
Some bacteriophage can pick up a subset of chromosomal genes and transfer them to other bacteria. This process is called specialized transduction.
The third main way that bacteria exchange DNA is called DNA transformation. Some bacteria have evolved systems that transport free DNA from the outside of the bacterial cell into the cytoplasm. These bacteria are called "naturally competent" for DNA transformation. Natural DNA transformation of Streptococcus pneumoniae provided the first proof that DNA encoded the genetic material, in experiments by Oswald Avery and colleagues. Some other naturally competent bacteria include Bacillus subtilis, Haemophilus influenzae, and Neisseria gonorrhoeae. Other bacterial species such as E. coli are not naturally competent for DNA transformation. Scientists have devised many ways to physically or chemically force noncompetent bacteria to take up DNA. These methods of artificial DNA transformation form the basis of plasmid cloning in molecular biology.
Most naturally competent bacteria regulate transformation competence so that they only take up DNA into their cells when there is a high density of cells in the environment. The ability to sense how many other cells are in an area is called quorum sensing. Bacteria that are naturally competent for DNA transformation express ten to twenty proteins that form a structure that spans the bacterial cell envelope. In some bacteria this structure also is required to form a particular type of pilus different than the F factor pilus. Other bacteria express similar structures that are involved in secreting proteins into the exterior medium (Type II secretion). Therefore, it appears that DNA transformation and protein secretion have evolved together.
During natural DNA transformation, double-stranded DNA is bound to the recipient cell surface by a protein receptor. One strand of the DNA is transported through the cell envelope, where it can recombine with similar sequences present in the recipient cell. If the DNA taken up is not homologous to genes already present in the cell, the DNA is usually broken down, and the released nucleotides are used to synthesize new DNA during normal replication. This observation has led to the speculation that DNA transformation competence may have originally evolved to allow the acquisition of nucleic acids for food.
The source of DNA for transformation is thought to be DNA released from other cells in the same population. Most naturally competent bacteria spontaneously lyse by expressing enzymes that break down the cell wall. This autolysis releases genomic DNA into the environment, where it becomes available for DNA transformation. Of course, this results in the death of some cells in the population, but usually not large numbers. It appears that losing a few cells from the population is counterbalanced by the possibility of gaining new traits by DNA transformation.
Tortora, Gerard J., Berdell R. Funke, and Christine L. Case. Microbiology: An Introduction. Redwood City, CA: Benjamin/Cummings Publishing Company, Inc., 2001. | 2026-01-22T19:19:43.328495 |
198,752 | 3.798711 | http://ncse.com/creationism/analysis/anatomical-homology |
Nested patterns of shared similarities between species play an important role in testing evolutionary hypotheses. "Homology" is one term used to describe these patterns, but scientists prefer other, more clearly defined terms. Explore Evolution would have done well to present accurately the way scientists talk about this issue, instead of building two chapters around a misguided attack on a particular word whose meaning dates to pre-evolutionary attempts at understanding the diversity of life. Explore Evolution's use of the term promotes confusion and obscures the actual ways in which scientists use it, along with more modern concepts. The authors could have found those modern concepts clarified in the writing of David Wake, whose work they cite and quote inaccurately, obscuring the point he and others have made about the importance of using concepts which reflect modern biology, not a term which predates evolutionary thinking. The chapter badly mangles key concepts and repeats creationist canards, without presenting the actual state of the science.
p. 40-41: Homology is defined only by implication
Despite using the word "homology" or "homologous" over 80 times, Explore Evolution never provides a clear and consistent definition of homology. Their use of the term confuses and obscures the actual ways in which scientists analyze the morphological evidence of common descent. Homology is not simply similarity, nor is similarity in development the sole basis for assessing homology. A focus on "homology," as opposed to terms and concepts with clearer meanings and less historical baggage, only adds to the confusion.
p. 43, 45-48: Homology "exists for important functional reasons … not due to shared ancestry"
Homology is similarity in structure and position that occurs because a trait was inherited from a common ancestor. If the similarity is not due to common ancestry, the structures are not homologous. Biologists test alternative explanations, including shared function, natural laws, and other constraints. Like homology, these effects are all testable. Furthermore, mere similarity in shape, as between mole cricket forelimbs and the paws of a mole, is not homology. No one has ever suggested such a thing, and Explore Evolution is grossly misleading in suggesting otherwise. The errors Explore Evolution promotes are common only in the creationist literature. Once more, Explore Evolution confuses students rather than bringing clarity to a subject. A good textbook would explain that, and show how scientists test these hypotheses; Explore Evolution does not.
p. 44-45: "the development of non-homologous structures should be regulated by non-homologous genes"
Explore Evolution's premise is simply false, and does not reflect the state of developmental biology. The study of how genes control the development of structures changes rapidly. An important lesson scientists are learning is that developmental pathways are modular and redundant; it is possible to replace or alter one module without changing the end result. Putting the differences in developmental pathway into an evolutionary context clarifies how homologous adult structures could be produced by slightly different pathways.
p. 49: "the concept of homology [is] circular"
This claim has a long history in the creationist literature, but is rooted in basic misunderstandings and is therefore rejected by biologists. Because homology is not the same as similarity, the mere similarity of a single trait in two species would not be treated as evidence of common descent. By examining multiple traits, and identifying a shared nested hierarchy of modifications of a common starting point, scientists can test hypotheses about common descent. There is nothing circular about this process.
Defining homology: Despite using the phrase over 80 times, Explore Evolution never defines "homology." The term is used as a synonym for similarity in places, is treated as if it required a given sort of developmental recapitulation elsewhere, and finally treated as if it were circularly defined and useless. These are all simply restatements of long-discredited creationist falsehoods.
Convergence: Explore Evolution wrongly treats homology as if it were mere similarity, and then claims that the existence of similarity without common ancestry is evidence that no similarities are results of common descent. In fact, homology is one hypothesis among many which scientists can test, and hypotheses like convergence can also be readily tested. There is no excuse for confusing students by mixing up basic biological concepts.
Development: Explore Evolution assumes that developmental similarities are necessary for homology, and assumes that students are well versed in developmental biology. To understand the issues they raise, a student would need to have taken a college-level developmental biology class, and the student would then realize that this book's presentation is consistently false. Structures can in fact be homologous without sharing every step in their development.
Misquoting: Prominent scientists like David Wake and Brian Goodwin are misquoted and misrepresented in an attempt to portray homology as invalid. Wake has written that "Homology is the central concept for all of biology," a far cry from the claim in Explore Evolution that Wake holds that "homology is not evidence of evolution, nor is it necessary to understand homology in order to accept or understand evolution." Similarly, Goodwin relies on homology for his research, which investigates restrictions on the sorts of variation which is likely to be seen. Explore Evolution misleads students by misrepresenting these and other scientists. | 2026-01-21T07:29:49.113705 |
539,702 | 3.966699 | http://www.thehealthage.com/2012/08/fainting-linked-strong-genetic-predisposition/ | Fainting, also called vasovagal syncope, is a brief loss of consciousness that occurs when the body reacts to certain triggers, such as emotional distress or the sight of blood. New research suggests that fainting has a strong genetic component and that some people may be genetically predisposed to it.
Researchers from the American Academy of Neurology found that fainting has a strong genetic component and that it can be inherited, though not typically through a single gene. For their study, the researchers gave a telephone survey to fifty sets of same-sex twins between the ages of nine and sixty-nine.
In every pair, at least one of the twins had a history of fainting. Researchers also collected information on any family history of fainting. Among the fifty sets of twins, fifty-seven percent reported typical fainting triggers. The study showed that among twins where one fainted, identical twins were nearly twice as likely to both faint compared to fraternal twins.
Identical twins are born from the same fertilized egg, while fraternal twins are born from two different fertilized eggs. The study also showed that the risk of fainting not associated with outside factors such as dehydration was much higher in identical twins than in fraternal twins.
In addition, identical twins were much more likely than fraternal twins to both experience fainting related to typical triggers. The frequency of fainting in non-twin relatives was low, which suggests that the manner in which fainting is inherited is usually not through a single gene.
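Concordance comparisons like this are simple to compute. The sketch below is a generic illustration with made-up numbers, not the study's data or its actual analysis; it shows how pairwise concordance (both twins affected, among pairs with at least one affected) is compared between identical (MZ) and fraternal (DZ) pairs.

```python
def pairwise_concordance(pairs: list[tuple[bool, bool]]) -> float:
    """Fraction of pairs in which both twins faint, among pairs
    in which at least one twin faints."""
    affected = [p for p in pairs if p[0] or p[1]]
    both = [p for p in affected if p[0] and p[1]]
    return len(both) / len(affected)

# Hypothetical data: (twin_a_faints, twin_b_faints) for each pair.
mz_pairs = [(True, True)] * 12 + [(True, False)] * 13   # 25 identical pairs
dz_pairs = [(True, True)] * 6 + [(True, False)] * 19    # 25 fraternal pairs

c_mz = pairwise_concordance(mz_pairs)   # 0.48
c_dz = pairwise_concordance(dz_pairs)   # 0.24
# A roughly twofold MZ/DZ gap is the classic signature of a genetic component.
print(f"MZ concordance {c_mz:.2f} vs DZ concordance {c_dz:.2f}")
```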
Lead researcher Samuel F. Berkovic from the University of Melbourne in Victoria, Australia, a member of the American Academy of Neurology, explained that the question of whether fainting is caused by genetic factors, environmental factors or a mixture of both has long been a subject of debate.
The study findings suggest that while fainting seems to have a strong genetic component, there may be multiple genes and multiple environmental factors that influence the phenomenon, Berkovic added. The study was published in Neurology, the medical journal of the American Academy of Neurology. | 2026-01-26T18:00:01.485332 |
236,216 | 3.503099 | http://www-personal.umd.umich.edu/~jcthomas/JCTHOMAS/1997%20Case%20Studies/J.%20Newman1.html | by Jackie Newman
Tangier Disease is an extremely rare autosomal recessive metabolic disorder. Documentation shows that as of 1988, 27 cases of Tangier Disease had been reported (Makrides pg.465), and in 1992 the reported cases were still fewer than 50 persons worldwide (Thoene pg.265). The majority of the cases tend to localize in one single area of the U.S., Tangier Island, Virginia. The fact that most of the people affected by Tangier disease live in close proximity to one another could be due to the founder effect. The original settlers came to the island in 1686, and it is possible that one or two of them were carriers of the disease, or actually had the symptoms, and passed it down through the bloodline.
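To make the founder-effect idea concrete, the sketch below runs a toy Wright-Fisher simulation of genetic drift; the population size, carrier count and generation count are illustrative assumptions, not estimates for Tangier Island.

```python
import random

def simulate_drift(pop_size: int, carriers: int, generations: int) -> float:
    """Final frequency of a neutral allele introduced by a few heterozygous
    founders in a small diploid population (Wright-Fisher model)."""
    freq = carriers / (2 * pop_size)        # each carrier holds one copy
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is an independent
        # draw from the current allele pool.
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        if freq in (0.0, 1.0):              # allele lost or fixed
            break
    return freq

random.seed(1)
# Two carriers among 50 founding individuals, followed for 100 generations:
print(simulate_drift(pop_size=50, carriers=2, generations=100))
```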
Characteristics of Tangier Disease include markedly decreased levels or even a complete absence of high-density lipoprotein (HDL) in one's plasma, low cholesterol levels in the plasma, and increased cholesteryl esters in the tonsils, spleen, liver, skin and lymph nodes. One easily visible characteristic usually found in children with Tangier disease is the presence of enlarged, yellow-orange tonsils.
Initial research on Tangier disease showed a marked decrease in HDL concentrations when compared to normal controls. In some cases the reduction was as great as 50% (Schmitz pg.6306). Scientists studied the HDL concentrations and looked for any possible links in its involvement with the disease. They specifically looked at concentrations of apo A-I (apolipoprotein A-I), a major protein component of HDL.
The main hypothesis was that apo A-I was structurally abnormal. Studies proved that this was incorrect, because the DNA-derived protein sequence for Tangier apo A-I was identical to the control group's apo A-I sequence (Makrides pg.468). Scientists discovered that the cause of Tangier disease involves the intracellular membrane trafficking of HDL. Normally, macrophages have cell-surface receptors that bind HDL. After the HDL is bound, it is transported into endosomes. The endosome is transported through the cell without any degradation by the lysosome, and the HDL is eventually resecreted from the cell. It is during this cycle that problems arise for people with Tangier disease. When the HDL binds to the receptor on the monocyte, the two stick together but never separate. The HDL is not resecreted outside the cell (Schmitz pg.6308). The data suggest that there is a deficiency in the cellular metabolism of HDL in Tangier monocytes. The persistence of the HDL-monocyte unit is also consistent with the observed condition of high concentrations of excess cholesterol in body tissues.
Currently the treatment for Tangier patients depends on the various symptoms, ranging from heart surgery to removal of organs. Gene therapy has been proposed as a possible treatment but is difficult because there isn't anything wrong specifically with the gene involved in the HDL conversion. The problem is in the cellular transportation. Many of the specific processes within the cell are still not known, so any extensive treatment is still out of reach. | 2026-01-21T20:26:17.094497 |
240,279 | 3.634612 | http://www.healthline.com/galecontent/genetics-and-genetic-counseling | Genetics and Genetic Counseling
A branch of science that attempts to understand the fundamental biologic makeup of organisms by examining the genetic blueprints in each cell
The nucleus of every cell holds the key to nearly every visible and invisible feature of the human body, from the color of hair to the pumping capacity of the heart. In each nucleus of every cell there are 23 pairs of chromosomes (46 total). One pair of these chromosomes determines the sex of the child, while the other 22 pairs determine all the other components of the human body. Chromosomes contain genes, which influence the production of proteins and thus influence all aspects of body structure and function. There is a tremendous amount of information encoded in the nearly 100,000 genes in each cell. Geneticists and molecular biologists work to identify the variations that exist between animals or humans by studying the changes that occur during the cell's division. The alterations that take place during the development of any organism may include mutations, insertions, deletions, or translocations during the copying of genetic material from one cell to another. These changes are the basis for chromosomal abnormalities such as Down syndrome or trisomy 18, where there is extra or missing chromosomal material in the embryo, or single-gene disorders like sickle-cell anemia and cystic fibrosis, which are caused by a small change on a single gene called a point mutation.
The study of human genetics is less than 100 years old, and yet in the last century scientists have identified over 400 genes that cause a variety of diseases, from sickle-cell anemia and Down syndrome to high cholesterol and depression. In addition, science has been able to elucidate the inheritance pattern of disease in certain families.
A genetic counselor works with a person concerned about the risk of an inherited disease. In 1975, the American Society of Human Genetics clarified the role of genetic counseling. As a communication process, genetic counseling attempts to 1) accurately diagnose a disorder, 2) assess the risk of recurrence in the concerned family members and their relatives, 3) provide alternatives for decision-making, and 4) provide support groups that will help the family members cope with the recurrence of a disorder.
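As a concrete illustration of step 2, recurrence risk for an autosomal recessive disorder follows from simple Mendelian arithmetic. The sketch below uses assumed carrier probabilities and is only an illustration, not a clinical tool.

```python
def recessive_recurrence_risk(p_parent_a_carrier: float,
                              p_parent_b_carrier: float) -> float:
    """Probability that the next child is affected by an autosomal recessive
    disorder, given each parent's probability of carrying one recessive allele."""
    # The child is affected only if both parents are carriers and the child
    # inherits the recessive allele from each of them (probability 1/4).
    return p_parent_a_carrier * p_parent_b_carrier * 0.25

# Both parents are obligate carriers (they already have an affected child):
print(recessive_recurrence_risk(1.0, 1.0))     # 0.25
# One known carrier whose partner comes from a population with a 1-in-25
# carrier rate (an assumed figure):
print(recessive_recurrence_risk(1.0, 1 / 25))  # 0.01
```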
The role of the genetic counselor is to facilitate the exchange of information regarding a person's genetic legacy. The genetic counselor does not prevent the incidence of a disease in a family but can help family members assess the risk for certain hereditary diseases and offer guidance. At present, there are fewer than 2,000 accredited genetic counselors practicing in the United States.
GENES AND BEHAVIOR
Is a child's athletic ability inherited, or simply a product of training? If one parent has schizophrenia, will his child acquire the disease? The genetic foundations of behavior are studied by behavior genetics, an interdisciplinary science which draws on the resources of several scientific disciplines, including genetics, physiology, and psychology. Because of the nature of heredity, behavior geneticists are unable to assess the role played by genetic factors in an individual's behavior: their estimates by definition apply to groups. There are 23 pairs of chromosomes in each human cell (a total of 46 chromosomes). Genes from both members of a pair act in concert to produce a particular trait. What makes heredity complex and extremely difficult to measure is the fact that human sperm and eggs, which are produced by a special form of cell division (meiosis), have 23 unpaired chromosomes. This means that one half of a person's genes comes from the mother and the other half from the father, and that each individual, with the exception of his or her identical twin, has a unique genetic profile.
Scientists are currently working on the Human Genome Project, which will map the estimated 100,000 genes in human DNA. So far, they have been able to identify genes responsible for a variety of diseases, including Huntington's disease, Down syndrome, cystic fibrosis, Tay-Sachs disease, and a number of cancers. Genetic information about a particular disease constitutes a crucial milestone in the search for a cure. For example, phenylketonuria (PKU) is a disease caused by a recessive gene from each parent; PKU's genetic basis is clearly understood. A child with PKU is unable to metabolize phenylalanine, an amino acid found in proteins. The phenylalanine build-up afflicts the central nervous system, causing severe brain damage. Because the genetic processes underlying PKU are known, scientists have been able to develop a screening test, and thus can quickly diagnose affected children shortly after birth. When diagnosed early, PKU can be successfully controlled by diet.
While genetic research can determine the heritability of some diseases, the genetic foundations of behavior are much more difficult to identify. From a genetic point of view, physical traits, such as the color of a person's hair, have a much higher heritability than behavior. In fact, behavior genetics assumes that the genetic bases of an individual's behavior simply cannot be determined. Consequently, researchers have focused their efforts on the behavior of groups, particularly families. However, even controlled studies of families have failed to establish conclusive links between genetics and behavior, or between genetics and particular psychological traits and aptitudes. In theory, these links probably exist; in practice, however, researchers have been unable to isolate traits that are unmodified by environmental factors. For example, musical aptitude seems to recur in certain families. While it is tempting to assume that this aptitude is an inherited genetic trait, it would be a mistake to ignore the environment. What is colloquially known as "talent" is probably a combination of genetic and other, highly variable, factors.
More reliable information about genetics and behavior can be gleaned from twin studies. When compared to fraternal (dizygotic) twins, identical (monozygotic) twins display remarkable behavioral similarities. (Unlike fraternal twins, who develop from two separate eggs, identical twins originate from a single divided fertilized egg.) However, even studies of identical twins reared in different families are inconclusive, because, as scientists have discovered, in many cases, the different environments often turn out to be quite comparable, thus invalidating the hypothesis that the twins' behavioral similarities are entirely genetically determined. Conversely, studies of identical twins raised in the same environment have shown that identical twins can develop markedly different personalities. Thus, while certain types of behavior can be traced to certain genetic characteristics, there is no genetic blueprint for an individual's personality.
Twin studies have also attempted to elucidate the genetic basis of intelligence, which, according to many psychologists, is not one trait, but a cluster of distinct traits. Generally, these studies indicate that identical twins reared in different families show a high correlation in IQ scores. No one questions the genetic basis of intelligence, but scientists still do not know how intelligence is inherited and what specific aspects of intelligence can be linked to genetic factors.
This figure is expected to increase in response to the enormous changes taking place both in the scientific community and society. There are limitations to the power of genetic counseling, though, since many of the diseases that have been mapped, such as Down syndrome or Huntington's disease, still have no cure. Although a genetic counselor cannot predict the future unequivocally, he or she can discuss the occurrence of a disease in terms of probability. A genetic counselor, with the aid of the patient or family, creates a detailed family pedigree that includes the incidence of disease in first-degree (parents and siblings) and second-degree relatives (aunts and uncles). Before or after this pedigree is completed, certain genetic tests are performed using DNA analysis, X ray, ultrasound, urine analysis, skin biopsy, and physical evaluation. For a pregnant woman, prenatal diagnosis can be made through amniocentesis (the withdrawal of amniotic fluid during pregnancy) or chorionic villus sampling (the biopsy of chorionic villus tissue).
Concern about genetics research will become increasingly relevant to families trying to uncover their genetic risks for common diseases such as breast cancer, heart disease, asthma, depression, and diabetes. Even as each month brings a new gene discovery, barriers still remain between the concerned person and the wealth of information that lies in one's DNA. Access to genetic centers and counseling may be further hampered by insurance companies that do not reimburse patients for testing and counseling.
Jorde, L., J. Carey, and R. White. Medical Genetics. St. Louis: Mosby, 1995.
Milunsky, Aubrey. Choices Not Chances: An Essential Guide to Your Hereditary and Health. Boston: Little, Brown, 1989.
Plomin, R. Nature and Nurture. Pacific Grove, CA: Brooks/Cole Publishing, 1990.
Stine, G., ed. The New Human Genetics. Dubuque, IA: Wm. C. Brown, 1989. | 2026-01-21T21:52:09.037746 |
767,567 | 3.71709 | http://en.wikipedia.org/wiki/Topological_skeleton | In shape analysis, the skeleton (or topological skeleton) of a shape is a thin version of that shape that is equidistant to its boundaries. The skeleton usually emphasizes geometrical and topological properties of the shape, such as its connectivity, topology, length, direction, and width. Together with the distance of its points to the shape boundary, the skeleton can also serve as a representation of the shape (the two together contain all the information necessary to reconstruct the shape).
Skeletons have several different mathematical definitions in the technical literature, and there are many different algorithms for computing them. Several variants of skeletons can also be found, including straight skeletons, morphological skeletons, and skeletons by influence zones (SKIZ), also known as the Voronoi diagram.
In the technical literature, the concepts of skeleton and medial axis are used interchangeably by some authors, while some other authors regard them as related, but not the same. Similarly, the concepts of skeletonization and thinning are also regarded as identical by some, and not by others.
Skeletons have been used in several applications in computer vision, image analysis, and digital image processing, including optical character recognition, fingerprint recognition, visual inspection, pattern recognition, binary image compression, and protein folding.
Mathematical definitions
Skeletons have several different mathematical definitions in the technical literature; most of them lead to similar results in continuous spaces, but usually yield different results in discrete spaces.
Quench points of the fire propagation model
In his seminal paper, Harry Blum of the Air Force Cambridge Research Laboratories in Cambridge, Massachusetts, defined a medial axis for computing a skeleton of a shape, using an intuitive model of fire propagation on a grass field, where the field has the form of the given shape. If one "sets fire" at all points on the boundary of that grass field simultaneously, then the skeleton is the set of quench points, i.e., those points where two or more wavefronts meet. This intuitive description is the starting point for a number of more precise definitions.
Centers of maximal disks (or balls)
A disc B is said to be maximal in a set A if:
- B ⊆ A, and
- if another disc D contains B, then D ⊄ A.
One way of defining the skeleton of a shape A is as the set of centers of all maximal disks in A.
Centers of bi-tangent circles
The skeleton of a shape A can also be defined as the set of centers of the discs that touch the boundary of A in two or more locations. This definition assures that the skeleton points are equidistant from the shape boundary and is mathematically equivalent to Blum's medial axis transform.
Ridges of the distance function
Many definitions of skeleton make use of the concept of distance function, which is a function that returns for each point x inside a shape A its distance to the closest point on the boundary of A. Using the distance function is very attractive because its computation is relatively fast.
One of the definitions of skeleton using the distance function is as the ridges of the distance function. There is a common misstatement in the literature that the skeleton consists of points which are "locally maximum" in the distance transform. This is simply not the case, as even a cursory comparison of a distance transform and the resulting skeleton will show. More careful characterizations are listed below (a rough illustrative code sketch follows the list):
- Points with no upstream segments in the distance function. The upstream of a point x is the segment starting at x which follows the maximal gradient path.
- Points where the gradient of the distance function is different from 1 (or, equivalently, not well defined)
- Smallest possible set of lines that preserve the topology and are equidistant to the borders
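One rough way to visualize the ridge idea, assuming NumPy and SciPy are available, is sketched below; the test rectangle and the 0.8 gradient threshold are arbitrary illustrative choices, not part of any standard algorithm.

```python
# Rough sketch: inside a shape, the Euclidean distance transform climbs away
# from the boundary with |gradient| close to 1 almost everywhere; on the ridge
# the two advancing "wavefronts" meet and the slope collapses.
import numpy as np
from scipy import ndimage

shape = np.zeros((64, 64), dtype=bool)
shape[8:56, 16:48] = True                  # a filled rectangle as a test shape

dist = ndimage.distance_transform_edt(shape)
gy, gx = np.gradient(dist)
grad_mag = np.hypot(gx, gy)

# Ridge candidates: interior points where the gradient magnitude drops well
# below 1 (the 0.8 cut-off is an arbitrary illustrative choice).
ridge = shape & (dist > 1) & (grad_mag < 0.8)
print(ridge.sum(), "candidate ridge pixels")
```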
Skeletonization algorithms
There are many different approaches to computing skeletons (a minimal end-to-end sketch follows this list):
- Using morphological operators
- Supplementing morphological operators with shape based pruning
- Using curve evolution
- Using level sets
- Finding ridge points on the distance function
- "Peeling" the shape, without changing the topology, until convergence
Skeletonization algorithms can sometimes create unwanted branches on the output skeletons. Pruning algorithms are often used to remove these branches.
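One naive pruning strategy (an illustrative sketch, not a published algorithm) is to repeatedly delete endpoints, i.e. skeleton pixels with exactly one 8-connected skeleton neighbour. Note that this shaves every branch tip, including legitimate ones, so the iteration count must stay small.

```python
import numpy as np
from scipy import ndimage

def prune(skel: np.ndarray, iterations: int) -> np.ndarray:
    """Iteratively delete endpoint pixels from a boolean skeleton image."""
    kernel = np.ones((3, 3), dtype=int)
    out = skel.copy()
    for _ in range(iterations):
        # The 3x3 sum counts the pixel itself, so an endpoint sums to 2.
        neighbours = ndimage.convolve(out.astype(int), kernel, mode="constant")
        endpoints = out & (neighbours == 2)
        if not endpoints.any():           # nothing left to shave off
            break
        out &= ~endpoints
    return out
```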
- Jain, Kasturi & Schunck (1995), Section 2.5.10, p. 55.
- Gonzales & Woods (2001), Section 11.1.5, p. 650
- Dougherty (1992).
- Ogniewicz (1995).
- A. K. Jain (1989), Section 9.9, p. 382.
- Serra (1982).
- Sethian (1999), Section 17.5.2, p. 234.
- Abeysinghe et al. (2008b).
- Harry Blum (1967)
- A. K. Jain (1989), Section 9.9, p. 387.
- Gonzales & Woods (2001), Section 9.5.7, p. 543.
- Abeysinghe et al. (2008a).
- Tannenbaum (1996)
- Bai, Longin & Wenyu (2007).
- A. K. Jain (1989), Section 9.9, p. 389.
References
- Abeysinghe, Sasakthi; Baker, Matthew; Chiu, Wah; Ju, Tao (2008a), "Segmentation-free skeletonization of grayscale volumes for shape understanding", IEEE Int. Conf. Shape Modeling and Applications (SMI 2008), pp. 63–71, doi:10.1109/SMI.2008.4547951, ISBN 978-1-4244-2260-9.
- Abeysinghe, Sasakthi; Ju, Tao; Baker, Matthew; Chiu, Wah (2008b), "Shape modeling and matching in identifying 3D protein structures", Computer-Aided Design (Elsevier) 40 (6): 708–720, doi:10.1016/j.cad.2008.01.013
- Bai, Xiang; Longin, Latecki; Wenyu, Liu (2007), "Skeleton pruning by contour partitioning with discrete curve evolution", IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (3): 449–462, doi:10.1109/TPAMI.2007.59, PMID 17224615.
- Blum, Harry (1967), "A Transformation for Extracting New Descriptors of Shape", in Wathen-Dunn, W., Models for the Perception of Speech and Visual Form, Cambridge, MA: MIT Press, pp. 362–380.
- Cychosz, Joseph (1994), Graphics gems IV, San Diego, CA, USA: Academic Press Professional, Inc., pp. 465–473, ISBN 0-12-336155-9.
- Dougherty, Edward R. (1992), An Introduction to Morphological Image Processing, ISBN 0-8194-0845-X.
- Gonzales, Rafael C.; Woods, Richard E. (2001), Digital Image Processing, ISBN 0-201-18075-8.
- Jain, Anil K. (1989), Fundamentals of Digital Image Processing, ISBN 0-13-336165-9.
- Jain, Ramesh; Kasturi, Rangachar; Schunck, Brian G. (1995), Machine Vision, ISBN 0-07-032018-7.
- Ogniewicz, R. L. (1995), "Automatic Medial Axis Pruning Based on Characteristics of the Skeleton-Space", in Dori, D.; Bruckstein, A., Shape, Structure and Pattern Recognition, ISBN 981-02-2239-4.
- Petrou, Maria; García Sevilla, Pedro (2006), Image Processing Dealing with Texture, ISBN 978-0-470-02628-1.
- Serra, Jean (1982), Image Analysis and Mathematical Morphology, ISBN 0-12-637240-3.
- Sethian, J. A. (1999), Level Set Methods and Fast Marching Methods, ISBN 0-521-64557-3.
- Tannenbaum, Allen (1996), "Three snippets of curve evolution theory in computer vision", Mathematical and Computer Modelling 24 (5): 103–118. | 2026-01-30T03:09:25.570276 |
1,010,410 | 3.741981 | http://everything2.com/title/THANKSGIVING | Thanksgiving festivals have been traditions of many cultures throughout history. They often started as harvest festivals, as many ancient farmers believed that spirits caused the crops to grow, and later to die. These spirits were thought to be released during the harvest, with the possibility that they might attack the people doing the harvesting. Some of the harvest festivals celebrated the defeat of these spirits.
The Greeks, for example, had a three-day autumn festival known as Thesmophoria, where they honored Demeter, goddess of grain. Married women would create shelters made of leaves, and use plants to make couches, which they would place inside the shelter. They would proceed to fast for the next day, and then hold a big feast on the third. Offerings were also made to Demeter, to encourage her to give them a good harvest.
The Romans celebrated Cerelia, to the goddess Ceres, their goddess of grain. The first food harvested would be given as an offering, and then a great feast would take place. This holiday was celebrated on October 4th.
Chung Ch'ui is the Chinese festival for giving thanks, a three-day festival like the Greek version. The day was celebrated with a feast of roast pig, freshly harvested fruit, and moon cakes - round, yellow cakes stamped with the face of a rabbit - because one of the features of the day is that it is the birthday of the moon.
According to legend, however, there is more to those cakes. At one point China had been invaded, and many Chinese people were left without food or homes. They decided to attack the invaders, and used the moon cakes to coordinate the attack: a large number of cakes were made, with the exact time of the attack hidden inside each cake. The attack was successful, driving out the invaders.
The Egyptians celebrated in the springtime, their harvest season, to honor Min. There would be great parades, often with the participation of the Pharaoh. Huge feasts, sport, music, and dancing were all involved.
History of the American Thanksgiving:
In 1620, the members of the English Separatist Church, a Puritan sect, crossed the Atlantic Ocean aboard the Mayflower, journeying to the New World. They had fled from England to Holland to escape religious persecution, but felt the Dutch way of life was ungodly. They had to negotiate with a London company to finance the trip, and to protect its financial interests, a large number of passengers were hired for the trip.
They arrived in Massachusetts, the area native to the Wampanoag Indians, on December 11, 1620. Plymouth was built right where an old Native American village of Patuxet was located. They had brought some supplies, but not a lot, and the first winter they dealt with was very harsh; nearly half of the original 106 people aboard the ship were lost. When the next spring came around, they found that the wheat they had brought for planting would not grow in the soil, and things didn't look good.
Fortunately, a man by the name of Tisquantum (also known as Squanto) was among the Wampanoag tribe. He was originally from Patuxet, but had gone to England with John Weymouth, an explorer, and had learned to speak English. Upon his eventual return, he found his old village empty; everyone had died of infections brought by the English, and he had since been living in the nearby Wampanoag village. Because of his ability to communicate with the Pilgrims, and because of the Wampanoag custom of helping visitors, the tribe started to teach the Pilgrims how to survive in the New World.
Over the next few months, Squanto stayed in the village, educating the Pilgrims. He helped by bringing food and supplies, such as venison and beaver pelts. He helped them learn to cultivate various new plants, such as corn. He pointed out poisonous and medicinal foliage, and taught them countless other skills. The Pilgrims also graduated from the simple small structures they had been living in to native designs.
With Squanto's help, they not only survived, but started to thrive. There would be enough food to last them through the winter from the great crop yields. They had also started to build more traditional European buildings, including a log church.
To celebrate their good fortune, Governor William Bradford decided to hold a feast of thanksgiving. Such a feast had been a religious observance in England, so they were used to having one. The Pilgrim captain, Miles Standish, invited a few of the Native Americans to the feast to show the colonists' gratitude, as they wouldn't have survived without them. Squanto, Samoset, and the leader of the Wampanoag, Massasoit, were invited, along with their immediate families. The Wampanoag tribe also had feasts of thanksgiving, six throughout the year, so the custom was not new to them. The feast of thanks would last three days.
The Pilgrims were not prepared for the size of their families, as the Native American families were quite large - about 90 came in total. Massasoit, aware that there would not be enough food, ordered some people to go back to their village for more food. In fact, they ended up supplying the majority of the feast, with such items as deer, wild fowl, fish, beans, berries, squash, corn soup, and corn bread.
Everyone sat at long tables for the feast - this was a new experience for the Native Americans, as they customarily ate sitting on furs or mats on the ground. The Puritans were also exposed to something new: Puritan women stood behind the table, allowing the men to eat first, but the Native American women sat at the table right along with the men. Both groups established a close friendship.
Two years later, in 1623, crops were suffering from a severe drought. The Pilgrims gathered in their church, praying for rain so they might have enough food at the harvest for winter. The next day greeted them with a nice, steady rain. So Governor Bradford again declared a day of thanks, again inviting some members of the Wampanoag tribe.
However, as the years passed and more settlers came in from England who were unaware of the help of the Native Americans, the friendship disappeared. Many of the newcomers were full of mistrust, and they started showing intolerance toward the Native Americans over their customs and, especially, their religion. By the time the children at the thanksgiving feast were adults, the Pilgrims and Native Americans were killing each other.
On June 20, 1676, the governing council in Charlestown, Massachusetts voted on how to celebrate the secure establishment of their community, especially with all the "heathen" Native Americans around. The resulting First Thanksgiving Proclamation selected June 29 as their day of feasting.
On October 13, 1777, to celebrate thanks for both the recent victory at Saratoga for the American Revolution, and for more traditional reasons to give thanks, all 13 colonies celebrated a day of thanks.
George Washington himself declared a national day of thanksgiving, though there was not a lot of support for the idea of thanksgiving as a national holiday. In fact, Thomas Jefferson opposed the idea.
In 1817, the state of New York adopted a Thanksgiving Day holiday, and by the middle of the century many other states had done the same. Finally, in 1863, after years of the cause being championed by Sarah Josepha Hale, President Abraham Lincoln declared the last Thursday of November a national day of Thanksgiving. For a while, each successive president would make the same declaration, though a few presidents chose a different date. Franklin Roosevelt set it as the next-to-last Thursday, to try to create a longer shopping season for Christmas. People were very unhappy with the change, and it was returned to the last Thursday.
In 1941, Congress decided to sanction the holiday. No longer did each president have to declare it. It was fixed as the fourth Thursday in November.
Thanksgiving Information, http://www.2020tech.com/thanks/temp.html
Thanksgiving on the Net, http://www.holidays.net/thanksgiving/story.htm
The Thanksgiving Story, http://wilstar.com/holidays/thankstr.htm | 2026-02-02T20:50:33.562998 |
125,076 | 4.029659 | http://www.newscientist.com/article/dn10389-soil-minerals-point-to-planetwide-ocean-on-mars.html | An ocean of water once wrapped around Mars, suggests the discovery of soil chemicals by NASA's rovers. But the same chemicals also indicate that life was not widespread on the planet at the time the ocean was present.
Sulphates, which form most readily in liquid water, had already been detected by the Spirit and Opportunity rovers. The minerals have been interpreted as evidence for past bodies of water on the surface. But it has not been clear how large these bodies of water might have been.
Now, a new analysis of rover data suggests that the sulphates were once dissolved in a planet-wide ocean. The study was carried out by James Greenwood of Wesleyan University and Ruth Blake of Yale University, both in Connecticut, US.
The researchers point out that phosphates, which are likewise linked to water, are present at both sites. More importantly, the ratio of phosphates to sulphates is about the same at both locations. They say the most likely explanation for this is that any local variations were smoothed out by mixing in a planet-wide ocean.
Some researchers argue that a broad, flat area in Mars's northern hemisphere is the relic of an ancient ocean, and point to rock weathering that could have been caused by seawater. But the uniform phosphorus-to-sulphur ratio is the first chemical evidence that such a large body of water might have once existed.
The phosphorus was probably leached from rocks in the form of calcium phosphate, the researchers say. The fact that it appears to have been dissolved and mixed with sulphates in large amounts suggests that the hypothesised ocean must have been very acidic, because calcium phosphate only dissolves well in acidic water.
A phosphorus-rich ocean is a bad sign for past Mars life. Phosphorus is an important element for life on Earth, and is quickly extracted from the environment by organisms. If life were extensive on Mars, it would not have left so much phosphorus dissolved in the water, the researchers say.
"To a first order approximation, you couldn't have had a biosphere that was anything like the one on Earth," Greenwood says.
Lots of lakes?
Michael Wyatt of Brown University in Rhode Island, US, says the idea of a past ocean on Mars fits well with the phosphorus and sulphur data, but adds that several smaller bodies of water might also explain it.
"It's kind of hard to pin down smaller bodies of water versus one large ocean," he told New Scientist. Another possible explanation for the data would be many bodies of water that have similar chemistry, he says.
The researchers admit that the similar phosphate-to-sulphate ratio seen on opposite sides of the planet could also arise if wind mixed these materials together after the bodies of water disappeared.
Journal reference: Geology (vol 34, p 953) | 2026-01-20T04:20:51.433977 |
445,623 | 3.951635 | http://www.scientificamerican.com/article.cfm?id=how-are-the-abbreviations | Michael R. Topp, professor of chemistry at the University of Pennsylvania, explains.
Although some of the symbols on the periodic table may seem strange, they all make sense with a little background information. For example, the symbol for the element mercury, Hg, comes from the Latin word hydrargyrum, meaning "liquid silver," a more recent version of which was the English "quicksilver." Many other elements that were known to the ancients also have names derived from Latin. The table below lists some examples for which the initials do not correspond with their modern English names.
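| English name | Symbol | Latin name |
|---|---|---|
| Sodium | Na | natrium |
| Potassium | K | kalium |
| Iron | Fe | ferrum |
| Copper | Cu | cuprum |
| Silver | Ag | argentum |
| Tin | Sn | stannum |
| Antimony | Sb | stibium |
| Gold | Au | aurum |
| Lead | Pb | plumbum |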
The rare (or inert) gases were discovered much more recently and tend to have classical-sounding names based on Greek. For example, xenon means "the stranger" in Greek and argon means "inert." Helium is named after the sun.
So far, 110 elements have been formally named. Newly discovered ones have names determined jointly by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP), which gives discoverers the chance to name them. All of these "new" elements are synthetic and the discovery of each needs confirmation. An abstract published by IUPAC gives insights into the procedure: "A joint IUPAC-IUPAP Working Party confirmed the discovery of the element with atomic number 110. In accord with IUPAC procedures, the discoverers proposed a name and symbol for the element. The Inorganic Chemistry Division recommended this proposal for acceptance, and it was adopted by the IUPAC Council at Ottawa, 16 August 2003. The recommended name is darmstadtium with symbol Ds."
The proposal to name element 111 Roentgenium, symbol Rg, has been recommended for approval by the Inorganic Chemistry Division Committee of IUPAC. As they state: "This proposal lies within the long-established tradition of naming elements to honor famous scientists. Wilhelm Conrad Roentgen discovered x-rays in 1895."
For informational purposes, as yet undiscovered elements having higher atomic numbers are assigned so-called placeholder names, which are simply Latinized versions of their atomic numbers. Thus element 111 was formerly designated unununium, literally "one one one," (symbol "Uuu") and element 112, which is still formally unnamed, has been given the temporary name of ununbium (symbol "Uub").
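The placeholder scheme is mechanical enough to write down: each decimal digit of the atomic number maps to a root (0 = nil, 1 = un, 2 = bi, 3 = tri, 4 = quad, 5 = pent, 6 = hex, 7 = sept, 8 = oct, 9 = enn), the roots are concatenated, "ium" is appended, and doubled letters are elided. The sketch below is an informal rendering of that rule, not official IUPAC code.

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def placeholder_name(z: int) -> str:
    """Systematic placeholder name for the element with atomic number z."""
    stem = "".join(ROOTS[int(d)] for d in str(z))
    stem = stem.replace("nnn", "nn")              # 'enn' + 'nil' -> 'ennil'
    return (stem + "ium").replace("iium", "ium")  # 'bi'/'tri' + 'ium' -> 'bium'/'trium'

def placeholder_symbol(z: int) -> str:
    """Three-letter placeholder symbol: first letter of each digit's root."""
    return "".join(ROOTS[int(d)][0] for d in str(z)).capitalize()

assert placeholder_name(111) == "unununium" and placeholder_symbol(111) == "Uuu"
assert placeholder_name(112) == "ununbium" and placeholder_symbol(112) == "Uub"
```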
[Editor's note: The name roentgenium for the element of atomic number 111 (with symbol Rg) was officially approved as of November 1, 2004.] | 2026-01-25T03:08:01.784528 |
550,198 | 3.539733 | http://en.wikipedia.org/wiki/Storyboarding |
Storyboards are graphic organizers in the form of illustrations or images displayed in sequence for the purpose of pre-visualizing a motion picture, animation, motion graphic or interactive media sequence.
The storyboarding process, in the form it is known today, was developed at the Walt Disney Studio during the early 1930s, after several years of similar processes being in use at Walt Disney and other animation studios.
The storyboarding process can be very time-consuming and intricate. Many large-budget silent films were storyboarded, but most of this material was lost during the reduction of the studio archives in the 1970s. The form widely known today was developed at the Walt Disney studio during the early 1930s. In the biography of her father, The Story of Walt Disney (Henry Holt, 1956), Diane Disney Miller explains that the first complete storyboards were created for the 1933 Disney short Three Little Pigs. According to John Canemaker, in Paper Dreams: The Art and Artists of Disney Storyboards (1999, Hyperion Press), the first storyboards at Disney evolved from comic-book-like "story sketches" created in the 1920s to illustrate concepts for animated cartoon short subjects such as Plane Crazy and Steamboat Willie, and within a few years the idea spread to other studios.
According to Christopher Finch in The Art of Walt Disney (Abrams, 1974), Disney credited animator Webb Smith with creating the idea of drawing scenes on separate sheets of paper and pinning them up on a bulletin board to tell a story in sequence, thus creating the first storyboard. The second studio to switch from "story sketches" to storyboards was Walter Lantz Productions in early 1935; by 1936, Harman-Ising and Leon Schlesinger had followed suit. By 1937-38 all studios were using storyboards.
Gone with the Wind (1939) was one of the first live action films to be completely storyboarded. William Cameron Menzies, the film's production designer, was hired by producer David O. Selznick to design every shot of the film.
Storyboarding became popular in live-action film production during the early 1940s, and grew into a standard medium for the previsualization of films. Pace Gallery curator Annette Michelson, writing of the exhibition Drawing into Film: Director's Drawings, considered the 1940s to 1990s to be the period in which "production design was largely characterized by adoption of the storyboard". Storyboards are now an essential part of the creative process.
A film storyboard is essentially a large comic of the film or some section of the film, produced beforehand to help film directors, cinematographers and television commercial advertising clients visualize the scenes and find potential problems before they occur. Storyboards also help estimate the cost of the overall production and save time. Often storyboards include arrows or instructions that indicate movement.
In creating a motion picture with any degree of fidelity to a script, a storyboard provides a visual layout of events as they are to be seen through the camera lens. And in the case of interactive media, it is the layout and sequence in which the user or viewer sees the content or information. In the storyboarding process, most technical details involved in crafting a film or interactive media project can be efficiently described either in picture, or in additional text.
A common misconception is that storyboards are not used in theatre. In fact, directors and playwrights frequently use them to understand the layout of a scene. The great Russian theatre practitioner Constantin Stanislavski developed storyboards in his detailed production plans for his Moscow Art Theatre performances (such as of Chekhov's The Seagull in 1898). The German director and dramatist Bertolt Brecht developed detailed storyboards as part of his dramaturgical method of "fabels."
In animation and special effects work, the storyboarding stage may be followed by simplified mock-ups called "animatics" to give a better idea of how the scene will look and feel with motion and timing. At its simplest, an animatic is a series of still images (usually taken from a storyboard) edited together and displayed in sequence, with rough dialogue and/or a rough soundtrack added, to test whether the sound and images work effectively together.
This allows the animators and directors to work out any screenplay, camera positioning, shot list and timing issues that may exist with the current storyboard. The storyboard and soundtrack are amended if necessary, and a new animatic may be created and reviewed with the director until the storyboard is perfected. Editing the film at the animatic stage can avoid animation of scenes that would be edited out of the film. Animation is usually an expensive process, so there should be a minimum of "deleted scenes" if the film is to be completed within budget.
Often storyboards are animated with simple zooms and pans to simulate camera movement (using non-linear editing software). These animations can be combined with available animatics, sound effects and dialog to create a presentation of how a film could be shot and cut together. Some feature film DVD special features include production animatics.
Animatics are also used by advertising agencies to create inexpensive test commercials. A variation, the "rip-o-matic", is made from scenes of existing movies, television programs or commercials, to simulate the look and feel of the proposed commercial. Rip, in this sense, refers to ripping off an original work to create a new one.
A photomatic (probably derived from 'animatic' or photo-animation) is a series of still photographs edited together and presented on screen in a sequence. Usually, a voice-over, soundtrack and sound effects are added to the piece to create a presentation of how a film could be shot and cut together. Photomatics are increasingly used by advertisers and advertising agencies to research the effectiveness of a proposed storyboard before committing to a 'full up' television advertisement.
The photomatic is usually a research tool, similar to an animatic, in that it represents the work to a test audience so that the commissioners of the work can gauge its effectiveness.
Originally, photographs were taken using colour negative film. A selection would be made from contact sheets and prints made. The prints would be placed on a rostrum and recorded to videotape using a standard video camera. Any moves, pans or zooms would have to be made in camera. The captured scenes could then be edited.
Digital photography, web access to stock photography and non-linear editing programs have had a marked impact on this way of filmmaking, also giving rise to the term 'digimatic'. Images can be shot and edited very quickly, allowing important creative decisions to be made 'live'. Photo-composite animations can build intricate scenes that would normally be beyond many test-film budgets.
Photomatic was also the trademarked name of many of the coin-operated photo booths found in public places. Photomatic-brand booths were manufactured by the International Mutoscope Reel Company of New York City. Earlier versions took only one photo per coin, while later versions took a series of photos; many booths would produce a strip of four photos in exchange for a coin.
Some writers have used storyboard-type drawings (albeit rather sketchy) to script comic books, often indicating the staging of figures, backgrounds and balloon placement, with instructions to the artist scribbled in the margins as needed and the dialogue/captions indicated. John Stanley and Carl Barks (when he was writing stories for the Junior Woodchuck title) are known to have used this style of scripting.
Storyboards are used today by industry for planning ad campaigns, commercials, proposals and other business presentations intended to convince or compel action. Consulting firms teach the technique to their staff for use during the development of client presentations, frequently employing the "brown paper technique" of taping mock-up presentation slides to a large piece of kraft paper which can be rolled up for easy transport. The initial storyboard may be as simple as slide titles on Post-It notes, which are then replaced with draft presentation slides as they are created.
Storyboards also exist in accounting, in the ABC (activity-based costing) system, to develop a detailed process flowchart which visually shows all activities and the relationships among them. They are used in this way to measure the cost of resources consumed, identify and eliminate non-value-added costs, determine the efficiency and effectiveness of all major activities, and identify and evaluate new activities that can improve future performance.
A "quality storyboard" is a tool to help facilitate the introduction of a quality improvement process into an organisation.
"Design comics" are a type of storyboard used to include a customer or other characters into a narrative. Design comics are most often used in designing web sites or illustrating product usage scenarios during design. Design comics were popularized by Kevin Cheng and Jane Jao in 2006.
Storyboards are now becoming more popular with novelists. Because many novelists write their stories by scenes rather than chapters, storyboards are useful for plotting the story as a sequence of events and rearranging the scenes accordingly.
More recently the term storyboard has been used in the fields of web development, software development and instructional design to present and describe, in writing, interactive events as well as audio and motion, particularly on user interfaces and electronic pages.
Storyboarding is used in software development as part of identifying the specifications for a piece of software. During the specification phase, screens that the software will display are drawn, either on paper or using other specialized software, to illustrate the important steps of the user experience. The storyboard is then modified by the engineers and the client as they settle on specific requirements. Storyboarding is useful in software engineering because it helps the user understand exactly how the software will work, much better than an abstract description would; it is also cheaper to make changes to a storyboard than to an implemented piece of software.
One advantage of using storyboards is that they allow the user (in film and business alike) to experiment with changes in the storyline to evoke a stronger reaction or interest. Flashbacks, for instance, are often the result of sorting storyboards out of chronological order to help build suspense and interest.
The process of visual thinking and planning allows a group of people to brainstorm together, placing their ideas on storyboards and then arranging the storyboards on the wall. This fosters more ideas and generates consensus inside the group.
Storyboards for films are created in a multiple-step process. They can be created by hand drawing or digitally on a computer. The main functions of a storyboard are to:
- Visualize the storytelling.
- Focus the story and the timing on several key frames (very important in animation).
- Define the technical parameters: description of the motion, the camera, the lighting, etc.
If drawing by hand, the first step is to create or download a storyboard template. These look much like a blank comic strip, with space for comments and dialogue. Then sketch a "thumbnail" storyboard. Some directors sketch thumbnails directly in the script margins. These storyboards get their name because they are rough sketches not bigger than a thumbnail. For some motion pictures, thumbnail storyboards are sufficient.
However, some filmmakers rely heavily on the storyboarding process. If a director or producer wishes, more detailed and elaborate storyboard images are created. These can be created by professional storyboard artists by hand on paper or digitally using 2D storyboarding programs. Some software applications even supply a stable of storyboard-specific images, making it possible to quickly create shots which express the director's intent for the story. These boards tend to contain more detailed information than thumbnail storyboards and convey more of the mood of the scene. They are then presented to the project's cinematographer, who realizes the director's vision.
Finally, if needed, 3D storyboards are created (called 'technical previsualization'). The advantage of 3D storyboards is that they show exactly what the film camera will see using the lenses the film camera will use. The disadvantage of 3D is the amount of time it takes to build the shots. 3D storyboards can be constructed using 3D animation programs or digital puppets within 3D programs. Some programs have a collection of low-resolution 3D figures which can aid in the process. Some 3D applications allow cinematographers to create "technical" storyboards which are optically correct shots and frames.
While technical storyboards can be helpful, optically correct storyboards may limit the director's creativity. In classic motion pictures such as Orson Welles' Citizen Kane and Alfred Hitchcock's North by Northwest, the director created storyboards that cinematographers initially thought impossible to film. Such innovative and dramatic shots had "impossible" depth of field and angles where there was "no room for the camera" - at least not until creative solutions were found to achieve the ground-breaking shots that the director had envisioned.
- Graphic organizer
- Script breakdown
- List of film-related topics
- 'The Story of Walt Disney' (Henry Holt, 1956)
- 1936 documentary Cartoonland Mysteries
- 2006 Information Architecture Summit wrapup, boxesandarrows.com, 19 April 2006
- Media related to Storyboards at Wikimedia Commons | 2026-01-26T21:41:57.133458 |
928,482 | 3.969243 | https://simple.wikipedia.org/wiki/Algae | Biology and taxonomy
Algae are a large and diverse group of simple, typically autotrophic organisms, ranging from unicellular to multicellular forms. The largest and most complex marine forms are called seaweeds. They are plant-like, but "simple" because they lack the many distinct organs found in land plants; for that reason they are not classified as plants.
Though the prokaryotic Cyanobacteria (formerly referred to as blue-green algae) were included as "algae" in older textbooks, this usage is outdated; the term algae is now reserved for eukaryotic organisms. All true algae therefore have a nucleus enclosed within a membrane and chloroplasts bound in one or more membranes. However, algae are definitely not a monophyletic group, as they do not all descend from a common algal ancestor. Modern taxonomists propose splitting them up into monophyletic groups, but there is at present no consensus as to the details.
Algae lack the various structures that characterize land plants, such as leaves, roots, and other organs. Nearly all algae have photosynthetic machinery ultimately derived from the cyanobacteria, and so produce oxygen as a by-product of photosynthesis, unlike other photosynthetic bacteria such as purple and green bacteria. Some unicellular species rely entirely on external energy sources and have limited or no photosynthetic apparatus.
Fossilized filamentous algae from the Vindhya basin have been dated back to 1.6 to 1.7 billion years ago.
Life style
Ecology
Algae are usually found in damp places or water, and are common on land as well as in water. However, terrestrial algae are usually rather inconspicuous and are far more common in moist, tropical regions than dry ones. Algae lack the vascular tissues and other adaptations needed to live on land, but they can endure dryness and other harsh conditions in symbiosis with a fungus, as lichen.
The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column are called phytoplankton. They provide the food base for most marine food chains. Kelp grows mostly in shallow marine waters. Some are used as human food or harvested for agar or fertilizer. Kelp can grow in large stands called kelp forests. These forests prevent some of the damage from waves. Many different species live in them, including sea urchins, sea otters, and abalone.
Some algae may harm other species. Some algae can reproduce explosively, creating an algal bloom. These algae may produce protective toxins which can kill fish in the water. Some dinoflagellates secrete a compound that turns the flesh of fish into slime; the algae then consume this nutritious liquid.
Symbiosis
Algae have evolved a number of symbiotic partnerships with other organisms. The most famous is the plant-like lichen, each formed by a fungus together with an alga. Lichen is a highly successful life-form, and twenty thousand 'species' are known. In all cases the lichen is quite different in appearance and life-style from either constituent; it is possibly the most complete symbiosis known. Both constituents gain from their access to niches with low nutrient value, which is where lichens are found.
Less well known are the algal relationships with animals. Reef-building corals are basically colonial cnidarian polyps. Corals are dependent on light because the algae are important partners, and the algae require light. Corals have evolved structures, often tree-like, which offer the algae maximum access to light. The coral weakens the algal cell walls and digests about 80% of the food synthesised by the algae. The corals' waste products in turn provide nutrients for the algae, so, as with lichen, both partners gain from the association. The algae are golden-brown flagellates, often of the genus Symbiodinium. A curious feature of the partnership is that the coral may eject the algae in hard times and regain them later. The ejection of the algal partner is called bleaching, because the coral loses its colour (p. 200).
Other types of Cnidaria, such as sea anemones and jellyfish, also contain algae. Jellyfish with algae behave so that their partners get the best light during the day, then descend at night to depths where the water is rich in nitrates and brown with decay. Sea slugs and clams, both molluscs, are also well known for harbouring algae. The sea slugs graze on coral and take on the same colour as the coral they graze. They are able to separate the algae from the polyp tissues they digest; the algal cells are moved to the slug's tentacles, where they continue to live. The otherwise defenceless slug gains both camouflage and nutrition (p. 204). The giant clam keeps algae in its mantle, which is exposed when the clam is open. The coloured mantle has places where the skin is transparent and acts like a lens to concentrate light on the algae beneath. When the algae get too numerous, the clam digests them (p. 203).
References
- Nabors, Murray W. (2004). Introduction to Botany. San Francisco, CA: Pearson Education, Inc.
- Allaby, M ed. (1992). "Algae". The Concise Dictionary of Botany. Oxford: Oxford University Press.
- Round (1981).
- Patrick J. Keeling (2004). "Diversity and evolutionary history of plastids and their hosts". American Journal of Botany 91: 1481–1493. doi:10.3732/ajb.91.10.1481. http://www.amjbot.org/cgi/content/full/91/10/1481.
- Laura Wegener Parfrey, Erika Barbero, Elyse Lasser, Micah Dunthorn, Debashish Bhattacharya, David J Patterson, and Laura A Katz (December 2006). "Evaluating Support for the Current Classification of Eukaryotic Diversity". PLoS Genet. 2 (12): e220. doi:10.1371/journal.pgen.0020220. PMC 1713255. PMID 17194223. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1713255.
- Bengtson S, Belivanova V, Rasmussen B, Whitehouse M. 2009. The controversial "Cambrian" fossils of the Vindhyan are real but more than a billion years older. PNAS 106:7729-34.
- Attenborough, David. 1995. The private life of plants. BBC, London. Chapter 5 Living together. | 2026-02-01T17:44:24.301736 |
862,342 | 3.697973 | http://www.slideshare.net/jpangcog/marine-biomeorig-11851722 | The marine biome is the biggest biome in the world! It covers about 70% of the Earth's surface.
It includes five main oceans: the Pacific, Atlantic, Indian, Arctic, and Southern, as well as many smaller Gulfs and Bays.
There is about one cup of salt per gallon of water in the ocean.
The constant motion of the ocean results in currents and waves that may be either warm or cold, depending on the weather and temperature of the area. Temperatures in the ocean range from just around freezing at the poles and in the deep waters, to tropical clear waters that are as warm as a bathtub. The average temperature of all oceans is about 39°F (4°C). Heat from the sun warms only the surface of the water; deep down, oceans everywhere are cold and dark.
LAYERS OF THE OCEAN
The ocean is divided into three vertical zones. The top layer is called the euphotic zone, and it is the area of the ocean where light can penetrate. The next layer is the disphotic zone; this area is too deep for much light to reach, and the light here resembles twilight on land. The deepest part of the ocean is called the aphotic zone, or deep sea. The water here is extremely cold, completely dark, and low in nutrients.
Epipelagic, or Sunlit Zone - this is the topmost layer of the pelagic zone, from the surface to a depth of about 200 meters (660 ft). It receives enough sunlight to support photosynthetic organisms such as seaweed and phytoplankton, and also has a high level of dissolved oxygen due to the action of the waves. The epipelagic zone is home to the majority of ocean life, including large predatory fish such as tuna and sharks, small forage fish such as herrings, sardines and anchovies, dolphins, whales and other marine mammals, sea turtles, and numerous other species.
Mesopelagic, or Twilight Zone - ranging from 200 to 1,000 meters (660 to 3,280 ft), the mesopelagic zone receives very little sunlight, and photosynthetic organisms cannot survive in this layer. The pressure increases and the temperature and dissolved oxygen levels continue to drop with increasing depth. The fish that live in the mesopelagic zone are predominantly small filter feeders such as lanternfish, and larger predator fish including sabertooth fish. The filter feeders make vertical migrations up to the epipelagic zone by night to feed on plankton, returning to the darkness by day to hide from predators.
Bathypelagic, or Midnight Zone - between 1,000 and 4,000 meters (3,300 and 13,000 ft) in depth, the bathypelagic zone is a region of total darkness, with a very low level of dissolved oxygen and nutrients, low temperature and extremely high pressure. Adaptations of the inhabitants of the bathypelagic zone include a slow metabolism, bioluminescence, and hinged jaws and distensible stomachs that allow them to swallow prey that is several times larger than they are.
Abyssopelagic, or Lower Midnight Zone - the abyssopelagic zone, extending down from a depth of 4,000 meters, is similar to the bathypelagic, but more extreme in terms of pressure.
Hadalpelagic Zone - taking its name from the Greek Hades, or underworld, the hadalpelagic zone refers to the deep ocean trenches, some of which exceed 9,000 meters (30,000 ft) in depth, and have pressures of 16,000 psi. Average temperatures hover around freezing, with the exception of hydrothermal vents, where water heated by the magma of Earth's mantle boils out. Organisms of the hadalpelagic zone are highly specialized and cannot survive if they are removed to shallower water with lower pressure.
A clam is a type of shellfish. Clams can be found in saltwater and freshwater. Clams eat plankton, and are eaten by small sharks and squid. Clams can also be eaten by people and may be found on menus in restaurants that serve seafood. Clams are a fairly common form of bivalve, making them part of the phylum Mollusca. There are many clams in the ocean, but some can also be found in lakes, streams, and rivers.
Dolphins are mammals from the order Cetacea, the whales. They usually live in salt water, like the sea, but certain species can live in rivers.
The name "dolphin" is used for oceanic dolphins and river dolphins , but oceanic dolphins and river dolphins are not directly related.
Sharks are part of a group of fish called Chondrichthyes, with skeletons made of cartilage instead of bone. Cartilage is rubbery tissue that is softer than bone. Cartilaginous fish also include skates and rays. There are more than 350 different kinds of sharks, such as the great white and whale sharks. Fossils show that sharks have been around for 420 million years, since the early Silurian.
The octopus is a cephalopod mollusc in the order Octopoda. Octopuses have two eyes and four pairs of arms equipped with suckers. An octopus has a hard beak, with its mouth at the center point of the arms.
Most octopuses have no internal or external skeleton, allowing them to squeeze through tight places. Octopuses are intelligent predators with a taste for crabs.
Sea snakes, or "seasnakes", are venomous elapid snakes that live in marine environments for most or all of their lives. Together with sea turtles, they are among the best-known marine reptiles. They evolved from ancestors that lived on land, and some sea snakes retain some of their ancestors' behaviour and traits (Laticauda, for example, can still move a little on land). Most sea snakes, however, are so fully adapted to living in the water that they are unable to move on land at all.
Brown alga, Hedophyllum sessile, of the North Pacific, characterized by a compact mass of fronds resembling a cabbage.
Sea anemones are a group of water-dwelling, predatory animals of the order Actiniaria; they are named after the anemone, a terrestrial flower. Sea anemones are classified in the phylum Cnidaria, class Anthozoa, subclass Zoantharia. Anthozoa often have large polyps that allow for digestion of larger prey, and they lack a medusa stage. As cnidarians, sea anemones are closely related to corals, jellyfish, tube-dwelling anemones, and Hydra.
Mangroves are various kinds of trees (up to medium height) and shrubs that grow in saline coastal sediment habitats in the tropics and subtropics - mainly between latitudes 25° N and 25° S. The remaining mangrove forest area of the world in 2000 was 53,190 square miles (137,760 km²), spanning 118 countries and territories.
Phytoplankton are the autotrophic component of the plankton community. Most phytoplankton are too small to be seen individually with the unaided eye. However, when present in high enough numbers, they may appear as a green discoloration of the water due to the presence of chlorophyll within their cells (although the actual color may vary with the species of phytoplankton present, due to varying levels of chlorophyll or the presence of accessory pigments such as phycobiliproteins, xanthophylls, etc.).
Seaweed is a loose colloquial term encompassing macroscopic, multicellular, benthic marine algae. The term includes some members of the red, brown and green algae. Seaweeds can also be classified by use (as food, medicine, fertilizer, industrial material, etc.). | 2026-01-31T17:17:15.056514 |
825,973 | 3.865788 | http://www.pbs.org/wgbh/globalconnections/mideast/timeline/text/tgeography.html | 1517-1918: The Ottoman Empire extends over most of the Arab world.
The Ottoman Empire begins in the 1300s in what is now Turkey. Between 1516 and 1517, the Ottomans conquer the Arab provinces. Islam is one of the major forces holding the diverse empire together. Ottoman law, in fact, is derived from both Islamic law and edicts of the sultan. In the 1700s and 1800s, though, the once-powerful Ottoman Empire starts to lose power. On the hunt for new territories to conquer, Great Britain, France, and Russia begin to interfere in the affairs and territories of the Ottoman Empire as well as in Egypt. The Ottomans retain control over the Balkans until the early 1900s, and over most of the Arab world until 1918. On the losing side of World War I, their lands are dispersed to Allied powers, including Great Britain and France.
November 17, 1869: The Suez Canal, a crucial communication and transportation link between the Mediterranean and Red Seas, opens in Egypt.
Designed to give European powers better access to Middle Eastern, East Asian, and South Asian markets, the Suez Canal is built by France (using Egyptian workers) over 10 years. The French later sell the Canal to the British, who control it for 84 years before Egypt nationalizes it. It is wide enough to accommodate most ships and, at 120 miles long, is the longest canal in the world without locks.
1901: The Jewish National Fund is established to purchase land in Palestine.
Under the guidance of Theodor Herzl, the Jewish National Fund (JNF) is established to purchase land in Palestine. The JNF makes its first purchase in 1903, and at the 1948 declaration of the State of Israel, Jews will own nearly 7 percent of the whole country.
1902: Egypt's Aswan Dam, built by the British, opens.
The original Aswan Dam, or Aswan Low Dam, is built by the British. In 1970, it will be determined that the Aswan Low Dam is neither large enough nor strong enough to control extreme flooding, and a second dam, the High Dam, will be built.
1905: Ottoman-controlled Northern Yemen and British-controlled Southern Yemen are officially divided.
The Violet Line, as this boundary is known, is drawn in 1918 to separate the Ottoman and British spheres of influence in Yemen and to prevent future clashes. It is literally drawn on a map with a ruler, using violet ink. This line will later form the border between Northern and Southern Yemen when these lands gain statehood in the 1960s. The two divisions are united in 1990.
1907: Persia (Iran) is divided into three zones, each one controlled by a different country.
To protect their economic interests in the region, Russia and Great Britain divide Persia into three zones. Russia controls the northern zone, Great Britain the southern zone, and the Shah of Iran controls the neutral middle zone.
May 1908: Oil is discovered in Persia (Iran).
British adventurer William Knox D'Arcy strikes oil in 1908, seven years after obtaining drilling rights to the land from the Persian government. In 1909, D'Arcy joins with Burmah Oil to form the Anglo-Persian Oil Company. By 1917, the British government, which owns 51 percent of the company, is the most influential power in Persia. Britain uses the company's reserves during World War I.
March 1915-January 1916: An estimated 500,000 are injured and 100,000 die when Ottoman forces fight against an Allied attack at Gallipoli.
Two waterways -- the Dardanelles and Bosporus Straits -- provide the only passage between the Black Sea and the Mediterranean Sea, and thus the only supply route between France and Britain and their ally Russia. The Allied forces want to wrest control of these waterways from Ottoman strongholds along the Gallipoli Peninsula, and commit nearly a half million troops in their attempt to do so. Naval and air strikes are followed by troop landings and ground combat at close range. The standoff is epic, and the number of casualties on both sides high. Ultimately, the Turkish forces repel the Allied attack.
With so many Allied troops committed to the unsuccessful campaign at Gallipoli, Germany is able to more easily pursue its military objectives on the eastern front, and World War I continues another two years. The courage shown by the Turkish forces in defending their positions, as well as the leadership of Mustafa Kemal, serves as a great example for their War of Liberation, which follows in 1920.
May 1916: British and French negotiate the Sykes-Picot Agreement.
A secret understanding negotiated during World War I between Great Britain and France (with Russian consent), the Sykes-Picot agreement outlines the division of Ottoman-controlled lands into various French- and British-administered areas. The agreement is named after its negotiators, Sir Mark Sykes of Britain and Georges Picot of France.
The agreement, implemented in 1919, contradicts the agreement the British made with the Arabs at the start of the war (the Hussein-McMahon Correspondence), which promised the Arabs independence of what is now Syria, Palestine (Israel), Jordan, Iraq, and the Arabian Peninsula.
1918-1922: A nationalist movement in Egypt leads to Egyptian independence.
Saad Zaghlul leads a delegation to meet with the ruling British High Commissioner and demand independence for Egypt. He is refused, and his subsequent arrest and deportation spark anti-British riots. The growing popular support of the nationalistic Wafd Party -- "wafd" is Arabic for "delegation" -- prompts Britain to grant Egypt limited independence in February 1922 and install a king as head of state. Britain, which has served as Egypt's protectorate since 1914, retains control over essential government institutions, including the parliament; finances; education; and the Sudan. It also keeps troops in the Suez Canal zone. Egypt will gain full independence after World War II.
July 24, 1922: The League of Nations issues a mandate to Britain to establish a national home for the Jewish people in Palestine.
Following the disintegration of the Ottoman Empire after World War I, the territories formerly under the empire's control are divided between France and Britain. In 1920, the principal Allied powers award Britain the mandate for Palestine. Two years later, the League of Nations confirms the mandate, which lays out the terms under which Britain is given responsibility for the temporary administration of Palestine on behalf of both the Jews and Arabs living there. According to the mandate, Britain "shall be responsible for placing the country [Palestine] under such political, administrative, and economic conditions as will secure the establishment of the Jewish national home ... and also for safeguarding the civil and religious rights of all the inhabitants of Palestine, irrespective of race or religion." (from the Balfour Declaration)
1923: Oil is discovered in Iraq.
The first oil strike floods the countryside with oil for 10 days before workers can bring it under control. The well produces 80,000 barrels of oil a day. In 1934, the first oil pipeline connects Iraq with Tripoli in Lebanon. A second line to Haifa, Palestine, opens in January 1935.
1930s-1950s: Oil exploration begins in the desert, and later offshore, of what is now the United Arab Emirates (UAE).
Only 150,000 people, many of them nomadic Bedouins, inhabit the land that will comprise the UAE. With no roads, schools, hospitals, or factories, these people experience one of the lowest standards of living in the developing world until oil is discovered in the region.
October 19, 1954: Britain agrees to leave the Suez Canal and its occupation of Egypt.
Egypt and Britain conclude a pact on the Suez Canal, ending 72 years of British occupation. In return, Egypt agrees to maintain freedom of canal navigation. The last of the 80,000-strong British force leaves the canal zone by June 14, 1956.
July 26, 1956: Egypt nationalizes the Suez Canal.
Most likely in response to the U.S. decision to revoke its foreign aid pledge to help build the Aswan High Dam project, Nasser decides to nationalize the Suez Canal. Its toll revenues provide a significant source of needed income. This angers Britain and France, the former owners of the canal.
October 31-November 7, 1956: Suez Crisis: Israel, Britain, and France attack Egypt after the Egyptian president Nasser nationalizes the Suez Canal.
Britain and France conspire to recapture the canal they once owned, with Israeli assistance. Israel invades Sinai, and Britain and France "intervene" and occupy the canal zone. They withdraw under U.S. and Soviet pressure, unsuccessful in their attempt.
1959: Oil is discovered in Libya.
The oil boom provides Libya with newfound financial independence, transforming a country with one of the lowest standards of living into one full of opportunities, with growing employment and plans for improved housing, health care, and education. Investing much of its oil profits in other parts of the economy, Libya expands its industry, mining, and agricultural base, irrigating new areas of the desert. Most of the large farms, which are owned by the government, produce foods that were formerly imported, including corn, wheat, and citrus fruits, as well as cattle, sheep, and poultry.
1959: The first big oil reserve is discovered just off the coast of Abu Dhabi (now part of the United Arab Emirates).
Oil is first discovered off of Abu Dhabi in 1959. Just a year later, oil is also found in Abu Dhabi's desert. Dubai, Sharjah, and Ras al-Khaimah follow with discoveries of their own over the next several years. Abu Dhabi, once known as a fishing village, is today the richest of all the emirates. Dubai, originally known for its pearl trade, is the second richest.
1964: Conflict over access to fresh water from the Jordan River pits Israel against its Arab neighbors.
The countries sharing the basin of the Jordan River have extremely limited sources of fresh water, and water rights have been one of the leading sources of conflict in this troubled region. In 1964, Israel's National Water Carrier system, a complex of canals, pipelines, and tunnels built to convey water to the coastal plain of Israel and the Negev Desert, begins diverting water from the Jordan River Basin. This diversion leads to the Arab Summit of 1964, where a plan is developed to divert the headwaters of the Jordan River into Syria and Jordan -- preventing Jordan River water from reaching Israel. As the activities of the Headwater Diversion Plan take shape from 1965-67, Israel attacks construction sites. These incidents over water issues lead up to the outbreak of the Six-Day War in June 1967.
June 5-10, 1967: The Six-Day War is fought between Israel and the Arab states.
Conflict ignites after three weeks of increasing tensions, including a massive Arab troop buildup in the Sinai Peninsula, as well as an Egyptian blockade of the Straits of Tiran in the Red Sea of ships to or from Israel. On June 5, 1967, Israel responds by launching a surprise attack on Egypt. Other Arab nations, including Syria, Iraq, Kuwait, and Jordan, join Egypt in the fighting. Israel seizes the Golan Heights from Syria, Sinai and the Gaza Strip from Egypt, and East Jerusalem and the West Bank from Jordan before a cease-fire is agreed upon.
June 5, 1967: Egypt closes the Suez Canal in conjunction with the Six-Day War.
Closed during the Six-Day War by the Egyptians, the Suez Canal becomes part of the boundary separating Egypt and the Israeli-occupied Sinai Peninsula after the war. The canal remains closed for the next eight years, costing Egypt considerable revenue. Many ships built after the closing (especially tankers) are too large to navigate the canal.
1970: The Aswan High Dam is built in Egypt, controlling the Nile's annual flood but changing the river's ecosystem.
A second, or "High," Aswan Dam is built with Soviet assistance to replace the older, less effective Aswan "Low" Dam. The dam has stopped the river's annual floods by trapping its waters in a reservoir and slowly releasing it during the dry season. This allows farmers along the Nile to plant year round. Unfortunately, the dam also traps the river's fertile silt, forcing the use of artificial fertilizers by farmers and causing pollution. Other effects of the dam are riverbank erosion and high levels of soil salinity.
1971: Natural gas is discovered in northeast Qatar.
The North Gas Field is among the top five largest natural gas reserves in the world.
November 1973: Saudi Arabia leads an oil boycott against the U.S. and other Western countries.
A supporter of Egypt, Jordan, and Syria in the 1967 Six-Day War against Israel, Saudi Arabia still harbors resentment when the Yom Kippur War (October War) erupts. In retaliation for U.S. support of Israel, Saudi Arabia participates in a 1973 Arab oil boycott of the U.S. and other Western nations. The price of oil quadruples, dramatically increasing Saudi Arabia's wealth and political influence.
March 6, 1975: Iraq and Iran sign the Algiers Agreement, ending their border disputes.
On March 6, 1975, Iraq and Iran sign a treaty known as the Algiers Agreement, or more precisely the Iran-Iraq Treaty on International Borders and Good Neighborly Relations, whose provisions are brokered by Jordan's King Hussein. The signing takes place at an OPEC convention in Algiers. The agreement delineates the international border between the two countries as the deepest point of the Shatt al-Arab estuary, as opposed to its eastern shore. Baghdad agrees to the treaty in return for Tehran's commitment to stop covert U.S. and Iranian support for the Kurds. In 1980 Iraqi president Saddam Hussein invades Iran, hoping in part to reverse the 1975 agreement.
September 22, 1980: Iraq invades Iran.
Though the reasons behind the war are complex, border skirmishes and a dispute over rights to the Shatt al-Arab waterway contribute to the warfare. Iraq seizes thousands of square miles and several important oil fields. Over an eight-year period, more than 500,000 Iraqis and Iranians die, with neither side able to claim victory.
1982: Oman launches programs designed to combat pollution and prevent other environmental catastrophes.
During the 1980s in Oman, oil and tar from passing ships cover the country's beaches, pollution endangers many of its migratory birds, and corals are being damaged by anchors, fishing nets, and other equipment. One plan to eliminate oil spills focuses on building an area where tankers can safely discharge their ballast.
Mid-1980s: Yemen and Saudi Arabia clash over the discovery of oil in the Empty Quarter.
Oil reserves are discovered in the Empty Quarter, a vast desert that extends over much of Northern Yemen and southeastern Saudi Arabia. Conflicting claims to the potentially valuable land cause tension, largely because there is no defined boundary between the two countries.
1986: Commercial extraction of Yemen's oil reserves begins.
Earnings from oil production and refinement will result in significant contributions to the Yemeni economy over the next decade. Talks of the reunification of Northern and Southern Yemen accelerate.
1992: Heavy soil erosion prompts two Turkish businessmen to raise public awareness of environmental issues.
Businessmen Hayrettin Karaca and Nihat Gokyigit establish the Turkish Foundation for Combating Soil Erosion, for Reforestation and the Protection of Natural Habitats (TEMA) in 1992. Because 45 percent of Turkey's work force is involved in agriculture and nearly 80 percent of total land area is threatened by soil erosion in particular, this is considered a major concern in Turkey.
1994: Saudi production of desalinated water reaches cities in the center of the kingdom.
Because of its lack of fresh water resources, Saudi Arabia develops a process to remove salt from sea water (desalination) to serve the water needs of its people. Saudi Arabia currently produces more desalinated water than any other country in the world. This water is used both for drinking and for agricultural irrigation. By 1994, production capacity for desalinated water has reached 714,218,000 gallons per day -- enough water to cover the needs of the cities on the eastern and western coasts as well as some cities inland. By 2000, the capital city of Riyadh would receive desalinated water from the Gulf, 500 kilometers away.
November 1994: The Atatürk Dam opens in Turkey.
The Atatürk Dam is one of 22 planned dams and 19 planned hydroelectric plants on the Euphrates and Tigris Rivers. The overall project costs exceed $34 billion and result in the displacement of largely Kurdish populations.
1998-1999: A drought reduces water levels in Israel's Lake Kinneret to dangerously low levels.
Lake Kinneret contains most of Israel's water supply. In this desert region, Israel and the rest of the Middle East engage in ongoing negotiations about water supplies, water partnerships, and water technologies.
February 4-May 30, 1998: Two of the poorest and most isolated provinces in Afghanistan are rocked by two earthquakes just three months apart.
Two major quakes measuring 6.1 and 6.9 on the Richter scale originate from nearly the same site in the northeast provinces of Takhar and Badakshan. Landslides level homes and villages, trapping many under rubble and leaving thousands of terrified survivors clinging to exposed mountainsides. An estimated 10,000 people are killed and 45,000 left homeless.
August 17, 1999: Nearly 18,000 die when two major earthquakes hit western Turkey.
The August earthquake, registering 7.8 on the Richter scale, is centered near the city of Izmit, in densely populated western Turkey. In addition to the 18,000 deaths, another 27,000 people are injured. Damage extends to 340,000 houses and businesses. The quake is believed to have pushed Anatolia four feet closer to Europe. On November 12, another 760 are killed and 5,000 injured when a second large earthquake, measuring 7.2, hits Duzce. The total damage for the two quakes is estimated at between $10 billion and $25 billion.
August 2000: Natural gas is discovered off the coast of Israel.
Should the recently discovered reserves of natural gas off of Israel's coast prove large, tapping them could reduce the country's immense dependence on foreign suppliers of energy, as could Israeli research into solar and wind power. Currently, for political reasons, Israel's energy demand is met by suppliers outside of the Arab world.
January 28, 2001: Egypt, Lebanon, Syria, and Jordan sign an agreement on a $1 billion gas pipeline project.
The project promises to build two pipelines to transport Egyptian natural gas to Middle East partners and to European markets.
March 1, 2001: The Hawar Islands are awarded to Bahrain over Qatar's objections.
The International Court of Justice settles a five-year-old dispute between neighboring countries Bahrain and Qatar over territorial rights to the Hawar Islands and adjoining natural-gas fields in the Gulf of Bahrain.
April 9-11, 2001: An international commission gathers in Lebanon to discuss sustainable development.
The Economic and Social Commission for Western Asia (ESCWA) convenes a Thematic Round Table in Beirut to discuss regional concerns about sustainable development, fresh water supplies, land use, poverty, standards of living, and technology. The commission representatives prepare for the "Rio + 10" World Summit on Sustainable Development.
May 2002: Locusts invade Afghanistan's northern plains, threatening crop production.
The lack of an effective control program has allowed hundreds of millions of locusts to threaten nearly 70 percent of the crops in parts of northern Afghanistan, the country's most productive agricultural area. Several million rural households are potentially affected by the swarm. Insecticides and traditional trench traps are being used to combat the insects. | 2026-01-31T02:15:55.648572 |
239,003 | 3.678485 | http://mathforum.org/library/problems/sets/discrete_difficulty.html | Coding for Level of Difficulty:
For a full explanation, see A Rubric for Coding Problem Difficulty, from: Renninger, K. A. & Feldman-Riordan, C. (in preparation). "Technology as a tool for developing students' mathematical thinking." (The help of Crystal Akers and Alice Henriques in clarifying this coding scheme is gratefully acknowledged.)
Coding of problem difficulty focuses on the mathematical challenges represented by the problem, the difficulty of the mathematical concept, and the difficulty of mathematical calculations for students at a given level of problem solving. The rating scale consists of 5 levels of difficulty, wherein a Level 5 problem is a very difficult problem for students in a given grade band.
Level 1. Only one concept needs to be worked on; the mathematics is rudimentary and represents prior knowledge rather than something new. Example:
A Traveling Salesperson Problem. Tom's mother needs to make several stops around the city and then get home. What is the shortest route without going back to one of the stops?
In this problem, there is only one concept that needs to be worked on, finding the shortest round trip for the mother to complete her errands. The problem can be solved using prior mathematical knowledge.
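Since the errand route is a small instance of the traveling salesperson problem, a brute-force search makes the idea concrete. The sketch below is illustrative only: the home location and stop coordinates are invented, and straight-line distance stands in for real city distances.

```python
from itertools import permutations

# Hypothetical coordinates; a real problem would use actual city stops.
def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def shortest_round_trip(home, stops):
    best_length, best_order = float("inf"), None
    for order in permutations(stops):          # try every visiting order
        route = [home, *order, home]           # start and end at home
        length = sum(distance(p, q) for p, q in zip(route, route[1:]))
        if length < best_length:
            best_length, best_order = length, order
    return best_length, best_order

length, order = shortest_round_trip((0, 0), [(2, 3), (5, 1), (1, 6)])
print(round(length, 2), order)
```

Brute force checks all n! orders, which is fine for a handful of errands; students at this level typically solve the problem by listing routes in much the same way.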
Level 2. Either a) the concept is clearly stated within the problem, and the mathematics is challenging for students at the given level of the PoW, or b) the concept requires some "stretching" for students at this level, and the mathematics is based on prior knowledge. (Note: Problems that require attention to explanation are likely to be found at Level 3, rather than Level 2, because of the difficulty involved in explaining mathematical understanding.) Example:
This classic number theory problem investigates properties of prime numbers, perfect squares, and counting in a problem that involves opening and closing locker doors.
This problem could be solved by creating a model and simulating the opening and closing of the lockers. The mathematics involved, factors and multiples, is prior knowledge for students in this grade band. The requirement to "discover" a pattern and to determine which lockers are touched exactly twice, however, makes this a level 2 rather than a level 1 problem.
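For readers who want to verify the pattern, here is a minimal simulation sketch in Python; the count of 100 lockers is an assumption, since the excerpt does not restate the exact setup.

```python
# Each pass p toggles lockers p, 2p, 3p, ..., so locker k is touched
# once per divisor of k. An odd touch count leaves a locker open.
def simulate_lockers(n=100):
    touches = [0] * (n + 1)
    for p in range(1, n + 1):
        for locker in range(p, n + 1, p):
            touches[locker] += 1
    open_lockers = [k for k in range(1, n + 1) if touches[k] % 2 == 1]
    touched_twice = [k for k in range(1, n + 1) if touches[k] == 2]
    return open_lockers, touched_twice

opened, twice = simulate_lockers()
print(opened)  # perfect squares: 1, 4, 9, 16, ...
print(twice)   # primes: exactly two divisors, 1 and the locker number
```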
Level 3. The problem (a) contains a "twist" or additional problem requirement that students in this grade band may overlook even though they can complete the problem accurately, and (b) requires discourse knowledge of mathematical concepts and basic mathematical ability appropriate to students at this level. (Note: At the elementary level, multiple parts within a problem make what may initially appear to be a Level 3 problem into a Level 4 problem.) Example:
After the parade, the people on the float I was on shook hands with each other. The Mayor came over and shook hands with only the people he knew. How many people did he know if there were 1625 handshakes altogether?
Although this problem may at first appear to be the traditional handshake problem, it has a slight "twist": the mayor only shakes hands with the people that he knows. Providing an additional challenge, solving the problem requires that students explain mathematically how to account for this "twist."
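To see how the twist plays out numerically: if n people on the float shake hands with each other, that accounts for n(n-1)/2 handshakes, and the mayor adds k more, so n(n-1)/2 + k = 1625. Assuming the mayor can know at most the n people aboard (our reading of the problem), a short search finds the answer:

```python
# Find n riders and k mayoral handshakes with n*(n-1)//2 + k == 1625.
def mayor_acquaintances(total=1625):
    n = 2
    while n * (n - 1) // 2 <= total:
        k = total - n * (n - 1) // 2
        if 0 <= k <= n:              # the mayor knows at most n riders
            yield n, k
        n += 1

for n, k in mayor_acquaintances():
    print(f"{n} riders -> the mayor knew {k} people")
# 57 riders give 1596 handshakes, leaving 29 for the mayor.
```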
Level 4. The problem includes the elements listed in difficulty Level 3 and contains an algorithm new to students in this grade band; students may miss the problem by getting bogged down in the math but not by missing the concept; students may not finish the problem or may not attempt all parts of the problem. Example:
This problem involves the Jefferson, Adams, and Webster apportionment methods.
This problem is complicated by the fact that students must consider three different apportionment methods. Students could become bogged down in the calculations in any of the methods. An added requirement is that students recognize that the U.S. Constitution guarantees that every state is entitled to at least one representative. Missing this element may lead students astray.
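As one example of the calculations students must wade through, here is a hedged sketch of Jefferson's divisor method (the other two methods round differently); the state populations and house size below are invented, and the sketch omits the at-least-one-seat guarantee the problem also tests.

```python
import math

def jefferson(populations, seats):
    """Bisect for a divisor d so that sum(floor(pop / d)) == seats."""
    lo, hi = 1.0, float(sum(populations))
    for _ in range(200):
        d = (lo + hi) / 2
        quotas = [math.floor(p / d) for p in populations]
        total = sum(quotas)
        if total == seats:
            return quotas
        if total > seats:
            lo = d            # divisor too small: too many seats awarded
        else:
            hi = d            # divisor too large: too few seats awarded
    raise ValueError("no exact divisor found; a tie-break rule is needed")

# Hypothetical populations for four states, 39 seats to apportion.
print(jefferson([21878, 9713, 4167, 3252], 39))   # e.g. [22, 10, 4, 3]
```

Adams's method uses the ceiling instead of the floor, and Webster's rounds to the nearest integer; chasing a working divisor for each method is exactly where students can get bogged down.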
Level 5. The problem includes the elements listed in difficulty Level 3 and requires discourse knowledge of mathematical concepts and mathematical ability above the level of the students in this grade band; or it contains a concept, theorem, or algorithm that a rater familiar with this mathematics topic does not recognize. Example:
This problem is based on the Stable Marriage Algorithm, which requires students to make the best match possible between a set of girls and a set of boys desiring to date each other.
This problem requires that students learn about a new concept, the Stable Marriage Algorithm, and understand it well enough to apply it to the problem. The problem also includes many smaller questions that students might overlook.
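For context, the Stable Marriage Algorithm is usually presented via the Gale-Shapley proposal procedure. The sketch below is a minimal illustration with invented two-person preference lists, not the problem's actual data:

```python
def stable_match(boys_prefs, girls_prefs):
    free = list(boys_prefs)                      # boys still unmatched
    next_pick = {b: 0 for b in boys_prefs}       # next girl each boy asks
    engaged = {}                                 # girl -> boy
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girls_prefs.items()}
    while free:
        b = free.pop(0)
        g = boys_prefs[b][next_pick[b]]          # best girl not yet asked
        next_pick[b] += 1
        if g not in engaged:
            engaged[g] = b                       # first proposal accepted
        elif rank[g][b] < rank[g][engaged[g]]:
            free.append(engaged[g])              # girl trades up
            engaged[g] = b
        else:
            free.append(b)                       # rejected; b asks again later
    return engaged

boys = {"Al": ["Ann", "Bea"], "Bo": ["Bea", "Ann"]}
girls = {"Ann": ["Bo", "Al"], "Bea": ["Al", "Bo"]}
print(stable_match(boys, girls))  # {'Ann': 'Al', 'Bea': 'Bo'} - stable
```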
| 2026-01-21T21:25:06.649612 |
1,108,674 | 3.583986 | http://www.sms.si.edu/irlspec/Eichhornia_crassipes.htm | Potentially Misidentified Species:
In Florida, Eichhornia crassipes may be confused with the floating form of a somewhat similar-appearing native aquatic plant, frog's bit (Limnobium spongia). The presence of small, white flowers and petioles that are not bulbous or inflated aids in distinguishing the native plant from water hyacinth (Langeland and Burks 1998).
II. HABITAT AND DISTRIBUTION
Eichhornia crassipes can be found in all types of Florida freshwater habitats. The species is native to the Amazon Basin of Brazil but has been introduced to tropical and subtropical regions around the world (Langeland and Burks 1998). Holm et al. (1977) noted that at the time of publication 56 countries, including the United States, had reported it as a noxious weed.
Within the U.S., E. crassipes occurs throughout the southeast north to Virginia and west to Texas, as well as in California and Hawaii. Seasonal escapes from cultivation are reported from New York, Kentucky, Tennessee and Missouri, but populations apparently do not survive through winter. The plant previously occurred in Arizona, Arkansas, and Washington State but is now considered eradicated in these locations (Ramey 2001).
E. crassipes occurs in all but a handful of counties in the Florida peninsula and about half the counties in the panhandle and north Florida. It is a freshwater species that is widespread in all six counties within the IRL watershed.
III. LIFE HISTORY AND POPULATION BIOLOGY
Age, Size, Lifespan:
Aerial portions of Eichhornia crassipes generally grow to 0.5 m in height, although individuals in some Asian populations may reach nearly 1 m (Gopal 1987).
Water hyacinth mats are capable of attaining incredibly high plant density and biomass. A single hectare of dense E. crassipes mat can contain more than 360 metric tons of plant biomass. Exhaustive management efforts in Florida over the last 20 years have considerably reduced the amount of aquatic habitat choked by this invader (UF/IFAS 2001).
Water hyacinth is capable of sexual and asexual reproduction, and both modes are important to the species' success as a pernicious aquatic invader. In mild climates, plants can flower year-round, and from early spring to late fall elsewhere. They can produce an abundance of seeds (Flora of North America 2003, Langeland and Burks 1998). A study by Barrett (1980b) confirmed that tropical E. crassipes populations produced twice as many seeds as did temperate populations and attributed the difference to higher rates of pollinating insect visitation in the tropics. Seed germination tends to occur when water levels are down and the seedlings can grow in saturated soils.
Vegetative reproduction occurs via the breaking off of rosettes of clonal individuals. The stolons (horizontal shoots capable of forming new shoots and adventitious roots from nodes) are easily broken by wind or wave action, and floating clonal plants and mats are readily transported via wind or water movement (Barrett 1980a, Langeland and Burks 1998).
E. crassipes produces a thin-walled, capsule-like fruit that is protected within structures formed from the perianth, the outer whorls of the flowers. Each capsule can hold as many as 450 seeds, each about 4 mm long by 1 mm thick (Gopal 1987). Germination typically occurs in wet soil.
IV. PHYSICAL TOLERANCES
Although Eichhornia crassipes is excluded from cold climates due to temperature limitations, it does exhibit a degree of freeze tolerance. Aerial portions of the plant killed back by moderate freeze events can quickly regrow from submerged stem tips protected from freezing by water (Langeland and Burks 1998).
Holm et al. (1977) indicate that water hyacinth is intolerant of brackish conditions. Experimental studies by de Casabianca and Laugier (1995) demonstrated an inverse relationship between salinity and Eichhornia crassipes plant yield; no plant production and cankerous plants resulted at salinities above 6 ppt, and irreversible physiological damage occurred above 8 ppt. Water hyacinth is capable of growing in low-salinity coastal lagoon habitats, e.g., in West Africa during the rainy season (ISSG).
V. COMMUNITY ECOLOGY
Small forage fish utilize the floating mats and suspended root masses as a refuge, although anthocyanins and other soluble plant pigments in the roots may protect the roots from herbivores (Gopal 1987).
VI. INVASION INFORMATION
The U.S. invasion history of water hyacinth is well documented. The Brazil native was first introduced to the U.S. as an ornamental aquatic plant at a New Orleans, LA exposition in 1884. Eichhornia crassipes escaped from cultivation to arrive in Florida by 1890, and over the ensuing 60 years, dense mats of this highly invasive plant took over more than 50,000 ha of Florida freshwater habitat (Gopal and Sharma 1981, Schmitz et al. 1993).
The amount of Florida aquatic habitat choked by dense water hyacinth mats is currently much less than during the first 100 years after the arrival of the species. Waterways are kept clear of dense infestations only through extraordinary management efforts involving field crews engaged in full-time mechanical removal and biocidal control of E. crassipes. Complete eradication from Florida is impossible.
Potential to Compete With Natives:
Water hyacinth is a Category 1 invasive exotic in Florida, capable of altering native plant communities by displacing native species and changing community structures or ecological functions (FLEPPC).
Holm et al. (1977) describe Eichhornia crassipes as one of the worst weeds in the world.
The capacity of water hyacinth to invade and overtake aquatic habitat is astounding. Growth rates are explosive, and vegetative population doubling can take place in 1-3 weeks (Mitchell 1976, Wolverton and McDonald 1979, Langeland and Burks 1998).
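A back-of-the-envelope sketch shows why such doubling times are alarming; the 1-hectare starting mat and the 2-week doubling time below are illustrative assumptions within the 1-3 week range cited above.

```python
# Unchecked exponential growth: area(t) = initial * 2 ** (weeks / doubling_time)
def coverage_ha(initial_ha, weeks, doubling_weeks=2):
    return initial_ha * 2 ** (weeks / doubling_weeks)

for weeks in (4, 12, 24):
    print(f"{weeks:>2} weeks -> {coverage_ha(1.0, weeks):,.0f} ha")
# 4 weeks -> 4 ha, 12 weeks -> 64 ha, 24 weeks -> 4,096 ha (before any limits)
```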
The species can quickly dominate natural areas and can dramatically alter the species composition, structure, and function of native plant and animal communities (Langeland and Burks 1998).
The suspended root system may account for up to half of the plant biomass. The adventitious offshoots are clonal plants that break off of the parent to immediately thrive on their own, and they are also capable of taking root in moist soil in low-water conditions.
Possible Economic Consequences of Invasion:
Large, dense Eichhornia crassipes mats can degrade water quality and can choke waterways. Plant respiration and biomass decay can result in oxygen depletion and fish kills (FDEP undated).
The documented negative economic impacts of water hyacinth invasion worldwide have included the clogging of irrigation channels, the choking off of navigational routes, the smothering of rice paddies, the loss of fishing areas, an increase in breeding habitat available to disease-transmitting mosquitoes, and others (Room and Fernando 1992, ISSG). Substantial ongoing costs are also associated with removal and maintenance control of water hyacinth.
Barrett S.C.H. 1980a. Sexual reproduction in Eichhornia crassipes (water
Hyacinth). 1. Fertility of clones from diverse regions. Journal of Applied
Barrett S.C.H. 1980b. Sexual reproduction in Eichhornia crassipes (water
hyacinth). II. Seed production in natural populations. The Journal of Applied
de Casabianca M.-L. and T. Laugier. 1995. Eichhornia crassipes
production on petroliferous wastewaters: effects of salinity. Bioresource
Flora of North America. 2003. 26:39-41. Published online.
Florida Department of Environmental Protection. Undated. Water hyacinth
management - A good example of maintenance control in Florida. FDEP Bureau of
Invasive Plant Management Circular 19. 3 p.
Gopal B. and Sharma K.P. 1981. Water-hyacinth (Eichhornia crassipes),
most troublesome weed of the world. Hindasia Publications, Delhi, India. 219 p.
Gopal B. 1987. Water hyacinth. Elsevier Science Publishers, Amsterdam. 471 p.
Holm LG, Plucknett DL, Pancho JV, Herberger JP. 1977. The world's worst weeds:
Distribution and biology. Honolulu: University Press of Hawaii. 609 pp.
Langeland KA, and KC Burks (Eds.). 1998. Identification and Biology of
Non-Native Plants in Florida's Natural Areas. UF/IFAS. 165 p.
Mitchell D.S. 1976. The growth and management of Eichhornia crassipes
and Salvinia spp. In their native environment and in alien situations.
In: Varshney C.K. and J. Rzoska (Eds.). Aquatic weeds in southeast Asia. W.
Junk Publishers, Netherlands. 396 p.
Ramey V. 2001. Non-Native Invasive Aquatic Plants in the United States:
Eichhornia crassipes. Center for Aquatic and Invasive Plants, University
of Florida and Sea Grant. Available online.
Room P.M. and I.V.S. Fernando. 1992. Weed invasions countered by biological
control: Salvinia molesta and Eichhornia crassipes in Sri Lanka,
Aquatic Botany 42:99-107.
Schmitz D.C., Schardt J.D., Leslie A.J., Dray F.A., Osbourne J.A. and B.V.
Nelson. 1993. The ecological impact and management history of three invasive
alien aquatic plant species in Florida. In: McKnight (Ed.). Biological
pollution-The control and impact of invasive exotic species. Indiana Academy
of Science, Indianapolis. 261 p.
Wolverton B.C. and R.C. McDonald. 1979. Waterhyacinth (Eichhornia
crassipes) productivity and harvesting studies. Economic Botany 33:1-10.
J. Masterson, Smithsonian Marine Station
| 2026-02-04T09:02:46.799793 |
708,361 | 3.623757 | http://www.baylor.edu/lakewaco_wetlands/index.php?id=34631 | |Butterfly Gardening Bibliography of online and print resources.|
Butterfly and Caterpillar Gardening
by Sharon Peregrine Johnson
In Caterpillars in the Field and Garden, Allen states that “Happily, butterflies are still plentiful throughout much of the United States. However, in the areas where most people live, the urban and suburban population centers, both the diversity of butterfly species and the numbers of individual butterflies have been greatly diminished. The major reason for the reduced numbers of butterflies is that people have destroyed their habitats.” Plots of land are replaced by paved parking lots or the “normal American suburban landscape of a limited palette of non-native shrubs, flowers, and grasses, such as rhododendrons, roses, and lawn grasses that is a sterile environment to butterflies.”
This loss of habitat can be changed since "most butterflies are small and do not need vast vistas of wilderness to survive" [Allen].
Why save butterflies?
According to Allen, butterflies are:
- Early warning indicators of the deterioration of the environment.
- Significant actors on the ecological stage.
- A food source for other animals.
- Pollinators of many plants.
Additionally, they represent beauty, freedom, and the human soul to many cultures and civilizations and have a direct positive effect on future generations.
BUTTERFLY AND CATERPILLAR GARDENING
- Adult food sources/Nectar plants – In the garden, these are most often plants that provide nectar for adult butterflies.
- Host plants – Plants that provide a site for the butterfly to lay eggs and a food source for the emerging caterpillar.
- Shelter – Woody plants located near the nectar plants will provide butterflies with shelter during bad weather and at night.
- Provide windbreaks to shelter butterflies
- Flat stones on which adult butterflies can bask and warm up.
- Water – Butterflies cannot drink from open water, but instead prefer very wet sand or soil where they can obtain salts.
- Pesticides – Avoid them; treat pest problems manually (if possible). Fire ants are predators on butterfly larvae and should be controlled using growth hormone treatment (not poisons).
Planning a Successful Butterfly Garden
- Be familiar with general gardening techniques.
- Determine which species live in your area and which ones you want to attract.
- Plan your vegetable garden so that you include sufficient cabbage family plants (cabbage, turnips, broccoli, kale, etc.) and carrot family plants (carrots, dill, and parsley) to account for the needs of both your family and butterflies [Opler].
- Research – Before you begin planting, determine what species of butterfly you want to attract, which plants to use, and how much space is required. Butterfly gardening books, local nurseries, local clubs and organizations, and websites can help you make decisions. Also, ask your local native plant society if there are restrictions on particular plant species in your area.
- Select a site – A sunny location with spots for basking, shelter from wind and rain, and sources of fresh water. Mud or sand puddles are used by adult male butterflies to obtain essential salts, needed for reproduction.
- Allen, T. J., Brock, J. P., & Glassberg, J. (2005). Caterpillars in the field and garden. Oxford: Oxford University Press.
- Central Texas Butterfly Gardening. (2005). Suggestions for making a ButterflyGarden in Central Texas. Austin: University of Texas. http://www.utexas.edu/tmm/tnhc/entomology/butterfly/bfgarden.html
- Cowley, M. (2005). Native butterfly gardening. Florida: NSIS. http://www.nsis.org/butterfly/butterfly.html.
- Opler, P. A. , & Wright, A. B. (1999). A Field guide to western butterflies. New York: Houghton Mifflin Co. | 2026-01-29T04:16:56.234307 |
668,361 | 3.59259 | http://www.cs.nyu.edu/courses/spring09/V22.0202-002/lectures/lecture-02.html | Start Lecture #2
Homework: Read Chapter 1 (Introduction)
Software (and hardware, but that is not this course) is often implemented in layers. The higher layers use the facilities provided by lower layers.
Alternatively said, the upper layers are written using a more powerful and more abstract virtual machine than the lower layers.
In other words, each layer is written as though it runs on the virtual machine supplied by the lower layers and in turn provides a more abstract (pleasant) virtual machine for the higher layers to run on.
Using a broad brush, the layers are.
An important distinction is that the kernel runs in privileged/kernel/supervisor mode, whereas compilers, editors, shells, linkers, browsers, etc. run in user mode.
The kernel itself is normally layered, e.g.
The machine independent I/O part is written assuming virtual (i.e., idealized) hardware. For example, the machine independent I/O portion simply reads a block from a "disk". But in reality one must deal with the specific disk controller. Often the machine independent part is more than one layer.
The term OS is not well defined. Is it just the kernel? How about the libraries? The utilities? All these are certainly system software but it is not clear how much is part of the OS.
As mentioned above, the OS raises the abstraction level by providing a higher level virtual machine. A second related key objective for the OS is to manage the resources provided by this virtual machine.
The kernel itself raises the level of abstraction and hides details. For example a user (of the kernel) can write to a file (a concept not present in hardware) and ignore whether the file resides on a floppy, a CD-ROM, or a hard disk. The user can also ignore issues such as whether the file is stored contiguously or is broken into blocks.
Well designed abstractions are a key to managing complexity.
The kernel must manage the resources to resolve conflicts between users. Note that when we say users, we are not referring directly to humans, but instead to processes (typically) running on behalf of humans.
Typically the resource is shared or multiplexed between the users. This can take the form of time-multiplexing, where the users take turns (e.g., the processor resource) or space-multiplexing, where each user gets a part of the resource (e.g., a disk drive).
With sharing comes various issues such as protection, privacy, fairness, etc.
Homework: What are the two main functions of an operating system?
Answer: Concurrency! Per Brinch Hansen in Operating Systems Principles (Prentice Hall, 1973) writes.
The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.
Homework: 1. (unless otherwise stated, problems numbers are from the end of the current chapter in Tanenbaum.)
The subsection headings describe the hardware as well as the OS; we are naturally more interested in the latter. These two development paths are related, as the improving hardware enabled the more advanced OS features.
One user (program; perhaps several humans) at a time. Although this time frame predates my own usage, computers without serious operating systems existed during the second generation and were now available to a wider (but still very select) audience.
I have fond memories of the Bendix G-15 (paper tape) and the IBM 1620 (cards; typewriter; decimal). During the short time you had the machine, it was truly a personal computer.
Many jobs were batched together, but the systems were still uniprogrammed: a job, once started, was run to completion without interruption and then flushed from the system.
A change from the previous generation is that the OS was not reloaded for each job and hence needed to be protected from the user's execution. Previously, the beginning of your job contained the trivial OS-like support features.
The batches of user jobs were prepared offline (cards to tape) using a separate computer (an IBM 1401 with a 1402 reader/punch). The tape was brought to the main computer (an IBM 7090/7094) where the output to be printed was written on another tape. This tape went back to the 1401 and was printed on a 1403.
In my opinion this is the biggest change from the OS point of view. It is with multiprogramming (multiple users executing concurrently) that we have the operating system fielding requests whose arrival order is non-deterministic (at the pragmatic level). Now operating systems become notoriously hard to get right due to the inability to test a significant percentage of the possible interactions and the inability to reproduce bugs on request.
Multiprogramming makes the system a more efficient user of resources, but is more difficult.
Timesharing adds multiple interactive users, in effect spooling their own jobs on a remote terminal. Deciding when to switch and which process to switch to is called scheduling.
Homework: Why was timesharing not widespread on second generation computers?
Remark: I very much recommend reading all of 1.2, not for this course especially, but for general interest. Tanenbaum writes well and is my age so lived through much of the history himself.
The picture above is very simplified. (For one thing, today separate buses are used to Memory and Video.)
A bus is a set of wires that connect two or more devices. Only one message can be on the bus at a time. All the devices receive the message: there are no switches in between to steer the message to the desired destination, but often some of the wires form an address that indicates which devices should actually process the message.
Only at a few points will we get into sufficient detail to need to understand the various processor registers such as program counters, stack pointers, and Program Status Words (PSWs). We will ignore computer design issues such as pipelining and superscalar.
We do, however, need the notion of a trap, that is an instruction that atomically switches the processor into privileged mode and jumps to a pre-defined physical address. We will have much more to say about traps later in the course.
Many of the OS issues introduced by multi-processors of any flavor are also found in a uni-processor, multi-programmed system. In particular, successfully handling the concurrency offered by the second class of systems, goes a long way toward preparing for the first class. The remaining multi-processor issues are not covered in this course.
We will ignore caches, but will (later) discuss demand paging, which is very similar (although demand paging and caches use largely disjoint terminology). In both cases, the goal is to combine large, slow memory with small, fast memory to achieve the effect of large, fast memory. We cover caches in our computer design (aka architecture) courses (you can access my class notes off my home page).
The central memory in a system is called RAM (Random Access Memory). A key point is that it is volatile, i.e. the memory loses its data if power is turned off.
ROM (Read Only Memory) is used for (low-level control) software that often comes with devices on general purpose computers, and for the entire software system on non-user-programmable devices such as microwaves and wristwatches. It is also used for non-changing data. A modern, familiar ROM is CD-ROM (or the denser DVD, or the even denser Blu-ray). ROM is non-volatile.
But often this unchangeable data needs to be changed (e.g., to fix bugs). This gives rise first to PROM (Programmable ROM), which, like a CD-R, can be written once (as opposed to being mass produced already written like a CD-ROM), and then to EPROM (Erasable PROM), which is like a CD-RW. An EPROM is especially convenient if it can be erased with a normal circuit (EEPROM, Electrically EPROM or Flash RAM).
As mentioned above when discussing OS/MFT and OS/MVT, multiprogramming requires that we protect one process from another. That is we need to translate the virtual addresses of each program into physical addresses such that the physical address of each process do not clash. The hardware that performs this translation is called the MMU or Memory Management Unit.
When context switching from one process to another, the translation must change, which can be an expensive operation.
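As a concrete (if highly simplified) model, here is a sketch in Python of base-and-limit relocation, the simplest MMU scheme; the register values are invented for illustration:

import sys

# Toy model of base-and-limit address translation (illustrative only).
# Each process has a base register (where its memory starts in physical
# memory) and a limit register (how much memory it may use).

class MMU:
    def __init__(self, base, limit):
        self.base = base      # physical address where this process begins
        self.limit = limit    # size of the process's virtual address space

    def translate(self, vaddr):
        # Every virtual address is checked against the limit, then relocated.
        if vaddr >= self.limit:
            raise MemoryError(f"address {hex(vaddr)} exceeds limit {hex(self.limit)}")
        return self.base + vaddr

# Two processes use the same virtual address, yet touch disjoint physical memory.
p1 = MMU(base=0x10000, limit=0x4000)
p2 = MMU(base=0x20000, limit=0x8000)
print(hex(p1.translate(0x100)))   # 0x10100
print(hex(p2.translate(0x100)))   # 0x20100

On a context switch the OS reloads the base and limit values for the incoming process, which is part of the expense mentioned above.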
When we do I/O for real, I will show a real disk opened up and illustrate the components
Devices are often quite difficult to manage and a separate computer, called a controller, is used to translate OS commands into what the device requires.
The bottom of the memory hierarchy, tapes have large capacities, tiny cost per byte, and very long access times.
In addition to the disks and tapes just mentioned, I/O devices include monitors (and graphics controllers), NICs (Network Interface Controllers), Modems, Keyboards, Mice, etc.
The OS communicates with the device controller, not with the device itself. For each different controller, a corresponding device driver is included in the OS. Note that, for example, many graphics controllers are capable of controlling a standard monitor, and hence the OS needs many graphics device drivers.
In theory any SCSI (Small Computer System Interconnect) controller can control any SCSI disk. In practice this is not true as SCSI gets improved to wide SCSI, ultra SCSI, etc. The newer controllers can still control the older disks and often the newer disks can run in degraded mode with an older controller.
Three methods are employed.
We discuss these alternatives more in chapter 5. In particular, we explain the last point about halving bus accesses.
On the right is a figure showing the specifications for an Intel chip set introduced in 2000. Many different names are used, e.g., hubs are often called bridges. Most likely due to their location on the diagram to the right, the Memory Controller Hub is called the Northbridge and the I/O Controller Hub the Southbridge.
As shown on the right the chip set has two different width PCI buses. The figure below, which includes some devices themselves, does not show the two different PCI buses. This particular chip set supplies USB, but others do not. In the latter case, a PCI USB controller may be used.
The use of ISA (Industry Standard Architecture) is decreasing but is still found on most southbridges.
Note that the diagram below, which is close to figure 1-12 of the 3e, differs from the figure to the right in at least two respects. The connection between the bridges is a proprietary bus and the PCI bus is generated by the Northbridge. The figure on the right is definitely correct for the specific chip set described and is very similar to the Wikipedia article.
Remark: In January 2008, I received an email reply from Tanenbaum stating that he will try to fix the diagram in the book in the next printing.
When the power button is pressed, control starts at the BIOS, a (typically flash) ROM in the system. Control is then passed to (the tiny program stored in) the MBR (Master Boot Record), which is the first 512-byte block on the disk. Control then proceeds to the first block in the partition and from there the OS (normally via an OS loader) is loaded.
The above assumes that the boot medium selected by the BIOS was the hard disk. Other possibilities include: floppy, CD-ROM, NIC.
There is not much difference between mainframe, server, multiprocessor, and PC OS's. Indeed 3e has considerably softened the differences given in the 2e. For example Unix/Linux and Windows run on all of them.
This course covers these four classes (or one class).
Used in data centers, these systems offer tremendous I/O capabilities and extensive fault tolerance.
Perhaps the most important servers today are web servers. Again I/O (and network) performance are critical.
A multiprocessor (as opposed to a multi-computer or multiple computers or computer network or grid) means multiple processors sharing memory and controlled by a single instance of the OS, which typically can run on any of the processors. Often it can run on several simultaneously.
These existed almost from the beginning of the computer age, but now are not exotic. Indeed even my laptop is a multiprocessor.
The operating system(s) controlling a system of multiple computers often are classified as either a Network OS or a Distributed OS. A Network OS is basically a collection of ordinary PCs on a LAN that use the network facilities available on PC operating systems. Some extra utilities are often present to ease running jobs on other processors.
A more sophisticated Distributed OS is a seamless version of the above where the boundaries between the processors are made nearly invisible to users (except, perhaps, for performance). This subject is not part of our course (but often is covered in G22.2251).
In the recent past some OS systems (e.g., ME) were claimed to be tailored to client operation. Others felt that they were restricted to client operation. This seems to be gone now; a modern PC OS is fully functional. I guess for marketing reasons some of the functionality can be disabled.
This includes PDAs and phones, which are rapidly merging.
The only real difference between this class and the above is the restriction to very modest memory. However, "very modest" keeps getting bigger, and some phones now include a stripped-down Linux.
The OS is part of the device, e.g., microwave ovens and cardiac monitors. The OS is on a ROM so is not changed.
Since no user code is run, protection is not as important. In that respect the OS is similar to the very earliest computers. Embedded OS are very important commercially, but not covered much in this course.
Embedded systems that also contain sensors and communication devices so that the systems in an area can cooperate.
As the name suggests, time (more accurately timeliness) is an important consideration. There are two classes: Soft vs hard real time. In the latter missing a deadline is a fatal error—sometimes literally. Very important commercially, but not covered much in this course.
Very limited in power (both meanings of the word).
This will be very brief. Much of the rest of the course will consist of filling in the details.
A process is a program in execution. If you run the same program twice, you have created two processes. For example if you have two editors running in two windows, each instance of the editor is a separate process.
Often one distinguishes the state or context of a process—its address space (roughly its memory image), open files, etc.—from the thread of control. If one has many threads running in the same task, the result is a multithreaded task (or process).
The OS keeps information about all processes in the process table. Indeed, the OS views the process as its entry in this table. This is an example of an active entity being viewed as a data structure (cf. discrete event simulations), an observation made by Finkel in his (out of print) OS textbook.
The set of processes forms a tree via the fork system call. The forker is the parent of the forkee, which is called a child. If the system always blocks the parent until the child finishes, the tree is quite simple, just a line. But the parent (in many OSes) is free to continue executing and in particular is free to fork again, producing another child.
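A minimal Unix-only sketch of fork, here in Python for brevity (the underlying system call is the same one used from C); the PIDs printed will vary from run to run:

import os

# fork() returns 0 in the child and the child's PID in the parent.
pid = os.fork()
if pid == 0:
    # Child: a separate process running the same program.
    print(f"child  pid={os.getpid()} parent={os.getppid()}")
    os._exit(0)
else:
    # The parent is free to keep executing (or to fork again).
    print(f"parent pid={os.getpid()} forked child pid={pid}")
    os.waitpid(pid, 0)   # here the parent chooses to block until the child finishes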
A process can send a signal to another process to cause the latter to execute a predefined function (the signal handler). It can be tricky to write a program with a signal handler since the programmer does not know when in the "mainline" program the signal handler will be invoked.
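A small Unix-only sketch in Python; SIGALRM is used simply so that a signal arrives while the mainline loop runs, and the iteration during which the handler fires is deliberately unpredictable:

import signal
import time

def handler(signum, frame):
    # Runs asynchronously with respect to the mainline loop below.
    print(f"signal handler invoked for signal {signum}")

signal.signal(signal.SIGALRM, handler)   # install the handler
signal.alarm(1)                          # ask the kernel for a SIGALRM in 1 second

# Mainline program: we cannot tell in advance where the handler will interrupt it.
for i in range(5):
    print("mainline iteration", i)
    time.sleep(0.5)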
Each user is assigned a User IDentification (UID) and all processes created by that user have this UID. A child has the same UID as its parent. It is sometimes possible to change the UID of a running process. A group of users can be formed and given a Group IDentification, GID. One UID is special (the superuser or administrator) and has extra privileges.
Access to files and devices can be limited to a given UID or GID. | 2026-01-28T13:53:06.826508 |
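A tiny illustration of these IDs using Python's standard os module (Unix-only):

import os

# Every process carries the UID/GID of the user on whose behalf it runs;
# a forked child inherits them from its parent.
print("uid:", os.getuid(), "gid:", os.getgid())
print("effective uid:", os.geteuid())    # can differ, e.g., for set-uid programs
print("superuser?", os.getuid() == 0)    # UID 0 is the special superuser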
768,532 | 4.163286 | http://www.technologyreview.com/news/514656/a-more-efficient-jet-engine-is-made-from-lighter-parts-some-3-d-printed/ | A new generation of engines being developed by the world’s largest jet engine maker, CFM (a partnership between GE and Snecma of France), will allow aircraft to use about 15 percent less fuel—enough to save about $1 million per year per airplane and significantly reduce carbon emissions.
The first of these new engines, called LEAP, will feature a technology that has never been used in large-scale production jet engines before: ceramic composite materials that weigh far less than the metal alloys they'll replace and can endure far higher temperatures. The engine will also make use of parts produced through 3-D printing, a new kind of manufacturing that can produce complex shapes that would be difficult or impossible to make with conventional manufacturing techniques (see "10 Breakthrough Technologies 2013: Additive Manufacturing"). These technologies could eventually be used to make more parts of the engine, leading to further advances in efficiency, says Gareth Richards, LEAP program manager for GE Aviation.
Even though construction of the first engine began less than two weeks ago, the company already has orders for 4,500. They will be used in the Airbus A320neo, the Boeing 737 Max, and a new plane from China, the Comac C919. In addition to saving money, the engine will help manufacturers comply with current and anticipated regulations designed to reduce emissions of carbon dioxide and pollutants such as smog-forming nitrogen oxides (NOx).
One of the key innovations is the use of ceramic matrix composites developed by GE. Ceramics can withstand high temperatures, but they’re normally too brittle for use in engines. Researchers at GE developed a way to reinforce them with silicon carbide fibers, which makes them as resilient as metal.
The ceramics will decrease the amount of energy used to cool off engine parts. Current engines operate at temperatures that are actually higher than the melting point of the nickel metal alloys used inside them; to keep them from melting, the engine diverts air from a compressor inside the engine through tiny holes in the parts, creating a protective cooling layer. The ceramic composite doesn’t require this cooling, so the air can instead be used to generate thrust.
In the LEAP engine, the ceramic matrix composites will replace only some of the nickel alloy parts. But in the future, they could be used for more engine parts, further reducing losses from cooling. This change could also allow engines to run at higher temperatures, making it possible to get more thrust from a given amount of fuel. Furthermore, composites could make engines lighter—parts made from these materials weigh one-third as much as the equivalent nickel alloy parts.
The engine will also feature 3-D-printed parts that can improve the engine’s efficiency and lower emissions. The system is more sophisticated and powerful than the desktop 3-D printers that have gained attention recently. Instead of depositing materials, it uses a laser to turn metal powder into solid shapes, layer by layer. The method simplifies the manufacturing of precisely shaped fuel nozzles that help the engine run at high temperatures without producing nitrogen oxides (see “Additive Manufacturing” and “GE and EADS to Print Parts for Airplanes”).
Another engine maker, Pratt & Whitney, is developing its own advanced engine that can reduce fuel consumption by about 15 percent; buyers of the Airbus A320neo can choose either the Pratt & Whitney engine or the CFM one. But Pratt & Whitney takes a far different approach to improving efficiency. Rather than using composites, it’s introducing gears that help different parts of the engine move at optimal speeds (see “More Efficient Jet Engine Gets in Gear” and “Hybrid Wing Uses Half the Fuel of a Standard Airplane”). | 2026-01-30T03:31:56.356009 |
1,015,136 | 3.610384 | http://spacrs.wordpress.com/what-is-critical-race-theory/ | Critical Race Theory was developed out of legal scholarship. It provides a critical analysis of race and racism from a legal point of view. Since its inception within legal scholarship CRT has spread to many disciplines. CRT has basic tenets that guide its framework. These tenets are interdisciplinary and can be approached from different branches of learning.
CRT recognizes that racism is engrained in the fabric and system of the American society. The individual racist need not exist to note that institutional racism is pervasive in the dominant culture. This is the analytical lens that CRT uses in examining existing power structures. CRT identifies that these power structures are based on white privilege and white supremacy, which perpetuates the marginalization of people of color. CRT also rejects the traditions of liberalism and meritocracy. Legal discourse says that the law is neutral and colorblind, however, CRT challenges this legal “truth” by examining liberalism and meritocracy as a vehicle for self-interest, power, and privilege. CRT also recognizes that liberalism and meritocracy are often stories heard from those with wealth, power, and privilege. These stories paint a false picture of meritocracy; everyone who works hard can attain wealth, power, and privilege while ignoring the systemic inequalities that institutional racism provides.
Intersectionality within CRT points to the multidimensionality of oppressions and recognizes that race alone cannot account for disempowerment. “Intersectionality means the examination of race, sex, class, national origin, and sexual orientation, and how their combination plays out in various settings.” This is an important tenet in pointing out that CRT is critical of the many oppressions facing people of color and does not allow for a one–dimensional approach of the complexities of our world.
Narratives or counterstories, as mentioned before, contribute to the centrality of the experiences of people of color. These stories challenge the story of white supremacy and continue to give a voice to those that have been silenced by white supremacy. Counterstories take their cue from larger cultural traditions of oral histories, cuentos, family histories and parables. This is very important in preserving the history of marginalized groups whose experiences have never been legitimized within the master narrative. It challenges the notion of liberalism and meritocracy as colorblind or “value-neutral” within society while exposing racism as a main thread in the fabric of the American foundation.
Another component to CRT is the commitment to Social justice and active role scholars take in working toward “eliminating racial oppression as a broad goal of ending all forms of oppression”. This is the eventual goal of CRT and the work that most CRT scholars pursue as academics and activists.
The Critical Race Theory movement can be seen as a group of interdisciplinary scholars and activists interested in studying and changing the relationship between race, racism and power. This is crucial to understand in order to fully realize the goals of CRS in SPA. CRT is an amalgamation of concepts that have been derived from the Civil Rights and ethnic studies discourses. In the 1970s, a number of lawyers, activists, and scholars saw the work of the Civil Rights as being stalled and in many instances negated. They also saw the liberal and positivist views of laws as being colorblind and ignorant of the racism that is pervasive in the law.
The works of Derrick Bell and Alan Freeman have been attributed to the start of CRT. Bell and Freeman were frustrated with the slow pace of racial reform in the United States. They argued that the traditional approaches of combating racism were producing smaller gains than in previous years. Thus, Critical Race Theory is an outgrowth of Critical Legal Studies (CLS), which was a leftist movement that challenged traditional legal scholarship. These CRT scholars continued forward and were joined by Richard Delgado. In 1989, they held their first conference in Madison, Wisconsin. This was the beginning of the CRT as movement.
CRT has more recently had some spin-offs from the original movement. Latina/o Critical Theory (LatCrit), feisty queer-crit interest group, and Asian American Legal Scholarship are examples of the sub-disciplines within CRT. These sub-disciplines address specific issues that affect each unique community. For LatCrit and Asian American scholars they examine language and immigration policies, whereas, a small emerging group of Indian scholars examine indigenous people’s sovereignty and claims to land. This displays the diversity even within the CRT disciplines that hold CRT to maintain its multidisciplinary approach.
Delgado et al. (2001, p. 51)
Dixson et al. (2006, p. 4)
Solórzano (1998)
Solórzano (1998)
Delgado et al. (2001, p. 2)
Ladson-Billings (1998, p. 10)
Delgado et al. (2001, p. 4)
94,058 | 3.515001 | http://www.eufic.org/article/en/page/FTARCHIVE/artid/gmos-debate/ | The debate over genetically modified (GM) foods has been going on for some years now, with much of the discussion centered on whether or not these foods are safe to eat. Thanks to scientific research, improved understanding of the technology and new regulations, most parties involved in the GM debate now agree that the food and food ingredients derived from currently available genetically modified crops are not likely to present a risk for human health.
A crucial point to remember when considering the safety of GM foods, is that it is the food as it is consumed that must be examined, not the production process in isolation. This means that the properties and overall safety of the food needs to be assessed, just as we do with foods produced using conventional methods. European Union legislation requires that GM products are submitted to a rigorous safety evaluation before authorisation is given for human consumption.
The European Union, the World Health Organisation (WHO) and the Food and Agriculture Organisation (FAO) of the United Nations agreed on a methodology referred to as "substantial equivalence" as the most practical approach to assess the safety of GM foods and food ingredients.
"Substantial equivalence" focuses on the product rather than the production process. It is a rigorous procedure including a detailed list of parameters and characteristics that need to be considered including molecular characterisation of the genetic modification, agronomic characterisation, nutritional and toxicological assessments.
The 'substantial equivalence' approach acknowledges that the goal of the assessment cannot be to establish absolute safety. The important conclusion is that if, after the evaluation, the safety of the new product is comparable (substantially equivalent) to a conventional counterpart then the level of "risk" is comparable to foods that we have consumed safely for thousands of years. However, if the GM product has new traits or characteristics that make it no longer substantially equivalent (such as a higher level of a vitamin), then an additional assessment is required. This assessment focuses on the effects the new trait may have on the safety of the new food and may include various types of tests to demonstrate the safety.
The debate on GM food is far from over. Various environmental issues and the safety assessment of future generations of GM products with unique characteristics are among the issues that will continue to elicit discussion, research and testing.
It is however evident that much progress has been made with regard to developing a consensus on the food safety. As the number of GM products available in the market slowly grows, consumers can be assured that they have been subjected to a rigorous evaluation and that food authorities worldwide agree on their safety for human health.
Available evidence shows that GM foods are "not likely to present human health risks" and therefore "these foods may be eaten."
Dr Gro Harlem Brundtland, WHO Director-General, 28 Aug 2002.
Current scientific research confirms the safety of GM food.
Dr Jacques Diouf, FAO Director General, 30 August 2002.
- FAO/WHO (1991) Strategies for assessing the safety of foods produced by biotechnology. Report of a joint FAO/WHO Consultation. WHO. Switzerland.
- OECD (1993) Safety evaluation of foods produced by modern biotechnology: concepts and principles. OECD, Paris, France.
- Genetic Modification and Food. Consumer Health and Safety. ILSI Europe Concise Monograph Series, 2001
- WHO, Food Safety Programme, 20 Questions on genetically modified (GM) foods, 2002 | 2026-01-19T17:29:29.008851 |
254,056 | 3.928577 | http://www.sciencecodex.com/herbivores_select_on_floral_architecture_in_a_south_african_birdpollinated_plant-92960 | Floral displays, such as the color, shape, size, and arrangement of flowers, are typically thought to have evolved primarily in response to selection by pollinators—for animal-pollinated species, being able to attract animal vectors is vital to an individual plant's reproductive success. But can herbivores also exert similarly strong selective forces on floral characters? New research on two sister species in South Africa suggests that this may indeed be the case for inflorescence architecture in the rat's tail plant, Babiana ringens. By modifying the primary location of its floral display in response to pressure from mammalian herbivores, B. ringens may have not only reduced floral herbivory, but may also have enhanced pollination by providing a specialized perch for its principal pollinator.
Endemic to the Cape region of South Africa, Babiana ringens produces bright red flowers that are situated close to the ground on an unusual inflorescence axis that protrudes above the floral display. Its primary pollinator, the malachite sunbird, is attracted to the flower's red color and abundant nectar—the color, shape (tubular), and size of the flowers indicates that these characters likely evolved in response to sunbirds. In previous research, Bruce Anderson (University of Stellenbosch, South Africa) and colleagues discovered that the protruding modified inflorescence axis serves as a perch for sunbirds, allowing them to turn upside down in a perfect position to access nectar and facilitate pollination.
In Babiana ringens, the inflorescence axis is modified such that growth of the apical side branches is suppressed and flowers are only produced on a single branch at the ground level. A close sister species, B. hirsuta, exhibits similar inflorescence morphology, except that flowers are produced on side branches all along the stalk and not just at the base.
Caroli de Waal, a graduate student at the University of Toronto, was intrigued by how the bird perch in B. ringens may have originated. This required determining the potential selective forces responsible for influencing the inflorescence architecture in the two Babiana species. Based on field observations, De Waal and colleagues investigated whether herbivory might have played a role in the evolution of this unique bird perch. Their findings were recently published in the American Journal of Botany (http://www.amjbot.org/content/early/2012/05/10/ajb.1100295.full.pdf+html).
"It is hard not to be curious about the origin of the curious rat's tail of Babiana ringens, which is unique in the flowering plants," commented Anderson.
"We noticed that in populations of the sister species, B. hirsuta, many plants had suffered damage from herbivores, with the upper portions of the stems completely eaten off," De Waal noted. "We then started to wonder: what if herbivory could contribute to selection for the floral display in B. ringens?"
Given the close phylogenetic relatedness of these species and their similar floral morphologies and pollinators, De Waal and colleagues hypothesized that the specialized bird perch in B. ringens may have originated from a B. hirsuta-like ancestor through reduction in the production of the side branches. This could happen if mammalian herbivores preferentially eat apical flowers and left the basal flowers alone.
Indeed, when De Waal and co-authors compared herbivory rates in three B. hirsuta and three B. ringens populations, they found much higher levels of herbivore damage to B. hirsuta—as much as 53% of the inflorescences were eaten. Moreover, they found that Cape grysbok (the primary herbivore) mostly grazed the top parts of inflorescences, leaving the basal parts to continue flowering. In comparison, herbivory in B. ringens was much lower and the ground-level flowers were never eaten.
To further test their idea, De Waal and colleagues conducted a field experiment with B. hirsuta in which they manipulated flower position along the stalk – they removed side branches in plants so that they displayed either flowers only at the top or only at the bottom of the stems (the latter resembling the display of B. ringens). Using cages, they then excluded herbivores from half of the treatments.
"Our most important result," De Waal stated, "is that flowers at the tips of stems were eaten by browsing antelope, whereas flowers at ground-level were consistently ignored." Notably, this was even the case for plants that were not protected from herbivores by cages, yet were manipulated to have only ground-level flowers.
"Significantly, plants with only ground-level flowers produced more seeds," she said. "This means that plants that manage to escape damage by herbivores also have higher reproductive success, and this may be the reason why B. ringens evolved its unusual ground-level flowers."
Moreover, Anderson adds, "The overwhelming herbivore preference for plants with only apical flowers indicates why plants like B. ringens, with basal flowers, could have evolved while plants with only apical flowers would be maladapted."
"Our results indicate that in addition to pollinators, herbivores can also be important selective agents on inflorescence architecture," concluded Anderson. "This is significant because most scientists have attributed variation in floral display to selection by pollinators and the importance of herbivores is often forgotten." | 2026-01-22T02:39:28.919012 |
725,022 | 4.01341 | http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/geometry/geo-tran.html | When talking about geometric transformations, we have to be very careful about the object being transformed. We have two alternatives, either the geometric objects are transformed or the coordinate system is transformed. These two are very closely related; but, the formulae that carry out the job are different. We only cover transforming geometric objects here.
We shall start with the traditional Euclidean transformations that do not change lengths and angle measures, followed by affine transformation. Finally, we shall talk about projective transformations.
The Euclidean transformations are the most commonly used transformations. A Euclidean transformation is either a translation, a rotation, or a reflection. We shall discuss translations and rotations only.
A translation moving every point in the direction of (h, k) sends (x, y) to (x', y') with x' = x + h and y' = y + k; equivalently, x = x' - h and y = y' - k. Therefore, if a line has an equation Ax + By + C = 0, after plugging the formulae for x and y, the line has a new equation Ax' + By' + (-Ah - Bk + C) = 0.
If a point (x, y) is rotated an angle a about the coordinate origin to become a new point (x', y'), the relationships can be described as follows:

x' = xcosa - ysina
y' = xsina + ycosa

or, solving for the original point,

x = x'cosa + y'sina
y = -x'sina + y'cosa
Thus, rotating a line Ax + By + C = 0 about the origin an angle a brings it to a new equation:
(Acosa - Bsina)x' + (Asina + Bcosa)y' + C = 0
Translations and rotations can be combined into a single equation like the following:

x' = xcosa - ysina + h
y' = xsina + ycosa + k

The above rotates the point (x,y) an angle a about the coordinate origin and then translates the rotated result in the direction of (h,k). However, if translation (h,k) is applied first followed by a rotation of angle a (about the coordinate origin), we will have the following:

x' = (x + h)cosa - (y + k)sina
y' = (x + h)sina + (y + k)cosa
Therefore, rotation and translation are not commutative!
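This is easy to check numerically. Below is a short sketch in Python with NumPy, using 3-by-3 matrices and homogeneous coordinates (the angle and offsets are arbitrary choices):

import numpy as np

a, h, k = np.radians(30), 2.0, 1.0

R = np.array([[np.cos(a), -np.sin(a), 0],    # rotation by a about the origin
              [np.sin(a),  np.cos(a), 0],
              [0,          0,         1]])
T = np.array([[1, 0, h],                     # translation by (h, k)
              [0, 1, k],
              [0, 0, 1]])

p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous form
print(T @ R @ p)   # rotate first, then translate
print(R @ T @ p)   # translate first, then rotate: a different point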
In the above discussion, we always present two matrices, A and B, one for transforming x to x' (i.e., x'=Ax) and the other for transforming x' to x (i.e., x=Bx'). You can verify that the product of A and B is the identity matrix. In other words, A and B are inverse matrices of each other. Therefore, if we know one of them, the other is the inverse of the given one. For example, if you know A that transforms x to x', the matrix that transforms x' back to x is the inverse of A.
Let R be a transformation matrix sending x' to x: x = Rx'. Writing the conic as x^T A x = 0, where A is the 3-by-3 symmetric matrix of the conic's coefficients, plugging this equation of x into the conic equation gives the following:

(Rx')^T A (Rx') = 0

Rearranging terms yields

x'^T (R^T A R) x' = 0

This is the new equation of the given conic after the specified transformation. Note that the new 3-by-3 symmetric matrix that represents the conic in a new position is the following:

R^T A R
Now you see the power of matrices in describing the concept of transformation.
A translation in space, written with 4-by-4 matrices and homogeneous coordinates, is

[ x' ]   [ 1 0 0 p ] [ x ]
[ y' ] = [ 0 1 0 q ] [ y ]
[ z' ]   [ 0 0 1 r ] [ z ]
[ 1  ]   [ 0 0 0 1 ] [ 1 ]

The above translates points by adding a vector <p, q, r>.
Rotations in space are more complex, because we can either rotate about the x-axis, the y-axis or the z-axis. When rotating about the z-axis, only coordinates of x and y will change and the z-coordinate will be the same. In effect, it is exactly a rotation about the origin in the xy-plane. Therefore, the rotation equation is

x' = xcosa - ysina
y' = xsina + ycosa
z' = z
With this set of equations, letting a be 90 degree rotates (1,0,0) to (0,1,0) and (0,1,0) to (-1,0,0). Therefore, the x-axis rotates to the y-axis and the y-axis rotates to the negative direction of the original x-axis. This is the effect of rotating about the z-axis 90 degree.
Based on the same idea, rotating about the x-axis an angle a is the following:

y' = ycosa - zsina
z' = ysina + zcosa
x' = x
Let us verify the above again with a being 90 degree. This rotates (0,1,0) to (0,0,1) and (0,0,1) to (0,-1,0). Thus, the y-axis rotates to the z-axis and the z-axis rotates to the negative direction of the original y-axis.
But, rotating about the y-axis is different! This is because of the way angles are measured. In a right-handed system, if your right hand holds a coordinate axis with your thumb pointing in the positive direction, your other four fingers give the positive direction of angle measuring. More precisely, the positive direction for measuring angles is from the z-axis to the x-axis. However, traditionally the angle measure is from the x-axis to the z-axis. As a result, rotating an angle a about the y-axis in the sense of a right-handed system is equivalent to rotating an angle -a measuring from the x-axis to the z-axis. Therefore, the rotation equations are

x' = xcosa + zsina
y' = y
z' = -xsina + zcosa
Let us verify the above with rotating about the y-axis 90 degree. This rotates (1,0,0) to (0,0,-1) and (0,0,1) to (1,0,0). Therefore, the x-axis rotates to the negative direction of the z-axis and the z-axis rotates to the original x-axis.
A rotation matrix and a translation matrix can be combined into a single matrix as follows, where the r's in the upper-left 3-by-3 matrix form a rotation and p, q and r form a translation vector. This matrix represents rotations followed by a translation:

[ r11 r12 r13 p ]
[ r21 r22 r23 q ]
[ r31 r32 r33 r ]
[  0   0   0  1 ]
You can apply this transformation to a plane and a quadric surface just as we did for lines and conics earlier.
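A short NumPy sketch of such a combined 4-by-4 matrix, applying a rotation about the z-axis followed by a translation to a point in homogeneous coordinates (the angle and translation are arbitrary):

import numpy as np

def rigid(a, p, q, r):
    # Rotation about the z-axis in the upper-left 3-by-3 block,
    # translation vector <p, q, r> in the last column.
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    M[:3, 3] = [p, q, r]
    return M

M = rigid(np.pi / 2, 1, 2, 3)
point = np.array([1.0, 0.0, 0.0, 1.0])  # (1,0,0) in homogeneous coordinates
print(M @ point)   # [1. 3. 3. 1.]: rotated to (0,1,0), then shifted by <1,2,3>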
Euclidean transformations preserve length and angle measure. Moreover, the shape of a geometric object will not change. That is, lines transform to lines, planes transform to planes, circles transform to circles, and ellipsoids transform to ellipsoids. Only the position and orientation of the object will change. Affine transformations are generalizations of Euclidean transformations. Under affine transformations, lines transform to lines; but circles become ellipses. Length and angle are not preserved.
In this section, we shall discuss scaling, shear and general affine transformations.
Scaling can be applied to all axes, each with a different scaling factor. For example, if the x-, y- and z-axis are scaled with scaling factors p, q and r, respectively, the transformation matrix is:

[ p 0 0 0 ]
[ 0 q 0 0 ]
[ 0 0 r 0 ]
[ 0 0 0 1 ]
How far a direction is pushed is determined by a shearing factor. On the xy-plane, one can push in the x-direction, positive or negative, and keep the y-direction unchanged. Or, one can push in the y-direction and keep the x-direction fixed. The following is a shear transformation in the x-direction with shearing factor a:

x' = x + ay
y' = y
The shear transformation in the y-direction with shearing factor b is the following:

x' = x
y' = y + bx
In space, one can push in two coordinate axis directions and keep the third one fixed. The following is the shear transformation in both x- and y-directions with shearing factors a and b, respectively, keeping the z-coordinate the same:

[ 1 0 a 0 ]
[ 0 1 b 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
Let us take a look at the effect of this shear transformation. Expanding the matrix equation gives the following:

x' = x + az
y' = y + bz
z' = z

Thus, a point (x, y, z) in space is transformed to (x + az, y + bz, z). Therefore, the z-coordinate does not change, while (x, y) is "pushed" in the direction of (a, b, 0) with a factor z.
The following is the shear transformation in the xz-direction, with shearing factors a and b and the y-coordinate fixed:

x' = x + ay
y' = y
z' = z + by
The following is the shear transformation in the yz-direction, with shearing factors a and b and the x-coordinate fixed:

x' = x
y' = y + ax
z' = z + bx
Comparing with all previously discussed matrices, rotations and translations included, you will see that all of them fit into this form (a 4-by-4 matrix whose fourth row is 0, 0, 0 and 1) and hence are affine transformations. Affine transformations do not alter the degree of a polynomial, parallel lines/planes are transformed to parallel lines/planes, and intersecting lines/planes are transformed to intersecting lines and planes. However, affine transformations do not preserve lengths and angle measures and as a result they will change the shape of a geometric object. The following shows the result of an affine transformation applied to a torus. A torus is described by a degree four polynomial. The red surface is still of degree four; but its shape is changed by an affine transformation.
Note that the matrix form of an affine transformation is a 4-by-4 matrix with the fourth row 0, 0, 0 and 1. Moreover, if the inverse of an affine transformation exists, this affine transformation is referred to as non-singular; otherwise, it is singular. We do not use singular affine transformations in this course.
Projective transformations are the most general "linear" transformations and require the use of homogeneous coordinates. Given a point in space in homogeneous coordinate (x,y,z,w) and its image under a projective transform (x',y',z',w'), a projective transform has the following form:

x' = Px

where P is a 4-by-4 matrix. In the above, the 4-by-4 matrices must be non-singular (i.e., invertible). Therefore, projective transformations are more general than affine transformations because the fourth row does not have to contain 0, 0, 0 and 1.
Projective transformations can bring finite points to infinity and points at infinity to finite range. Let us take a look at an example. Consider the following projective transformation, written in the plane with 3-by-3 matrices in the form x = Px':

[ x ]   [ 2 1 0 ] [ x' ]
[ y ] = [ 1 1 0 ] [ y' ]
[ w ]   [ 2 1 1 ] [ w' ]

Obviously, this transformation sends (x,y,w) = (1,0,1) to (x',y',w') = (1,-1,0). That is, this projective transformation sends (1,0) on the xy-plane to the point at infinity in direction <1,-1>. From the right-hand side of the matrix equation x = Px' we have

x = 2x' + y'
y = x' + y'
w = 2x' + y' + w'

Let us consider a circle x^2 + y^2 = 1 or, in homogeneous form, x^2 + y^2 - w^2 = 0. Plugging the above equations into the circle equation changes it to the following:

x^2 + 2xy + y^2 - 4xw - 2yw - w^2 = 0

Dividing the above by w^2 to convert it back to conventional form yields

x^2 + 2xy + y^2 - 4x - 2y - 1 = 0

This is a parabola! (Why? Its second-degree terms satisfy B^2 - 4AC = 2^2 - 4(1)(1) = 0.) Therefore, a circle that has no point at infinity is transformed to a parabola that does have a point at infinity.
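The computation above is easy to verify numerically. The sketch below, in Python with NumPy, uses the matrix P from the text (sending x' to x), maps sample points of the circle through the forward transformation, and checks that their images satisfy the parabola's equation:

import numpy as np

# P sends primed coordinates back to unprimed ones: x = P x'.
P = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 1.0]])
P_fwd = np.linalg.inv(P)   # the forward transformation x' = P_fwd x

def parabola(x, y):
    return x**2 + 2*x*y + y**2 - 4*x - 2*y - 1

for t in np.linspace(0.0, 2.0 * np.pi, 7):
    circle_pt = np.array([np.cos(t), np.sin(t), 1.0])  # a point on x^2 + y^2 = 1
    xp, yp, wp = P_fwd @ circle_pt
    if abs(wp) > 1e-9:                       # (1, 0) itself is sent to infinity
        print(round(parabola(xp / wp, yp / wp), 12))   # ~0.0 for each sample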
While projective transformations, like affine transformations, do not change the degree of a polynomial, two parallel (resp., intersecting) lines/planes can be transformed to two intersecting (resp., parallel) lines/planes. Please verify this fact yourself.
Although we do not use these facts and the concept of projective transformations immediately, it will be very helpful in later lectures.
Matrix Multiplication and Transformations
We have introduced to you several transformations. We always show to you two forms, one from x to x' and the other the inverse from x' to x. In many cases, one may need several transformations to bring an object to its desired position. For example, one may need a transformation in matrix form q=Ap bringing p to q, followed by a second transformation r=Bq bringing q to r, followed by yet another transformation s=Cr bringing r to s. The net effect of p -> q -> r -> s can be summarized into a single transformation represented by the product of all involved matrices. Note that the first (resp., last) transformation matrix is the right-most (resp., left-most) in the multiplication sequence:

s = Cr = C(Bq) = CBq = CB(Ap) = CBAp

Therefore, to compute the net effect, we just compute CBA and use it as a single transformation, which brings p to s.
Let us take a look at an example. We want to perform four transformations to an object, in order: a transformation A first, followed by B, then C, and finally D. Therefore, the net effect of transforming a point x of the initial object to the corresponding point x' after the above four transformations is computed as x' = Hx = DCBAx, where H = DCBA.
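A brief NumPy sketch of this composition; the three component transformations are arbitrary examples, not taken from the text:

import numpy as np

def translate(h, k):
    return np.array([[1, 0, h], [0, 1, k], [0, 0, 1]], dtype=float)

def rotate(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scale(p, q):
    return np.diag([p, q, 1.0])

# Apply A first, then B, then C: the single net matrix is the product CBA.
A, B, C = rotate(np.pi / 4), translate(3, 1), scale(2, 2)
H = C @ B @ A

x = np.array([1.0, 1.0, 1.0])                 # a point in homogeneous form
print(np.allclose(H @ x, C @ (B @ (A @ x))))  # True: same net effect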
93,831 | 3.532954 | http://phys.org/news180704455.html | (PhysOrg.com) -- Physicists theorize that quantum phenomena could provide a major boost to batteries, with the potential to increase energy density up to 10 times that of lithium ion batteries. According to a new proposal, billions of nanoscale capacitors could take advantage of quantum effects to overcome electric arcing, an electrical breakdown phenomenon which limits the amount of charge that conventional capacitors can store.
In their study, Alfred Hubler and Onyeama Osuagwu, both of the University of Illinois at Urbana-Champaign, have investigated energy storage capacity in arrays of nano vacuum tubes, which contain little or no gas. When the tubes' gap size - or the distance between electrodes - is about 10 nanometers wide, electric arcing is suppressed, preventing energy loss. Further, each tube can be addressed individually, making the technology digital and offering the possibility for data storage in conjunction with energy storage.
The physicists calculated that the large electric field exhibited under these conditions could lead to an energy density anywhere between two and 10 times greater than that of today's best battery technologies. The scientists also estimated that the power density (i.e., the charge-discharge rates) could be orders of magnitude greater than that of today's batteries. In addition, the nature of the charging and discharging avoids the leakage faced by conventional batteries, so that the nano vacuum batteries waste very little energy and have a virtually unlimited lifetime.
The scientists say that it may be possible to build a prototype of the battery in the next year. Since the energy density is independent from the materials used, the nano vacuum tubes could be built from inexpensive, non-toxic materials. The nano vacuum tubes could also be fabricated using existing photolithographic techniques, and could be easily combined with integrated circuits.
As for the possibility of data storage, the physicists explain that each nano vacuum tube can have two gates, an energy gate and an information gate. Each nano vacuum tube can also be charged and discharged individually, in any arbitrary order. By inserting a MOSFET (metal-oxide-semiconductor field-effect transistor) in the wall of a nano vacuum tube, the state of the tube can be determined without charging or discharging it.
"For example, to store the number 22, one would convert it to binary notation 22 = 10110," the scientists wrote in their paper. "Then use the energy gates to charge the first, third and fourth tube and leave the second and fifth tube uncharged. When the energy gate holds a charge, it induces an electric field in the MOSFET that partially cancels the electric field from the electrodes of the information gate, which modifies the threshold voltage of the MOSFET. During read-out, a voltage slightly above the regular threshold voltages is applied to the information gate, and the MOSFET channel will become conducting or remain insulating, depending on the voltage threshold of the MOSFET, which depends on the charge on the energy gate. The current flow through the MOSFET channel is measured and provides a binary code, reproducing the stored data."
As Hubler explained in a recent article in MIT's Technology Review, the digital quantum battery concept can be viewed in different ways as a variation of several technologies.
"If you look at it from a digital electronics perspective, it's just a flash drive," Hubler said. "If you look at it from an electrical engineering perspective, you would say these are miniaturized vacuum tubes like in plasma TVs. If you talk to a physicist, this is a network of capacitors."
Hubler has applied for DARPA funding to develop a prototype of the digital quantum battery, and find out what will actually happen when loading the nano vacuum tubes with large amounts of energy.
More information: Alfred W. Hubler and Onyeama Osuagwu. "Digital quantum batteries: Energy and information storage in nano vacuum tube arrays." To be published in Complexity. | 2026-01-19T17:24:54.890224 |
1,111,864 | 3.655 | http://www.daviddarling.info/encyclopedia/R/rheumatoid_arthritis.html | Rheumatoid arthritis is an inflammatory
disease that causes pain, swelling, stiffness, and loss of function in the
joints. It has several special features that
make it different from other kinds of arthritis.
For example, rheumatoid arthritis generally occurs in a symmetrical pattern,
meaning that if one knee or hand is involved, the other one also is. The
disease often affects the wrist joints and
the finger joints closest to the hand. It can
also affect other parts of the body besides the joints. In addition, people
with rheumatoid arthritis may have fatigue, occasional fevers,
and a general sense of not feeling well.
Rheumatoid arthritis affects people differently. For some people, it lasts
only a few months or a year or two and goes away without causing any noticeable
damage. Other people have mild or moderate forms of the disease, with periods
of worsening symptoms, called flares, and periods in which they feel better,
called remissions. Still others have a severe form of the disease that is
active most of the time, lasts for many years or a lifetime, and leads to
serious joint damage and disability.
Features of Rheumatoid Arthritis
- Tender, warm, swollen joints
- Symmetrical pattern of affected joints
- Joint inflammation often affecting the wrist and finger joints
closest to the hand
- Joint inflammation sometimes affecting other joints, including
the neck, shoulders, elbows, hips, knees, ankles, and feet
- Fatigue, occasional fevers, a general sense of not feeling well
- Pain and stiffness lasting for more than 30 minutes in the morning
or after a long rest
- Symptoms that last for many years
- Variability of symptoms among people with the disease
Although rheumatoid arthritis can have serious effects on a person's life
and well-being, current treatment strategies – including pain-relieving
drugs and medications that slow joint damage, a balance between rest and
exercise, and patient education and support programs – allow most
people with the disease to lead active and productive lives. In recent years,
research has led to a new understanding of rheumatoid arthritis and has
increased the likelihood that, in time, researchers will find even better
ways to treat the disease.
How rheumatoid arthritis develops and progresses
A joint is a place where two bones meet. The ends of the bones are covered
by cartilage, which allows for easy movement
of the two bones. The joint is surrounded by a capsule that protects and
supports it. The joint capsule is lined with a type of tissue called synovium,
which produces synovial fluid, a
clear substance that lubricates and nourishes the cartilage and bones inside
the joint capsule.
A joint (the place where two bones meet) is surrounded by a capsule that protects and supports it. The joint capsule is lined with a type of tissue called synovium, which produces synovial fluid that lubricates and nourishes joint tissues. In rheumatoid arthritis, the synovium becomes inflamed, causing warmth, redness, swelling, and pain. As the disease progresses, the inflamed synovium invades and damages the cartilage and bone of the joint. Surrounding muscles, ligaments, and tendons become weakened. Rheumatoid arthritis also can cause more generalized bone loss that may lead to osteoporosis (fragile bones that are prone to fracture).
Like many other rheumatic diseases, rheumatoid arthritis is an autoimmune
disease (auto means self), so-called because a person's immune
system, which normally helps protect the body from infection and disease,
attacks joint tissues for unknown reasons. White blood cells, the agents
of the immune system, travel to the synovium and cause inflammation
(synovitis), characterized by warmth, redness, swelling, and pain –
typical symptoms of rheumatoid arthritis. During the inflammation process,
the normally thin synovium becomes thick and makes the joint swollen and
puffy to the touch.
As rheumatoid arthritis progresses, the inflamed synovium invades and destroys
the cartilage and bone within the joint. The surrounding muscles,
ligaments, and tendons
that support and stabilize the joint become weak and unable to work normally.
These effects lead to the pain and joint damage often seen in rheumatoid
arthritis. Researchers studying rheumatoid arthritis now believe that it
begins to damage bones during the first year or two that a person has the
disease, one reason why early diagnosis and treatment are so important.
Other parts of the body
Some people with rheumatoid arthritis also have symptoms in places other
than their joints. Many people with rheumatoid arthritis develop anemia,
or a decrease in the production of red blood cells. Other effects that occur
less often include neck pain and dry eyes and mouth. Very rarely, people
may have inflammation of the blood vessels, the lining of the lungs, or
the sac enclosing the heart.
Occurrence and impact of rheumatoid arthritis
Scientists estimate that about 1.3 million people, or about 0.6 percent
of the U.S. adult population, have rheumatoid arthritis. Interestingly,
some recent studies have suggested that although the number of new cases
of rheumatoid arthritis for older people is increasing, the overall number
of new cases may actually be going down.
Rheumatoid arthritis occurs in all races and ethnic groups. Although the
disease often begins in middle age and occurs with increased frequency in
older people, children and young adults also develop it. Like some other
forms of arthritis, rheumatoid arthritis occurs much more frequently in
women than in men. About two to three times as many women as men have the disease.
By all measures, the financial and social impact of all types of arthritis,
including rheumatoid arthritis, is substantial, both for the Nation and
for individuals. From an economic standpoint, the medical and surgical treatment
for rheumatoid arthritis and the wages lost because of disability caused
by the disease add up to billions of dollars annually. Daily joint pain
is an inevitable consequence of the disease, and most patients also experience
some degree of depression, anxiety, and feelings of helplessness. For some
people, rheumatoid arthritis can interfere with normal daily activities,
limit job opportunities, or disrupt the joys and responsibilities of family
life. However, there are arthritis self-management programs that help people
cope with the pain and other effects of the disease and help them lead independent
and productive lives.
Searching for the causes of rheumatoid arthritis
Scientists still do not know exactly what causes the immune system to turn
against itself in rheumatoid arthritis, but research over the last few years
has begun to piece together the factors involved.
Genetic (inherited) factors: Scientists
have discovered that certain genes known to play a role in the immune
system are associated with a tendency to develop rheumatoid arthritis.
Some people with rheumatoid arthritis do not have these particular genes;
still others have these genes but never develop the disease. These somewhat
contradictory data suggest that a person's genetic makeup plays an important
role in determining if he or she will develop rheumatoid arthritis, but
it is not the only factor. What is clear, however, is that more than one
gene is involved in determining whether a person develops rheumatoid arthritis
and how severe the disease will become.
Even though all the answers are not known, one thing is certain: rheumatoid
arthritis develops as a result of an interaction of many factors. Researchers
are trying to understand these factors and how they work together.
Environmental factors: Many scientists think that something must
occur to trigger the disease process in people whose genetic makeup makes
them susceptible to rheumatoid arthritis. A viral or bacterial infection
appears likely, but the exact agent is not yet known. This does not mean
that rheumatoid arthritis is contagious: a person cannot catch it from
Other factors: Some scientists also think that a variety of hormonal
factors may be involved. Women are more likely to develop rheumatoid arthritis
than men, pregnancy may improve the disease, and the disease may flare
after a pregnancy. Breastfeeding may also aggravate the disease. Contraceptive
use may alter a person's likelihood of developing rheumatoid arthritis.
Scientists think that levels of the immune system molecules interleukin
12 (IL-12) and tumor necrosis factor-alpha (TNF-α) may change along
with the changing hormone levels seen in pregnant women. This change may
contribute to the swelling and tissue destruction seen in rheumatoid arthritis.
These hormones, or possibly deficiencies or changes in certain hormones,
may promote the development of rheumatoid arthritis in a genetically susceptible
person who has been exposed to a triggering agent from the environment.
Diagnosing and treating rheumatoid arthritis
Diagnosing and treating rheumatoid arthritis requires a team effort involving
the patient and several types of health care professionals. A person can
go to his or her family doctor or internist or to a rheumatologist. A rheumatologist
is a doctor who specializes in arthritis and other diseases of the joints,
bones, and muscles. As treatment progresses, other professionals often help.
These may include nurses, physical or occupational therapists, orthopaedic
surgeons, psychologists, and social workers.
Studies have shown that patients who are well informed and participate actively
in their own care have less pain and make fewer visits to the doctor than
do other patients with rheumatoid arthritis.
Patient education and arthritis self-management programs, as well as support
groups, help people to become better informed and to participate in their
own care. Self-management programs teach about rheumatoid arthritis and
its treatments, exercise and relaxation approaches, communication between
patients and health care providers, and problem solving. Research on these
programs has shown that they help people:
- understand the disease
- reduce their pain while remaining active
- cope physically, emotionally, and mentally
- feel greater control over the disease and build a sense of confidence
in the ability to function and lead full, active, and independent lives.
Rheumatoid arthritis can be difficult to diagnose in its early stages for
several reasons. First, there is no single test for the disease. In addition,
symptoms differ from person to person and can be more severe in some people
than in others. Also, symptoms can be similar to those of other types of
arthritis and joint conditions, and it may take some time for other conditions
to be ruled out. Finally, the full range of symptoms develops over time,
and only a few symptoms may be present in the early stages. As a result,
doctors use a variety of the following tools to diagnose the disease and
to rule out other conditions:
Medical history: This is the patient's
description of symptoms and when and how they began. Good communication
between patient and doctor is especially important here. For example,
the patient's description of pain, stiffness, and joint function, and how these change over time, is critical to the doctor's initial assessment of the disease and to monitoring its course.
Physical examination: This includes the doctor's examination of
the joints, skin, reflexes, and muscle strength.
Laboratory tests: One common test is for rheumatoid factor, an
antibody that is present eventually in the blood of most people with rheumatoid
arthritis. (An antibody is a special protein made by the immune system
that normally helps fight foreign substances in the body.) Not all people
with rheumatoid arthritis test positive for rheumatoid factor, however,
especially early in the disease. Also, some people test positive for rheumatoid
factor, yet never develop the disease. Other common laboratory tests include
a white blood cell count, a blood test for anemia, and a test of the erythrocyte
sedimentation rate (often called the sed rate), which measures inflammation
in the body. C-reactive protein is another common test that measures disease activity.
X-rays: X-rays are used
to determine the degree of joint destruction. They are not useful in the
early stages of rheumatoid arthritis before bone damage is evident, but
they can be used later to monitor the progression of the disease.
Doctors use a variety of approaches to treat rheumatoid arthritis. These
are used in different combinations and at different times during the course
of the disease and are chosen according to the patient's individual situation.
No matter what treatment the doctor and patient choose, however, the goals
are the same: to relieve pain, reduce inflammation, slow down or stop joint
damage, and improve the person's sense of well-being and ability to function.
Good communication between the patient and doctor is necessary for effective
treatment. Talking to the doctor can help ensure that exercise and pain
management programs are provided as needed, and that drugs are prescribed
appropriately. Talking to the doctor can also help people who are making
decisions about surgery.
Goals of Treatment
- Relieve pain
- Reduce inflammation
- Slow down or stop joint damage
- Improve a person's sense of well-being and ability to function
Current Treatment Approaches
- Health behavior changes
- Medications
- Surgery
- Routine monitoring and ongoing care
- Alternative and complementary therapies
Health behavior changes: Certain activities can help improve a person's ability to function independently and maintain a positive outlook.
Rest and exercise: People with rheumatoid
arthritis need a good balance between rest and exercise, with more rest
when the disease is active and more exercise when it is not. Rest helps
to reduce active joint inflammation and pain and to fight fatigue. The
length of time for rest will vary from person to person, but in general,
shorter rest breaks every now and then are more helpful than long times
spent in bed.
Exercise is important for maintaining healthy and strong muscles, preserving
joint mobility, and maintaining flexibility. Exercise can also help people
sleep well, reduce pain, maintain a positive attitude, and lose weight.
Exercise programs should take into account the person's physical abilities,
limitations, and changing needs.
Joint care: Some people find using a splint for a short time around
a painful joint reduces pain and swelling by supporting the joint and
letting it rest. Splints are used mostly on wrists and hands, but also
on ankles and feet. A doctor or a physical or occupational therapist can
help a person choose a splint and make sure it fits properly. Other ways
to reduce stress on joints include self-help devices (for example, zipper
pullers, long-handled shoe horns); devices to help with getting on and
off chairs, toilet seats, and beds; and changes in the ways that a person
carries out daily activities.
Stress reduction: People with rheumatoid arthritis face emotional
challenges as well as physical ones. The emotions they feel because of
the disease – fear, anger, and frustration – combined with
any pain and physical limitations can increase their stress level. Although
there is no evidence that stress plays a role in causing rheumatoid arthritis,
it can make living with the disease difficult at times. Stress also may
affect the amount of pain a person feels. There are a number of successful
techniques for coping with stress. Regular rest periods can help, as can
relaxation, distraction, or visualization exercises. Exercise programs,
participation in support groups, and good communication with the health
care team are other ways to reduce stress.
Healthful diet: With the exception of several specific types of
oils, there is no scientific evidence that any specific food or nutrient
helps or harms people with rheumatoid arthritis. However, an overall nutritious
diet with enough – but not an excess of – calories, protein,
and calcium is important. Some people may need to be careful about drinking
alcoholic beverages because of the medications they take for rheumatoid
arthritis. Those taking methotrexate may need to avoid alcohol altogether
because one of the most serious long-term side effects of methotrexate
is liver damage.
Climate: Some people notice that their arthritis gets worse when
there is a sudden change in the weather. However, there is no evidence
that a specific climate can prevent or reduce the effects of rheumatoid
arthritis. Moving to a new place with a different climate usually does
not make a long-term difference in a person's rheumatoid arthritis.
Medications: Most people who have rheumatoid arthritis
take medications. Some medications are used only for pain relief; others
are used to reduce inflammation. Still others, often called disease-modifying
antirheumatic drugs (DMARDs), are used to try to slow the course of the
disease. The person's general condition, the current and predicted severity
of the illness, the length of time he or she will take the drug, and the
drug's effectiveness and potential side effects are important considerations
in prescribing drugs for rheumatoid arthritis. The table below shows currently
used rheumatoid arthritis medications, along with their uses and effects,
side effects, and monitoring requirements.
Biologic response modifiers are new drugs used for the treatment of rheumatoid
arthritis. They can help reduce inflammation and structural damage to the
joints by blocking the action of cytokines,
proteins of the body's immune system that trigger inflammation during normal
immune responses. Three of these drugs, etanercept (Enbrel), infliximab
(Remicade), and adalimumab (Humira), reduce inflammation by blocking the
action of TNF-α molecules. Another drug, called anakinra (Kineret), works
by blocking a protein called interleukin 1 (IL-1) that is seen in excess
in patients with rheumatoid arthritis.
For many years, doctors initially prescribed aspirin
or other pain-relieving drugs for rheumatoid arthritis, as well as rest
and physical therapy. They usually prescribed more powerful drugs later
only if the disease worsened.
Today, however, many doctors have changed their approach, especially for
patients with severe, rapidly progressing rheumatoid arthritis. Studies
show that early treatment with more powerful drugs, and the use of drug
combinations instead of one medication alone, may be more effective in reducing
or preventing joint damage. Once the disease improves or is in remission,
the doctor may gradually reduce the dosage or prescribe a milder medication.
Surgery: Several types of surgery are available
to patients with severe joint damage. The primary purpose of these procedures
is to reduce pain, improve the affected joint's function, and improve the
patient's ability to perform daily activities. Surgery is not for everyone,
however, and the decision should be made only after careful consideration
by patient and doctor. Together they should discuss the patient's overall
health, the condition of the joint or tendon that will be operated on, and
the reason for, as well as the risks and benefits of, the surgical procedure.
Cost may be another factor. Commonly performed surgical procedures include
joint replacement, tendon reconstruction, and synovectomy.
Joint replacement: This is the most frequently
performed surgery for rheumatoid arthritis, and it is done primarily to
relieve pain and improve or preserve joint function. Artificial joints
are not always permanent and may eventually have to be replaced. This
may be an important consideration for young people.
Tendon reconstruction: Rheumatoid arthritis can damage and even
rupture tendons, the tissues that attach muscle to bone. This surgery,
which is used most frequently on the hands, reconstructs the damaged tendon
by attaching an intact tendon to it. This procedure can help to restore
hand function, especially if the tendon is completely ruptured.
Synovectomy: In this surgery, the doctor actually removes the inflamed
synovial tissue. Synovectomy by itself is seldom performed now because
not all of the tissue can be removed, and it eventually grows back. Synovectomy
is done as part of reconstructive surgery, especially tendon reconstruction.
Routine monitoring and ongoing care: Regular medical
care is important to monitor the course of the disease, determine the effectiveness
and any negative effects of medications, and change therapies as needed.
Monitoring typically includes regular visits to the doctor. It also may
include blood, urine, and other laboratory tests and X-rays.
People with rheumatoid arthritis may want to discuss preventing osteoporosis
with their doctors as part of their long-term, ongoing care. Osteoporosis
is a condition in which bones become weakened and fragile. Having rheumatoid
arthritis increases the risk of developing osteoporosis for both men and
women, particularly if a person takes corticosteroids. Such patients may
want to discuss with their doctors the potential benefits of calcium and
vitamin D supplements, hormone therapy, or other treatments for osteoporosis.
Alternative and complementary therapies: Special
diets, vitamin supplements, and other alternative approaches have been suggested
for treating rheumatoid arthritis. Although many of these approaches may
not be harmful in and of themselves, controlled scientific studies either
have not been conducted on them or have found no definite benefit to these
therapies. Some alternative or complementary approaches may help the patient
cope or reduce some of the stress associated with living with a chronic
illness. As with any therapy, patients should discuss the benefits and drawbacks
with their doctors before beginning an alternative or new type of therapy.
If the doctor feels the approach has value and will not be harmful, it can
be incorporated into a patient's treatment plan. However, it is important
not to neglect regular health care.
Analgesics and Nonsteroidal Anti-inflammatory Drugs (NSAIDs)

Uses and effects: Analgesics relieve pain; NSAIDs are a large class of medications useful against pain and inflammation. A number of NSAIDs are available over the counter. More than a dozen others – including a subclass called COX-2 inhibitors – are available only with a prescription.

Side effects: NSAIDs can cause stomach irritation or, less often, can affect kidney function. The longer a person uses NSAIDs, the more likely he or she is to have side effects, ranging from mild to serious. Many other drugs cannot be taken when a patient is being treated with NSAIDs because they alter the way the body uses or eliminates these other drugs. NSAIDs sometimes are associated with serious gastrointestinal problems, including ulcers, bleeding, and perforation of the stomach or intestine. People over age 65 and those with any history of ulcers or gastrointestinal bleeding should use NSAIDs with caution.

Monitoring and precautions: Check with your health care provider or pharmacist before you take NSAIDs. Before taking traditional NSAIDs, let your provider know if you drink alcohol or use blood thinners or if you have any of the following: sensitivity or allergy to aspirin or similar drugs, kidney or liver disease, heart disease, high blood pressure, asthma, or peptic ulcers.

Acetaminophen-based analgesics

Uses and effects: Nonprescription medications used to relieve pain. Examples are aspirin-free Anacin, Excedrin caplets, Panadol, Tylenol, and Tylenol Arthritis.

Side effects: Usually no side effects when taken as directed.

Monitoring and precautions: Not to be taken with alcohol or with other products containing acetaminophen. Not to be used for more than 10 days unless directed by a physician.

Aspirin

Uses and effects: Aspirin is used to reduce pain, swelling, and inflammation, allowing patients to move more easily and carry out normal activities. It is generally part of early and ongoing therapy.

Side effects: Upset stomach; tendency to bruise easily; ulcers, pain, or discomfort; diarrhea or indigestion; nausea or vomiting.

Monitoring and precautions: Doctor monitoring is needed.

Traditional NSAIDs

Uses and effects: NSAIDs help relieve pain within hours of administration in dosages available over the counter (available for all three medications). They relieve pain and inflammation in dosages available in prescription form (ibuprofen and ketoprofen). It may take several days to reduce inflammation.

Side effects: For all traditional NSAIDs: abdominal or stomach cramps, pain, or discomfort; diarrhea; dizziness; drowsiness or light-headedness; headache; heartburn or indigestion; peptic ulcers; nausea or vomiting; possible kidney and liver damage (rare).

Monitoring and precautions: For all traditional NSAIDs: before taking these drugs, let your doctor know if you drink alcohol or use blood thinners or if you have or have had any of the following: sensitivity or allergy to aspirin or similar drugs, kidney or liver disease, heart disease, high blood pressure, asthma, or peptic ulcers.

Corticosteroids

Uses and effects: These are steroids given by mouth or injection. They are used to relieve inflammation and reduce swelling, redness, itching, and allergic reactions.

Side effects: Increased appetite, indigestion, nervousness or restlessness.

Monitoring and precautions: For all corticosteroids, let your doctor know if you have one of the following: fungal infection, history of tuberculosis, underactive thyroid, herpes simplex of the eye, high blood pressure, osteoporosis, or stomach ulcer.

Corticosteroids (pills and joint injections)

Uses and effects: These steroids are available in pill form or as an injection into a joint. Improvements are seen in several hours up to 24 hours after administration. There is potential for serious side effects, especially at high doses. They are used for severe flares and when the disease does not respond to NSAIDs and DMARDs.

Side effects: Osteoporosis, mood changes, fragile skin, easy bruising, fluid retention, weight gain, muscle weakness, onset or worsening of diabetes, cataracts, increased risk of infection, hypertension (high blood pressure).

Monitoring and precautions: Doctor monitoring for continued effectiveness of medication and for side effects is needed.

Disease-Modifying Antirheumatic Drugs (DMARDs)

Uses and effects: These are common arthritis medications. They relieve painful, swollen joints and slow joint damage, and several DMARDs may be used over the disease course. They take a few weeks or months to have an effect, and may produce significant improvements for many patients. Exactly how they work is still unknown.

Side effects: Side effects vary with each medicine. DMARDs may increase risk of infection, hair loss, and kidney or liver damage.

Monitoring and precautions: Doctor monitoring allows the risk of toxicities to be weighed against the potential benefits of individual medications.

Azathioprine

Uses and effects: This drug was first used in higher doses in cancer chemotherapy and organ transplantation. It is used in patients who have not responded to other drugs, and in combination therapy.

Side effects: Cough or hoarseness, fever or chills, loss of appetite, lower back or side pain, nausea or vomiting, painful or difficult urination, unusual tiredness or weakness.

Monitoring and precautions: Before taking this drug, tell your doctor if you use allopurinol or have kidney or liver disease. This drug can reduce your ability to fight infection, so call your doctor immediately if you develop chills, fever, or a cough. Regular blood and liver function tests are needed.

Cyclosporine

Uses and effects: This medication was first used in organ transplantation to prevent rejection. It is used in patients who have not responded to other drugs.

Side effects: Bleeding, tender, or enlarged gums; high blood pressure; increase in hair growth; kidney problems; trembling and shaking of hands.

Monitoring and precautions: Before taking this drug, tell your doctor if you have one of the following: sensitivity to castor oil (if receiving the drug by injection), liver or kidney disease, active infection, or high blood pressure. Using this drug may make you more susceptible to infection and certain cancers. Do not take live vaccines while on this drug.

Hydroxychloroquine

Uses and effects: It may take several months to notice the benefits of this drug, which include reducing the signs and symptoms of rheumatoid arthritis.

Side effects: Diarrhea, eye problems (rare), headache, loss of appetite, nausea or vomiting, stomach cramps or pain.

Monitoring and precautions: Doctor monitoring is important, particularly if you have an allergy to any antimalarial drug or a retinal abnormality.

Gold sodium thiomalate

Uses and effects: This was one of the first DMARDs used to treat rheumatoid arthritis.

Side effects: Redness or soreness of tongue; swelling or bleeding gums; skin rash or itching; ulcers or sores on lips, mouth, or throat; irritation of the tongue. Joint pain may occur for one or two days after injection.

Monitoring and precautions: Before taking this drug, tell your doctor if you have any of the following: lupus, skin rash, kidney disease, or colitis. Periodic urine and blood tests are needed to check for side effects.

Leflunomide

Uses and effects: This drug reduces signs and symptoms and slows structural damage to joints caused by arthritis.

Side effects: Bloody or cloudy urine; congestion in chest; cough; diarrhea; difficult, burning, or painful urination or breathing; fever; hair loss; headache; heartburn; loss of appetite; nausea and/or vomiting; skin rash; stomach pain; sneezing; and sore throat.

Monitoring and precautions: Before taking this medication, let your doctor know if you have one of the following: active infection, liver disease, known immune deficiency, renal insufficiency, or underlying malignancy. You will need regular blood tests, including liver function tests. Leflunomide must not be taken during pregnancy because it may cause birth defects in humans.

Methotrexate

Uses and effects: This drug can be taken by mouth or by injection and results in rapid improvement (it usually takes 3-6 weeks to begin working). It appears to be very effective, especially in combination with infliximab or etanercept. In general, it produces more favorable long-term responses compared with other DMARDs such as sulfasalazine, gold sodium thiomalate, and hydroxychloroquine.

Side effects: Abdominal discomfort, chest pain, chills, nausea, mouth sores, painful urination, sore throat, unusual tiredness or weakness.

Monitoring and precautions: Doctor monitoring is important, particularly if you have an abnormal blood count, liver or lung disease, alcoholism, immune-system deficiency, or active infection. Methotrexate must not be taken during pregnancy because it may cause birth defects in humans.

Sulfasalazine

Uses and effects: This drug works to reduce the signs and symptoms of rheumatoid arthritis by suppressing the immune system.

Side effects: Abdominal pain, aching joints, diarrhea, headache, sensitivity to sunlight, loss of appetite, nausea or vomiting.

Monitoring and precautions: Doctor monitoring is important, particularly if you are allergic to sulfa drugs or aspirin, or if you have a kidney, liver, or blood disease.

Biologic Response Modifiers

Uses and effects: These drugs selectively block parts of the immune system called cytokines. Cytokines play a role in inflammation. Long-term efficacy and safety are uncertain.

Side effects: Increased risk of infection, especially tuberculosis. Increased risk of pneumonia, and listeriosis (a foodborne illness caused by the bacterium Listeria monocytogenes).

Monitoring and precautions: It is important to avoid eating undercooked foods (including unpasteurized cheeses, cold cuts, and hot dogs) because undercooked food can cause listeriosis for patients taking biologic response modifiers.

Tumor Necrosis Factor Inhibitors (etanercept, infliximab, adalimumab)

Uses and effects: These medications are highly effective for treating patients with an inadequate response to DMARDs. They may be prescribed in combination with some DMARDs, particularly methotrexate. Etanercept requires subcutaneous (beneath the skin) injections two times per week. Infliximab is taken intravenously (IV) during a 2-hour procedure. It is administered with methotrexate. Adalimumab requires injections every 2 weeks. Long-term efficacy and safety are uncertain.

Side effects: Etanercept: pain or burning in throat; redness, itching, pain, and/or swelling at injection site; runny or stuffy nose. Infliximab: abdominal pain, cough, dizziness, fainting, headache, muscle pain, runny nose, shortness of breath, sore throat. Adalimumab: redness, rash, swelling, itching, bruising, sinus infection, headache, nausea.

Monitoring and precautions: Long-term efficacy and safety are uncertain. Doctor monitoring is important, particularly if you have an active infection, exposure to tuberculosis, or a central nervous system disorder. Evaluation for tuberculosis is necessary before treatment begins.

Anakinra

Uses and effects: This medication requires daily injections. Long-term efficacy and safety are uncertain.

Side effects: Redness, swelling, bruising, or pain at the site of injection; headache; upset stomach; diarrhea; runny nose; and stomach pain.

Monitoring and precautions: Doctor monitoring is required.
Source: National Institute of Arthritis and Musculoskeletal
and Skin Diseases | 2026-02-04T10:23:47.492210 |
142,343 | 4.119994 | http://www.sciencedaily.com/releases/2000/08/000830073152.htm | Aug. 30, 2000 Since climate change affects everyone on Earth, scientists have been trying to pinpoint its causes. For many years, researchers agreed that climate change was triggered by what they called "greenhouse gases," with carbon dioxide (CO2) from burning of fossil fuels such as coal, oil, and gas, playing the biggest role. However, new research suggests fossil fuel burning may not be as important in the mechanics of climate change as previously thought.
NASA-funded research by Dr. James Hansen of the Goddard Institute for Space Studies, New York, NY, and his colleagues suggests that climate change in recent decades has been mainly caused by air pollution containing non-CO2 greenhouse gases, particularly tropospheric ozone, methane, chlorofluorocarbons (CFCs), and black carbon (soot) particles.
Since 1975, global surface temperatures have increased by about 0.9 degrees Fahrenheit, a trend that has taken global temperatures to their highest level in the past millennium. "Our estimates of global climate forcings, or factors that promote warming, indicate that it is the processes producing non-CO2 greenhouse gases that have been more significant in climate change," Hansen said.
"The good news is that the growth rate of non-CO2 greenhouse gases has declined in the past decade, and if sources of methane and tropospheric ozone were reduced in the future, further changes in climate due to these gases in the next 50 years could be near zero," Hansen explained. "If these reductions were coupled with a reduction in both particles of black carbon and CO2 gas emissions, this could lead to a decline in the rate of climate change."
Black carbon particles are generated by burning coal and diesel fuel and cause a semi-direct reduction of cloud cover. This reduction in cloud cover is an important factor in Earth's radiation balance, because clouds reflect 40 percent to 90 percent of the Sun's radiation depending on their type and thickness. Black carbon emission is not an essential element of energy production and it can be reduced or eliminated with improved technology.
Hansen's research looked at trends in various greenhouse gases and noted that the growth rate of CO2 in the atmosphere doubled between 1950 and 1970, but leveled off from the late 1970s to the late 1990s.
The other critical piece of information this research is based on, in addition to greenhouse gas levels, is observed heat storage, or warmer ocean temperatures, over the last century. Heat storage in the ocean provides a consistency check on climate change, because the ocean is the only place where energy from a planetary imbalance, in this case a warming, can accumulate. Global ocean data reveal that ocean heat content increased between the mid-1950s and the mid-1990s.
Hansen's paper, "Global Warming in the 21st Century: An Alternative Scenario," will appear in the August 29 issue of the Proceedings of the National Academy of Sciences.
More information on the paper can be found at: http://www.pnas.org/papbyrecent.html.
NASA's Office of Earth Sciences, Headquarters, Washington, DC, sponsors research that studies how human-induced and natural changes affect our global environment. For more information about the Earth Sciences Enterprise, please see: http://www.earth.nasa.gov.
| 2026-01-20T10:37:22.476278 |
1,026,300 | 3.705143 | http://stackoverflow.com/questions/10376740/big-theta-notation-what-exactly-does-big-theta-represent/12338937 | I'm really confused about the differences between big O, big Omega, and big Theta notation. I understand that big O is the upper bound and big Omega is the lower bound, but what exactly does big Theta represent? I have read that it means "tight bound", but what does that mean?
It means that the algorithm is both big-O and big-Omega in the given function. For example, if it is Theta(n), then there is some constant K such that your function (run-time, whatever), is larger than n*K for sufficiently large n, and some other constant k such that your function is smaller than n*k for sufficiently large n. In other words, for sufficiently large n, it is sandwiched between two linear functions.
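One way to get a feel for that sandwich is to check it numerically. The following sketch is my own illustration rather than part of the answer: f(n) = 3n + 10 is an arbitrary Theta(n) example, and the witnesses k = 3, K = 4 and threshold n0 = 10 are hand-picked.

    # Verify the Theta(n) sandwich k*n <= f(n) <= K*n for f(n) = 3n + 10.

    def f(n):
        return 3 * n + 10          # an arbitrary Theta(n) function

    k, K, n0 = 3, 4, 10            # hand-picked constants, valid for n >= n0

    for n in range(n0, 10000):
        assert k * n <= f(n) <= K * n   # sandwiched between two linear functions
    print("3*n <= f(n) <= 4*n holds for every tested n >=", n0)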
First let's understand what big O, big Theta and big Omega are. They are all sets of functions.
Big O is giving upper asymptotic bound, while big Omega is giving a lower bound. Big Theta gives both.
Everything that is Θ(f(n)) is also O(f(n)) and Ω(f(n)), but not the other way around.
For example, merge sort's worst case is both O(n log n) and Ω(n log n), and thus it is also Θ(n log n).
A bit deeper mathematical explanation:
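In the usual set-based formulation (the standard textbook definitions, with c and n_0 as witness constants):

    O(g(n))      = \{ f(n) : \exists\, c, n_0 > 0 \text{ such that } 0 \le f(n) \le c \, g(n) \text{ for all } n \ge n_0 \}
    \Omega(g(n)) = \{ f(n) : \exists\, c, n_0 > 0 \text{ such that } 0 \le c \, g(n) \le f(n) \text{ for all } n \ge n_0 \}
    \Theta(g(n)) = O(g(n)) \cap \Omega(g(n))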
We usually use it to analyze complexity of algorithms (like the merge sort example above). When we say "Algorithm A is O(f(n))", we really mean that its worst-case running time scales no worse than f(n), up to a constant factor.
Why do we care about the asymptotic bound of an algorithm? Because for large enough inputs the asymptotic term dominates: an algorithm whose running time grows more slowly will eventually outperform one whose running time grows faster, regardless of constant factors, even if it is slower on small inputs.
(1) Usually, though not always: when the analysis class (worst/average/...) is missing, we really mean the worst case.
Theta(n): A function f(n) belongs to Θ(g(n)) if it is bounded both above and below by constant multiples of g(n) for large n; when we say f(n) = Θ(g(n)), the bound is asymptotically tight.
O(n): It gives only an upper bound (which may or may not be tight).
ex: The bound 2n² = O(n²) is asymptotically tight, whereas the bound 2n = O(n²) is not.
o(n): It gives only an upper bound that is never tight: f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n grows.
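A concrete way to separate these cases (again my own illustration, with 2n^2 vs n^2 and 2n vs n^2 as the examples) is to watch the ratio f(n)/g(n) as n grows: a ratio that settles at a positive constant signals a tight Theta bound, while a ratio that shrinks to 0 signals a little-o bound.

    # Ratio intuition: 2n^2 / n^2 -> 2 (tight: 2n^2 is Theta(n^2)),
    # while 2n / n^2 -> 0 (2n is o(n^2): O(n^2) holds but is not tight).

    for n in (10, 1000, 100000):
        print(n, (2 * n**2) / n**2, (2 * n) / n**2)

    # The first ratio stays at 2.0; the second tends to 0 as n grows.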
I just wanna give you a first impression and some basic ideas. Then you should do more research on the internet or look for some Youtube videos.
For big-O notation, you just need to remember there are two functions, f(x) and g(x), and that f(x) grows no faster than a constant multiple of g(x) when x is super large: f(x) ∈ O(g(x)).
For big-Omega notation, you just need to remember that there are two functions, f(x) and k(x), and that when x is a super large number, f(x) is still at least as large as a constant multiple of k(x): f(x) ∈ Ω(k(x)).
When you have a first impression of big-O notation and big-Omega notation, then you should do a search on Youtube. | 2026-02-03T02:32:28.083672 |
863,334 | 3.650233 | http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=/jscc/report/chapter1.htm | Chapter 1 Introduction
The online environment is an integral part of modern economic and social activities, and a vast resource of information, communication, education and entertainment.
This chapter introduces the online environment, platforms and access and
the relevant cyber-safety issues and outlines the responsibilities of the
Australian governments. The chapter concludes with an overview of the inquiry
process and an outline of the report.
The online environment
The online environment is an essential tool for all Australians,
including children and young people less than 18 years of age.
The ability to use online tools effectively provides both a skill for life and
the means to acquire new skills.
The Internet brings with it many advantages and benefits to
children; their use of media permits them to gain and share knowledge in a
variety of new and engaging ways. The Web 2.0 world allows children to create
and share their own content and express their ideas, thoughts and experiences
on a worldwide stage. The Internet allows children to go far beyond their homes
and communities; they are able to explore the world, immerse themselves in
different cultures, different geographies and different periods in history with
the click of a mouse. The skills they learn through their online exploration in
early life prepare them for their future, providing them with not just
knowledge but also with abilities far beyond those skills that can be taught in the classroom.
The power and usefulness of the online environment, and of social
networking sites in particular, was convincingly demonstrated during the
widespread floods in Queensland early in 2011.
The Internet has brought unprecedented freedoms to millions
of people worldwide: the freedom to create and communicate, to organise and
influence, to speak and be heard. The Internet has democratised access to human
knowledge and allowed businesses small and large to compete on a level playing
field. It’s put power in the hands of people to make more informed choices and
decisions. Taken together, these new opportunities are redefining what it means
to be an active citizen.
This environment brings significant benefits by sharing information, allowing people to keep in touch, at work and at play. As of 21 March 2011,
Facebook advised the Committee that:
Facebook has nearly 11 million active users who have visited
the site in Australia within the past 30 days. Over nine million users visit
every week and over seven million visit every day.
It is also a valuable tool for breaking down physical boundaries. There
are more mobile phones in Australia than people, 78 percent of households have
computer access and 72 percent have Internet access.
Almost half of the mobile phones have an Internet capability and one-third of users
access the Internet regularly on their phones. The benefits can be
multifaceted, for example, for Indigenous young people:
For an Indigenous child it may be a connection to culture. It
may be a connection to religious and spiritual pursuits. It may be a connection
to family in other countries. Whatever that may look like for a child or young
person, it is something that in a non-digital world they may have limited or
very challenging access to.
This environment is not static, and Australians are ‘utterly voracious’
in their adoption of online technologies. As they are introduced, new
applications are therefore likely to be taken up enthusiastically by interested
individuals and groups in the community. Some students continue to use email; however,
there has been a rapid uptake of more portable technologies and social
networking sites to communicate.
Dr Helen McGrath’s research from 2009 suggests that young people use the
Internet for an average of one hour and 17 minutes per day, including almost 50
minutes for messages, visiting social websites and emails; 15 minutes for games
online against other players, and 13 minutes for homework on the computer
and/or the Internet.
While there are potential safety issues for all those who go online, for
the vast majority of users, the online environment is a positive and safe
place. In Australia:
In the 12 months prior to April 2009, an estimated 2.2
million (79%) children accessed the Internet either during school hours or
outside of school hours. The proportion of males (80%) accessing the Internet
was not significantly different from females (79%). The proportion of children
accessing the Internet increased by age, with 60% of 5 to 8 year olds accessing
the Internet compared with 96% of 12 to 14 year olds.
The benefits of online applications for young people in our society are
accompanied by exposure to a range of potential dangers. Some of the most
obvious include cyber-bullying, access to or accessing illegal and prohibited
material, online abuse, inappropriate social and health environments, identity
theft and breaches of privacy.
One thing that both the online and offline world have in
common is that many of these risks are created by the children, either putting
themselves in harm’s way or harming other children. The high profile risks,
which have been reported by media, include the dangers of sexual exploitation
and solicitation, online harassment and exposure to inappropriate images.
However, the principal risks that come with Internet use by children today are
the problems of cyberbullying, sexting, and self-harm websites.
In addition to cyber-safety issues, this environment can also be a veil
for an array of criminal behaviour including various online threats, the sale
of illicit drugs and, increasingly, the sale of illegal pharmaceuticals.
Young people have a limited capacity to make decisions about their own
information. As they must rely on others to ensure that their interests and
rights are protected, they are particularly vulnerable to a range of safety and
criminal activities online.
The Government’s commitment to addressing cyber-safety issues for young
people is reflected in the establishment of this Inquiry in March 2010 as the
response of the Australian Parliament to community concerns about the impact of
threats to young people from the online environment.
Australian authorities have considered problems caused by cyber-crime. A
National Cyber-Crime Working Group was established in May 2010 to enable
jurisdictions to work cooperatively to combat these crimes.
Online crime has no borders and evidence can be transitory, highly
perishable and, often, located overseas. Potential online threats are becoming
more sophisticated through the use of networks to distribute material, and the
protection of material by encryption.
Significant research has been published over many years about the
attitudes and behaviour of those less than 18 years of age in Australia. Given
the speed of recent changes in the range and affordability of ways to enter the
online environment, there is a lack of longitudinal data. Methodologies used differ from study to study, making comparisons of the impact on that important group difficult. In the absence of such studies, many bodies and groups
appear to have developed ways to correct perceived problems in this
environment, perhaps without an adequate evidential basis.
One witness did not think that ‘much more research is required’, as so
much is already available:
We all know what the problem is ... We have to solve it ... a
greater understanding of what is available from technology could help the
broader community focus...
Defining the online environment
Throughout this Inquiry, the term ‘online environment’ was widely used
without any attempt to define it. The Stride Foundation
drew attention to some of the components of this environment, generally
delivered through Internet platforms.
This environment covers many means of informing and communicating with
people. It is invisible, and for most urban Australians, can be accessed
virtually anywhere, at any time, from many devices, using any of those
technological means. For most Australians, this environment can also be
accessed with relative ease from a wide variety of locations: at home, work,
school, libraries, university, TAFE colleges, public institutions such as art
galleries, Internet cafes, coffee shops, book stores, etc.
The online environment allows users to do many things, including for
example: sending/receiving emails/texts; sending images and making phone calls
via Skype; paying bills; searching for and downloading material from websites
(including for e-books); retrieving music, TV programs or movies; taking and
sending photographs; joining chat rooms or live discussion forums; writing
blogs; listening to FM or digital radio, etc.
Apart from the mobile phone, the Internet remains the best known, and
most used platform or application in the online environment. As Professor
Landfeldt noted, the Internet is a ‘very fragmented world’ with a large number
of computing devices connected via communication links all using some common
standards, such as the Internet Protocol. It is a platform on which a wide
range of different and accessible content can be found.
The most commonly accessed content is within one of these services, the
world wide web. It is far from certain that it will remain the dominant
platform for information exchange and retrieval in the future.
There are now some very interesting developments from
Stanford University and Berkeley that together have come up with an alternative
routing infrastructure that goes to the core of forwarding traffic on the
internet, changing the very fabric of forwarding. This is gaining traction with
the big manufacturers ... There are also big efforts in putting anonymisation
into the network and security so that, instead of having completely open
channels for all communication, you are looking more at securing your data
transfers, because it is not up for grabs for the entire world. It is very easy
to wire tap and look at data that goes across the internet today. But there are
clear signs that there is a lot of interest in changing that.
The online environment is constantly changing, with newer alternatives
fast gaining ground. The ability to communicate has expanded greatly in the
past few years through the widespread use of social networking sites. In
Australia, the fraction of peer-to-peer traffic is ever-increasing and the
uptake of alternative media consumption is growing, particularly live streaming
video and audio.
The Internet is the most frequently used source of information and
advice for young people. This opens up a range of possibilities, including
concerns that access might be to the ‘not-so-great’ sites that also exist. Of
course, as well as these online resources, there are organisations like Berry
Street and the Inspire Foundation offering support to young people on a range
of issues through their mental health and well-being programs.
Many people now navigate via the Global Positioning System (GPS). Gaming
consoles such as Xbox and Playstation can also be part of the online
environment, as can other communications services such as YahooMail and MSN.
The Internet and other platforms can now be easily accessed on
increasingly capable mobile phones and smartphones, tablets, personal digital
assistants, etc. These are more powerful and provide greater options for
communication than advanced desktop machines of only a few years ago.
Laptops have become smaller and lighter, and 'notebook' variants are highly portable.
The online environment has changed greatly following the introduction of
popular social networking sites and feeds, such as Facebook, Bebo and Twitter
and includes sites for the very young such as Club Penguin. Individuals elect to
join these sites, providing photographs and information about themselves and
their activities. Other people are asked to join as ‘friends’, to be in contact
and exchange information and photographs, etc. Originators have some control
over the release of personal information. The contents of individuals' accounts
are monitored by the sites. Considerable publicity has been given to the risks
implicit in the use of these sites.
As the applications mentioned above are not intended to be a definitive
list, in this Report the broadest possible range will be treated as belonging
to the online environment.
Access to the online environment
The System Administrators’ Guild of Australia referred to Australian
Bureau of Statistics’ figures which showed that, at December 2009, there were
over nine million business and personal subscribers to Internet services in
Australia. ABS also found that, in 2009, 72 percent of Australian houses have
Internet access, and that 79 percent of children five to 14 years old used the
Internet. At that time, homes were slightly more usual sites for usage than
schools: 73 to 69 percent.
- Computers were
available in more than 71% of households with 3–4 year olds, increasing to more
than 90% of homes with 7–8 year olds, and in almost all households with 8–17
year olds (98%).
- Internet access was
available in more than 65% of households with 3–4 year olds, increasing to more
than 72% of homes with 7–8 year olds, 87% of homes with 8–11 year olds, and
more than 90% of households with 12–17 year olds.
- Eighty-four percent
of 7–8 year olds sometimes used the Internet at home to find information for
school, send emails, chat online, surf the internet, play games, or to
access/download music or movies.
- Among 8 to
17-year-olds, use of the Internet for homework and leisure activities increased
with age, from 61% of 8–11 year olds, to 83% of 12–14 year olds and 88% of
15–17 year olds.
- Some 74% of parents
of 7–8 year olds in the study were happy with their child’s media use.
While these figures suggest an online society, some people do not own computers. Public libraries, government cafes for older people or Internet cafes are often their only means of access to the Internet, emails, etc. Although such places are not used often by the broader community, for some users they are the only available access points.
Research by the Australian Communications and Media Authority (ACMA) in
2009 showed that:
The Internet is a regular part of the everyday lives of
children and young people aged eight to 17 years, and it is used regularly
within both school and home environments.
ACMA added that the use of the Internet, including finding information
for academic purposes, and social networking, can become regular from the age of eight.
Australia now has a generation of people who have never been without
online access and have integrated it fully into their lives. Another
generation, brought up in the time of other communications systems, may not
fully understand or utilise technology in the same way. In between these
groups, there are many other people whose interest and skills in the online
environment depend on the situation in which they find themselves. The latter
groups can feel disempowered in situations where young people may know far more
about the online environment than they do.
People less than 18 years old can easily bypass physical access points which
may have filters or other safety measures. Many submissions dealt
with a proposed mandatory, national, filtering system.
That there are groups of parents/carers with different levels of
expertise, time and interest is important when considering ways to integrate
these groups into school communities. This issue will be addressed in a later chapter.
Worldwide, Facebook has over 500 million active users: less than 12
percent are less than 18, more than half are over 35, while the fastest growing
demographic is between 40 and 60 years old. It has been estimated
that ‘about half’ the Internet users in Australia are on Facebook. An
Australian study revealed that 61 percent of all mothers aged from 45 to 65
years had a Facebook page. Nevertheless, young people and adults use this
technology in different ways. Dr McGrath considered that all adults do not
organise their social lives using social networking sites, and often fail to understand
this use of technology.
While most Australian children have access to the online environment at
a variety of places and via a range of platforms, there are other groups who
are disadvantaged. Lack of access to the online environment can have particular
impacts on some children, and this will be
addressed in Chapter 2.
The Interactive Games & Entertainment Association pointed out that
new and evolving technologies are and will be central to the lives of young
people, to be adapted and discarded rapidly and often indiscriminately. The
Association believed that young people should be granted freedom to explore and
interact in the online environment. At the same time, steps must be taken to
minimise inherent risks and to provide the same levels of caution exercised as
in the ‘real’ world.
Protection of young people is complicated by the rapid evolution of
technology, and the fact that education, research and the law inevitably lag
behind these developments. While access is easy and
varied, many young people are not aware of or disregard possible consequences
of their actions in the online environment. These consequences can be serious
and last forever.
The term ‘cyber-safety’ was used widely throughout the Inquiry. As it
was largely undefined, its meaning and scope were unclear and there is a need
to identify the key issues to clarify some of the myths surrounding it.
Mr Geordie Guy stated that it was 'a made-up term or a "neologism" ... native to the Australian government, child protection agencies ... and organisations seeking to commercially supply solutions to the perceived problem', and that there was no such globally accepted term.
The Office of the Privacy Commissioner noted that it is ‘a broad concept
that concerns minimising the risks to children online from a range of negative
influences including inappropriate social behaviours, abuse, identity theft and
breaches of privacy.’ This concept will be
used in this Report.
The Australian Psychological Society noted that, while there are risks
in the online environment, they were often ‘over-exaggerated’ with the media
portraying worst case scenarios. ‘Technology’ is often blamed for behaviour
rooted in wider social problems, and in the range of issues characterising adolescence.
Most young people are aware of cyber-safety measures and have
incorporated these practices into their everyday online activities. The
‘average’ young person seems to have mechanisms to deal with online risks: good
family or peer-to-peer relationships and critical decision-making skills. It is
often the marginalised young people, disconnected from the community, for whom
cyber-safety can become an issue.
Adult responses to cyber-safety issues
The Cooperative Research
Centre for Young People, Technology and Wellbeing noted that conventional
approaches to cyber-safety for young people tend to focus on risk management,
typically through educational and regulatory means.
The Centre believed that thinking about cyber-safety in these terms
failed to acknowledge the expertise of young people in technology and the use
of the Internet. Most cyber-safety programs are delivered at schools, removed
from other settings, such as family or work, and the social relationships with
peers, parents/carers and other adults in which young people regularly engage.
The focus on cyber-safety and risk management means, therefore, that
there is relatively little evidence about adults’ concerns about the online
environment, and particularly about young people’s use of social networking sites.
The Centre stated that it is vital that young people’s perspectives are
incorporated in the cyber-safety debate in ways that empower them and develop
meaningful policies and programs.
Parents/carers have the ultimate responsibility for educating and protecting
their children, including in the online environment. Adults and young people
use technology in different ways, and new communications technologies are
becoming increasingly foreign to many parents/carers, thus ‘reducing their
ability to protect their children.’ More often than not, children know more
about the Internet and mobile phones, etc, than adults. Rapidly emerging new
technologies are increasingly leaving many adults behind.
Moreover, parents/carers often feel an additional lack of involvement or
control because they do not fully understand how their children use their
knowledge about the online environment, and are fearful about online risks.
Teachers may also have a limited understanding of children’s use of technology.
Parents/carers and teachers can therefore have such limited understanding and
awareness of the issues that they are ‘very reluctant’ to deliver, and totally
lack confidence in delivering, such curriculum material or information about
cyber-safety as is available in Australia.
As seen by adults, threats implicit in the online environment include:
- 'Internet addiction'; and
- lack of sleep.
Some young people are ‘fearless but naïve’
and dismissive of these risks and fears. They can be more concerned about slow
Internet connections and viruses on their computers. For example, the Alannah
and Madeline Foundation noted that ‘nearly all’ the young people it has
interviewed have experienced or witnessed cyber-bullying, and consider it
‘common and extremely unpleasant’. Together with other online threats, these matters will be addressed in Part 2, and the results of the Committee’s Are you safe online survey are provided throughout the Report.
Australian Government responsibilities
Many Australian Government Departments and agencies have policy and
regulatory responsibilities in the online environment.
The Department of Broadband, Communications and the Digital Economy is responsible for developing a vibrant, sustainable and internationally competitive broadband, broadcasting and communications sector and, through this, for promoting the digital economy for the benefit of all Australians.
Within that Department, the Australian Communications and Media Authority (ACMA) has been operating in the cyber-safety space for more than ten years. Under the Online Content Scheme in the Broadcasting Services Act 1992 (the Act), its role is:
- to investigate
complaints about prohibited and potentially prohibited online content, and
- to facilitate a
system of co-regulation where the internet industry develops codes of practice
that are registered by the Australian Communications and Media Authority.
Under the Act, the Authority is also responsible for liaison with regulatory and other relevant overseas bodies to develop cooperative arrangements for the regulation of the Internet. This includes issuing take-down notices to Australian hosts of prohibited content, and maintaining a blacklist of inappropriate sites.
ACMA undertakes research into the online environment, and has a
significant range of effective educational programs. Increasingly, ‘a large
part’ of its role, resources and activities is in delivering a broad range of
cyber-safety, educational and awareness programs.
Chaired by a senior officer from the Department, the Consultative
Working Group on Cybersafety was established in 2008 to advise the
Australian Government on best practice safeguards and priorities for action by
government and industry. It comprises representatives from industry, community
organisations and Government bodies such as the Australian Communications and
Media Authority, the Attorney-General’s Department and the Australian Federal
Police. The Working Group’s role is to:
- consider those aspects of cyber-safety faced by Australian children;
- provide information to Government on measures required to operate and maintain world’s best practice safeguards for Australian children engaging in the digital economy; and
- advise the Government on priorities for action by government and industry.
The Consultative Working Group on Cybersafety and the Youth Advisory
Group are the Government’s main vehicles for cyber-safety consultation. The
Youth Advisory Group provides the Government with advice on issues such as law enforcement, filtering, education and research initiatives from a young person’s perspective. The Consultative Working Group on Cybersafety considers
that the Youth Advisory Group will continue to be crucial in providing the
views of children and young people about:
- the nature of young people’s online engagement;
- emerging cyber-safety risks; and
- how best to tackle these risks from the young person’s perspective.
In December 2010, the Minister for Broadband, Communications and the Digital Economy, Senator the Hon Stephen Conroy, launched the Cyber Safety Help Button.
The Department of Education, Employment and Workplace Relations
provides national leadership in education and workplace training, transition to
work and conditions and values in the workplace. As one of its current initiatives, the Australian Government is providing $2.4 billion over seven years to support teaching and learning in Australian schools, preparing students for further education and training, and for life and work in a digital world. Through the Digital Education Revolution, funding has been provided for:
- New information and communications technology equipment for all secondary schools, for students in Years 9 to 12, through the National Secondary Schools Computer Fund;
- Deployment of high-speed broadband connections to schools;
- Collaboration with States/Territories and Deans of Education to ensure new and continuing teachers have access to training in the use of ICT that enables them to enrich student learning;
- Online curriculum tools and resources supporting the national curriculum and specialist subjects such as languages;
- Opportunities for parents to participate in their children’s education through online learning; and
- Support mechanisms to provide vital assistance for schools in the deployment of ICT.
The Attorney-General’s Department is responsible for administering Government policy on criminal law and law enforcement, including cyber-crime, cyber-security and anti-discrimination. This covers issues such as cyber-racism, identity security, classification, and grooming and procuring offences that target predatory behaviour occurring through carriage services.
The Australian Federal Police (AFP) is the principal law
enforcement agency through which the Australian Government pursues its law
enforcement interests. The AFP is unique in Australian law enforcement in that
its functions relate both to community policing and to investigations of
offences against Commonwealth law in Australia and overseas. It has
responsibilities for child protection matters.
The Australian Institute of Criminology is Australia's national
research and knowledge centre on crime and justice. It seeks to promote justice
and reduce crime by undertaking and communicating evidence-based research to
inform policy and practice. Its functions include conducting criminological research;
communicating the results of research; conducting or arranging conferences/seminars; and publishing material arising from its work.
It has worked closely with the Attorney-General’s Department, the AFP
and other agencies to undertake research into technology-enabled crime. In
2007, the Institute was commissioned to report on existing literature
concerning the use of social networking sites for sexual grooming, the extent
and nature of the problem, and effective ways in which to address it. The
resulting publications have been cited many times in this Report.
The Office of the Privacy Commissioner is an independent
statutory body whose purpose is to promote and protect privacy in Australia.
Established under the Privacy Act 1988 (Cth), it has responsibilities
for the protection of individuals’ personal information handled by Australian
and Australian Capital Territory Government agencies, and personal information
held by all large private sector organisations, health service providers and
some small businesses.
The Commonwealth Director of Public Prosecutions is responsible for the prosecution of criminal offences against the laws of the Commonwealth, and for conducting proceedings for the confiscation of the proceeds of crimes committed against the Commonwealth.
In the context of this Inquiry, the role of the Australian Customs and Border Protection Service is to regulate the movement of prohibited and restricted goods across Australia’s borders, including goods purchased on the Internet.
The Commonwealth Ombudsman safeguards the community in its
dealings with Australian Government agencies. It handles complaints,
conducts investigations, performs audits and inspections, encourages good
administration, and carries out specialist oversight tasks.
State and Territory responsibilities
School education, policing and legal matters within each jurisdiction
are primarily responsibilities of State/Territory governments. These matters
will be addressed in relevant parts of this Report.
Current Parliamentary inquiries
In March 2011, the Joint Standing Committee on the National Broadband
Network was formed to inquire into and report on the rollout of the Network. It
will provide progress reports every six months, from 31 August 2011, to both Houses of Parliament and shareholder Ministers on a range of matters related to the Network until the rollout is complete and the Network is operational.
The House of Representatives Standing Committee on Infrastructure and
Communications is inquiring into the role and potential of the National
Broadband Network. The Committee is due to report its findings by the end of the year.
Previous Parliamentary reports
On 7 April 2011, the Senate Environment and Communications References
Committee tabled a report titled The adequacy of protections for the privacy
of Australians online. It made several recommendations that are relevant to
this Inquiry, and these will be addressed in Chapter 5.
The 2010 Report by the House of Representatives Standing Committee on Communications, Hackers,
Fraudsters and Botnets: Tackling the Problem of Cyber Crime, addressed ‘the
incidence of cybercrime on consumers’. This Report examines different but
related issues. It seeks to make its contribution to knowledge of the benefits
of, and the potential perils created by, the online environment. These perils
are especially important for users who are less than 18 years old.
Other relevant reports include:
- House of Representatives Standing Committee on Employment, Education and Training: Sticks and Stones: Report on Violence in Australian Schools (1994);
- House of Representatives Standing Committee on Communications, Information Technology and the Arts: From Reel to Unreal: Future opportunities for Australia's film, animation, special effects and electronic games industries (2004);
- Senate Standing Committee on the Environment, Communications and the Arts: Sexualisation of children in the contemporary media environment (2008); and
- House of Representatives Standing Committee on Family, Community, Housing and Youth: Avoid the Harm - Stay Calm. Report on the Inquiry into the impact of violence on young Australians (2010).
In 2009, the NSW Legislative Council’s General Purpose Standing
Committee (No 2) released a report Inquiry into Bullying of Children and Young
People. A number of its recommendations concerned cyber-bullying.
Australian Law Reform Commission Inquiry
The Government has asked the Australian Law Reform Commission to review
the definition of ‘Refused Classification’ material, as part of a wider review
of the National Classification System.
Joint Select Committee on Cyber-Safety
Conduct of the Inquiry
In the last Parliament, the House of Representatives agreed to establish
the Committee on 25 February 2010. On 11 March 2010, the Senate agreed to this
proposal. As the Inquiry was incomplete at the prorogation of that Parliament, it lapsed.
In the 43rd Parliament, the House of Representatives agreed
on 16 November 2010 to the re-establishment of the Committee, with slightly
different terms of reference. The Senate agreed on 17 November 2010. The
revised terms of reference can be found at p. xxi.
The Committee wrote to all Ministers, State Premiers/Chief Ministers,
organisations and individuals who had forwarded submissions to the original
Inquiry seeking additional submissions.
The Inquiry was advertised in The Australian at fortnightly intervals, and featured on a number of occasions in About the House and Sky News, House of Representatives Alert Services, Facebook and Google.
In all, 152 submissions and 16 supplementary submissions were received
in response to the invitations to contribute to the Inquiry. A list of
submissions is at Appendix A.
A list of other documents of relevance to the Inquiry that were formally
received by the Committee as Exhibits is at Appendix B.
Three roundtable discussions were held in Melbourne and Sydney in June
and July 2010. Evidence was given by:
The information and communications technology industry;
The Australian Federal Police;
Non-government organisations working with young people;
Professional bodies and unions;
Representatives of parents/carers;
Corporations such as Telstra and Symantec; and
Content providers such as Yahoo!7.
The Committee also took evidence at public hearings in Adelaide,
Brisbane, Canberra, Hobart and Melbourne. A list of organisations and
individuals who gave evidence to the Inquiry at the roundtables and public
hearings is at Appendix C.
In addition, the Committee conducted two school forums, one at McGregor State
School in Brisbane for Grade 7 students, and the other for Years 9 to 12 in
Hobart with students attending from Calvin Secondary School; Cosgrove High
School; Elizabeth College; Tasmanian Academy; Guilford Young College; MacKillop
Catholic School; New Town High; Ogilvie High School; and St Michael’s Collegiate School.
The Committee also conducted two online surveys of young people in relation to cyber-safety issues. A total of 33,751 young people completed them: 18,159 respondents aged 12 or under and 15,592 aged 13 to 18. Additional information and the methodology used in the surveys are at Appendix D.
Table 1.1 Number of survey respondents by gender and age
Figure 1.1 Number of survey respondents by gender and age
Figure 1.2 Committee Chair, Senator Dana Wortley, during a small group discussion with students at McGregor State School.
Figure 1.3 The Committee during discussions with students and teachers at McGregor State School.
Copies of all submissions and transcripts that were authorised for publication are available electronically from the Committee’s website.
Overview of this Report
The structure of this Report is based on the Inquiry’s Terms of Reference.
Part 1: Introduction
Part 1 provides the necessary background material to the Inquiry. This
section defines and describes the online environment, and defines
‘cyber-safety’. It outlines the roles of Commonwealth, State and Territory
Government departments and agencies with policy and regulatory
responsibilities, in general terms, in the online environment. It then describes
legal responsibilities for combating online crime in Australia.
Chapter 2 outlines the environment in which young people find
themselves, including the major stakeholders. It describes two potential
problem areas for young people: ‘real’ and ‘online’ worlds and privacy. There
are at least four groups of young adults who are disadvantaged in the online
environment. While they may have access via school libraries, their entry to it
can be problematic. Some of the negative features of that environment, particularly for adults and parents/carers, are then outlined.
Part 2: Cyber-safety
The four Chapters of Part 2 should be regarded as a unit. Chapters 4 to 6 deal with specific abuses of cyber-safety: cyber-bullying, cyber-stalking, online grooming, sexting, privacy and identity theft, and other cyber-safety complexities such as fraud, ‘technology addictions’, online gambling and illegal and inappropriate content. Chapter 7 outlines young people’s responses to the Committee’s online survey, in particular how they decide whether or not to post.
Part 3: Educational strategies
Part 3 covers the measures necessary to support schools, teachers and the wider school community. Chapter 8 explores a range of ways to support schools to increase cyber-safety and, in particular, to reduce cyber-bullying. Chapter 9 focuses on support for teachers and Chapter 10 looks at the broader school community.
Part 4: Enforcement
This part of the report outlines the various legal and policing aspects
of these abuses, including existing Commonwealth and State/Territory sanctions
against them. Chapter 11 outlines legislative approaches. Chapter 12 addresses
policing. Chapter 13 focuses on the proposal to establish an online ombudsman
to act on cyber-safety issues.
Part 5: Australian and international responses
Chapter 14 deals with achieving best practice in Australia by government
initiatives, industry and non-government organisations. Similarly, Chapter 15 examines various international responses to cyber-safety issues.
Chapter 16 examines the likely benefits of new and existing technologies. Chapter 17 focuses specifically on the proposed mandatory national filtering system.
Part 6: Conclusions
Chapter 18 summarises the views of students, and the Report’s conclusions are in Chapter 19.
Results of the Inquiry
To involve young people, and hear what they have to say, an online survey
was undertaken. As noted above, 33,751 responses were received, and the results
are used throughout this Report. It gains depth from some very informative, and sometimes distressing, anonymous contributions.
The most significant, general points to emerge from the range of
material received by this Inquiry included:
- the need for children and young people to be in control of their own experiences in the online environment through better education, knowledge and skills;
- the need for enhanced privacy provisions in the online environment;
- the short-term need for more detailed and longitudinal Australian research on how young people are interacting with the online environment, and emerging technologies in particular. Then, based on that research, there is a requirement for a cooperative national response, built around a range of educational programs. To be effective, a combination of carefully designed and targeted programs is needed for the use of parents/carers and teachers, and for the varied needs of the different developmental stages of Australian young people; and
- the need for parents/carers, teachers and all those who engage with young people to become more informed, and gain an understanding of online technology and its many uses. | 2026-01-31T17:39:59.231775 |
27,098 | 3.509186 | http://news.discovery.com/earth/weather-extreme-events/drought-dust-bowl-120719.htm | - Moderate to severe drought conditions now cover 64 percent of the lower 48 states.
- This is one of the most widespread droughts of the last century.
- Corn crops are already suffering from hot and dry conditions.
This summer, record high temperatures have scorched much of the United States. Add little rainfall to the intense heat and the result has been corn crop failure and worries about much worse.
In fact, moderate to severe drought now covers nearly 64 percent of the lower 48 states, according to statistics released today by the U.S. Drought Monitor at the University of Nebraska, Lincoln, making it one of the most widespread droughts in the past century.
The nation hasn't seen a drought this big in more than 50 years, according to a report released earlier this week by the National Climatic Data Center, part of the National Oceanic and Atmospheric Administration.
As the extended dry period threatens corn and other crops, fears are growing that we are headed for another Dust Bowl, a series of severe droughts in the 1930s that filled the prairie states with plumes of dusty soil.
But the Dust Bowl was about more than just dry weather.
The country was also in the midst of the Great Depression, adding social upheaval to the uncertain climate, said John Nielsen-Gammon, an atmospheric scientist at Texas A&M University in College Station. Unlike today, small family farms dominated rural areas at the time. And agricultural practices were less sophisticated than they are now -- all factors that contributed to the period's extreme environmental and economic consequences.
The Dust Bowl also involved year after year of severe drought, unlike the current situation, which has only just begun in most places.
"For most of the central and northern United States, this is a drought that has only developed over the past 90 days, so at this point it's just a single year of brief but intense drought," Nielsen-Gammon said. "Most farmers can deal with one year of drought."
"If this were to continue like this next year and the year after that," he added, "then we'd be talking about major long-term impacts like the Dust Bowl."
On a geological scale, the Northern Plains states have experienced a handful of long-term dry periods since the last Ice Age, Nielsen-Gammon said. For several hundred years at a time over the past few millennia, the Great Plains have been nothing but sand dunes.
More recently, over the last 100 years, droughts have repeatedly occurred from Texas to Minnesota for reasons that are hard to predict and hard to explain. Unlike winter weather patterns, which are often controlled by El Niño and other variations in the Pacific Ocean's surface temperatures, summer rainfall levels can be random. Often, Nielsen-Gammon said, whether a summer is wet or dry comes down to luck.
This year, after a winter with little snow, a large ridge of high pressure has stalled over the central United States, deflecting most storms into Canada and New England, said David Miskus, a meteorologist at NOAA's Climate Prediction Center in Silver Spring, Md.
In the Midwest, Miskus said, the current drought is comparable to one that struck in 1988. More severe were the Dust Bowl droughts in the 1930s along with a series of droughts that lasted for years in the 1950s. In general, he added, it's difficult to compare the severity of current droughts with ones that occurred before 2000, when meteorologists adopted a new method for tracking data.
And every drought is different: Some are localized and intense. Some are widespread and long lasting.
Already, the drought of 2012 is causing problems for both farmers and consumers. Dry topsoil and excessive heat have had the biggest impact so far on corn crops, leading to lower yields and rising prices, which will likely translate to more expensive meat and dairy products, too. The U.S. Agriculture Department announced on Monday that 38 percent of the nation's corn crop is currently in poor or very poor condition. Soybeans may be next.
But, Nielsen-Gammon said, the nation learned some important lessons from that period that make an identical scenario unlikely to happen again.
To reduce soil erosion, for example, farmers have adopted new tilling, plowing and planting techniques that aim to avoid large patches of barren, heavily disturbed soil. Starting with Roosevelt-era projects, irrigation infrastructure is also much better now. Society has also changed.
"Back in the Dust Bowl, most people lived in rural areas on farms and that's no longer the case, so it's not going to have the same kind of impact as it did back then," Nielsen-Gammon said. "We can worry now, but it's clear the Dust Bowl was worse."
It's hard to know what's to come, Miskus said. Some models are predicting an El Niño event later this year, which would likely bring above-average precipitation to the south in the winter and spring. But now is the critical period for Plains-state farmers, who are sweating it out through corn-pollination season.
Over the next three months, the Climate Prediction Center's models are projecting continued hot and dry conditions and intensifying drought throughout much of the central and eastern regions of the country, according to data released this morning. The Southwest is one region where wet weather is likely to return.
A few good soakings could go a long way toward alleviating concerns. But time is of the essence.
"The good news is that this drought formed recently," Miskus said. "Since it's still short-term, if we get into a wet spell, it could be improved pretty quickly."
"If it's going to rain," he added, "it's got to start here pretty quick." | 2026-01-18T18:31:05.800896 |
296,870 | 3.664331 | http://en.wikipedia.org/wiki/%ce%92-lactam_antibiotic | Core structure of penicillins (top) and cephalosporins (bottom). β-lactam ring in red.
Biological target: penicillin-binding protein
β-Lactam antibiotics (beta-lactam antibiotics) are a broad class of antibiotics, consisting of all antibiotic agents that contain a β-lactam ring in their molecular structures. This includes penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems. Most β-lactam antibiotics work by inhibiting cell wall biosynthesis in the bacterial organism, and they are the most widely used group of antibiotics. As of 2003, when measured by sales, more than half of all commercially available antibiotics in use were β-lactam compounds.
Bacteria often develop resistance to β-lactam antibiotics by synthesizing a β-lactamase, an enzyme that attacks the β-lactam ring. To overcome this resistance, β-lactam antibiotics are often given with β-lactamase inhibitors such as clavulanic acid.
Medical use
β-Lactam antibiotics are indicated for the prophylaxis and treatment of bacterial infections caused by susceptible organisms. At first, β-lactam antibiotics were mainly active only against Gram-positive bacteria, yet the recent development of broad-spectrum β-lactam antibiotics active against various Gram-negative organisms has increased their usefulness.
Adverse effects
Adverse drug reactions
Pain and inflammation at the injection site are common for parenterally administered β-lactam antibiotics.
Immunologically mediated adverse reactions to any β-lactam antibiotic may occur in up to 10% of patients receiving that agent (a small fraction of which are truly IgE-mediated allergic reactions, see amoxicillin rash). Anaphylaxis will occur in approximately 0.01% of patients. There is perhaps a 5%-10% cross-sensitivity between penicillin-derivatives, cephalosporins, and carbapenems; but this figure has been challenged by various investigators.
Nevertheless, the risk of cross-reactivity is sufficient to warrant the contraindication of all β-lactam antibiotics in patients with a history of severe allergic reactions (urticaria, anaphylaxis, interstitial nephritis) to any β-lactam antibiotic.
Mode of action
β-Lactam antibiotics are bactericidal, and act by inhibiting the synthesis of the peptidoglycan layer of bacterial cell walls. The peptidoglycan layer is important for cell wall structural integrity, especially in Gram-positive organisms, being the outermost and primary component of the wall. The final transpeptidation step in the synthesis of the peptidoglycan is facilitated by DD-transpeptidases, which are penicillin-binding proteins (PBPs). PBPs vary in their affinity for binding penicillin or other β-lactam antibiotics. The number of PBPs varies among bacterial species.
β-Lactam antibiotics are analogues of d-alanyl-d-alanine—the terminal amino acid residues on the precursor NAM/NAG-peptide subunits of the nascent peptidoglycan layer. The structural similarity between β-lactam antibiotics and d-alanyl-d-alanine facilitates their binding to the active site of PBPs. The β-lactam nucleus of the molecule irreversibly binds to (acylates) the Ser403 residue of the PBP active site. This irreversible inhibition of the PBPs prevents the final crosslinking (transpeptidation) of the nascent peptidoglycan layer, disrupting cell wall synthesis.
β-Lactam antibiotics block not only the division of bacteria, including cyanobacteria, but also the division of cyanelles, the photosynthetic organelles of the glaucophytes, and the division of chloroplasts of bryophytes. In contrast, they have no effect on the plastids of the highly developed vascular plants. This supports the endosymbiotic theory and indicates an evolution of plastid division in land plants.
Under normal circumstances, peptidoglycan precursors signal a reorganisation of the bacterial cell wall and, as a consequence, trigger the activation of autolytic cell wall hydrolases. Inhibition of cross-linkage by β-lactams causes a build-up of peptidoglycan precursors, which triggers the digestion of existing peptidoglycan by autolytic hydrolases without the production of new peptidoglycan. As a result, the bactericidal action of β-lactam antibiotics is further enhanced.
Two structural features of β-lactam antibiotics have been correlated with their antibiotic potency. The first is known as "Woodward's parameter", h, and is the height (in angstroms) of the pyramid formed by the nitrogen atom of the β-lactam as the apex and the three adjacent carbon atoms as the base. The second is called "Cohen's parameter", c, and is the distance between the carbon atom of the carboxylate and the oxygen atom of the β-lactam carbonyl. This distance is thought to correspond to the distance between the carboxylate-binding site and the oxyanion hole of the PBP enzyme. The best antibiotics are those with higher h values (more reactive to hydrolysis) and lower c values (better binding to PBPs).
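As a rough, hypothetical illustration (not from the article or its sources), both parameters reduce to simple 3D geometry: h is the perpendicular distance from the β-lactam nitrogen to the plane of its three neighbouring carbons, and c is a point-to-point distance. A minimal Python sketch, with placeholder coordinates:

```python
import numpy as np

def woodward_h(n_apex, c1, c2, c3):
    """Height (angstroms) of the pyramid with the beta-lactam nitrogen
    at the apex and the three adjacent carbons as the base."""
    base_origin = np.asarray(c1, dtype=float)
    # Unit normal of the plane spanned by the three base carbons.
    v1 = np.asarray(c2, dtype=float) - base_origin
    v2 = np.asarray(c3, dtype=float) - base_origin
    normal = np.cross(v1, v2)
    normal /= np.linalg.norm(normal)
    # Perpendicular distance from the apex to the base plane.
    return abs(np.dot(np.asarray(n_apex, dtype=float) - base_origin, normal))

def cohen_c(carboxylate_c, carbonyl_o):
    """Distance (angstroms) between the carboxylate carbon and the
    beta-lactam carbonyl oxygen."""
    return float(np.linalg.norm(np.asarray(carboxylate_c, dtype=float)
                                - np.asarray(carbonyl_o, dtype=float)))

# Hypothetical coordinates (angstroms), for illustration only.
h = woodward_h(n_apex=[0.0, 0.0, 0.4], c1=[1.4, 0.0, 0.0],
               c2=[-0.7, 1.2, 0.0], c3=[-0.7, -1.2, 0.0])
c = cohen_c(carboxylate_c=[2.1, -1.0, 0.3], carbonyl_o=[-1.2, 1.5, 0.1])
print(f"Woodward h = {h:.2f} A, Cohen c = {c:.2f} A")
```

In practice such values would be taken from crystallographic coordinates; the point of the sketch is only that the correlation described above rests on two easily computed geometric quantities.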
Modes of resistance
By definition, all β-lactam antibiotics have a β-lactam ring in their structure. The effectiveness of these antibiotics relies on their ability to reach the PBP intact and their ability to bind to the PBP. Hence, there are two main modes of bacterial resistance to β-lactams:
Enzymatic hydrolysis of the β-lactam ring
If the bacterium produces the enzyme β-lactamase or the enzyme penicillinase, the enzyme will hydrolyse the β-lactam ring of the antibiotic, rendering the antibiotic ineffective. (An example of such an enzyme is NDM-1, discovered in 2009.) The genes encoding these enzymes may be inherently present on the bacterial chromosome or may be acquired via plasmid transfer (plasmid-mediated resistance), and β-lactamase gene expression may be induced by exposure to β-lactams.
The production of a β-lactamase by a bacterium does not necessarily rule out all treatment options with β-lactam antibiotics. In some instances, β-lactam antibiotics may be co-administered with a β-lactamase inhibitor. For example, Augmentin (FGP) is made of amoxicillin, a β-lactam antibiotic, and clavulanic acid, a β-lactamase inhibitor. The clavulanic acid is designed to overwhelm all β-lactamase enzymes, bind irreversibly to them, and effectively serve as an antagonist so that the amoxicillin is not affected by the β-lactamase enzymes.
However, in all cases where infection with β-lactamase-producing bacteria is suspected, the choice of a suitable β-lactam antibiotic should be carefully considered prior to treatment. In particular, choosing appropriate β-lactam antibiotic therapy is of utmost importance against organisms with inducible β-lactamase expression. If β-lactamase production is inducible, then failure to use the most appropriate β-lactam antibiotic therapy at the onset of treatment will result in induction of β-lactamase production, thereby making further efforts with other β-lactam antibiotics more difficult.
Possession of altered penicillin-binding proteins
As a response to increased efficacy of β-lactams, some bacteria have changed the proteins to which β-lactam antibiotics bind. β-Lactams cannot bind as effectively to these altered PBPs, and, as a result, the β-lactams are less effective at disrupting cell wall synthesis. Notable examples of this mode of resistance include methicillin-resistant Staphylococcus aureus (MRSA) and penicillin-resistant Streptococcus pneumoniae. Altered PBPs do not necessarily rule out all treatment options with β-lactam antibiotics.
β-Lactams are classified according to their core ring structures. For example:
- β-Lactams fused to thiazolidine rings are named penams.
- β-Lactams fused to 2,3-dihydrothiazole rings are named penems.
- β-Lactams fused to 2,3-dihydro-1H-pyrrole rings are named carbapenems.
- β-Lactams fused to 3,6-dihydro-2H-1,3-thiazine rings are named cephems.
- β-Lactams not fused to any other ring are named monobactams.
To date, two distinct methods of biosynthesizing the β-lactam core of this family of antibiotics have been discovered. The first pathway discovered was that of the penams and cephems. This path begins with a nonribosomal peptide synthetase (NRPS), ACV synthetase (ACVS), which generates the linear tripeptide δ-(L-α-aminoadipyl)-L-cysteine-D-valine (ACV). ACV is oxidatively cyclized (two cyclizations by a single enzyme) to bicyclic intermediate isopenicillin N by isopenicillin N synthase (IPNS) to form the penam core structure. Various transamidations lead to the different natural penicillins.
The biosynthesis of cephems branches off at isopenicillin N by an oxidative ring expansion to the cephem core. The variety of cephalosporins and cephamycins comes from different transamidations, as is the case for the penicillins.
While the ring closure in penams and cephems is between positions 1 and 4 of the β-lactam and is oxidative, the clavams and carbapenems have their rings closed by two-electron processes between positions 1 and 2 of the ring. β-Lactam synthetases are responsible for these cyclizations, and the carboxylate of the open-ring substrates is activated by ATP. In clavams, the β-lactam is formed prior to the second ring; in carbapenems, the β-lactam ring is closed second in sequence.
The biosynthesis of the β-lactam ring of tabtoxin mirrors that of the clavams and carbapenems. The closure of the lactam ring in the other monobactams, such as sulfazecin and the nocardicins, may involve a third mechanism involving inversion of configuration at the β-carbon.
See also
- List of β-lactam antibiotics
- ATC code J01C Beta-lactam antibacterials, penicillins
- ATC code J01D Other beta-lactam antibacterials
- Cell wall
- Discovery and development of cephalosporins
- History of penicillin
- Holten KB, Onusko EM (August 2000). "Appropriate prescribing of oral beta-lactam antibiotics". American Family Physician 62 (3): 611–20. PMID 10950216.
- Elander, R. P. (2003). "Industrial production of beta-lactam antibiotics". Applied microbiology and biotechnology 61 (5–6): 385–392. doi:10.1007/s00253-003-1274-y. PMID 12679848.
- Rossi S (Ed.) (2004). Australian Medicines Handbook 2004. Adelaide: Australian Medicines Handbook. ISBN 0-9578521-4-2.
- Pichichero ME (April 2005). "A review of evidence supporting the American Academy of Pediatrics recommendation for prescribing cephalosporin antibiotics for penicillin-allergic patients". Pediatrics 115 (4): 1048–57. doi:10.1542/peds.2004-1276. PMID 15805383.
- Fisher, J. F.; Meroueh, S. O.; Mobashery, S. (2005). "Bacterial Resistance to β-Lactam Antibiotics: Compelling Opportunism, Compelling Opportunity†". Chemical Reviews 105 (2): 395–424. doi:10.1021/cr030102i. PMID 15700950.
- Kasten, Britta and Reski, Ralf (1997). "β-lactam antibiotics inhibit chloroplast division in a moss (Physcomitrella patens) but not in tomato (Lycopersicon esculentum)". Journal of Plant Physiology 150: 137–140.
- Nangia, A.; Biradha, K.; Desiraju, G.R. (1996). "Correlation of biological activity in β-lactam antibiotics with Woodward and Cohen structural parameters: A Cambridge database study". J Chem Soc, Perkin Trans 2 (5): 943–53.
- Woodward, R.B. (1980) "Penems and related substances." Phil Trans Royal Soc Chem B 289(1036), 239–50.
- Cohen, N.C. (1983) "β-lactam antibiotics: Geometrical requirements for anti-bacterial activities." J Med Chem 26(2), 259–64.
- Drawz, S. M.; Bonomo, R. A. (2010). "Three Decades of β-Lactamase Inhibitors". Clinical Microbiology Reviews 23 (1): 160–201. doi:10.1128/CMR.00037-09. PMC 2806661. PMID 20065329.
- Dalhoff, A.; Janjic, N.; Echols, R. (2006). "Redefining penems". Biochemical Pharmacology 71 (7): 1085–1095. doi:10.1016/j.bcp.2005.12.003. PMID 16413506.
- Lundberg, M.; Siegbahn, P. E. M.; Morokuma, K. (2008). "The Mechanism for Isopenicillin N Synthase from Density-Functional Modeling Highlights the Similarities with Other Enzymes in the 2-His-1-carboxylate Family†". Biochemistry 47 (3): 1031–1042. doi:10.1021/bi701577q. PMID 18163649.
- Bachmann, B. O.; Li, R.; Townsend, C. A. (1998). "Β-Lactam synthetase: A new biosynthetic enzyme". Proceedings of the National Academy of Sciences of the United States of America 95 (16): 9082–9086. doi:10.1073/pnas.95.16.9082. PMC 21295. PMID 9689037.
- Townsend, CA; Brown, AM; Nguyen, LT (1983). "Nocardicin A: Stereochemical and biomimetic studies of monocyclic β-lactam formation". Journal of the American Chemical Society 105 (4): 919–927. | 2026-01-22T18:13:39.945138 |
786,287 | 3.873621 | http://archive.hpcwire.com/hpcwire/2013-08-26/preparing_for_solar_storms.html | August 26, 2013
In the absence of the sun, life as we know it would not exist. In addition to providing just the right amount of heat and light for third planet inhabitants, the sun is responsible for circadian rhythms, vitamin D production and photosynthesis. However, this life-sustaining orb also carries the potential for severe destruction. Via a phenomenon known as solar wind, the sun ejects a sea of protons, electrons and ionized atoms in all directions at speeds of a million miles per hour or more. If these particles were to reach earth, the radiation would threaten human health, while the massive onslaught of charged particles would disrupt power grids, communication networks and electronic devices.
3D global hybrid simulation of Earth's magnetosphere. Magnetic field lines are color coded based on their origin and termination point. Courtesy: H. Karimabadi and B. Loring. Source: NICS
Solar wind is just one kind of space weather, the term for environmental conditions in near-Earth space. Solar flares, explosive storms that occur on the surface of the sun, eject blasts of charged particles with a ferocity equivalent to 10 million volcanic eruptions. Less frequent, but even more dangerous than solar wind or flares, are coronal mass ejections, or CMEs. These eruptions of plasma from inside the sun's corona can set off space-weather events called geomagnetic storms that can wreak havoc on our planet's inhabitants and its technology.
This non-stop space attack is held in check by a natural shield, a magnetic field known as the magnetosphere. Created by Earth's magnetic dipole, this field extends out into space for 37,000 miles. The magnetosphere stops most charged particles from entering Earth's atmosphere. However, it is not a perfect solution. Enough solar particles get through the magnetic net to pose a serious hazard to power grids, communication networks and living creatures.
Supercomputing to the rescue
A research group led by Homa Karimabadi of the University of California, San Diego, is investigating the effects of space weather on the magnetosphere.
"Earth's magnetic field provides a protective cocoon, but it breaks during strong solar storms," explains Karimabadi.
Karimabadi teamed up with visualization specialist Burlen Loring of Lawrence Berkeley National Laboratory (LBNL) to create a topological map of Earth's magnetosphere, using the supercomputing resources at National Institute for Computational Science (NICS).
"The 'topomap' helps us find the location of the magnetic field lines from different sources [for example, the magnetic field of Earth versus the magnetic field of the solar wind]," said Karimabadi.
Currently, researchers can track storms, but there are no tools available for predicting storms. In this first-of-its-kind project, the researchers will leverage the map to build global kinetic simulations of the magnetosphere and space-weather effects.
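The article gives no implementation details, but the core idea of a field-line "topomap" can be sketched: trace each magnetic field line in both directions and label it by where its two ends terminate (Earth's surface versus the outer boundary, standing in for the solar wind). The toy below uses a bare dipole field, Euler integration, and illustrative boundaries; it is an assumption-laden sketch, not the researchers' actual hybrid-simulation method:

```python
import numpy as np

RE = 1.0  # Earth radius, in normalized units

def dipole_b(r):
    """Magnetic dipole field; only its direction matters for tracing."""
    rr = np.linalg.norm(r)
    m = np.array([0.0, 0.0, -1.0])  # dipole moment along -z
    return (3.0 * np.dot(m, r) * r / rr**5) - m / rr**3

def trace(seed, direction, step=0.05, max_steps=20000, r_out=15.0):
    """Follow a field line until it hits Earth or the outer boundary.
    Returns 'earth', 'boundary', or 'open' (ran out of steps)."""
    r = np.asarray(seed, dtype=float)
    for _ in range(max_steps):
        b = dipole_b(r)
        r = r + direction * step * b / np.linalg.norm(b)  # Euler step along B
        rad = np.linalg.norm(r)
        if rad <= RE:
            return "earth"
        if rad >= r_out:
            return "boundary"
    return "open"

def classify(seed):
    """Topology label from where the two ends of the line terminate."""
    ends = {trace(seed, +1.0), trace(seed, -1.0)}
    if ends == {"earth"}:
        return "closed (Earth-Earth)"
    if "earth" in ends:
        return "half-open (Earth-solar wind)"
    return "detached (solar wind only)"

for seed in ([0.0, 0.0, 2.0], [5.0, 0.0, 0.0], [12.0, 0.0, 8.0]):
    print(seed, "->", classify(seed))
```

In a real global simulation the analytic dipole would be replaced by the simulated field sampled on a grid, and the classification would be run over millions of seed points, which is where the data-analysis burden described below comes from.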
The simulations are both compute- and data-intensive. A single job can require 100,000 central processing units and take 48 hours or longer to complete. It's only since the advent of petascale supercomputers like Jaguar, Nautilus and Kraken that such complex phenomena can begin to be unraveled. And according to Karimabadi, even the fastest computers of our era still fall short. It's ultimately an exascale problem, he says.
"Handling and analysis of massive datasets resulting from our simulations are quite challenging. Partnering with the [visualizations] group at LBNL has been critical in developing tools to analyze our data sets," says Karimabadi.
Improved predictive capabilities are crucial in order to prepare for space-weather events. A feature article on the research notes that "a better understanding of how space weather affects our magnetosphere allows scientists to more accurately predict the impact of solar activity on our planet."
Last July, a geomagnetic superstorm, spawned by a CME, was narrowly avoided. If the storm had taken place only a few days sooner, there would likely have been far-reaching consequences. Such storms have the potential to take down power grids on a national or even international scale.
As previous events (in 1859 and 1989) have taught us, the danger is real. "There is an urgent need to develop accurate forecasting models," Karimabadi asserts. "A severe space-weather effect can have dire financial and national-security consequences, and can disrupt our everyday lives on a scale that has never been experienced by humanity before."
| 2026-01-30T10:26:53.738664 |
1,056,721 | 3.50124 | http://en.wikipedia.org/wiki/Board_game | A board game is a game that involves counters or pieces moved or placed on a pre-marked surface or "board", according to a set of rules. Games can be based on pure strategy, chance (e.g. rolling dice), or a mixture of the two, and usually have a goal that a player aims to achieve. Early board games represented a battle between two armies, and most current board games are still based on defeating opposing players in terms of counters, winning position, or accrual of points (often expressed as in-game currency).
There are many different types and styles of board games. Their representation of real-life situations can range from having no inherent theme (e.g. checkers), to having a specific theme and narrative (e.g. Cluedo). Rules can range from the very simple (e.g. Tic-tac-toe) to those describing a game universe in great detail (e.g. Dungeons & Dragons) – although most of the latter are role-playing games where the board is secondary to the game, serving to help visualize the game scenario.
The amount of time required to learn to play or master a game varies greatly from game to game. Learning time does not necessarily correlate with the number or complexity of rules; some games with profound strategies (e.g. chess or Go) have relatively simple rulesets.
Board games have been played in most cultures and societies throughout history; some even pre-date literacy skill development in the earliest civilizations. A number of important historical sites, artifacts, and documents shed light on early board games. These include:
- Jiroft civilization gameboards
- Senet, found in Predynastic and First Dynasty burials of Egypt, c. 3500 BC and 3100 BC respectively; the oldest board game known to have existed, Senet was pictured in a fresco found in Merknera's tomb (3300–2700 BC)
- Mehen, another ancient board game from Predynastic Egypt
- Go, an ancient board game originating in China
- Patolli, a board game originating in Mesoamerica played by the ancient Aztec
- Royal Game of Ur, the Royal Tombs of Ur contain this game, among others
- Buddha games list, the earliest known list of games
- Pachisi and Chaupar, ancient board games of India
- c. 3500 BC: Senet is played in Predynastic Egypt as evidenced by its inclusion in burial sites. Senet is also depicted in the tomb of Merknera.
- c. 3000 BC: Mehen board game from Predynastic Egypt, played with lion-shaped game pieces and marbles
- c. 3000 BC: Ancient backgammon set, found in the Burnt City in Iran
- c. 2560 BC: Board of the Royal Game of Ur (found at Ur Tombs)
- c. 2500 BC: Paintings of Senet and Han being played depicted in the tomb of Rashepes
- c. 1500 BC: Painting of board game at Knossos
- c. 500 BC: The Buddha games list mentions board games played on 8 or 10 rows.
- c. 500 BC: The earliest reference to Pachisi in the Mahabharata, the Indian epic
- c. 400 BC: Two ornately decorated liubo gameboards from a royal tomb of the State of Zhongshan in China
- c. 400 BC: The earliest written reference to Go (weiqi) in the historical annal Zuo Zhuan; Go mentioned in the Analects of Confucius (c. 5th century BC)
- 116–27 BC: Marcus Terentius Varro's Lingua Latina X (II, par. 20) contains earliest known reference to Latrunculi (often confused with Ludus Duodecim Scriptorum, Ovid's game mentioned below)
- 1 BC – 8 AD: Ovid's Ars Amatoria contains earliest known reference to Ludus Duodecim Scriptorum
- 1 BC – 8 AD: The Roman Game of Kings, of which little is known, is more or less a contemporary of Latrunculi.
- c. 43 AD: The Stanway Game is buried with the Druid of Colchester.
- c. 200 AD: A stone Go board with a 17×17 grid from a tomb at Wangdu County in Hebei, China
- 220–265: Backgammon enters China under the name t'shu-p'u (source: Hun Tsun Sii).
- c. 400 onwards: Tafl games played in Northern Europe
- c. 600: The earliest references to chaturanga written in Subandhu's Vasavadatta and Banabhatta's Harsha Charitha early Indian books
- c. 600: The earliest reference to shatranj written in Karnamak-i-Artakhshatr-i-Papakan
- c. 700: Date of the oldest evidence of Mancala games, found in Matara, Eritrea and Yeha
- c. 800–900: The earliest reference to Quirkat or alquerque referred to in Abu al-Faraj al-Isfahani's Kitab al-Aghani ("Book of Songs")
- c. 1283: Alfonso X of Castile in Spain commissioned the Libro de ajedrez, dados, y tablas (Libro de los juegos, "The Book of Games"), translated into Castilian from Arabic, and added illustrations with the goal of perfecting the work.
- 1759: A Journey Through Europe published by John Jefferys, the earliest board game with a designer whose name is known
- 1874: Parcheesi is trademarked by Selchow & Righter.
- c. 1930: Monopoly stabilises into the version that is popular today.
- 1931: The first commercial version of Battleship is published under the name "Salvo".
- 1938: The first version of Scrabble is published by Alfred Butts under the name "Criss-Crosswords".
- 1957: Risk is released.
- 1970: Mastermind is designed by Mordecai Meirowitz.
- c. 1980: German-style board games begin to develop as a genre.
- 1995: The Settlers of Catan is first published in Germany.
- 1996: Fischerandom chess (Chess960) is publicly announced in Argentina by Bobby Fischer.
Many board games are now available as video games, which can include the computer itself as one of several players, or as a sole opponent. The rise of computer use is one of the reasons said to have led to a relative decline in board games. Many board games can now be played online against a computer and/or other players. Some websites allow play in real time and immediately show the opponents' moves, while others use email to notify the players after each move. Modern technology (the internet and cheaper home printing) has also influenced board games via the phenomenon of print-and-play board games that players buy and print themselves.
Some board games make use of components in addition to—or instead of—a board and playing pieces. Some games use CDs, video cassettes, and, more recently, DVDs in accompaniment to the game.
Around the year 2000, the board gaming industry began to grow, with companies such as Fantasy Flight Games, Z-Man Games, and Indie Boards and Cards publishing new games for a growing worldwide audience.
In seventeenth- and eighteenth-century colonial America, the agrarian life of the country left little time for game playing, though draughts (checkers), bowling, and card games were not unknown. The Pilgrims and Puritans of New England frowned on game playing and viewed dice as instruments of the devil. When Governor William Bradford discovered a group of non-Puritans playing stool-ball, pitching the bar, and pursuing other sports in the streets on Christmas Day, 1622, he confiscated their implements, reprimanded them, and told them their devotion for the day should be confined to their homes.
Early United States
In Thoughts on Lotteries (1826) Thomas Jefferson wrote:
Almost all these pursuits of chance [i.e., of human industry] produce something useful to society. But there are some which produce nothing, and endanger the well-being of the individuals engaged in them or of others depending on them. Such are games with cards, dice, billiards, etc. And although the pursuit of them is a matter of natural right, yet society, perceiving the irresistible bent of some of its members to pursue them, and the ruin produced by them to the families depending on these individuals, consider it as a case of insanity, quoad hoc, step in to protect the family and the party himself, as in other cases of insanity, infancy, imbecility, etc., and suppress the pursuit altogether, and the natural right of following it. There are some other games of chance, useful on certain occasions, and injurious only when carried beyond their useful bounds. Such are insurances, lotteries, raffles, etc. These they do not suppress, but take their regulation under their own discretion.
The board game, Traveller's Tour Through the United States was published by New York City bookseller F. Lockwood in 1822 and today claims the distinction of being the first board game published in the United States.
As the United States shifted from agrarian to urban living in the nineteenth century, greater leisure time and a rise in income became available to the middle class. The American home, once the center of economic production, became the locus of entertainment, enlightenment, and education under the supervision of mothers. Children were encouraged to play board games that developed literacy skills and provided moral instruction.
The earliest board games published in the United States were based upon Christian morality. The Mansion of Happiness (1843), for example, sent players along a path of virtues and vices that led to the Mansion of Happiness (Heaven). The Game of Pope or Pagan, or The Siege of the Stronghold of Satan by the Christian Army (1844) pitted an image on its board of a Hindu woman committing suttee against missionaries landing on a foreign shore. The missionaries are cast in white as "the symbol of innocence, temperance, and hope" while the pope and pagan are cast in black, the color of "gloom of error, and...grief at the daily loss of empire".
Commercially produced board games in the mid-nineteenth century were monochrome prints laboriously hand-colored by teams of low-paid young factory women. Advances in paper making and printmaking during the period enabled the commercial production of relatively inexpensive board games. The most significant advance was the development of chromolithography, a technological achievement that made bold, richly colored images available at affordable prices. Games cost as little as US$.25 for a small boxed card game to $3.00 for more elaborate games.
American Protestants believed a virtuous life led to success, but the belief was challenged mid-century when Americans embraced materialism and capitalism. The accumulation of material goods was viewed as a divine blessing. In 1860, The Checkered Game of Life rewarded players for mundane activities such as attending college, marrying, and getting rich. Daily life rather than eternal life became the focus of board games. The game was the first to focus on secular virtues rather than religious virtues, and sold 40,000 copies in its first year.
Game of the District Messenger Boy, or Merit Rewarded is a board game published in 1886 by the New York City firm of McLoughlin Brothers. The game is a typical roll-and-move track board game. Players move their tokens along the track at the spin of the arrow toward the goal at track's end. Some spaces on the track will advance the player while others will send him back.
In the affluent 1880s, Americans witnessed the publication of Algeresque rags to riches games that permitted players to emulate the capitalist heroes of the age. One of the first such games, The Game of the District Messenger Boy, encouraged the idea that the lowliest messenger boy could ascend the corporate ladder to its topmost rung. Such games insinuated that the accumulation of wealth brought increased social status. Competitive capitalistic games culminated in 1935 with Monopoly, the most commercially successful board game in United States history.
McLoughlin Brothers published similar games based on the telegraph boy theme including Game of the Telegraph Boy, or Merit Rewarded (1888). Greg Downey notes in his essay, "Information Networks and Urban Spaces: The Case of the Telegraph Messenger Boy" that families who could afford the deluxe version of the game in its chromolithographed, wood-sided box would not "have sent their sons out for such a rough apprenticeship in the working world."
Luck, strategy, and diplomacy
Most board games involve both luck and strategy, and an important distinguishing feature is the amount of randomness involved as opposed to skill. Some games, such as chess, depend almost entirely on player skill, while many children's games are decided mainly by luck: for example, Candy Land and Chutes and Ladders require no decisions by the players. A player may be hampered by a few poor rolls of the dice in backgammon, Risk, Monopoly, or cribbage, but over many games a skilled player will win more often. While some purists consider luck to be an undesirable component of a game, others counter that elements of luck can make for far more diverse and multi-faceted strategies, as concepts such as expected value and risk management must be considered.
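To make the luck-versus-skill point concrete, here is a small hypothetical simulation (not drawn from any named game): a race-to-a-target dice game in which one player receives a modest fixed bonus per turn, standing in for skill. Any single game is noisy, but over many games the bonus dominates:

```python
import random

def play_game(skill_edge=0.5, target=50, rng=random):
    """One race-to-target game: each turn a player rolls a die; the
    skilled player adds a small fixed bonus (and also moves first,
    which is itself a slight edge in this toy)."""
    skilled, unskilled = 0.0, 0.0
    while True:
        skilled += rng.randint(1, 6) + skill_edge
        if skilled >= target:
            return "skilled"
        unskilled += rng.randint(1, 6)
        if unskilled >= target:
            return "unskilled"

def win_rate(n_games, **kwargs):
    """Fraction of games won by the skilled player."""
    wins = sum(play_game(**kwargs) == "skilled" for _ in range(n_games))
    return wins / n_games

random.seed(0)
for n in (10, 100, 10000):
    print(f"{n:>6} games: skilled player wins {win_rate(n):.0%}")
```

Over a handful of games the observed win rate swings widely; over thousands it settles well above 50%, which is the sense in which "over many games a skilled player will win more often."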
A second aspect is the game information available to players. Some games (chess being a classic example) are perfect information games: each player has complete information on the state of the game. In other games, such as Tigris and Euphrates or Stratego, some information is hidden from players. This makes finding the best move more difficult, and may require players to estimate probabilities about the opponents' hidden information.
Another important aspect of some games is diplomacy, that is, players making deals with one another. Two-player games usually do not involve diplomacy (cooperative games being the exception). Thus, negotiation generally features only in games with three or more players. An important facet of The Settlers of Catan, for example, is convincing players to trade with you rather than with opponents. In Risk, two or more players may team up against others. Easy diplomacy involves convincing other players that someone else is winning and should therefore be teamed up against. Advanced diplomacy (e.g. in the aptly named game Diplomacy) consists of making elaborate plans together, with the possibility of betrayal.
Luck may be introduced into a game by a number of methods. The most common method is the use of dice, generally six-sided. These can decide everything from how many steps a player moves their token, as in Monopoly, to how their forces fare in battle, such as in Risk, or which resources a player gains, such as in The Settlers of Catan. Other games such as Sorry! use a deck of special cards that, when shuffled, create randomness. Scrabble does something similar with randomly picked letters. Other games use spinners, timers of random length, or other sources of randomness. Trivia games have a great deal of randomness based on the questions a player must answer. German-style board games are notable for often having less luck element than many North American board games.
While there has been a fair amount of scientific research on the psychology of older board games (e.g., chess, Go, mancala), less has been done on contemporary board games such as Monopoly, Scrabble, and Risk. Much research has been carried out on chess, in part because many tournament players are publicly ranked in national and international lists, which makes it possible to compare their levels of expertise. The works of Adriaan de Groot, William Chase, Herbert A. Simon, and Fernand Gobet have established that knowledge, more than the ability to anticipate moves, plays an essential role in chess-playing. This seems to be the case in other traditional games such as Go and Oware (a mancala game), but data is lacking in regard to contemporary board games.
With crime you deal with every basic human emotion and also have enough elements to combine action with melodrama. The player’s imagination is fired as they plan to rob the train. Because of the gamble they take in the early stage of the game there is a build up of tension, which is immediately released once the train is robbed. Release of tension is therapeutic and useful in our society, because most jobs are boring and repetitive.
Linearly arranged board games have been shown to improve children's spatial numerical understanding. This is because such games resemble a number line: they promote a linear understanding of numbers rather than the innate logarithmic one.
There are a number of different categories that board games can be classified into, although considerable overlap exists, and a game may belong in several categories. The following is a list of some of the most common:
- Abstract strategy games – e.g. chess, checkers, Go, Reversi, Tafl games, or modern games such as Abalone, Stratego, Hive, or GIPF
- Alignment games – e.g. Renju, Gomoku, Connect6, Nine Men's Morris, or Tic-tac-toe
- Auction games – e.g. Hoity Toity
- Chess variants – e.g. Grand Chess or xiangqi
- Configuration games – e.g. Lines of Action, Hexade, or Entropy
- Connection games – e.g. Havannah or Hex
- Cooperative games – e.g. Max the Cat, Caves and Claws, or Pandemic
- Cross and circle games – e.g. Yut, Ludo, or Aggravation
- Deduction games – e.g. Mastermind or Black Box
- Dexterity games – e.g. Tumblin' Dice or Pitch Car
- Economic simulation games – e.g. The Business Game
- Educational games – e.g. Arthur Saves the Planet, Cleopatra and the Society of Architects, or Shakespeare: The Bard Game
- Elimination games – e.g. Yoté, Alquerque, Fanorona, or draughts
- Family games – e.g. Roll Through the Ages, Birds on a Wire, or For Sale
- Fantasy games – e.g. Shadows Over Camelot
- German-style board games or Eurogames – e.g. The Settlers of Catan, Carson City, or Puerto Rico
- Historical simulation games – e.g. Through the Ages or Railways of the World
- Large multiplayer games – e.g. Take It Easy or Swat (2010)
- Learning/communication non-competitive games – e.g. The Ungame (1972)
- Mancala games – e.g. Oware or The Glass Bead Game
- Musical games – e.g. Spontuneous
- Negotiation games – e.g. Diplomacy
- Paper-and-pencil games – e.g. Tic-tac-toe or Dots and Boxes
- Physical skill games – e.g. Camp Granada
- Position games (no captures; win by leaving the opponent unable to move) – e.g. Mū Tōrere or the L game
- Race games – e.g. Pachisi, backgammon, or Worm Up
- Roll-and-move games – e.g. Monopoly or Life
- Share-buying games (games in which players buy stakes in each other's positions) – typically longer economic-management games
- Single-player puzzle games – e.g. Peg solitaire or Sudoku
- Spiritual development games (games with no winners or losers) – e.g. Transformation Game or Psyche's Key
- Stacking games – e.g. Lasca or DVONN
- Territory games – e.g. Go or Reversi
- Tile-based games – e.g. Scrabble, Tigris and Euphrates, or Evo
- Train games – e.g. Ticket to Ride, Steam, or 18xx
- Trivia games – e.g. Trivial Pursuit
- Two-player-only themed games – e.g. En Garde or Dos de Mayo
- Unequal forces (or "hunt") games – e.g. Fox and Geese or Tablut
- Wargames – ranging from Risk, Diplomacy, or Axis & Allies, to Attack! or Conquest of the Empire
- Word games – e.g. Scrabble, Boggle, or What's My Word? (2010)
- bit: see piece.
- board: see gameboard.
- capture: a method that removes another player's piece(s) from the board. For example: in checkers, if a player jumps the opponent's piece, that piece is captured. In some games, captured pieces remain in hand and can be reentered into active play (e.g. shogi, Bughouse chess).
- card: a piece of cardboard often bearing instructions, and usually chosen randomly from a deck by shuffling.
- cell: see hex and space.
- counter: see piece.
- custodian capture (or custodial capture): a capture method whereby an enemy piece is captured by being blocked on adjacent sides by opponent pieces. (Typically laterally by two sides as in Tablut and Hasami shogi, or laterally by four sides as in Go.)
- deck: a stack of cards.
- die/dice: modern six-sided dice are used to generate random numbers in many games – e.g. one die per player in backgammon, two dice per player in Monopoly. Role-playing games typically use one or more polyhedral dice. Games such as Pachisi and chaupur traditionally use cowrie shells. The games Zohn Ahl and Hyena chase use dice sticks. The game yut uses yut sticks.
- displacement capture: a capture method whereby a capturing piece replaces the captured piece on its square, cell, or point on the gameboard.
- game piece: see piece.
- gameboard: the (usually quadrilateral) surface on which one plays a board game. The namesake of the board game, gameboards would seem to be a necessary and sufficient condition of the genre, though card games that do not use a standard deck of cards (as well as games that use neither cards nor a gameboard) are often colloquially included. Most games use a standardized and unchanging board (chess, Go, and backgammon each have such a board), but many games use a modular board whose component tiles or cards can assume varying layouts from one session to another, or even while the game is played.
- gameplay: can refer to a game's strategy, tactics, conventions, or mechanics.
- gamespace: a gameboard for a three-dimensional game. (E.g., the 5×5×5 cubic gameboard for Raumschach.)
- hex: in hexagon-based board games, this is the common term for a standard space on the board. This is most often used in wargaming, though many abstract strategy games such as Abalone, Agon, hexagonal chess, GIPF Project games, and connection games use hexagonal layouts.
- in hand: a piece "in hand" is one not currently in play on the gameboard, but which may be entered into play on a turn. Examples are captured pieces in shogi or Bughouse chess, able to be "dropped" into play as a move; or pieces initially in hand at the start of the game, e.g. in the game Chessence.
- jump: to bypass one or more pieces or spaces. Depending on the context, jumping may also involve capturing or conquering an opponent's piece. See also Game mechanic#Capture/eliminate.
- leap: see jump.
- man: see piece.
- meeple: see piece.
- move: see turn.
- orthogonal: a horizontal (straight left or right) or vertical (straight forward or backward) direction a piece moves on a gameboard.
- pass: the voluntary or involuntary forfeiture of a turn by a player.
- piece (or bit, checker, chip, counter, draughtsman, game piece, man, meeple, mover, pawn, player piece, playing piece, token, unit): a player's representative on the gameboard, made of material shaped to look like a known object (such as a scale model of a person, animal, or inanimate object) or a more general symbol. Each player may control one or more pieces. Some games involve commanding multiple pieces, such as chess pieces or Monopoly houses and hotels, that have unique designations and capabilities within the parameters of the game; in other games, such as Go, all pieces controlled by a player have the same capabilities. In some modern board games, such as Clue, there are also pieces that are not a player's representative (i.e. the weapons). In some games, pieces may not represent or belong to any particular player. See also Counter (board wargames).
- player: the participant(s) in the game. See also Player (game).
- point: see space.
- polyhedral dice: see die/dice.
- replacement capture: see displacement capture.
- rule: a condition or stipulation by which a game is played.
- ruleset: the comprehensive set of rules which define and govern a game.
- space: a physical unit of progress on a gameboard delimited by a distinct border, and not further divisible according to the game's rules. Alternatively, a unique position on the board on which a piece in play may be located. For example, in Go, the pieces are placed on grid line intersections, called points, and not in the areas bounded by the borders, as in chess. The bounded area geometries can be square (e.g. chess), rectangular (e.g. shogi), hexagonal (e.g. Chinese checkers), triangular (e.g. Bizingo), quadrilateral (e.g. Three-player chess), or other shapes (e.g. Circular chess). See also Game mechanic#Movement.
- square: see space.
- token: see piece.
- turn: a player's opportunity to move a piece or make a decision that influences gameplay. Turns to move usually alternate equally between competing players or teams. See also Turn-based game.
- Going Cardboard—a documentary, including interviews with game designers and game publishers
- Interactive movie—DVD games
- History of games
- List of board games
- List of game manufacturers
- Mind sport
- Tabletop game
- Snakes and Lattes—a board game café
- BoardGameGeek—a board game community and website database
- Piccione, Peter A. (July–August 1980). "In Search of the Meaning of Senet". Archaeology: 55–58. Retrieved 2008-06-23.
- "''Okno do svita deskovych her''". Hrejsi.cz. 1998-04-27. Retrieved 2010-02-12.
- "World's Oldest Backgammon Discovered In Burnt City". Payvand News. December 4, 2004. Retrieved 2010-05-07.
- Schädler, Ulrich; Dunn-Vaturi, Anne-Elizabeth. "BOARD GAMES in pre-Islamic Persia". Encyclopædia Iranica. Retrieved 2010-05-07.
- Brumbaugh, Robert S. (1975). "The Knossos Game Board". American Journal of Archaeology (Archaeological Institute of America) 79 (2): 135–137. doi:10.2307/503893. JSTOR 503893. Retrieved 2008-06-23.
- Rawson, Jessica (1996). Mysteries of Ancient China. London: British Museum Press. pp. 159–161. ISBN 0-7141-1472-3.
- "Confucius". Senseis.xmp.net. 2006-09-23. Retrieved 2010-02-12.
- "Varro: Lingua Latina X". Thelatinlibrary.com. Retrieved 2010-02-12.
- Games Britannia – 1. Dicing with Destiny, BBC Four, 1:05am Tuesday 8 December 2009
- John Fairbairn's Go in Ancient China
- Murray 1951, pp.56, 57.
- Burns, Robert I. "Stupor Mundi: Alfonso X of Castile, the Learned." Emperor of Culture: Alfonso X the Learned of Castile and His Thirteenth-Century Renaissance. Ed. Robert I. Burns. Philadelphia: University of Pennsylvania P, 1990. 1–13.
- Sonja Musser Golladay, "Los Libros de Acedrex Dados E Tablas: Historical, Artistic and Metaphysical Dimensions of Alfonso X’s Book of Games" (PhD diss., University of Arizona, 2007), 31. Although Golladay is not the first to assert that 1283 is the finish date of the Libro de Juegos, the a quo information compiled in her dissertation consolidates the range of research concerning the initiation and completion dates of the Libro de Juegos.
- Smith, Quinti (October 2012). "The Board Game Golden Age". Retrieved 2013-05-10.
- Jensen, Jennifer (2003). "Teaching Success Through Play: American Board and Table Games, 1840-1900". Magazine Antiques. bnet. Retrieved 2009-02-07.
- Fessenden, Tracy (2007). Culture and Redemption: Religion, the Secular, and American Literature. Princeton University Press. p. 271. Retrieved 2009-02-07.
- Hofer, Margaret K. (2003). The Games We Played: The Golden Age of Board & Table Games. Princeton Architectural Press. Retrieved 2009-02-07.
- Weber, Susan, and Susie McGee (n.d.). "History of the Game Monopoly". Archived from the original on 10 February 2009. Retrieved 2009-02-03.
- Downey, Greg (November 1999). "Information Networks and Urban Spaces: The Case of the Telegraph Messenger Boy". Antenna. Mercurians. Retrieved 2009-02-07.
- de Voogt, Alex, & Retschitzki, Jean (2004). Moves in mind: The psychology of board games. Psychology Press. ISBN 1-84169-336-7.
- Stealing the show. Toy Retailing News, Volume 2 Number 4 (December 1976), p. 2
- "Playing Linear Number Board Games—But Not Circular Ones—Improves Low-Income Preschoolers’ Numerical Understanding".
- Austin, Roland G. "Greek Board Games." Antiquity 14. September 1940: 257–271
- Bell, R. C. (1983). The Boardgame Book. Exeter Books. ISBN 0-671-06030-9.
- Bell, R. C. (1979) [1st Pub. 1960, Oxford University Press, London]. Board and Table Games From Many Civilizations I (Revised ed.). Dover Publications Inc. ISBN 0-671-06030-9.
- Bell, R. C. (1979) [1st Pub. 1969, Oxford University Press, London]. Board and Table Games From Many Civilizations II (Revised ed.). Dover Publications Inc. ISBN 0-671-06030-9.
- Diagram Group (1975). Midgley, Ruth, ed. The Way to Play. Paddington Press Ltd. ISBN 0-8467-0060-3.
- Falkener, Edward (1961). Games Ancient and Oriental and How to Play Them. Dover Publications Inc. ISBN 0-486-20739-0.
- Fiske, Willard. Chess in Iceland and in Icelandic Literature—with historical notes on other table-games. Florentine Typographical Society, 1905.
- Gobet, Fernand; de Voogt, Alex, & Retschitzki, Jean (2004). Moves in mind: The psychology of board games. Psychology Press. ISBN 1-84169-336-7.
- Golladay, Sonja Musser, "Los Libros de Acedrex Dados E Tablas: Historical, Artistic and Metaphysical Dimensions of Alfonso X’s Book of Games" (PhD diss., University of Arizona, 2007)
- Gordon, Stewart (July–August 2009). "The Game of Kings". Saudi Aramco World (Houston: Aramco Services Company) 60 (4): 18–23. (PDF version)
- Grunfeld, Frederic V. (1975). Games of the World. Holt, Rinehart and Winston. ISBN 0-03-015261-5.
- Mohr, Merilyn Simonds (1997). The New Games Treasury. Houghton Mifflin Company. ISBN 1-57630-058-7.
- Murray, H. J. R. (1913). A History of Chess (Reissued ed.). Oxford University Press. ISBN 0-19-827403-3.
- Murray, H. J. R. (1978). A History of Board-Games other than Chess (Reissued ed.). Hacker Art Books Inc. ISBN 0-87817-211-4.
- Parlett, David. Oxford History of Board Games. Oxford University Press, 1999. ISBN 0-19-212998-8
- Pritchard, D. B. (1982). Brain Games. Penguin Books Ltd. ISBN 0-14-00-5682-3.
- Pritchard, David (1994). The Family Book of Games. Brockhampton Press. ISBN 1-86019-021-9.
- Rollefson, Gary O., "A Neolithic Game Board from Ain Ghazal, Jordan," Bulletin of the American Schools of Oriental Research, No. 286. (May 1992), pp. 1–5.
- Sackson, Sid (1983) [1st Pub. 1969, Random House, New York]. A Gamut of Games. Arrow Books. ISBN 0-09-153340-6.
- Schmittberger, R. Wayne (1992). New Rules for Classic Games. John Wiley & Sons Inc. ISBN 978-0471536215.
- Board Games on the Open Directory Project
- International Society for Board Game Studies | 2026-02-03T13:30:37.216306 |
766,153 | 3.661565 | http://quizlet.com/3542848/organelles-structure-and-functions-flash-cards/ | a jellylike fluid inside the cell in which the organelles are suspended
The thin layer surrounding a cell's contents. Acts as a gatekeeper. Consists of a phospholipid bilayer in which proteins are embedded; communicates with other cells.
a part of the cell containing DNA and RNA and responsible for growth and reproduction
threadlike structures that carry genetic information that is passed down from one generation to the next
double membrane perforated with pores that control the flow of materials in and out of the nucleus
The organelle, located in the nucleus, where ribosomes are synthesized and partially assembled
Powerhouse of the cell, organelle that is the site of ATP (energy) production
Rough Endoplasmic Reticulum
System of internal membranes within the cytoplasm. Membranes are rough due to the presence of ribosomes. Functions in the transport of substances such as proteins within the cytoplasm
Smooth Endoplasmic Reticulum
no ribosomes are found on its surface; contains a collection of enzymes that perform special tasks, including the synthesis of membrane lipids and detoxification; the liver contains a lot of smooth ER
stack of membranes in the cell that modifies, sorts, and packages proteins from the endoplasmic reticulum
cell organelle filled with enzymes needed to break down certain materials in the cell
Contain oxidase enzymes that detoxify alcohol, hydrogen peroxide, and other harmful chemicals
specialized peroxisomes found in fat-storing tissues of plant seeds that have enzymes which convert fatty acids to sugars; used for energy until photosynthesis occurs to produce its own sugar
series of compartments that molecules pass through before reaching lysosomes; they sort the ingested molecules and recycle some back to the plasma membrane. They contain enzymes used in oxidative reactions that break down lipids and destroy toxic molecules.
formed by phagocytosis; pinches off of the plasma membrane and encloses a food particle
saclike organelles that expand to collect excess water and contract to squeeze the water out of the cell
a microscopic network of actin filaments and microtubules in the cytoplasm of many living cells that gives the cell shape and coherence
Hollow tubes made of a protein called tubulin
fine, threadlike proteins found in the cell's cytoskeleton
whiplike tails found in one-celled organisms to aid in movement
One of two tiny structures located in the cytoplasm of animal cells near the nuclear envelope; play a role in cell division.
a thin membrane around the cytoplasm of a cell
Primary Cell Wall
A relatively thin and flexible cell wall, furthest outside, that is first secreted by a plant cell
Secondary Cell Wall
in plants, a strong and durable matrix often deposited in several laminated layers for cell protection and support.
A membranous sac in a mature plant cell with diverse roles in reproduction, growth, and development.
stores starch, pigments, and toxic substances in plants
organelle found in cells of plants and some other organisms that captures the energy from sunlight and converts it into chemical energy
plastid containing pigments other than chlorophyll usually yellow or orange carotenoids
Unpigmented plastids that store starch grains; abundant in cells of stems, tubers, and seeds
A common precursor of the different plastids, including the chloroplast, chromoplast, and amyloplast.
non-membrane-bound organelles responsible for protein synthesis
short structures projecting from a cell and containing bundles of microtubules that move a cell through its surroundings or move fluid over the cell's surface
Factory= Security guard
Factory= Work area
Factory= Boss's Office
Factory= Assembly Line
Factory= Power house
Factory= Shipping, storing, packing
Factory= Clean up crew
Factory= Pulley on a machine
a part of an organism consisting of an aggregate of cells having a similar structure and function
a collection of tissues that carry out a specialized function of the body
group of organs that work together to perform a specific function | 2026-01-30T02:36:35.038674 |
231,725 | 3.619619 | http://www.centauri-dreams.org/?p=21091 | I’m delighted that we keep finding solar systems so different from our own. The discovery of two new planets that are roughly the size of the Earth just confirms the feeling — in a galaxy of dazzling fecundity, every system we look at has its own peculiarities to instruct and delight us. The system around the star called Kepler-20 (from its designation by the space observatory studying planetary transits) is a case in point. Yes, it has small, rocky worlds, but it also has three larger planets, and all five orbit closer than the orbit of Mercury in our own system. Kepler-20 is a G-class star somewhat cooler than the Sun located some 950 light years from Earth in the constellation Lyra.
Moreover, while we once assumed that smaller planets orbited close to stars while larger gas giants orbited further out in the system (again based on our own system and our assumptions about it), our new discoveries point to different scenarios. In Kepler-20 we have a system where the larger planets (all smaller than Neptune) orbit in alternating fashion with the rocky planets. We get a big planet and then a little one, then another large one, another rocky planet, followed by a third large world. Kepler-20b, 20c, and 20d are the large planets, with diameters of 24,100, 39,600, and 35,400 kilometers respectively, and orbital periods of 3.7, 10.9, and 77.6 days.
“The Kepler data are showing us some planetary systems have arrangements of planets very different from that seen in our solar system,” said Jack Lissauer, planetary scientist and Kepler science team member at NASA’s Ames Research Center in Moffett Field, Calif. “The analysis of Kepler data continues to reveal new insights about the diversity of planets and planetary systems within our galaxy.”
As you would expect, it's the small planets that command the headlines this morning, their presence seen as another step in the goal of finding an Earth-sized planet in the habitable zone. These worlds at least fit the bill in terms of size, although water on the surface is out of the question, as temperatures are thought to reach 760 degrees Celsius on the inner world and 426 degrees Celsius on the outer. With an orbit of 6.1 days, Kepler-20e is equivalent to 0.87 times the size of Earth, with a diameter of 11,100 kilometers. Kepler-20f, orbiting the host star every 19.6 days, has a diameter of 13,200 kilometers, quite close to the size of the Earth. Both planets are expected to be rocky, with masses less than 1.7 and 3 times that of Earth.
Image: This chart compares the first Earth-size planets found around a sun-like star to planets in our own solar system, Earth and Venus. NASA's Kepler mission discovered the newfound planets, called Kepler-20e and Kepler-20f. Kepler-20e is slightly smaller than Venus with a radius 0.87 times that of Earth. Kepler-20f is a bit larger than Earth at 1.03 times the radius of Earth. Venus is very similar in size to Earth, with a radius 0.95 times that of our planet. Prior to this discovery, the smallest known planet orbiting a sun-like star was Kepler-10b, with a radius 1.42 times that of Earth, which translates to 2.9 times the volume. Credit: NASA/Ames/JPL-Caltech.
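That last radius-to-volume conversion follows from volume scaling with the cube of the radius. As a quick sanity check only (the radii below are simply the ones quoted in the caption, not new data), a minimal sketch:

```python
# Volume scales with the cube of the radius: V is proportional to r**3.
# Radii (in Earth radii) are the values quoted in the caption above.
for name, radius in [("Kepler-20e", 0.87), ("Kepler-20f", 1.03), ("Kepler-10b", 1.42)]:
    volume = radius ** 3  # volume relative to Earth's
    print(f"{name}: r = {radius} R_Earth -> V = {volume:.2f} V_Earth")
# Kepler-10b: 1.42 ** 3 = 2.86, i.e. the "2.9 times the volume" in the caption.
```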
So there we are, the smallest planets yet confirmed around a Sun-like star, a likely case of planetary migration in which planets form much farther from the host star and migrate inward because of interactions with the protoplanetary disk. Although Kepler’s transit methodology is finely tuned, the radial velocity signature of planets like the smaller two around Kepler-20 is out of the detection range of current technology. Kepler’s ‘long stare’ at the starry patch in Cygnus, Lyra and Draco was able to turn up the planetary candidates here, but intense computer simulations by way of follow-up were necessary to demonstrate the likelihood of the smaller detections being planets.
The paper is Fressin et al., “Two Earth-sized planets orbiting Kepler-20,” published online in Nature (20 December 2011). Abstract available. | 2026-01-21T18:54:20.377881 |
641,671 | 4.184995 | http://kidsactivitiesblog.com/18156/early-learning | Young children learn well when they use hands-on projects. This is a great activity that promotes early learning, specifically working with numbers, but it is also very easy to make and use.
Two simple everyday objects can be combined to make an early learning tool. In this case I took an empty frosting container and some clothes pins and made a number game. The numerals were written around the top of the frosting container, and dots were drawn on the end of each clothes pin that clips onto objects.
Working with Numbers
Children have to clip the clothes pin with the correct number of dots over the right number.
Yes, this is a simple activity for working with numbers, but it involves using several early learning skills. Children not only have to understand how many semi-concrete objects (dots) are on each clothes pin; they also have to use small motor skills to open the clothes pin and keep it open until it is above the right number.
If you have a child that needs to see things displayed in a linear fashion, you may want to have them line up the clothes pins in number order before they do the matching activity on the cup.
You can make this activity self checking by placing the dots on the inside of the frosting container right behind their numeral. Children can easily peek inside to see if they have the right answer.
Oh! You could also add a handful of beans into the frosting can. Some children need to count out numbers using physical objects. The physical act of moving objects helps them build an understanding of numbers.
Adding counting beans to this early learning activity would allow it to include ways to see a number in concrete, semi-concrete, and abstract manners.
No matter how you decide to use this tool for working with numbers I think you and your child will have fun playing with numbers. | 2026-01-28T04:22:43.590097 |
871,054 | 3.925242 | http://web.uvic.ca/~mroth/teaching/445/Concepts_Elementary.htm | Teachers want their students to understand. In order to understand students have to associate new knowledge to that which they already know. Often, however, we teachers seem to be at a loss of achieving this goal, and many of our students resort to the memorization of facts which they seem to forget shortly after the next test or exam. More so, probing shows that this knowledge is disjointed, piecemeal, and fragmented. Research has indicated that the key difference between experts and novices does not lie in the number of concepts in their vocabularies. Rather, the difference lies in the hierarchical and lateral integration of the experts knowledge framework . To help students to organize concepts into a meaningful framework, Novak invented a learning heuristic. He called this heuristic concept mapping .
The heuristic of concept mapping was designed (1) to assist learners in understanding concepts and the relations between them; (2) to establish hierarchical relationships among the concepts; and (3) to recognize the propositional nature of knowledge. We use concept maps in a variety of contexts. Our students map the key concepts in an assigned reading, or they represent the findings from a laboratory investigation. Figure 1 shows a concept map which a grade six student completed after several experiments with thermometers. The concept map quite clearly shows that the student did not merely remember simple phrases, but could interconnect different aspects of the thermometer into a web of propositions. For this girl, knowledge was not simply a body of disjointed statements and fact-like propositions, but a connected whole.
[Click for Figure 1]
We found that working on concept maps encourages students to explain information that they "know" by gut feeling, but which they don't really understand. Encouraging them to make their own ideas explicit, to describe these ideas in their own words, helps students realize that they have not yet constructed a complete understanding; they have to look again, consider the ideas in the context of the initial experiment, and try to connect the ideas to each other and to their prior knowledge. Also, we found that when designing concepts maps, students frequently realize that they don't know how two ideas are related, and this leads them to develop new questions to investigate.
Much of our introduction follows the instruction prepared by Novak and Gowin (1984). We begin our instruction with two lists of words. The first list includes familiar object words, such as car, dog, chair, table, and house. The second list includes event words such as raining, running, thunderstorm, and birthday party. Projecting one list at a time, we ask the children what the words describe until they have identified the lists as describing objects and events. We then introduce the word "concept" as label for both object and event words.
In the next step, we list words such as are, where, then, with, and the. Even without much prompting, our students invariably will come up with the label "linking words" for this set of terms. These terms are used to link pairs of concepts to form propositions, that is, simple sentences that have meaning.
A fourth list of words includes proper nouns such as Toronto, Michael Jordan, Mississippi, Lake Ontario, and Christmas. We help students arrive at the distinction between proper nouns, that is names of specific places, events, and objects on the one hand, and concepts as labels for regularities on the other.
Using concepts from our lists we have children construct several simple sentences. In this way, we show the meaning of a term. To help students understand the importance of sentences for the conveyance of meaning, we ask them the following: "Can you tell me how much I know about basketball if I say 'Michael Jordan'?" By constructing simple sentences with 'Michael Jordan' we can show that some sentences which express knowledge are correct in that they are shared widely (Michael Jordan is a basketball star), while others are incorrect (Michael Jordan is a quarterback).
At this stage, we let students work in groups of three or four because they are of great support to each other. We have children identify the main concepts in an excerpt from a science textbook at the appropriate level. To keep the first concept map from becoming too complex, there should not be more than 10 to 12 concepts. So that the students can underline or highlight these concepts, we type out the text rather than using their own textbook. Then we hand to each group of students a sufficient number of 1.5" x 2" slips of paper on which they write all the concepts they identified. The students rank order these slips with concepts marked on them in order of importance. At this point, they are ready to begin their first concept map.
Using a sample concept map on a different topic projected on the wall, we point out that the most inclusive, most general concepts appear at the top of the concept map. More specific, less general and less inclusive concepts appear further down in the map. We let students read sections of this map so that they can experience how much knowledge can be represented in such a map. With the sample map still projected, the students then proceed to fan out their concept labels on the paper slips, while they talk about how pairs of concepts can be linked. When they are satisfied with their arrangements, students transfer their arrangements to their notebooks. We emphasize the use of pencils, so that they can make changes even as they link concepts.
Our students use concept maps in several different contexts. First, they concept map what they know when they do hands-on activities. Before the students begin their investigations, we have them list the concepts which they already know about the topic. After they have completed their investigation, the students add to their previous list all the concepts which they newly learned. Then they prepare a concept map which includes both their previous concepts and the new ones. In this way, the students experience for themselves how new concepts tie to those that they already had. As such, concept maps fit well into a program that makes use of the Learning Cycle, during which students invent new concepts for which the teacher provided new labels. To help students integrate these new labels into that which they already know, concept mapping is an ideal experience. Figure 2 presents a good example of a grade 6 student's map after an experiment on boiling as one example of a phase change.
[Click for Figure 2]
Concept maps are also ideal to help students understand textual materials. We sometimes ask students to read something at home. To give them an incentive to integrate what they read with what they already know, we ask them to make a concept map on their own. To make sure that they have used all the terms they had previously identified, some students prefer to list all the concepts in the margin of their paper. In this way, they can cross off those concepts already used. The concept map in Figure 3 is a map of concepts from the Silver Burdett Science 6 textbook (Mallinson, Mallinson, Smallwood, & Valentino, 1985) which summarizes a student's reading on changes of matter.
[Click for Figure 3]
We find that concept maps are especially well suited to help students construct an overview of the science content which they learned over a period of time. By providing students with a list of concepts from all the experiments and/or chapters in the reference book, we make sure that the students will map the key concepts. Through concept mapping, the students are then enabled to tie together the various ideas spread throughout the chapters of a unit. While students usually perceive science content as a sequence of topics, this activity helps them to build an integrated framework.
Concept mapping also helps us teachers to organize our thoughts about teaching a unit. It has been shown that concept maps have helped teachers from grades 4 to 8 to develop science curricula which are hierarchically arranged, integrated, and conceptually driven. For example, the curriculum within which the student activities leading to the concept map in Figure 3 were embedded was mapped by the teacher at a larger scale (zoom out) in Figure 4. For each of the concepts, we provide a series of hands-on activities. On the zoom out, we provided an indication of the various lesson topics. We draw a specific concept map for each lesson which contains not only the more abstract concepts, but also details about individual experiments. This detailed map would be very close to that which the students drew in Figure 3. Ideally, the students should have the opportunity to conduct all these investigations on their own, so that the knowledge represented in the map has a solid foundation in the experiences of the students. The concept map aids us in planning the curricular unit, and helps us to assist students in integrating all the hands-on activities into a consistent and meaningful structure.
[Click for Figure 4]
In our own work, we have been able to establish the usefulness of concept mapping in several respects. First, concept mapping in groups helps to engage students in a discourse through which they make sense of their experiences, whether these were hands-on or reading experiences. Second, in order to talk about science concepts to their peers (and the teacher), students have to externalize their own meanings. In the process, they have to evaluate, integrate, and elaborate on their understanding in ways which lead them to improve their own comprehension. Third, concept mapping allows students to integrate the various experiences in the elementary science classroom into an integrated, wholistic understanding of science. Finally, concept mapping as a collaborative activity allows us teachers to evaluate both the process and the products of students' comprehension activity: We can observe students as they try to make sense, and we have available the final products of their work to assess this understanding.
For those teachers who want to use concept maps to evaluate students and to assign marks, Novak and Gowin (1984) provided a scoring scheme. Assign one point for each valid link; assign five points for each level of valid hierarchy; and assign ten points for each valid cross-link, that is, for a link which connects various sub-hierarchies. You can divide the student's score by the score of a reference map produced by a teacher on the same topic. In some cases it can happen that students will achieve a higher score than the teacher's reference map.
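As a rough illustration only (the function and variable names below are invented for this sketch, not part of Novak and Gowin's text), the tallying just described can be written out as a short routine:

```python
def concept_map_score(valid_links, hierarchy_levels, cross_links):
    """Score a concept map with the weights quoted above:
    1 point per valid link, 5 per level of valid hierarchy,
    10 per valid cross-link."""
    return 1 * valid_links + 5 * hierarchy_levels + 10 * cross_links

# Example: a student's map compared against a teacher's reference map.
student = concept_map_score(valid_links=12, hierarchy_levels=4, cross_links=2)  # 52
teacher = concept_map_score(valid_links=15, hierarchy_levels=5, cross_links=3)  # 70
print(round(student / teacher, 2))  # 0.74 -- the ratio can also exceed 1.0
```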
If you consistently use concept maps with your students, they will soon understand how and why concept maps are constructed. Figure 5 shows a concept map about concept mapping drawn by one of our grade 6 students. It clearly shows the connections which students make between concepts and events on the one hand, and the experiments on the other. The student also expressed the importance of hierarchy, the relationship between concepts and linking words, the relative freedom in constructing the map ("CONCEPT MAPS can be any SIZE/SHAPE"), and that the concept maps help in meaningful learning.
[Click for Figure 5]
We have observed an overwhelmingly positive attitude towards science courses in general and towards concept mapping in particular. In their concept maps about science, the students express that science is fun. They like concept mapping because it shows them how things are connected, and because they can work together in groups. We feel that teaching and learning in the science class or laboratory should be a meaningful experience. Concept maps have become an important tool for both students and teachers in our school to get the most out of our classes. Students know how all the lessons and/or experiences are interconnected and they can make sense of, that is, can wholistically visualize, an otherwise seemingly sequential curriculum. The teachers know how to put all the lessons together so that each one builds on the next in such a manner that the students can construct meaning.
Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.
Novak, J. D. & Gowin, D. B. (1984). Learning how to learn. Cambridge: Cambridge University Press
Mallinson, G. G., Mallinson, J. B., Smallwood, W. L., & Valentino, C. (1985). Silver Burdett science 6. Morristown, NJ: Silver Burdett Co.
Roth, W.-M. (1993). Construction sites: Science labs and classrooms. In K. Tobin (Ed.), The practice of constructivism in science education (pp. 145-170). Washington, D.C.: American Association for the Advancement of Science.
Roth, W.-M., & Roychoudhury, A. (1993). The concept map as a tool for the collaborative construction of knowledge: A microanalysis of high school physics students. Journal of Research in Science Teaching, 30, 503-534.
Starr, M. L. & Krajcik, J. S. (1990). Concept maps as heuristics for science curriculum development: Toward improvement in process and product. Journal of Research in Science Teaching, 27(10), 987-1000.
Figure 1. A concept map drawn after several experiments with thermometers
Figure 2. A concept map drawn after one of the experiments on phase changes. Here, boiling was the event observed.
Figure 3. A concept map as a summary of a textbook reading by a sixth grade student.
Figure 4. A concept map for planning the grade six curriculum. Various lessons are indicated on this concept map summarizing the whole unit.
Figure 5. A sixth grader summarizes his experience and knowledge about concept mapping. | 2026-01-31T20:49:41.701547 |
505,524 | 3.648741 | http://files.bookboon.com/ai/Shannon-Guessing-Game.html | Shannon Guessing Game NetLogo Model
Produced for the book series "Artificial Intelligence";
Author: W. J. Teahan; Publisher: Ventus Publishing Aps, Denmark.
powered by NetLogo
view/download model file: Shannon-Guessing-Game.nlogo
WHAT IS IT?
This model shows how a language model can be constructed from some training text and then used to predict text - i.e. play the "Shannon Guessing Game", a game where the agent (human or computer) tries to predict upcoming text, one letter at a time, based on the prior text it has already seen.
Note: There is some possible confusion of the term "model" for this NetLogo "model". NetLogo uses the term "model" to refer to a program/simulation. This can be confused with the use of the term "model" in the phrase "language model" which is a mechanism for assigning a probability to a sequence of symbols by means of a probability distribution. In the information listed below, the two types of models will be explicitly distinguished by using the phrases "NetLogo model" and "language model".
The type of language model used in this NetLogo model is a fixed order Markov model - that is, according to the Markov property, the probability distribution for the next step and future steps depends only on the current state of the system and does not take into consideration the system's previous state. The language model is also fixed order - that is, the probability is conditioned on a fixed length prior context.
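As a language-agnostic sketch (in Python rather than NetLogo, and not the model's actual code), a fixed-order character model of this kind amounts to counting which characters follow each context of a fixed length:

```python
from collections import defaultdict, Counter

def build_fixed_order_model(training_text, order=2):
    """For every context of `order` characters, count how often each next
    character follows it. The Markov property: predictions depend only on
    this fixed-length context, not on anything earlier in the text."""
    counts = defaultdict(Counter)
    for i in range(len(training_text) - order):
        context = training_text[i:i + order]
        counts[context][training_text[i + order]] += 1
    return counts

model = build_fixed_order_model("the theory of the thing", order=2)
print(model["th"])  # Counter({'e': 3, 'i': 1}) -- what tends to follow "th"
```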
HOW IT WORKS
The language model is based on the PPMC compression algorithm devised by John Cleary and Ian Witten. It uses a back-off mechanism called escaping to smooth the probability estimations with lower order models in order to overcome the zero frequency problem. This is the problem that occurs when an upcoming character has not been seen before in the context, and therefore has zero frequency. Assigning a zero probability estimate would result in an infinite code length (since -log 0 is infinite), which means that it is impossible to encode the occurrence using this estimate. To overcome this, the estimates are calculated by smoothing with lower order models using the escape mechanism.
The escape probability is estimated using Method C devised by Alistair Moffat (hence why it is called PPMC). The smoothing method is often erroneously called Witten-Bell smoothing although it was invented by Alistair Moffat. The PPMC language model implemented by this NetLogo model does not implement full exclusions or update exclusions.
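A minimal sketch of the escape idea, again in Python and simplified relative to the full PPMC scheme (no exclusions; the fallback alphabet size of 256 mirrors the order -1 estimate used in the model's code below, and the function names are made up for this sketch):

```python
def order_estimate(counts, symbol):
    """Method C estimates for one context: each distinct symbol seen
    (a "type") adds one count to the escape, so
    p(escape) = types / (types + tokens)."""
    types, tokens = len(counts), sum(counts.values())
    if types == 0:
        return 0.0, 1.0                       # nothing seen here: always escape
    return counts.get(symbol, 0) / (types + tokens), types / (types + tokens)

def ppm_probability(contexts, symbol, alphabet_size=256):
    """Blend estimates from the longest context downwards, multiplying in
    an escape probability each time the symbol is unseen at that order
    (the zero frequency problem), down to a uniform order -1 model."""
    p = 1.0
    for counts in contexts:                   # longest context first
        seen, escape = order_estimate(counts, symbol)
        if seen > 0:
            return p * seen
        p *= escape
    return p / alphabet_size                  # order -1: uniform over the alphabet
```

The types/(types + tokens) figure here is the same escape probability that the model's list-predictions procedure prints for each depth of the tree.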
WHAT IS ITS PURPOSE?
The purpose of this NetLogo model is to show how to implement a language model based on the PPMC compression scheme, and then use it to predict text one character at a time.
HOW IT WORKS
The data being modelled is conceptualised as a stream of events (i.e. analogously to Event Stream Processing or ESP) where events occur with a specific order for a specific stream, but at the same time can occur simultaneously on separate streams.
A training text sequence is used to train the language model. This language model is then used to process a test text sequence. Each stream event from the training data is represented by a state turtle agent, and paths between states are represented by path link agents. When the language model is used for prediction, such as when playing the Shannon Guessing Game as in this NetLogo model, then walker turtle agents are used to walk the network of states to determine which of the current contexts are active. These are then used to make the probability estimates. Text turtle agents are used for maintaining information about a particular text such as sums and a variable for storing the language model.
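The walkers are easiest to picture as the set of contexts that are currently active: after each new character, they correspond to the suffixes of the recent history, from the empty context up to the maximum depth. A small sketch of that idea (not the turtle-and-link implementation itself):

```python
def active_contexts(history, max_depth):
    """The contexts the walkers stand for: every suffix of the recent
    history, from the empty context up to max_depth characters long."""
    recent = history[-max_depth:]
    return [recent[len(recent) - d:] for d in range(len(recent) + 1)]

print(active_contexts("predicti", max_depth=3))  # ['', 'i', 'ti', 'cti']
```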
HOW TO USE IT
To use this NetLogo model, perform the following steps:
1. Select a training text to load using the which-training-text chooser.
2. Load and build a language model for it using the load-training-text button.
3. Select a testing text to perform the prediction on using the which-testing-text chooser.
4. Then predict the testing text one character at a time by pressing the predict-text button.
The model's Interface buttons are defined as follows:
- load-training-text: This loads the training text specified by the which-training-text chooser. At the same time, it constructs the language model from it.
- load-testing-text: This loads the testing text specified by the which-testing-text chooser.
- predict-text: This predicts the text in the testing text one character at a time.
The model's Interface slider and choosers are defined as follows:
- max-depth-of-tree: This is the maximum depth of the context tree. The order of the model is (max-depth-of-tree - 1).
- output-options: This determines how much output is written to the output box during the prediction phase.
- which-training-text: This chooser allows the user to select from a small but eclectic list of natural language texts to use as the data for training the language model.
- which-testing-text: This chooser allows the user to select from a small but eclectic list of natural language texts to use as the data on which the language model is tested by predicting each character one after the other.
THINGS TO NOTICE
Notice how the prediction improves as the max-depth-of-tree variable is increased from 1 to 5 or 6, then drops off slightly. However, this depends to a great extent on the training text used to build the language model and how closely it is related to the testing text.
Notice how poor the prediction is when the training and testing texts are from different languages.
Notice also that for the first few characters in the testing text, the prediction is relatively poor. Why is this?
In contrast, notice how well the language model does at predicting the testing text if it has been trained on exactly the same text (i.e. the training and testing texts are identical). Note: This situation seldom arises in real-life prediction, because the testing sequence has usually never been seen before. Note also that even for the higher order models, the best the language model can do for the next character is often only 1/2 to 1 bit, even when it has been trained on the testing text. Why is this? Hint: What would happen if the length of the training text was much longer?
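The "bits to encode" figures referred to here are simply the negative base-2 logarithms of the probabilities the model assigned to the observed characters, summed over the sequence; a hypothetical sketch of that bookkeeping:

```python
import math

def cross_entropy_bits(probabilities):
    """Total and per-character cross-entropy, in bits, for the sequence of
    probabilities the model assigned to the characters actually observed."""
    total = sum(-math.log2(p) for p in probabilities)
    return total, total / len(probabilities)

# A probability of 0.5 for the next character costs exactly 1 bit, 0.25 costs 2 bits.
total, per_char = cross_entropy_bits([0.5, 0.25, 0.5])
print(total, per_char)  # 4.0 bits in total, about 1.33 bits per character
```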
THINGS TO TRY
Try altering the max-depth-of-tree to see how this affects the prediction.
Try different combinations of training and testing texts.
EXTENDING THE MODEL
Extend the PPMC language model so that it implements full exclusions and update exclusions.
Can you devise further methods to improve the prediction of the language model?
RELATED MODELS
See the Language Modelling NetLogo model and the Cars Guessing Game NetLogo model.
CREDITS AND REFERENCES
This model was created by William John Teahan.
To refer to this model in publications, please use:
Teahan, W. J. (2010). Shannon Guessing Game NetLogo model. Artificial Intelligence. Ventus Publishing Aps.
; Shannon Guessing Game model. ; ; Shows how the Shannon Guessing Game works. ; ; Copyright 2010 William John Teahan. All Rights Reserved. ; breed [texts text] breed [states state] breed [walkers walker] directed-link-breed [paths path] states-own [ depth ;; depth in the tree stream ;; the name of the stream of sensory or motor events event ;; the sensory or motor event event-count ;; the number of times te event has occurred ] walkers-own [ location ;; the location of a state the walker is currently visiting depth ;; depth in the tree model ] texts-own [ model ;; the model associated with the text total ;; total of lengths of paths traversed window ;; window of recent path lengths wcount ;; sum of path lengths stored in the window ] globals [ testing-chars ;; the testing string of characters testing-char ;; the current testing character testing-pos ;; position in the testing string encoding-list ;; the encoding list for the current testing character total-entropy ;; total cross-entropy for the sequence being encoded ] to load-training-text-char-events [filename window-size this-model] ;; loads the training text character events from the file. ;; window-size is the length of the window (context-size + 1) ;; where window-size = max depth of tree + 1 ;; i.e. order of context model = max depth of tree - 1 let i 0 let j 0 let k 0 let chars "" let event-list let item-list import-drawing "Image-Shannon.jpg" ; put something in environment to look at create-texts 1 [ set color black ; don't make this turtle visible on the screen set model (word this-model " Chars") set total 0 set window set wcount 0 hide-turtle ; make it invisible ] file-open filename ;; Read in all the data in the file set k 0 ; k is the number of characters let ch "null" while [not file-at-end?] [ set chars file-read-line set i 0 while [i + window-size <= length chars] [ set event-list set j window-size while [j > 0] [ set ch item (i + j - 1) chars set item-list (list "char" ch) ifelse (event-list = ) [ set event-list (list item-list) ] [ set event-list fput item-list event-list ] set j j - 1 ] set event-list fput (list "model" (word this-model " Chars")) event-list add-events event-list set i i + 1 set k k + 1 ] ] file-close output-type k output-print " characters loaded" end to start-new-walker ;; starts a new walker at the root of the training text model let this-state one-of states with [stream = "model"] create-walkers 1 [ set size 0 ; do not make it visible set depth 0 ; at root of the tree set location this-state ;; start at first state move-to location ] end to update-walkers [this-stream this-event] ;; updates all the walkers by processing the stream's event let new-location nobody ask walkers [ ask location [ set new-location one-of out-path-neighbors with [stream = this-stream and event = this-event]] if (new-location = nobody) [ die ] move-to new-location set location new-location set depth depth + 1 ] start-new-walker end to kill-walkers ;; kills all the current walkers ask walkers [ die ] end to load-training-text ca ; clear everything including the state model and walkers let filename which-training-text-filename load-training-text-char-events filename max-depth-of-tree + 1 which-training-text end to load-testing-text ;; loads the testing text characters from the file specified by the chooser which-text. 
clear-output set testing-chars "" set testing-pos 0 ; reset prediction point from start of text file-open which-testing-text-filename ;; Read in all the data in the file let ch "null" set testing-chars "" while [not file-at-end?] [ set testing-chars (word testing-chars " " file-read-line) ; replace eoln with space ] file-close kill-walkers start-new-walker end to list-predictions ;; lists the model's predictions given the current walker's positions let d 0 let i 0 let l nobody let types 0 let tokens 0 let testing-count 0 let found-char false set encoding-list if (output-options = "Show all predictions") [ output-print "\nPredictions (at various depths in the training model's tree):" output-print "(Each character is listed followed by its frequency of occurrence and" output-print "estimated probability)" ] foreach sort-by [[depth] of ?1 > [depth] of ?2] walkers with [depth <= max-depth-of-tree] [ set i 0 set d [depth] of ? set l [location] of ? ; find out the number of types and tokens in first pass ; also work out encoding of current character set types 0 set tokens 0 ask [out-path-neighbors] of l [ set types types + 1 set tokens tokens + event-count if (testing-char = event) [ set testing-count event-count ] ] if found-char = false [ ifelse (testing-count = 0) ; zero frequency problem [ set encoding-list lput (list types (types + tokens)) encoding-list ] [ set found-char true set encoding-list lput (list testing-count (types + tokens)) encoding-list ]] if (output-options = "Show all predictions") [ output-type "depth " output-type d output-type ": " output-type "(types = " output-type types output-type " tokens = " output-type tokens output-type " escape probability = " output-type types output-type "/" output-type (types + tokens) output-type ")" ; list all the predictions in the second pass foreach sort-by [[event-count] of ?1 > [event-count] of ?2] ([out-path-neighbors] of l) [ if (remainder i 5) = 0 [ output-print "" ] output-type " \"" output-type ([event] of ?) output-type "\"" output-type " [" output-type ([event-count] of ?) output-type "]" output-type " (" output-type ([event-count] of ?) output-type "/" output-type (types + tokens) output-type ")" set i i + 1 ] output-print "" ] ] if found-char = false ; new character never been seen before; order -1 encoding [ set encoding-list lput (list 1 256) encoding-list ] ; 256 is size of alphabet end to list-encoding ;; lists the encoding for the current character let this-entropy 0 if (output-options != "Show total only") [ output-type "\nEncodings and entropy for next character in sequence \"" output-type testing-char output-print "\":" ] let i 0 foreach encoding-list [ if (output-options != "Show total only") [ if (i > 0) [ output-type " X" ] ; stands for times (i.e. multiplication) output-type " " output-type (first ?) output-type "/" output-type (last ?) ] set this-entropy this-entropy - log ((first ?) 
/ (last ?)) 2 set i i + 1 ] set total-entropy total-entropy + this-entropy if (output-options != "Show total only") [ output-print "" output-type "Cross-Entropy for this character = " output-type this-entropy output-print " bits" output-type "Total Cross-Entropy for all characters so far = " output-type total-entropy output-print " bits" ] end to predict-text let output-text-width 80 ; width of the output text display before wrapping to next line clear-output output-print "Text so far:" output-type "\"" let p 0 while [p <= testing-pos] [ ifelse (p + output-text-width > testing-pos) [ output-type substring testing-chars p testing-pos output-print "\"" ] [ output-print substring testing-chars p (p + output-text-width) ] set p p + output-text-width ] if (testing-pos > length testing-chars) [ user-message "Reached end of testing string!" stop ] set testing-char (item testing-pos testing-chars) list-predictions list-encoding user-message "Press OK to continue for next character:" update-walkers "char" (item testing-pos testing-chars) set testing-pos testing-pos + 1 end to-report which-text-filename [which-text] ;; reports the filename associated with the chooser which-text. let filename "" if (which-text = "English Bible") [ set filename "Text_english_bible.txt" ] if (which-text = "German Bible") [ set filename "Text_german_bible.txt" ] if (which-text = "Shakespeare's As You Like It") [ set filename "Text_shakespeare_as_you_like_it.txt" ] if (which-text = "Jane Austen's Pride and Prejudice") [ set filename "Text_austen_p_and_p.txt" ] if (which-text = "Jane Austen's Sense and Sensibility") [ set filename "Text_austen_s_and_s.txt" ] report filename end to-report which-training-text-filename ;; reports the training text filename. report which-text-filename which-training-text end to-report which-testing-text-filename ;; reports the testing text filename. report which-text-filename which-testing-text end to add-events [list-of-events] ;; add events in the list-of-events list to the events tree. ;; each item of the list-of-events list must consist of a two itemed list. ;; e.g. [[hue 0.9] [brightness 0.8]] let this-depth 0 let this-stream "" let this-event "" let this-state nobody let next-state nobody let these-states states let matching-states let matched-all-so-far true foreach list-of-events [ set this-stream first ? set this-event last ? ;; check to see if state already exists set matching-states these-states with [stream = this-stream and event = this-event and depth = this-depth] ifelse (matched-all-so-far = true) and (count matching-states > 0) [ set next-state one-of matching-states ask next-state [ set event-count event-count + 1 ] set these-states [out-path-neighbors] of next-state ] [ ;; state does not exist - create it set matched-all-so-far false create-states 1 [ set depth this-depth set stream this-stream set event this-event set event-count 1 set size 0 ; these states are never visualised (thereare too many of them) set next-state self ] ] if (this-state != nobody) [ ask this-state [ create-path-to next-state ]] ;; go down the tree set this-state next-state set this-depth this-depth + 1 ] end ; ; Copyright 2010 by William John Teahan. All rights reserved. ; ; Permission to use, modify or redistribute this model is hereby granted, ; provided that both of the following requirements are followed: ; a) this copyright notice is included. ; b) this model will not be redistributed for profit without permission ; from William John Teahan. 
; Contact William John Teahan for appropriate licenses for redistribution for ; profit. ; ; To refer to this model in publications, please use: ; ; Teahan, W. J. (2010). Shannon Guessing Game NetLogo model. ; Artificial Intelligence. Ventus Publishing Aps. ; | 2026-01-26T03:50:09.487497 |
623,059 | 3.564814 | http://education.exeter.ac.uk/dll/studyskills/note_taking_skills.htm | Department of Lifelong Learning: Study Skills Series
Note taking skills
When you are at university, the sheer amount of information that is delivered to you can be daunting and confusing. You may even think that you have to copy down everything you hear or read. When you are at a face-to-face lecture it is sometimes difficult to tell what is important and what is not. Distance learning students might feel the need to copy out fact after fact from readings and textbooks. When preparing for an exam or assignment, it is tempting to produce extensive notes on page after page of A4 paper. These methods of note taking are generally time consuming and ineffective, but there is an easier way!
Effective note taking should have a purpose, should be well organised, and can be a time saving skill. This information sheet outlines basic note taking skills for lectures and written sources. Firstly, we will try to understand why notes are an important part of studying. Then we will learn how to take, organise and store notes. At the end of this information sheet you will find an activity that can be used to test yourself. Your tutor or the Student Support Officer can provide feedback on this activity.
When you’ve finished this study skills package, you should be able to:
While most students anticipate that they will have to take notes at university, not many students take the time to discover how to take effective notes. In fact, some students even try to avoid taking notes by using tape recorders or by sharing notes with other students. Initially, these strategies may seem like a good idea, but in an academic context note taking is as important as assignment writing in that you are taking in information and then writing it back out again as a learning process (Rowntree, 1976: 112). Tape recorders and ‘buddy’ note-taking arrangements should only ever be used in addition to your original notes, and never as a substitute.*
The following list provides a few reasons why note taking is an important activity:
It may be tempting not to take notes and to just sit back and listen to an interesting lecture or to become engrossed in an interesting reading. The disadvantage of these strategies is that at the end of the lecture or reading you may only have a vague recollection of the important and sometimes assessable issues. The lecture will be over with no chance to revisit the material, or the reading may have to be re-read, which is time consuming and sometimes tedious. The taking of effective notes during the lecture or while you are reading is an important academic activity that helps you to concentrate, stimulates your ability to recall, and helps you to be organised.
* Please note: Students with dyslexia and other learning disabilities may find the use of a tape recorder beneficial to learning. However, please contact the Student Support Officer for advice on how to best use a tape recorder in addition to note-taking.
Now that you understand the reasons for taking notes, let's learn how your note taking can become effective. This section is broken into three parts: the first covers a range of general note taking tips, the next deals with taking effective notes from reading material, and the last deals with taking effective notes from lectures.
It is important to determine which pieces of information in a lecture or reading are important and which pieces are not. The best way to do this is to be critical when you read or listen. Ask yourself if the information you’re hearing is IMPORTANT, RELEVANT, and CREDIBLE. In other words, does the information demonstrate a major point, does it relate to the subject matter, and is it believable or supported?
When writing down notes, try to distinguish between facts, opinions, and examples. It is important to write down relevant facts. Facts are ‘true’ statements that should be supported by research or evidence. It is also important to write down important, relevant, educated opinions. For example, if the lecturer is giving a lecture that compares the ideas of different theorists, it would be important to write down a summary of each theorist’s opinion in your notes. Lecturers and authors use examples to help explain difficult concepts and to maintain your interest. While you might find the example interesting, it is not important to write down all the examples. You may like to write a reference to an example that was particularly interesting or as a means of reminding you to do more research in a particular area. Rather than relying on the examples that the lecturer or author provides, when reviewing your notes, try to think of your own examples.
When reading or listening, don’t write out notes word for word. Notes should not be an exact copy of the lecture or reading. They should be a summary of the main ideas and should be used to help jog your memory.
Use shortcuts that you will understand and that will make the writing process quicker. Abbreviations (‘eg’ instead of ‘for example’), symbols (= instead of ‘equals’), and drawings can sometimes help you take notes more quickly.
Use font, colour and size to draw attention to important points. For example, you might like to use a different colour pen to write down facts, opinions, and examples. You might use different writing sizes to indicate main points as being separate from supporting evidence.
When making notes, print clearly where possible. If your writing is poor, use a word processor when reviewing your notes, leaving spaces for handwritten diagrams and mind maps.
Be critical of the material you are listening to or that you are reading. How does the material compare with what you have heard or read previously? Does the argument follow a logical pattern and is it clear of false argument? Do you understand all of the points and if not, where are the gaps? What questions are still unanswered for you? Why weren’t these answered in the lecture/reading?
Understand what you are looking for in the reading. Are you looking to gain a general understanding or are you searching for specific information or support for an argument?
A well-structured reading should begin by outlining the main premise, argument or ideas in the first few sentences, and certainly in the first paragraph. Pick out the main premise and write it down (see activity 1). Each paragraph after that should contain evidence that the author uses to support the main premise.
If you understand the premise, don't read the examples given to support it. Never include examples in your notes. Only include the facts; avoid experiences and anecdotes where possible.
Rowntree (1976: 40-64) outlines what he calls the ‘SQ3R’ approach to reading and note taking from text. He suggests that students should use the following activities in order to get the most from a reading in the most efficient way.
It is important that you understand why you are attending the lecture. Prepare for a lecture and think about what you are hoping to achieve. Think about the lecture topic in relation to your other methods of study and information input and think about what you would like to learn or have explained more clearly.
Remember that you cannot revisit lecture material, so you might consider using a tape recorder or buddy system to supplement your own notes. Always revisit your notes as soon as possible after taking them and never rely solely on someone else’s notes.
The lecturer should summarise his or her main points at regular intervals during the lecture. Look out for help during the introduction, where the lecturer may give a linear-type list of the topics to be covered. Also listen for breaks between topics where the lecturer might summarise the most important points they have just covered. At the end of the lecture, another summary should be provided that may help you review your notes and determine if you have missed any important information. If this is the case, be sure to approach the lecturer for clarification on any points that you did not fully understand or to help you complete your notes.
An example of a mind map
At the ‘review’ stage of the SQ3R approach, you may find mind mapping to be a useful technique. Also, this technique may be useful when taking notes in lectures. Essentially, you are creating a visual diagram that represents all of the ideas from a reading or lecture. Most importantly, you are showing how the ideas are interrelated and you are creating accessible, interesting notes. This technique is particularly useful for students with dyslexia, as it allows you to avoid re-reading notes through the creation of visual diagrams.
Notes can take on two main forms: linear and spray-type diagrams. There are many different techniques and you will find one that is best for you. Have a look at Appendix A and Appendix B to see an example of each.
As soon as it is possible, outside the lecture or away from the reading, re-read your notes and re-write them if necessary into a clearer format. Here are some more tips on organising and storing your notes.
Note taking is an important academic task that helps you to remember what you have learnt and helps you to review materials for re-use in revision and assignments. It is important that you are critical when note taking and that you only write or draw what you will need later on, and that you record the information in a format that is easy to understand. You should look out for clues about what is important. The lecturer or author will organise his or her material in a logical way so try to utilise their organisational skills when note taking. When taking notes you might like to try different study techniques such as the SQ3R approach or you might like to use a more visual approach such as a spray diagram. And most importantly, after taking effective notes, it is important to organise and store your notes effectively. Effective note taking should reduce your study time, should increase your retention of knowledge, and should provide you with a summarised list of resources for your future projects.
If you need any further help with this topic, please contact your tutor or the Student Support Officer, or you may wish to consult the ‘Note taking reading list’.
(Samantha Dhann 2001)
| 2026-01-27T22:02:09.695422 |
485,530 | 3.604441 | http://www.slate.com/articles/business/the_dismal_science/2011/05/the_persistence_of_hate.single.html | The Persistence of Hate
German communities that murdered Jews in the Middle Ages were more likely to support the Nazis 600 years later.
From Rosa Parks' refusal to move to the back of the bus in Montgomery, Ala., to the "Little Rock Nine," who defied school segregation in Arkansas, most of the civil rights clashes of the 20th century played out on the turf where the Confederacy had fought to preserve slavery 100 years earlier.
If a century seems like a long time for a culture of racism to persist, consider the findings of a recent study on the persistence of anti-Semitism in Germany: Communities that murdered their Jewish populations during the 14th-century Black Death pogroms were more likely to demonstrate a violent hatred of Jews nearly 600 years later. A culture of intolerance can be very persistent indeed.
Changing any aspect of culture—the norms, attitudes, and "unwritten rules" of a group—isn't easy. Beliefs are passed down from parent to child—positions on everything from childbearing to religious beliefs to risk-taking are transmitted across generations. Newcomers, meanwhile, may be attracted by the culture of their chosen home—Europeans longing for smaller government and lower taxes choose to move to the United States, for example, while Americans looking for Big Brotherly government move in the other direction. Once they arrive, these migrants tend to take on the attitudes of those around them—American-born Italians hold more "American" views with each subsequent generation.
"Good" cultural attitudes—like trust and tolerance—may thus be sustained across generations. But the flipside is that "bad" attitudes—mutual hatred and xenophobia—may also persist.
The authors of the new study, Nico Voigtländer of UCLA and Joachim Voth of the Universitat Pompeu Fabra in Spain, examine the historical roots of the virulent anti-Semitism that found expression in Nazi-era Germany. In a sense, their analysis can be seen as providing a foundation for the highly controversial thesis put forth by former Harvard professor Daniel Goldhagen in Hitler's Willing Executioners. Goldhagen argued that the German people exhibited a deeply rooted "eliminationist" anti-Semitism that had developed over centuries, which made them ready accomplices in carrying out Hitler's Final Solution.
To compare medieval anti-Semitism to more recent animosity toward Jews, the researchers combine historical records from Germania Judaica, which documented the Jewish communities of the Holy Roman Empire, with data on the rise of anti-Semitism under Hitler, collected in Klaus-Dieter Alicke's Encyclopaedia of Jewish Communities in German-speaking Areas.
To illustrate their approach, Voigtländer and Voth draw a comparison between the cities of Würzburg and Aachen, two small cities a couple of hundred miles apart with populations of little more than 100,000 in 1933 but with very different responses to Nazi ideology.
Each city had a Jewish community dating back to at least the 13th century. When the Black Death came in 1348, it wiped out about half the population of Europe. In Germany, the plague was widely blamed on Jews poisoning wells. Jews in Würzburg had already been targeted with a violent pogrom 50 years earlier, allegedly for "desecration of the hosts" in a local church, though it may have had more to do with the large sum owed to Jewish moneylenders by a local count. With another attack imminent, in 1349, the community chose mass suicide instead. In Aachen, no Black Death pogroms occurred, despite warnings from other communities that if the city failed to take action, its Jews might poison its wells.
Fast-forward nearly 600 years. Pogroms were rare prior to Hitler's election in 1933, but not unheard of. Würzburg was among the 37 communities that targeted their Jewish communities with Weimar-era attacks. In national elections in 1928 the Nazi Party, running on an emphatically anti-Semitic platform, received 6.3 percent of the vote in Würzburg, close to double the Nazi vote share in the rest of the district. In Aachen, about 1 percent of the vote went to the Nazis. Once the Nazis took power, 44 percent of Würzburg's Jewish population was deported to concentration camps. In Aachen, 37 percent of the Jewish population was deported—still a tragically high figure, but notably lower than for Würzburg.
Voigtländer and Voth find that these patterns generalize to German cities more broadly. In cities with Black Death pogroms, Jews were six times more likely to have been targeted by attacks during the 1920s than in places like Aachen. Similarly, the Nazi party vote share was 1.5 times higher in communities with Black Death pogroms. To the best of their ability, the authors base their calculation on an "apples-to-apples" comparison of communities with fairly similar geographies and other attributes. (In their introduction, Voigtländer and Voth highlight the sharp differences in the treatment of Jews through the ages in communities no more than 20 miles apart.)
Not all cities like Würzburg were so unwavering in their anti-Semitism, however. Those with more of an outward orientation—in particular, cities that were a part of the Hanseatic League of Northern Europe, which brought outside influence via commerce and trade—showed almost no correlation between medieval and modern pogroms. The same was true for cities with high rates of population growth—with sufficient in-migration, the newcomers may have changed the attitudes of the local culture.
This gets us back to what's become of North-South racism in the United States since the 1950s. America is a country of immigrants, and more important, a country with high mobility within its borders, particularly over the last century. This doesn't mean that racism has disappeared, though perhaps we can expect it to be distributed more evenly. There's some evidence that America's melting pot is having exactly this effect. For example, in response to the 2005-07 World Values Survey, whites living in South Atlantic states were no more likely than New Englanders to say that they wouldn't want a black neighbor. Germany's Hanseatic League cities seem a better comparison for the shifting landscape of American cultures than Würzburg or Aachen.
What of Würzburg today? It was flattened in a March 1945 firebombing that left little standing and thousands dead. By the end of the war, the city's men were mostly dead or in POW camps, leaving the women to rebuild from the rubble. One might at least hope for a fresh start. I asked professor Voth about whether Würzburgers' culture of anti-Semitism has changed in the postwar years. The city has had its share of neo-Nazi rallies, though City Hall has tried (unsuccessfully) to shut them down. In the 2009 election, nearly half of its votes went to the conservative Christian Social Union party, often associated with anti-immigrant policies. Does this suggest that there are anti-Semitic sentiments simmering beneath the surface as well? Professor Voth isn't sure but plans on finding out: Using 21st-century survey data, he and co-author Voigtländer hope to see whether a culture of hatred in Würzburg and elsewhere can survive even the pounding of nearly 1,300 tons of Allied bombs. | 2026-01-25T19:23:45.044604 |
965,194 | 3.597314 | http://inhabitat.com/study-finds-that-climate-change-may-cause-wars/el-nino-civil-war-3/ | Climate change might be on track to do more than reduce the Earth's resources and jeopardize the health of mankind — it could cause us all to start fighting with each other. The Center for the Study of Civil War (CSCW) just published research in the journal Nature showing that global climate change has the ability to stoke hostile conditions in developing nations. The study looked at years in which the El Niño weather pattern (a pretty close indicator of what will happen in the future with global climate change) was in effect and found that in poorer nations that are prone to drought, the threat of civil war doubled. It looks like green design might be useful for more than saving Mother Nature – if widely instituted, our greener ways could stop us all from shooting each other.
El Niño is a weather phenomenon that causes warming on the surface of the tropical eastern Pacific Ocean every five years. The warming of the ocean then causes reciprocal extreme weather like floods, storms and droughts in different regions of the world. CSCW looked at the instances of civil war worldwide in the past six decades — 234 civil wars in 175 countries — and found that the outbreak of civil war in tropical countries doubled in the years that El Niño occurred. Modeling on that data, the researchers found that hotter, drier conditions stoked 48 civil wars that would not have occurred if El Niño hadn't happened. "This represents the first major evidence that global climate is a major factor in organized patterns of violence," said Solomon Hsiang of Columbia University, the lead author of the research.
"It is frankly difficult to see why that won't carry over to a world that is disrupted by global warming," said Mark Cane of Columbia University. "If these smaller, shorter-lasting and, by and large, less serious kinds of changes associated with El Niño have this effect, it is hard to imagine that the more pervasive changes that come with anthropogenic climate change are not also going to have negative effects on civil conflict." Researchers believe that issues arise as droughts and flooding destroy crops, and food and arable land become scarce. The quest for a place to live that will sustain life provokes violence over land ownership.
One green warrior attempting to reverse this very problem is Allan Savory and his Operation Hope project — which won the 2010 Buckminster Fuller Challenge. Savory and his team work to reverse desertification in drought-prone regions with the introduction of livestock and whole-system farming practices. The researchers at Columbia University who worked on this study note that governments and world leaders should be prepared for these issues to arise. If we continue on our current path of environmental destruction and the Earth's climate keeps changing, we may not be able to avoid this unfortunate phenomenon.
334,625 | 3.685537 | http://en.wikipedia.org/wiki/Glossitis |
Glossitis is inflammation of the tongue. It causes the tongue to swell and change color. Finger-like projections on the surface of the tongue (papillae) may be lost, causing the tongue to appear smooth.
Glossitis usually responds well to treatment if the cause of inflammation is removed. The disorder may be painless, or it may cause tongue and mouth discomfort. In some cases, glossitis may result in severe tongue swelling that blocks the airway, a medical emergency that needs immediate attention.
- Tongue swelling.
- Smooth appearance to the tongue due to pernicious anemia (Vitamin B12 Deficiency).
- Tongue color changes (usually dark "beefy" red).
- Sore and tender tongue.
- Difficulty with chewing, swallowing, or speaking.
A health care provider should be contacted if symptoms of glossitis persist for longer than 10 days, if tongue swelling is severe, or if breathing, speaking, chewing, or swallowing become difficult.
Causes, incidence, and risk factors
- Bacterial or viral infections (including oral herpes simplex).
- Poor hydration and low saliva in the mouth may allow bacteria to grow more readily.
- Mechanical irritation or injury from burns, rough edges of teeth or dental appliances, or other trauma
- Tongue piercing. Glossitis can be caused by the constant irritation by the ornament and by colonization of Candida albicans in site and on the ornament.
- Exposure to irritants such as tobacco, alcohol, hot foods, or spices.
- Allergic reaction to toothpaste, mouthwash, breath fresheners, dyes in confectionery, plastic in dentures or retainers, or certain blood-pressure medications (ACE inhibitors).
- Administration of ganglion blockers (e.g., Tubocurarine, Mecamylamine).
- Disorders such as iron deficiency anemia, pernicious anemia and other B-vitamin deficiencies, oral lichen planus, erythema multiforme, aphthous ulcer, pemphigus vulgaris, syphilis, and others.
- Occasionally, glossitis can be inherited.
- Albuterol (bronchodilator medicine)
- Migratory glossitis, also known as geographic tongue, is a very common condition that affects the anterior two thirds of the dorsal and lateral tongue mucosa of 1% to 2.5% of the population. When lesions involve other oral mucosa in addition to the dorsal and lateral tongue, the condition is called migratory stomatitis (or ectopic geographic tongue); in this uncommon condition, lesions infrequently also involve the ventral tongue and buccal or labial mucosa, the soft palate and floor of the mouth. The etiology and pathogenesis of this phenomenon are still unknown. Symptoms may appear in the atrophic areas, especially upon consumption of spicy, acidic, or hot food, cheese, or alcoholic beverages. It is unclear why the condition becomes suddenly symptomatic many years after presentation.
The goal of treatment is to reduce inflammation. Treatment usually does not require hospitalization unless tongue swelling is severe. Good oral hygiene is necessary, including thorough tooth brushing at least twice a day, and flossing at least daily. Corticosteroids such as prednisone may be given to reduce the inflammation of glossitis. For mild cases, topical applications (such as a prednisone mouth rinse that is not swallowed) may be recommended to avoid the side effects of swallowed or injected corticosteroids. Antibiotics, antifungal medications, or other antimicrobials may be prescribed if the cause of glossitis is an infection. Anemia and nutritional deficiencies (such as a deficiency in niacin, riboflavin, iron, or Vitamin E) must be treated, often by dietary changes or other supplements. Avoid irritants (such as hot or spicy foods, alcohol, and tobacco) to minimize the discomfort.
Good oral hygiene (thorough tooth brushing and flossing and regular professional cleaning and examination) may be helpful to prevent these disorders. Drinking plenty of water and producing enough saliva aid in reducing bacterial growth. Minimizing irritants or injury in the mouth when possible can aid in the prevention of glossitis. Avoiding excessive use of any food or substance that irritates the mouth or tongue may also help.
| 2026-01-23T08:17:20.870905 |
299,955 | 3.673637 | http://www.ciscopress.com/articles/article.asp?p=174107&seqNum=4 | Classless Interdomain Routing
CIDR is a mechanism developed to help alleviate the problem of exhaustion of IP addresses and growth of routing tables. The idea behind CIDR is that blocks of multiple addresses (for example, blocks of Class C addresses) can be combined, or aggregated, to create a larger classless set of IP addresses, with more hosts allowed. Blocks of Class C network numbers are allocated to each network service provider; organizations using the network service provider for Internet connectivity are allocated subsets of the service provider's address space as required. These multiple Class C addresses can then be summarized in routing tables, resulting in fewer route advertisements. (Note that the CIDR mechanism can be applied to blocks of Class A, B, and C addresses; it is not restricted to Class C.)
CIDR is described further in RFC 1518, An Architecture for IP Address Allocation with CIDR, and RFC 1519, Classless Inter-Domain Routing (CIDR): An Address Assignment and Aggregation Strategy, available at http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc1519.html. RFC 2050, Internet Registry IP Allocation Guidelines, specifies guidelines for the allocation of IP addresses. It is available at http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc2050.html.
Most CIDR debates revolve around summarizing blocks of Class C networks into large blocks of addresses. As a general rule, Internet service providers (ISPs) implement a minimum route advertisement standard of /19 address blocks. A /19 address block equals a block of 32 Class C networks. (In some cases, smaller blocks might be advertised, such as with a /21 mask [eight Class C networks].) Addressing is now so limited that entire /8 address blocks are being divided into blocks of /19 that are assigned to major ISPs, which allows further allocation to customers. CIDR combines blocks of addresses regardless of whether they fall within a single classful boundary or encompass many classful boundaries.
Figure 1-20 shows an example of CIDR and route summarization. The Class C network addresses 192.168.8.0/24 through 192.168.15.0/24 are being used and are being advertised to the ISP router. When the ISP router advertises the available networks, it can summarize these into one route instead of separately advertising the eight Class C networks. By advertising 192.168.8.0/21, the ISP router indicates that it can get to all destination addresses whose first 21 bits are the same as the first 21 bits of the address 192.168.8.0.
Figure 1-20 CIDR Allows a Router to Summarize Multiple Class C Addresses
The mechanism used to calculate the summary route to advertise is the same as shown in the "Route Summarization" section. The Class C network addresses 192.168.8.0/24 through 192.168.15.0/24 are being used and are being advertised to the ISP router. To summarize these addresses, find the common bits, written here with the third octet in binary (shown in bold in the original listing):
192.168.00001000.00000000 = 192.168.8.0
192.168.00001001.00000000 = 192.168.9.0
192.168.00001010.00000000 = 192.168.10.0
192.168.00001011.00000000 = 192.168.11.0
192.168.00001100.00000000 = 192.168.12.0
192.168.00001101.00000000 = 192.168.13.0
192.168.00001110.00000000 = 192.168.14.0
192.168.00001111.00000000 = 192.168.15.0
The first 21 bits are common to all eight networks, so the route 192.168.00001xxx.xxxxxxxx, or 192.168.8.0/21 (also written as 192.168.8.0 255.255.248.0), summarizes these eight routes.
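To check such a summary programmatically, Python's standard ipaddress module offers collapse_addresses, which performs the same aggregation (a minimal sketch, not part of the original article):

import ipaddress

# The eight Class C networks advertised to the ISP router in Figure 1-20.
networks = [ipaddress.ip_network(f"192.168.{n}.0/24") for n in range(8, 16)]

# collapse_addresses returns the shortest list of supernets covering the input.
print(list(ipaddress.collapse_addresses(networks)))
# -> [IPv4Network('192.168.8.0/21')]

The same call collapses the four Class B networks discussed below (172.16.0.0/16 through 172.19.0.0/16) into 172.16.0.0/14.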
In this example, the first octet is 192, which identifies the networks as Class C networks. Combining these Class C networks into a block of addresses with a mask of less than /24 (the default Class C network mask) indicates that CIDR, not route summarization, is being performed.
Key Point: CIDR Versus Route Summarization
The difference between CIDR and route summarization is that route summarization is generally done within, or up to, a classful boundary, whereas CIDR combines several classful networks.
In this example, the eight separate 192.168.x.0 Class C networks that have the prefix /24 are combined into a single summarized block of 192.168.8.0/21. (At some other point in the network, this summarized block may be further combined into 192.168.0.0/16, and so on.)
Consider another example. A company that uses four Class B networks has the IP addresses 172.16.0.0/16 for Division A, 172.17.0.0/16 for Division B, 172.18.0.0/16 for Division C, and 172.19.0.0/16 for Division D. They can all be summarized as a single block: 172.16.0.0/14. This one entry represents the whole block of four Class B networks. This process is CIDR; the summarization goes beyond the Class B boundaries. | 2026-01-22T19:24:24.346946 |
804,830 | 3.516692 | http://en.wikipedia.org/wiki/Infinity_focus |
In optics and photography, infinity focus is the state where a lens or other optical system forms an image of an object an infinite distance away. This corresponds to the point of focus for parallel rays. The image is formed at the focal point of the lens.
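As an illustration of why (not from the original article), the thin-lens equation 1/f = 1/d_o + 1/d_i implies that the image distance d_i converges to the focal length f as the object distance d_o grows. A quick Python sketch, assuming a 50 mm lens:

# Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = 1 / (1/f - 1/d_o)
f = 0.050                            # assumed focal length: 50 mm, in metres
for d_o in (1.0, 10.0, 100.0, 1e6):  # object distances in metres
    d_i = 1 / (1 / f - 1 / d_o)
    print(f"object at {d_o:>9.0f} m -> image at {d_i * 1000:.3f} mm")
# As d_o grows, d_i approaches 50.000 mm: parallel rays focus at the focal point.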
In practice, not all photographic lenses are capable of achieving infinity focus by design. A lens used with an adapter for close-up focusing, for example, may not be able to focus to infinity. Failure of the human eye to achieve infinity focus is diagnosed as myopia.
All optics are subject to manufacturing tolerances; even with perfect manufacture, optical trains experience thermal expansion. Focus mechanisms must accommodate part variations, and even custom-built systems may have some means of adjustment. For example, the Mars Orbiter Camera telescope, which is nominally set to infinity, has thermal controls: deviations from its operating temperature are actively compensated to prevent shifts of focus.
- Hyperfocal distance, a distance beyond which all objects can be brought into an "acceptable" focus
- Near and far field, equivalent concept in radio wavelengths
| 2026-01-30T17:43:44.813593 |
955,814 | 3.71877 | http://arstechnica.com/science/2012/10/exoplanet-found-right-next-door-in-alpha-centauri/?comments=1&post=23388160 | Today, planet hunters announced evidence that there's a planet orbiting one of our closest stellar neighbors. One of the three stars of the α Centauri star system shows the sort of periodic shifts in its light that are a hallmark of the presence of an orbiting planet. And, even though the new world would be far too hot to support liquid water, the astronomers who discovered it point out small planets tend to form in groups. Odds are good that there are additional planets lurking further out from the host star.
Rapid advances in planet-hunting have led to an ever-increasing catalog of exoplanets, but most of these orbit distant stars. In contrast, the α Centauri system "is a household name," as Greg Laughlin of UC Santa Cruz put it. Just over four light years from Earth, the system includes two bright stars, α Centauri A and B, orbiting each other with an 80-year period, along with a red dwarf called Proxima Centauri. α Centauri B has a Sun-like mass, but is quite a bit dimmer.
The planet was detected using the radial velocity method. As a massive body orbits its host star, it exerts a gravitational pull on it, tugging the star in slightly different directions as its position shifts. This produces a small periodic motion of the star itself, usually on the order of a few meters per second. That, in turn, shows up as Doppler shifts in the light the star emits, which vary with the orbital period of the planet.
Detecting these, however, can be a challenge, as a long catalog of factors can also cause periodic changes in the star's output. The authors of the paper describing the find list them as, "instrumental noise, stellar oscillation modes, granulation at the surface of the star, rotational activity, long-term activity induced by a magnetic cycle, the orbital motion of the binary composed of α Centauri A and B, light contamination from α Centauri A, and imprecise stellar coordinates."
To get around these, the authors relied on a massive catalog of observations, made using the HARPS instrument on a 3.6 meter telescope at the European Southern Observatory. Over a span of nearly four years, the authors made multiple observations of Centauri B, often several observations a night, spaced two hours apart. This let them average out short-term variability on the span of hours, and reconstruct that star's equivalent of the solar cycle, in which its activity increased over the course of their observations.
One by one, they factored out all of the periodicities they could account for. What was left was a hint of a signal with a periodicity of 3.3 days. This was incredibly weak—in fact, the smallest yet detected—at only 0.8 meters per second. But, even though this signal was much smaller than some of the noise they filtered out, the authors calculated there was a "false alarm probability" of less than one percent. In other words, it's probably a planet. As one scientist told a press conference earlier today, if it were any place other than Alpha Centauri, there would be nothing extraordinary about the claims.
But don't start building the colony ship just yet. With a 3.3-day orbit, the planet sits only 0.04 astronomical units from its star (1 AU is the average distance from the Earth to the Sun). That makes this planet blazingly hot, at about 1,500 Kelvin. One of its discoverers indicated this would ensure the surface is "not solid, more like lava." The radial velocity method lets you estimate the lower bound on the mass of the planet. Assuming it's orbiting roughly in a plane that faces edge-on to Earth, it has a mass roughly equivalent to our home planet.
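That 0.04 AU figure follows from Kepler's third law. A back-of-the-envelope check (not from the article; the host star's mass of about 0.93 solar masses is an assumed literature value):

# Kepler's third law in solar units: a^3 = P^2 * M,
# with a in AU, P in years, and M in solar masses.
P = 3.3 / 365.25   # orbital period from the article, converted to years
M = 0.93           # assumed mass of the host star, in solar masses
a = (P ** 2 * M) ** (1 / 3)
print(f"semi-major axis = {a:.3f} AU")  # ~0.04 AU, matching the article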
Even though the new planet is likely well outside the habitable zone, we shouldn't give up on α Centauri. The plane of the two large stars of α Centauri is oriented nearly edge-on to Earth, and the forces that govern star formation make it likely that any planetary disks formed in this same plane. That means the planet is more likely to be on the low end of the mass estimates—in other words, close to Earth-sized. As the discoverers noted, about 70 percent of the small planets we've discovered have been in systems with multiple planets. So, the chances of finding something else further out are much higher than you might otherwise expect.
The HARPS team (which was represented by Stéphane Udry and Xavier Dumusque of the Geneva Observatory) estimates that, based on Centauri B's habitable zone (which is roughly centered on a distance equivalent to Venus' orbit), they should be able to spot a super-Earth (five to 10 times Earth's mass) in the habitable zone. And, given the probability that the system's plane is oriented edge-on to Earth, we could also use an orbiting observatory to watch for planets transiting in front of Centauri B.
We may have to wait a bit, though. The team told the press conference that the orbit of the system's two large stars could be problematic; as seen from Earth, the pair will appear very close together over the next four years, which will make observations extremely challenging. It'll be eight years or more before we'll have good conditions for observations again. But, on the plus side, telescope tech is advancing dramatically these days, and a decade's worth of progress will put us in a much better position to learn something about our neighbors.
What about visiting? Laughlin estimates that, given our current technologies, any probe we sent wouldn't arrive for about 40,000 years. So that's probably a no-go, "given our propensity for instant gratification." But there are some unproven propulsion ideas that could get us there much more quickly, and Laughlin said that, should this find ignite enough interest, we may look into those more seriously. | 2026-02-02T02:35:56.391047 |
700,533 | 3.532013 | http://www.factmonster.com/encyclopedia/society/rajputs.html | Rajputs (räjˈpōts) [Sanskrit, = son of a king], dominant people of Rajputana, an historic region now almost coextensive with the state of Rajasthan, NW India. The Rajputs are mainly Hindus (although there are some Muslim Rajputs) of the warrior caste; traditionally they have put great value on etiquette and the military virtues and take great pride in their ancestry. They are divided into numerous exogamous clans, of which the major ones were the Rathor, Kachchwaha, Chauhan, and Sisodiya. Their power in Rajputana grew in the 7th cent., but by 1616 all the major clans had submitted to the Mughals. With the decline of Mughal power in the early 18th cent., the Rajputs expanded through most of the plains of central India, but by the early 19th cent. they had been driven back by the Marathas, Sikhs, and British. Under the British, many of the Rajput princes maintained independent states within Rajputana, but they were gradually deprived of power after India attained independence in 1947.
See S. M. Rameshwar, Resurgent Rajasthan (1962); L. Minturn, The Rajputs of Kahlpur (1966); D. Sharma, Lectures on Rajput History and Culture (1970).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
| 2026-01-29T01:26:25.320000 |
872,170 | 4.550106 | http://illuminations.nctm.org/Lesson.aspx?id=3064 | On the day before the lesson, distribute the Find a Cliché activity sheet. Instruct each student, as homework, to find a cliché or familiar phrase that is appropriate, widely known, and at least five words long. Preferably, each student's phrase should be different from every other student's phrase, so encourage students to search for a unique cliché. Do not share the purpose of this homework.
As the students enter class, collect the cliché homework exercise. Quickly sort through the clichés to make sure they are appropriate and familiar, and remove any duplicates. Because students will be working in pairs, you will only need half as many clichés as you have students. You also may wish to have a few extra clichés of your own to add to the collection to make sure that each pair gets a unique cliché.
Ask: "What is a proof?" Through discussion, students should
recognize that a proof is a group sentences leading to a conclusion.
Remind students that those sentences create a chain of logically valid
deductions using agreed-upon assumptions, definitions, or previously
proven statements. Typically, a proof is used to show that the
concluding statement must be true. Clarify that a proof is a
presentation tool, not a solution method for a problem. In problem
solving, the goal is to carry out appropriate, logical steps to find
unknown values or a solution. In a proof, the conclusion is already
known. The goal is to assemble supporting statements to make a
convincing argument that the conclusion is correct.
A joke can be similar to a proof. Jokes are often a chain of
statements called the "setup" that are used to reach an unexpected
conclusion called a "punch line." The punch line should be surprising
but familiar. In this exercise, you will be writing a joke that has the
same structure as a paragraph-style proof. Show the Jokes as Proofs overhead to reinforce this comparison.
Group the students into pairs, allowing them to create their joke collaboratively. Distribute the Jokes as Proofs activity sheet and one cliché to each group. Students should not get their own cliché so they can later find humor in how someone modified it. Have students record their cliché for Question 1.
Explain that just as the conclusion of a proof is usually known before the proof is created, they will begin by creating the punch line of the joke. Warn students not to modify the cliché so much that the resulting sentence is incomprehensible or does not sound like the original cliché. For example, "Don't cry over spilled milk" could become "Don't lie over milled silk." It sounds like the original sentence and has the same number of words and syllables, but is slightly altered to give it an entirely new meaning. The phrase "Phone cry clover filled bilk" should not be used because it doesn't make sense as a sentence. Also, "Don't cry over spoiled mints" wouldn't work either because it doesn't sound similar enough to the original sentence. They also need to be careful to modify the cliché enough to make it interesting. You also may want to remind them to keep their
Next have students write the setup to the joke, creating a story in which the modified cliché (the punch line) is the conclusion. Like a proof, in which every premise is necessary to reach the conclusion, every sentence in the setup should lead the reader directly to the punch line. Also like a proof, in which every part of the conclusion is explained, the story should justify every part of the punch line.
Explain that when they are done, students will have successfully written a joke in a form analogous to a paragraph-style proof. While it may be funny to extend the joke when telling it to make it longer and heighten the imagery and suspense, this joke should be succinct like a proof. If done correctly, every sentence in the setup will be required. That is, the omission of any sentence would cause the punch line to make less sense.
Ask groups to exchange and edit each other's jokes to make them more concise. The goal is to have the shortest collection of premises reach the same conclusion, while still justifying every element of the punch line.
When they have finished, have the students read the jokes to the class. Depending on the size of your class or the amount of time you have, you may want to limit the joke reading to a few volunteer groups. Reiterate that a proof is different from problem solving in that a proof is a presentation tool and that the conclusion is known before the proof is written. | 2026-01-31T21:16:42.407696 |
83,616 | 3.829141 | http://www.readwritethink.org/classroom-resources/lesson-plans/remember-that-book-rereading-1150.html?tab=4 |
I Remember That Book: Rereading as a Critical Investigation
Grades: 9–12
Lesson Plan Type: Standard Lesson
Estimated Time: Seven 45-minute sessions, plus additional writing time
- Explore critically their past experiences as readers in order to articulate who they are currently as readers
- Develop a rounded sense of their identities as readers by critiquing not only their reading experiences, but also their experiences as nonreaders
- Employ various writing techniques to represent their reading experiences
1. Ask students to write freely about their memories of reading. You may consider using this prompt: What is your earliest memory of reading? Encourage them to be honest, even if some of what they recall is not positive.
2. As a whole class, ask students to share with each other some of the memories they have described. This helps students hear from each other both the similarities and differences in their reading experiences.
3. Before you begin the discussion, call on two volunteers to create a visual representation (pictures, notes, graphs, charts, or even doodles) of their classmates' responses on chart paper labeled Remembering Reading. Note: You can keep these up on the walls around the classroom as a process reminder for the remaining sessions.
4. Have students get into small groups to discuss patterns they see in the class's responses. For example, many students refer to their families when remembering reading experiences (parents reading to them or siblings sharing books).
5. After some time for discussion, ask a member of each group to report back to the class what patterns they noticed. Write these patterns on another piece of chart paper labeled Readers' Memories. Note: This subtle distinction between the act of reading and the readers themselves is important. One goal of the project is to draw attention to and explore individual readers' experiences.
Homework (due at the beginning of Session 2): Ask students to brainstorm a list of as many books that they have read over the entire course of their lives as they can think of. Students may list picture books or comic books, so you should decide if nontraditional books are welcome. If you do, you may choose to ask students to explain their choices.
1. Review the work done in Session 1 and the lists students developed for homework. You may want to ask a question like, "What kinds of memories do we all have about reading?"
2. It is important to allow students the chance to remember their long histories as readers and to see just how much they have read over time. This helps them get a sense of how they have grown and changed as readers. One way to do this is to have them use the online Graphic Map to plot which books they have read and at what grade level from preschool to their current grade. If computers are not available, you can have students complete this map on paper. You might ask students to not only write the titles and authors of the books, but to create simple and colorful images that help them recall what the books were about.
3. After students have completed and printed their graphic maps, ask them these questions: What does your map tell you about who you are as a reader? How have you changed as a reader over time?
4. Ask a student volunteer to post classmates' responses on chart paper, adding to the collection of thoughts and reflections already on the wall.
Homework (due at the beginning of Session 3): If students have not finalized their maps they should do so for homework. These final versions can be displayed around the room, or even shared and explored together in a follow-up class activity.
1. Ask students to close their eyes and imagine themselves reading. Then ask them to imagine the activities they would rather be doing than reading.
2. Using a blank piece of paper and colored pencils, ask students to draw (using no words) what they would like to do rather than reading.
3. After students have had time to complete this task, ask them to form small groups of three or four and to exchange their drawings. Give students time to look at their peers' drawings and then ask each student to share what he or she sees.
4. Discuss as a whole class the different things they would rather be doing than reading and why.
The pictures students draw can be displayed in the classroom, and might perhaps lend themselves to a Gallery Walk, in which students view each other's images and take notes about similarities and differences between what classmates do when not reading.
Homework (due at the beginning of Session 4): Ask students to make a list of 10 books they remember reading or being read to them and rank those books from 1 to 10 in order of enjoyment, with 1 being the most enjoyable or fondest memory.
1. Ask students to share their lists from homework with a partner. After both partners have shared, ask them to explain to each other why they ranked their number one book so highly.
2. Call on a few volunteers to share with the class their top choices and reasons for selecting them. You may wish to record some of their choices and reasons on the board.
3. After this initial response, ask students to write this question on the top of a piece of paper and to write a brief paragraph reply: What makes a reading experience enjoyable? After a few minutes, ask students to exchange their responses with their partners. The partners, after silently reading through the reply, write their own responses to both the question and the first response. You may repeat this silent dialogue many times, and even vary it by having groups exchange with other groups.
4. When students have had enough time to reply several times, ask the class to discuss their responses to this question, recording their replies on poster paper.
Homework (due at the beginning of Session 5): Ask students to get a copy of their number one book. Be sure to ask them not to open the book until they bring it to class, when all students will open their books together. You may want to give this assignment over a weekend, and give specific suggestions about how to get the book if you think students will have trouble (for example, using an online library catalogue).
1. Before students open their books, ask them to write as much as they can recall about their original reading of the book. Questions for them to consider include:
2. After students have had sufficient time for remembering and recording their previous readings, ask them to take out their rereading books. You might ask students to open their books at the same time; this can be an effective ceremony that builds a certain shared experience.
3. Direct students to begin rereading their books for the remainder of the class time.
Homework (due at the beginning of Session 6): Students should finish reading their books, or if their selections are longer, should establish a schedule for completing them.
1. Ask students to share with a partner how it felt to be rereading their books. They may want to use the writing they did during Session 5 to compare their memories of the first reading with their rereading, as there are often discrepancies, which can spark interesting conversations about memories.
2. Distribute Student Model Rereading Essay #1 and Student Model Rereading Essay #2. Select one to read aloud with students.
3. After reading it, ask students to identify how the writer went about crafting this essay. What kinds of details were included? Begin making a list of characteristics of this type of essay on chart paper.
Homework (due at the beginning of Session 7): Have students read through the second model essay and continue taking notes on the questions posed in class about how the essays were written.
1. Working as a class, create a list of characteristics that make up the "rereading essay." A rereading essay might:
2. Next, ask the class to create a question that the model essays seem to be answering. An example of this kind of question might include: What does rereading teach us about ourselves as readers? Work with students to choose a question that they can use to help guide their own rereading essays.
3. Give students time to begin writing in class about their initial reading of the book. You might start by having students describe in as much detail as possible the place in which they first read the book.
Final Essay Writing and Discussion
Students should write essays similar to the sample essays. Use writing workshop models you are familiar with to guide students in the writing of their essays. Along the way, be sure to have class time for sharing experiences and for exploring tougher questions such as:
- Do we always enjoy reading?
- How do we read differently for pleasure and for school?
- Why do we have to read at all?
- Does school make you want to be readers?
Encourage students to be honest with you and themselves about their memories and rereading experience. This can give students a platform for expressing experiences and feelings that affect whether or not they open books at all. Note: Models for writing workshops and discussion-based classes can be found in Bridging English by Joseph O. Milner and Lucy F. Milner and In the Middle: New Understandings About Writing, Reading, and Learning by Nancie Atwell.
In addition to checking students' homework assignments for completion and observing their participation in classroom activities, you can use a rubric to assess drafts and the final rereading essay. Working with your students to create a rubric is the most effective way to develop a good evaluative instrument. You might use the Sample Rereading Essay Rubric as a starting point with your students; project it from a computer or overhead and ask them in what ways it fairly assesses the work in your class and what might be missing from it. This sample rubric focuses on diction and detail, but you might also weave into your rubric some of the writing skills that your students might be expected to master in your course as per department, district, or state standards. | 2026-01-19T14:00:39.831133 |
413,555 | 3.852715 | http://www.thirteen.org/edonline/lessons/polyhedra/b.html | Virtual Polyhedra and the Real World
This lesson is divided into three sections:
-- Preparing for the Lesson.
-- Conducting the Lesson.
-- Managing Resources and Student Activities.
Construct several models of polyhedra in advance to anticipate difficulties that students may have. These can be displayed as examples during the lesson. If possible, provide many different sizes of the templates.
(per group of three students)
The following materials are recommended:
-- copies of paper templates (at least two or three per student)
-- one roll of transparent tape and/or rubber cement
-- a few toothpicks or Popsicle sticks to be used as "probes" to help secure
-- computer with Web access, draw program such as ClarisWorks, HTML editor such as PageMill or Claris Homepage
-- paper, pens, or crayons
You will need at least one multimedia computer workstation with Internet access. We recommend, as a minimum, using a Macintosh II series machine running System 7.0 or higher, or a 386 IBM-compatible PC running Windows 3.1 or higher. We also recommend a minimum modem speed of 14.4K bps, though 28.8K bps is preferable.
Bookmark the following sites for easy access:
-- Virtual Reality Polyhedra
-- Henry Chasey's Polyhedra Model Collection

Virtual Reality Polyhedra
This site is self-contained and easy to explore. It includes interactive exercises for all models.
Activity: Students should print a polyhedron template and construct simple paper models from it. Have students create their own templates using rulers, protractors, and compasses, or computer graphics software such as ClarisWorks.
Henry Chasey's Polyhedra Model Collection
This site has models of polyhedra created by Henry Chasey. It provides an overview for students to consider when building their own models.
Activity: Have students create projects that incorporate the information presented at this site. Possibilities include using different media (cardboard, wood, Plexiglas, straws, toothpicks, etc.), different manipulative kits, or different approaches to the topic (bubble film experiments, 3D illusions). This project can be assigned as homework or as a hands-on class activity depending on availability of tools and materials. The manipulative kits are an excellent way to provide students with high-quality activities that reinforce many concepts of polyhedra.
This site contains thumbnails and large pictures of uniform polyhedra. Each image can be scaled and printed.
Activity: Print out the templates presented at this site and use them to create paper polyhedron models. Using a drawing program or a ruler, have students create their own templates.
A computer center or lab space is ideal for doing the Virtual Polyhedra activities. Even so, it is helpful to put students in groups of three. This way, students can help each other if problems or questions arise. It is also often beneficial to bookmark sites for students ahead of time and make suggestions, so you can be sure that students have a starting point.
The One Computer Classroom
If you have access to one computer in your classroom, you can organize your class in several ways. Divide your classroom into two groups. Instruct one of the groups to do paper research while the second group is working on the computer. Bring in books, encyclopedias, etc. from the library for the group doing paper research. Lead the group working at the computer through an Internet search or allow the students in the class to take turns. (It may be efficient to have a set of bookmarks ready for the students before they start working on the computer.) When the group has finished, have them switch places with the group doing paper research.
Look for Web Resources Together as a Class
If you have a big monitor or projection facilities, you can do an Internet search together as a class. Make sure that every student in your class can see the screen. Go to one of the Web sites presented in this lesson. Review the information on the page together, then print any information that you think is relevant. Go to a search engine page, allow your students to suggest the search criteria for your topic, and do a Web search. Again, bookmark and/or print the pages that you think are helpful for reference later.
| 2026-01-24T14:32:08.456921 |
445,336 | 4.072289 | http://news.sciencemag.org/sciencenow/2001/09/28-02.html | How to tell the age of the extinct ancestor common to two living species? Take the same gene from both and count the differences in the DNA. Then divide that number by the rate at which DNA mutates, and presto! Biologists love this molecular clock and they've applied it for decades. But new research shows it may not be keeping good time.
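As an illustration of the arithmetic (all numbers below are hypothetical, not from the article), the standard estimate divides the observed divergence by twice the per-lineage mutation rate, since mutations accumulate independently along both branches since the split:

# Hypothetical molecular-clock estimate.
differences = 120        # substitutions observed between the two gene copies
sites = 10_000           # aligned sites compared
rate = 1e-9              # assumed substitutions per site per year, per lineage

d = differences / sites  # fraction of sites that differ
t = d / (2 * rate)       # divide by 2: both lineages accumulate mutations
print(f"estimated divergence: {t:,.0f} years ago")  # 6,000,000 years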
In 1965, biochemists stunned classical biologists by showing that mutations in genes accumulate at a constant rate. Hence, the number of mutations between two species could be used to tell when species diverged, even without fossil evidence. Since then, scientists have used molecular clocks in various genes to trace evolution in species from HIV to birds to whales. And although exceptions to the rule have surfaced, most biologists remain loyal to their favorite timepiece.
But in the 25 September issue of the Proceedings of the National Academy of Sciences, geneticist Francisco Rodriguez-Trelles and his colleagues at the University of California, Irvine, show that the molecular clock could be ripe for the pawnshop. From the GenBank database, they downloaded the sequences of three well-known genes, called Gpdh, Sod, and Xdh, for 78 species, from pine trees to people. They used the data to make an evolutionary tree, which, for calibration purposes, included some key branches already dated by paleontologists.
But when the scientists started counting the number of mutations in each tree branch, they found vastly different mutation rates, even for closely related species. For example, in Drosophila obscura fruit flies, the Sod clock ticks 10 times faster than in its cousin Drosophila willistoni, while the Xdh clock keeps perfect time. Conversely, the Gpdh clock in mammals runs about 10 times faster than the one in fruit flies. Molecular clocks in general are much more "erratic" than previously thought, and practically useless to keep accurate evolutionary time, the researchers conclude. They attribute this to the vagaries of natural selection, which may at times constrain specific genetic mutations in certain lineages.
Evolutionary biologists are unhappy to hear that such a prized correlation may be flawed. David Mindell, of the University of Michigan, Ann Arbor, says it's "bad news that estimates of dates must be viewed as highly error prone." But, he adds, ultimately it's good news that, by disproving the molecular clock theory, molecular evolution in a wide variety of organisms may become better understood. | 2026-01-25T03:01:10.939769 |
637,785 | 3.558689 | http://nrich.maths.org/public/leg.php?code=-333&cl=1&cldcmpid=5817 | Can you create more models that follow these rules?
This challenge involves eight three-cube models made from
interlocking cubes. Investigate different ways of putting the
models together then compare your constructions.
Sort the houses in my street into different groups. Can you do it in any other ways?
Arrange your fences to make the largest rectangular space you can. Try with four fences, then five, then six etc.
Use your mouse to move the red and green parts of this disc. Can
you make images which show the turnings described?
What happens to the area of a square if you double the length of
the sides? Try the same thing with rectangles, diamonds and other
shapes. How do the four smaller ones fit into the larger one?
This practical problem challenges you to create shapes and patterns
with two different types of triangle. You could even try
Take 5 cubes of one colour and 2 of another colour. How many
different ways can you join them if the 5 must touch the table and
the 2 must not touch the table?
Explore the triangles that can be made with seven sticks of the
This practical investigation invites you to make tessellating
shapes in a similar way to the artist Escher.
Investigate the number of paths you can take from one vertex to
another in these 3D shapes. Is it possible to take an odd number
and an even number of paths to the same vertex?
Can you make the most extraordinary, the most amazing, the most
unusual patterns/designs from these triangles which are made in a
Explore the different tunes you can make with these five gourds.
What are the similarities and differences between the two tunes you
We went to the cinema and decided to buy some bags of popcorn so we
asked about the prices. Investigate how much popcorn each bag holds
so find out which we might have bought.
These pictures show squares split into halves. Can you find other ways?
How can you arrange the 5 cubes so that you need the smallest number of Brush Loads of paint to cover them? Try with other numbers of cubes as well.
Try continuing these patterns made from triangles. Can you create
your own repeating pattern?
Make new patterns from simple turning instructions. You can have a
go using pencil and paper or with a floor robot.
In this challenge, you will work in a group to investigate circular
fences enclosing trees that are planted in square or triangular
Is there a best way to stack cans? What do different supermarkets
do? How high can you safely stack the cans?
Can you find ways of joining cubes together so that 28 faces are
How many different cuboids can you make when you use four CDs or
DVDs? How about using five, then six?
A group of children are discussing the height of a tall tree. How would you go about finding out its height?
What is the largest number of circles we can fit into the frame
without them overlapping? How do you know? What will happen if you
try the other shapes?
How many models can you find which obey these rules?
How many shapes can you build from three red and two green cubes? Can you use what you've found out to predict the number for four red and two green?
Have a go at this 3D extension to the Pebbles problem.
Using different numbers of sticks, how many different triangles are
you able to make? Can you make any rules about the numbers of
sticks that make the most triangles?
How many different ways can you find of fitting five hexagons
together? How will you know you have found all the ways?
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
What is the smallest cuboid that you can put in this box so that
you cannot fit another that's the same into it?
Use the interactivity to investigate what kinds of triangles can be
drawn on peg boards with different numbers of pegs.
What do these two triangles have in common? How are they related?
How can you arrange these 10 matches in four piles so that when you
move one match from three of the piles into the fourth, you end up
with the same arrangement?
An activity making various patterns with 2 x 1 rectangular tiles.
We need to wrap up this cube-shaped present, remembering that we
can have no overlaps. What shapes can you find to use?
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
If we had 16 light bars which digital numbers could we make? How
will you know you've found them all?
This challenge is to design different step arrangements, which must
go along a distance of 6 on the steps and must end up at 6 high.
In how many ways can you stack these rods, following the rules?
How could you put eight beanbags in the hoops so that there are
four in the blue hoop, five in the red and six in the yellow? Can
you find all the ways of doing this?
Let's say you can only use two different lengths - 2 units and 4
units. Using just these 2 lengths as the edges how many different
cuboids can you make?
Vincent and Tara are making triangles with the class construction set. They have a pile of strips of different lengths. How many different triangles can they make?
In this investigation, you must try to make houses using cubes. If
the base must not spill over 4 squares and you have 7 cubes which
stand for 7 rooms, what different designs can you come up with?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
I like to walk along the cracks of the paving stones, but not the
outside edge of the path itself. How many different routes can you
find for me to take?
Investigate the different ways you could split up these rooms so
that you have double the number.
Use the interactivity to find all the different right-angled
triangles you can make by just moving one corner of the starting
The challenge here is to find as many routes as you can for a fence
to go so that this town is divided up into two halves, each with 8
When newspaper pages get separated at home we have to try to sort
them out and get things in the correct order. How many ways can we
arrange these pages so that the numbering may be different? | 2026-01-28T03:02:17.362115 |
647,926 | 3.556026 | http://www.coderanch.com/t/387882/java/java/public-static-void-main-String | static is a keyword in the java language, and it is also a modifier. When a method or variable is declared static, it means that it can be used throughout the whole class without instantiating an object. If it isn't static, then you must create an object to invoke the non-static method or non-static variable. You are correct about args. For instance, say I had a program called Go. When I use the command java Go one two, then args[0] = "one" and args[1] = "two".
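A minimal sketch of the hypothetical Go program mentioned above (the class name and output format are just for illustration), showing where the command-line arguments end up:

```java
// Go.java: running "java Go one two" prints each command-line argument with its index.
public class Go {
    public static void main(String[] args) {
        // args holds whatever was typed after the class name on the command line.
        for (int i = 0; i < args.length; i++) {
            System.out.println("args[" + i + "] = " + args[i]);
        }
    }
}
```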
static is not used to catch anything; it's an access modifier which tells you that inside this method you don't need to create any instance of an object to execute it. As for the parameter, it takes an array of strings because you might pass certain arguments to your program to let it know what to do; special conditions that you might think of.
Static is a modifier but not an access modifier. When a member is static it means that no matter how many instances of the class you have, there is only one instance of the static member, which they all share. | 2026-01-28T06:37:16.645912 |
797,156 | 3.819138 | http://richardsbirdblog.com/2010/07/06/why-do-bird-wings-flap/ | There are two reasons why birds flap their wings. Primarily birds flap their wings to pull themselves forward, secondarily to lift themselves up.
Bird flight looks effortlessly simple but actually is a complexity of internal actions and reactions and outside forces. The most important factor about bird flight is that all birds are so light that in some situations they are lifted up by the air they float in.
Bird Wingspread Weight
White-tailed Kite 39″ 12 ounces
Western Sandpiper 14″ 1 ounce
Western Gull 58″ 34 ounces
Mourning Dove 18″ 4.2 ounces
Mallard 35″ 36 ounces
Barn Swallow 15″ .7 ounce
House Finch 9.5″ .75 ounce
Sandhill Crane 77″ 10 lbs 7 ounces
Of course there are other factors involved, the width of the wing and its shape, for example. But you can see from this chart why birds with short wings and heavy bodies have to beat their wings faster than those with longer wings and lighter bodies.
Thermals rise at up to four feet per second, the sink rate for gliding birds is between one and three feet per second. Riding thermals helps birds fly for long periods without any effort. With the air helping they need to produce very little ‘lift’ from their wings . We need to keep that in mind as we begin to fathom how birds fly through the air.
One source of bird lift is by flapping their wings; when a bird presses its wings down on the air beneath it, the bird’s body is pushed up, that is – lifted up. (You can watch this in many of the PBS and other programs showing films of formations of Swans or Cranes in flight. Watch as the wings move down and you will see that the bodies rise.)
The reason the birds’ bodies aren’t pushed down on the upstroke that follows is that the flight feathers ‘weathervane’ and open slightly – like Venetian blinds – on the upstroke and some air slips between the feathers. (More on this in another blog.) While the birds’ bodies are pushed down a bit, it is less then what they gained on the previous upstroke.
Another, lesser, source is the airfoil of the wing itself, specifically the upper airfoil surface of the wing. The pressure above the wing is reduced because the air passing over the wing takes longer to reach the trailing edge than that flowing underneath it and the wing (and bird) rises as a result.
Unlike airplanes that get all their lift from the airfoils of their wings, a bird’s airfoil provides only a portion of the lift needed to keep it aloft. For many birds it takes only a slight updraft – less than the hot air rising from a chimney – to rise upwards without a flap. With their broad wingspan and light weight Vultures can be seen circling for hours, coasting along with almost no effort.
Hawks and Kites as well can drift up to great heights on updrafts so faint that we humans would not feel them.
Some seabirds fly for miles without flapping, literally ‘surfing’ long corridors of updrafts. But that will be the topic for another blog.
How does the bird’s wing develop thrust?
The explanation found in most books on birds is that the wing is tilted down so that some of the lift, now angled forward, will pull the bird forward. This is simply not true, for many reasons.
First, since a bird's airfoil produces very little lift, tilting it forward could not possibly pull a bird forward at 30 to 40 miles an hour, which is typical for birds.
Second, this is a logical impossibility. If forward thrust is dependent on air flowing across the wing, but air doesn't flow across the wing until the bird is moving, it can't get started. (Circular reasoning: the action produces a reaction which produces the original action.) This is why a short-tailed cat can't catch its tail no matter how fast it runs.
Third, birds in flight do not tilt their wings downward. Just look at them in flight.
Interestingly enough, the airfoil above, the one that is shown in all the bird books you will find, is not the shape of any bird’s airfoil I know of. Actual bird wings have a pronounced under camber or curvature; it is their under camber that propels them (thrusts them) forward.
Here is how that works. As the wing presses down, the air is slightly compressed and has to go somewhere. Since the downward curvature at the leading edge of the wing prevents the displaced air from moving out the front, it goes the only direction it can – out the rear.
“Ah ha!” says Isaac Newton, “For every action there is an equal and opposite reaction so the wing will be pushed forward.” Well yes, Mr. Newton, that’s exactly how jet engines work. The rushing air out the tail of the engine pushes the engine — and the attached airplane forward.
In a sense, the bird's wing acts much like a propeller on an airplane or motorboat does. As it bites into the air (or water) the curvature pulls the propeller and engine forward.
The under camber (curvature) of a bird’s wing is quite pronounced as seen here and can push out a lot of air:
This beautiful American Avocet has just lifted off the ground. He is half way into the first power stroke. The under camber is clearly shown as is the resemblance to a curved propeller.
Under camber is determined by the musculature and bones of the wing, and does not change during flight. With every power stroke the bird is pulled forward. Since birds are very light and their wings very strong, they can fly really fast. Most can easily outfly a human runner.
This first year Snowy Egret landing on the rocks clearly displays the powerful under camber which runs the full width of the wings. It is easy to deduce the power of these seemingly delicate wings.
Bird flight is an incredible process, one that amazes me the more that I learn about it.
Note: The term ‘Power Stroke’ I have used here is the term to indicate the down stroke of a wing flap; the other three segments of the wing flap are Upstroke, Upper Transition, and Lower Transition. These are more fully described in my forthcoming book: Avianautics, the Art and Science of Flapping Flight.
Your comments & questions much appreciated | 2026-01-30T14:42:26.487881 |
1,084,009 | 3.857634 | http://ned.ipac.caltech.edu/level5/Sept01/Jones/Jones1.html | In this section, I shall give a brief overview of the properties of the Universe at large. On the largest scales it can be reasonably approximated as a homogeneous and isotropic medium in a state of uniform expansion and the equations can easily be written down. We find that such a simple model Universe can be described in terms of a few parameters: the expansion rate, the density, and perhaps the cosmological constant. Classical cosmology focusses on determining these by direct observation of the large scale distribution of galaxies. There are, however, many new techniques available for getting these parameters through studying the inhomogeneity of the Universe. These will be the subject of the following sections where many of the issues raised here will be discussed at greater length.
1.1. The Universe at Large
Hubble discovered the expansion of the Universe by plotting, for a sample of galaxies, the radial velocity of each galaxy as indicated by the redshift of its spectral lines against its apparent brightness. The fainter (and presumably more distant) galaxies had the greater recession velocities (or "redshifts"). If the distance to a galaxy was D Megaparsecs, and its radial velocity was V km s^{-1}, then Hubble's relationship could be expressed as V = H_0 D,
where H_0 is a constant (the Hubble constant) measured here in units of km s^{-1} Mpc^{-1}. Implicit in the relationship is the assumption that we can calibrate the distance scale by virtue of which the apparent brightness of a galaxy can be turned into a distance.
The radial component of the velocity of a galaxy relative to the observer is inferred by observing the wavelength λ_0 of spectral lines that would in the laboratory have been emitted at wavelength λ_E. The difference Δλ = λ_0 - λ_E is interpreted as being due to the Doppler shift caused by the fact that the galaxy was moving at velocity V = c Δλ/λ_E
relative to the observer. (We shall henceforth drop the `E' suffix on the emitted wavelength.) The redshift of the galaxy (in fact the redshift of the spectral lines) is defined as z = Δλ/λ. (3)
Hubble's redshift-distance relation (the "Hubble Law") later became a way of estimating the distances to galaxies simply by measuring their radial velocities: D_z = H_0^{-1} cz. (D_z has the subscript z to denote the nature of this distance estimate and to distinguish it from the true distance. We shall see later that part of the velocity cz may be due to the random motions of galaxies relative to the general cosmic expansion.)
Looked at in its most simple terms, Hubble's discovery implies that the Universe was born a finite time in our past and emerged from a state of infinite density. The subsequent discovery by Penzias and Wilson (1965) of a cosmic microwave background radiation field and its interpretation as the relict of an expansion from a hot singular state by Dicke, Peebles, Roll and Wilkinson (1965) established a definitive view of our Universe. Cosmology properly became a branch of physics, and the Hot Big Bang theory has become a paradigm of modern science.
1.1.1. Homogeneity and Isotropy
On the smallest scales the Universe contains stars that are grouped into galaxies, that are themselves grouped into clusters. Going to larger scales we have evidence for clusters of galaxy clusters, and beyond that for large scale structures ("walls" of galaxies!) extending over many tens or even hundreds of megaparsecs. Indeed pictures of the three dimensional distribution of galaxies look very inhomogeneous even on scales as large as 100 Mpc, or more. However, one should not be misled by visual appearances. As will be explained later, this large scale inhomogeneity has rather a small amplitude in the sense that it would hardly be noticeable if the distribution of galaxies were smoothed over such large volumes. There is a clear tendency for the Universe to become more homogeneous on ever larger scales.
Hubble himself commented on the remarkable large-scale isotropy of the Universe as judged from the distribution of galaxies on the sky. Today we have catalogues of galaxies penetrating to great distances (Maddox et al., 1990) and these demonstrate the isotropy of the galaxy distribution very clearly. The isotropy of the Universe is best measured through the isotropy of the cosmic microwave background radiation.
The large scale homogeneity of the Universe is more difficult to establish directly. It would seem reasonable to use the argument that we are not at the center of the Universe, so the isotropy must imply spatial homogeneity, but this is not a proof of homogeneity. The same deep galaxy catalogues provide a test of homogeneity because we can ask the question "is the Universe, sampled at various depths within this catalogue, the same?". Again the Maddox et al. (1990) catalogue provides an answer, though the method is not as simple as observing isotropy. Maddox et al. compute the galaxy clustering correlation function at various depths in their catalogue and find that the functions in the various samples scale in accordance with the hypothesis of homogeneity. Their analysis in fact goes even further than merely saying that the Universe is globally homogeneous. It has the additional implication that the deviation from homogeneity (as evidenced by the galaxy clustering) is itself the same in all their samples.
Such arguments provide compelling evidence that the Universe is not a hierarchy of the kind originally envisaged by Charlier (1908, 1922), and taken up more recently in the context of fractal distributions of galaxies by Mandelbrot (1983), Coleman, Pietronero and Sanders (1988) and others.
1.1.2. Scale Factors, Redshifts and all that
For most of what concerns us in these lectures it is sufficient to consider the Universe to be, in a first approximation, a homogeneous and isotropic distribution of particles (galaxies) that interact only through their mutual gravitational interactions. This means that we ignore any pressure contribution from their random motions, or from other components of matter. This enables us to greatly simplify the dynamical equations for the evolution of the Universe.
Consider the motion of a galaxy in the Universe that today (t_0) is at distance l_0 from us and that at time t was at a distance l(t). It is convenient to define the scale factor a(t) by a(t) = l(t)/l_0.
Since the Universe is presumed homogeneous and isotropic, a(t) depends on neither position nor direction. It merely describes how relative distances change as the Universe expands. We have normalized all lengths relative to their present-day value, and so the present value of a(t) is a(t_0) = 1.
The Einstein equations (or their Newtonian equivalent) in the simple case of homogeneous and isotropic dust models give the differential equation for the scale factor in terms of the total mass density ρ:
This is supplemented by an equation expressing the conservation of matter:
which is equivalent to
Note that (5) is not valid if there is any substantial pressure due to the matter in the universe, and in that case we also need to modify (6). We shall make these modifications at a later time when needed, for the moment we are only discussing the Universe at the present time and in its recent past when equations (5, 6, 7) are thought to be a good approximation.
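The displayed equations (5)-(7) are not reproduced in the text above; for reference, a standard reconstruction for the pressure-free (dust) case, assuming the usual conventions, is:

```latex
% Hedged reconstruction of the dust-model equations referred to above as (5)-(7).
\begin{align}
  \ddot{a}     &= -\frac{4\pi G}{3}\,\rho\, a,   \tag{5}\\
  \rho\, a^{3} &= \rho_0 = \mathrm{const},       \tag{6}\\
  \dot{\rho}   &= -3\,\frac{\dot{a}}{a}\,\rho.   \tag{7}
\end{align}
```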
The Hubble Parameter is defined as H = \dot{a}/a
and is a function of time. H describes the rate of expansion of the Universe and has units of inverse time. It is experimentally measured as a velocity increment per unit distance, since it describes the expansion through the relationship between velocity and distance: \dot{l} = H l, or in more familiar notation v = H r.
We define the redshift to a galaxy at distance l to be 1 + z = 1/a(t), (9)
where t is the time at which the light we now receive was emitted.
When we look at a distant galaxy we are looking at it as it was in the past (because of the finite light travel time). At the time we are seeing it, the scale factor a(t) was smaller than the present value (a_0 = 1). It can easily be shown that the recession velocity we measure from the shift in the spectral lines is just cz, in other words, the quantities z appearing in equations (3) and (9) are the same thing.
1.1.3. Important quantities: H_0, Ω_0, ρ_c
At this point it is convenient to introduce some fundamental definitions. Hubble's expansion law states that the recession velocity of a galaxy is proportional to its distance from the observer, in other words \dot{l} ∝ l_0. The constant of proportionality (the cosmic expansion rate) is the present value of the Hubble parameter: H_0 = (\dot{a}/a)_0.
H_0, the present value of the Hubble Parameter, is usually called "Hubble's Constant".
There is an important value of the density, ρ_c, that can be derived from the Hubble parameter (the Hubble parameter has dimensions [time]^{-1}). This is the density such that a uniform self-gravitating sphere of density ρ_c isotropically expanding at rate H has equal kinetic and gravitational potential energies: ρ_c = 3H^2/(8πG).
Since H is a function of time, then so is ρ_c.
We can measure the density of the Universe in terms of ρ_c by introducing the density parameter Ω = ρ/ρ_c.
Note that Ω also depends on time and we shall denote the present-day value of Ω by Ω_0. There may be a mixture of different types of matter in the universe that make up the total density ρ. We may think, for example, of baryons, photons and perhaps some exotic elementary particles. Each of these individually has a density that can be normalized relative to ρ_c, thus each species has its own Ω. We will, for example, denote the contribution of baryonic material to the total cosmic density by Ω_B.
The density ρ_c has a special significance. A universe whose density is ρ_c when its expansion rate is H is referred to as an Einstein de Sitter universe. This model clearly has Ω = 1 at all times. The expansion rate of such a universe is fixed by the density. Model universes that are denser than ρ_c = 3H^2/(8πG) when their expansion rate is H will stop expanding and contract down to a future singularity. Models that are less dense will expand forever. The Ω = 1 universe is a limiting case dividing two classes of behaviour, and that is why the parametrization of the density in terms of ρ_c is so useful. The behaviour of the various model universes as a function of Ω can be seen by looking at the dynamical equation for the expansion factor a(t).
Equations (5) and (7) for a(t) can be shown to integrate to a single first-order equation for \dot{a}; a standard form of this integral, together with its Ω_0 = 1 solution, is sketched below.
The integration constants have been derived using the boundary conditions that a(t) → 0 as t → 0, (\dot{a}/a)_0 = H_0, and that the present density of matter is ρ_0 = Ω_0 ρ_c. The standard textbooks referred to above give the solutions of this equation for general values of Ω_0. It is sufficient here to note that the case Ω_0 = 1 simplifies the right-hand side of this equation and the solution is then particularly simple.
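The integrated equation and its Ω_0 = 1 solution are likewise missing from the text; a standard reconstruction, consistent with the boundary conditions just quoted, is:

```latex
% Hedged reconstruction: first integral of (5) and (7), and the Einstein de Sitter solution.
\begin{align}
  \left(\frac{\dot{a}}{a}\right)^{2}
      &= H_0^{2}\left[\frac{\Omega_0}{a^{3}} + \frac{1-\Omega_0}{a^{2}}\right],\\
  a(t) &= \left(\frac{t}{t_0}\right)^{2/3},
      \qquad t_0 = \frac{2}{3H_0} \quad (\Omega_0 = 1).
\end{align}
```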
Since a(t) = (1 + z)^{-1}, this tells us that when we look back to a redshift z in an Einstein de Sitter universe we are seeing the universe when its age is a fraction t/t_0 = (1 + z)^{-3/2} of its present age, t_0. | 2026-02-03T23:20:24.637418 |
146,322 | 3.650984 | http://www.foresight.org/nanodot/?p=2756 | In yet another step toward making nanotech transistors from graphene nanoribbons, chemically-prepared graphene nanoribbons less than 10 nm wide were found to be uniformly semiconducting, room-temperature, field-effect transistors. From Stanford University news service via PhysOrg.com “Carbon nanoribbons could make smaller, speedier computer chips“:
Stanford chemists have developed a new way to make transistors out of carbon nanoribbons. The devices could someday be integrated into high-performance computer chips to increase their speed and generate less heat, which can damage today’s silicon-based chips when transistors are packed together tightly.
For the first time, a research team led by Hongjie Dai, the J. G. Jackson and C. J. Wood Professor of Chemistry, has made transistors called “field-effect transistors”—a critical component of computer chips—with graphene that can operate at room temperature. Graphene is a form of carbon derived from graphite. Other graphene transistors, made with wider nanoribbons or thin films, require much lower temperatures.
“For graphene transistors, previous demonstrations of field-effect transistors were all done at liquid helium temperature, which is 4 Kelvin [-452 Fahrenheit],” said Dai, the lead investigator. His group’s work is described in a paper published online in the May 23 issue of the journal Physical Review Letters [abstract; arXiv preprint].
The Dai group succeeded in making graphene nanoribbons less than 10 nanometers wide, which allows them to operate at higher temperatures. “People had not been able to make graphene nanoribbons narrow enough to allow the transistors to work at higher temperatures until now,” Dai said. Using a chemical process developed by his group and described in a paper in the Feb. 29 issue of Science [abstract], the researchers have made nanoribbons, strips of carbon 50,000-times thinner than a human hair, that are smoother and narrower than nanoribbons made through other techniques.
David Goldhaber-Gordon, an assistant professor of physics at Stanford, proposed that graphene could supplement but not replace silicon, helping meet the demand for ever-smaller transistors for faster processing. “People need to realize this is not a promise; this is exploration, and we’ll have a high payoff if this is successful,” he said.
Dai said graphene could be a useful material for future electronics but does not think it will replace silicon anytime soon. “I would rather say this is motivation at the moment rather than proven fact,” he said.
Different graphene nanostructures are being explored for use in nanoelectronic circuits. Six weeks ago we cited progress from a group at the University of Manchester that used high-resolution electron-beam lithography to carve graphene quantum dots that showed promise as transistors, but which could not be reliably fabricated. This Stanford University group is using a chemical method to prepare “ultrasmooth graphene nanoribbon semiconductors” somewhat larger than the smallest graphene quantum dots, but more reliably fabricated. Which, if any, of these structures end up in practical nanoelectronic circuits remains to be seen. | 2026-01-20T12:06:12.883634 |
806,639 | 3.89669 | http://climate.nasa.gov/blog?m_y=06-2010 | Editor of NASA's Climate Change website. Manager of the Center for Climate Sciences at NASA's Jet Propulsion Laboratory. Passionate about all things science and climate change. Likes hiking, dabbling in poetry and playing piano.
June 29, 2010
posted by Dr. Amber Jenkins
From Dr. Tony Freeman,
Earth Science Manager, NASA Jet Propulsion Laboratory
Most NASA scientists study Earth from the perspective of space. They use information collected by satellites to learn how the Earth’s atmosphere, ocean and land work. But space, “the final frontier,” is not the only frontier. Researchers also collect data from the ground, ocean and air to augment space missions. In fact, many NASA Earth scientists got their start flying instruments on one of the many aircraft NASA has in its fleet, which is based at the Dryden Flight Research Center out in the Mojave Desert.
"If it all sounds a bit like Indiana Jones without the bad guys that’s because it is." A much (much) younger version of the author taken on a field trip in Belize.
Why bother with aircraft when we can fly spacecraft? Well, airborne missions enable us to do unique — and crucial — experiments in the fields of atmospheric chemistry and volcanology, for example, from altitudes that range from 100 feet (30 meters) to 60,000 feet (18 kilometers). They also help us to check and validate the performance of the instruments that fly onboard NASA satellites such as Aqua, Aura and others in the so-called “A-train” of Earth-observing satellites. And, airborne instruments are often cheaper to launch. Tethered and untethered balloons; manned aircraft ranging from small propeller craft (think Cessna) to large jets (think the DC-8); unmanned airplanes such as the large military surveillance craft known as the Global Hawk — NASA uses them all.
In this last year alone, NASA has flown:
- Over Greenland to study the ice sheet and glaciers there as part of Operation IceBridge (total flying distance was more than 1.5 times round the world);
- Way out into the Pacific Ocean on a Global Hawk unmanned aerial vehicle packed with science instruments designed to look at cloud formation. Scientists, meanwhile, examined the information they collected live at their desks, through a satellite phone link.
Having done it myself, I have to say it’s really exciting to take a brand new instrument that no-one’s ever used before out into the desert, have it bolted on to an aircraft and set off on a data collection campaign somewhere in the world. Over the last decade or so, NASA has flown campaigns over the jungles of Belize and Costa Rica, the icy wastes of Antarctica and Patagonia, the ancient Khmer ruins in Cambodia, the mineral-rich deserts of Australia, the rubble of the Twin Towers in New York after 9/11, and hurricanes forming in the Atlantic off the coast of Africa.
And even cooler is the fact that for most places where we fly our aircraft and our instruments, we also send a ground team in to make field measurements at the site of interest. On the ground, I’ve had fun making snow trails on a skidoo, navigated 50 miles (80 kilometers) down the Tanana river in Alaska as far as the Yukon, hiked out on a salt pan in Death Valley in 38°C- (100°F)-plus temperatures, ridden an ox cart in Cambodia, and stopped to let a boa constrictor cross the road in the heart of the Amazon. If it all sounds a bit like Indiana Jones without the bad guys that’s because it is, and I don’t think my experiences would be considered unusual amongst my colleagues.
Airborne Earth science is currently undergoing a renaissance at NASA. You can learn more here.
June 24, 2010
posted by Dr. Amber Jenkins
From Gretchen Cook-Anderson,
NASA's Earth Science News Team
NASA's gravity mission Grace is tracking the movement of water and ice on our planet.
High above Earth's surface — 300 miles to be precise — a special set of twins continually unveils new information about our planet. They're not human twins, nor are they the constellation we know as Gemini. They've arguably, however, attained star status in their eight years in space.
They are the Gravity Recovery and Climate Experiment, or Grace, a pair of NASA and German satellites that fly about 137 miles apart, changing position relative to one another in response to variations in the pull of Earth's gravity. A microwave ranging system captures microscopic changes in the distance between the two satellites. Grace responds to gravity changes that occur when mass — primarily water and ice — on or beneath the surface changes.
And like many stars, the harmonious Grace twins have achieved some very big hits. They've racked up unprecedented observations of some of the world's most famous waterways; shed light on ice loss at the coldest reaches of the globe; and rendered first-time measurements of changes in hidden groundwater reservoirs that sustain millions daily.
Though Grace has also shaken up old ways of studying changes in solid ground — in the aftermath of earthquakes, for example — today's nod is to the mission's contribution to water science.
In celebration of a deal inked earlier this month by NASA and the German Aerospace Center to extend Grace's on-orbit life through 2015, here are just a few of the mission's greatest water- and ice-related accomplishments to date:
NASA scientists found that groundwater levels in northwestern India have been declining by an average of one foot per year. More than 26 cubic miles of groundwater disappeared between 2002 and 2008 — double the capacity of India's largest surface water reservoir and triple that of Lake Mead, the largest man-made reservoir in the United States.
Grace data confirmed the mass of ice in Antarctica decreased significantly from 2002 to 2005, enough to elevate global sea level by 0.05 inches during that period — about 13 percent of total sea level rise observed over the same four years.
Cross posted and adapted from NASA’s What on Earth blog. Gretchen is based in Chicago.
June 20, 2010
posted by Dr. Amber Jenkins
Astronaut photograph ISS023-E-58455, courtesy of the ISS Crew Earth Observations experiment and Image Science & Analysis Laboratory, Johnson Space Center.
Astronauts are lucky things. Apart from the whole being-in-space thing, they get to see sights like this one, taken from the International Space Station (ISS), on May 29, 2010. It shows the Aurora Australis, or Southern Lights — mesmerizing, ever-changing displays of light that appear in the Antarctic skies in winter.
The Southern Lights and Northern Lights (Aurora Borealis) are a by-product of the way the solar wind — a stream of electrons and protons coming from the Sun — collides with gases in the upper atmosphere. These shifting displays of colored ribbons, curtains, rays, and spots are produced by atoms, molecules and ions that have been excited by energetic charged particles travelling along magnetic field lines into the Earth's upper atmosphere.
This particular shot of the Aurora Australis was taken during a geomagnetic storm that was most likely caused by a coronal mass ejection from the Sun on May 24, 2010.
June 19, 2010
posted by Dr. Amber Jenkins
The Webby awards went down with style on Monday night, and NASA’s Global Climate Change team was there to partake in the fun. The 14th Annual Webby Awards gala was held at the Cipriani in New York City where we collected our “People’s Voice Award” for Best Science Website.
As B.J. Novak (the intern from the TV series “The Office”) hosted the awards, we shared a table with colleagues from NASA websites NASA.gov and NASA Home and City 2.0, both of whom picked up a Webby in the government website category. Oh, and we just so happened to be seated with none other than Buzz Aldrin, the second person to set foot on the Moon, and his wife Lois. Buzz was swarmed with fans the whole night long. He went up on stage to accept the award on behalf of all three NASA sites and got a standing ovation along the way. His five-word speech was “Humanity. Colonization. Phobos. Monolith. Mars.”
There were a handful of other celebs present, including Dr. Vint Cerf (co-inventor of the internet, who received the Lifetime Achievement Award and gave one of the best speeches of the night: “You ain’t seen nothing yet.”), Roger Ebert (Person of the Year), Amy Poehler (Actress of the Year) and band OK Go (for Film and Video Artist of the Year; see here for more on their collaboration with NASA folks).
Thanks to all the folks out there who voted for us and helped us win the People’s Voice Award. You spoke, the Webbys listened, and we’re super-grateful for your support. You can watch our five-word speech here.
June 15, 2010
posted by Dr. Amber Jenkins
From Mike Carlowicz,
NASA's Earth Science News Team
What do NASA techies do with their spare time? They make rock-n-roll videos. Not the big-hair, booty-shaking, smoke-and-fire kind. They help make rock videos that would make their daytime colleagues proud or jealous, or both.
Band OK Go prides itself on creative visual expressions of their music, and they wanted an extra dose of gee-whiz fun for their song "This Too Shall Pass." In early 2010, the group enlisted the help of Syyn Labs — a self-described "group of creative engineers who twist together art and technology." The Syyn Labs fraternity included (or ensnared) four staff members from NASA's Jet Propulsion Laboratory.
OK Go requested a Rube Goldberg machine as the centerpiece of a video. To borrow from wikipedia, a "Rube Goldberg machine is a deliberately over-engineered machine that performs a very simple task in a very complex fashion, usually including a chain reaction. The name is drawn from American cartoonist and inventor Rube Goldberg." Think of the classic board game Mousetrap or your favorite chain reactions from Tom & Jerry cartoons.
More than 40 engineers, techies, artists, and circus types spent several months designing, building, rebuilding, and re-setting a machine that took up two floors of a Los Angeles warehouse. The volunteers went to work after work, giving up many nights, weekends, and even some vacation days to build a machine that has drawn more than 13 million views on YouTube.
The JPL staffers included:
- Dr. Mike Pauken, a senior thermal systems engineer
- Chris Becker, a graduate student at the Art Center College of Design and a JPL intern
- Heather Knight, a former JPL engineering associate (instrumentation and robotics) who is now preparing to start work on a doctorate at Carnegie Mellon University
- Dr. Eldar Noe Dobrea, a planetary scientist working to study landing sites for the upcoming Mars Science Laboratory.
We caught up with these rock-n-roll moonlighters to learn more about the machine and video.
What was your role in the creation of the machine, and what was the inspiration behind your piece?
Eldar: My main role was to help design and construct the descent stage (2:06 to 2:28 in the video). The inspiration for the rover was a small Japanese Rube Goldberg machine that had a tiny mock-up of a mouse rover, about the size of a Hot Wheels car. It struck me that since I am representing JPL, we should have a Mars Rover in our machine.
Chris: I helped finish up the sequence of interactions and the filming. I have a couple things that I was involved with, but cannot take complete ownership of any. But during the filming, I redesigned the beginning dominos (0:06-0:18 sec.) and helped set them up between the numerous takes (60+).
Mike: I worked on the tire ramp, mostly focusing on wiring the relay circuits for the lamps that were triggered by the tire. You've got to wonder when a mechanical guy does electrical work. A friend from CalTech told me about a band making a music video featuring a Rube-Goldberg machine. Any time I've seen one in a movie, like in Pee Wee Herman's Big Adventure or Chitty Chitty Bang Bang, I've always wanted to make one myself.
Heather: I helped make sure all the modules came together in the first half of the video. I also worked on the intro, the Lego table, and the inflatables. There were a few guiding principles behind the machine. No magic: Mechanisms should be understandable and built from found objects where possible. Small to big: The size of the modules and parts becomes bigger over the course of the video. One take: As in their other videos, the band wanted the entire piece shot in one piece by a single handheld camera.
How many "takes" did it take to get the machine to work?
Mike: Before filming, it took more tries to get things right than anyone could ever have counted. Sometimes I'd spend three or four hours just fiddling with one part to get it right. Even then, it often got changed a couple days later to something else.
Heather: We learned something very important about physics in the process of making this video. It is much harder to make small things reliable. Temperature, friction, even dust all greatly affect the repeatability and timing of the small stuff. The first minute of the video failed at a rate tenfold that of the rest of the machine. Remembering that rule about getting everything in one shot — if your module is further down the line in the video, you're in big trouble if it doesn't work! The machine took half an hour and 20 people to reset.
What's the funniest or strangest thing that happened on the set?
Chris: Realizing that a number of Ph.D.s built one thing and a clown from a circus built another part. There was no hierarchy. Everyone was there for the same purpose: to build a machine that worked and was fun!
Mike: I helped assemble the sequence between the piano and the shopping cart (1:34 to 1:41). The tetherball pole was supposed to trigger the shopping cart, but when we played the song, the timing was off. The band wanted more delay so that the cart crashed at the end of 'when the morning comes.' I added in a sequence using a director's chair, a piano cover, a waffle iron, and a 10-pound weight to give the necessary delay. Heather's shoe became part of the sequence, too.
The director's chair has a rope holding one arm in place. My first thought on holding this rope was to use an umbrella, but Heather told me there were already too many umbrellas in the machine. I rummaged around the warehouse and found a high-heeled shoe sitting around a bunch of junk, and I thought this would make a great holder for the rope. I fastened the shoe to a 2-by-4 with three large wood screws, pried off the rubber tip of the heel, and sanded it a bit to allow the rope to slip off with just the right amount of force.
Then Heather walks up with a friend, who says: 'Heather, isn't that your shoe?' I thought she was kidding, but then Heather said, 'What are you doing with my shoe?' I still thought they were making a joke, but then I could tell that Heather was serious and getting mad. Then she started laughing and said: "The machine needs a high-heeled shoe!"
What is your favorite part of the machine?
Eldar: I think the beginning, where the ball bearing jumps out of the speaker when the music begins (0:24) is absolute genius. But the guitar hitting the glasses and taking over the music (1:24) is also quite phenomenal in timing and execution. There were so many things in this machine that blew my mind.
Heather: There are various 'Easter eggs' from the band's other videos that are nestled within the machine. The most obvious is the treadmill video playing on the TV that gets smashed (2:37). But there are also references to the Notre Dame marching band video on the Lego table (1:17) — from the tall Lego drummer to the dancing grass people (I made those!).
Chris: My favorite is the falling piano! That thing took such a beating and was screwed together take after take. It only lasts for a fraction of the video, but it has such comical importance and was triggered after one of the best parts of the video — the clinking glasses.
So if you could quit the day job and get paid for such things, would you?
Mike: I don't think so because I really like my day job. And even though working on the video was great fun, if it became a full-time job, I don't think it would seem as fun anymore. The build seemed like a college frat house at times, and that would definitely go away if it became a job.
Eldar: No, I work on missions to other planets! This was fun, but the real deal is at NASA. They say that there is no business like show business. They can keep it.
Adapted from NASA’s What on Earth blog. Mike is based in Washington, DC. | 2026-01-30T18:27:23.860208 |
646,699 | 4.603993 | http://en.wikibooks.org/wiki/Digital_Circuits/Adders | Consider adding two binary numbers together:
We see that the bit in the "two's" column is generated when the addition carried over (for example, 1 + 1 = 10 in binary). A half-adder is a circuit which adds two bits together and outputs the sum of those two bits. The half-adder has two outputs: sum and carry. Sum represents the remainder of the integer division (A+B)/2, while carry is its quotient. This can be expressed as follows: S = A XOR B, C = A AND B.
Half-adders have a major limitation in that they cannot accept a carry bit from a previous stage, meaning that they cannot be chained together to add multi-bit numbers. However, the two output bits of a half-adder could in principle also represent the result A+B = 3, with sum and carry both high, an output combination a half-adder alone never produces, which leaves room for a third input bit.
As such, the full-adder can accept three bits as an input. Commonly, one bit is referred to as the carry-in bit. Full adders can be cascaded to produce adders of any number of bits by daisy-chaining the carry of one output to the input of the next.
The full-adder is usually shown as a single unit. The sum output is usually on the bottom on the block, and the carry-out output is on the left, so the devices can be chained together, most significant bit leftmost:
A ripple carry adder is simply several full adders connected in series so that the carry must propagate through every full adder before the addition is complete. Ripple carry adders require the least hardware of all adders, but they are the slowest.
The following diagram shows a four-bit adder, which adds the numbers A[3:0] and B[3:0], as well as a carry input, together to produce S[3:0] and the carry output.
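As a rough illustration (not taken from the wikibook itself), the full-adder logic S = A XOR B XOR Cin and Cout = (A AND B) OR (Cin AND (A XOR B)) can be chained in software to mimic the four-bit ripple-carry adder described above:

```java
// Sketch of a 4-bit ripple-carry adder built from full-adder logic.
// Each stage computes sum = a ^ b ^ cin and carry = (a & b) | (cin & (a ^ b)),
// and the carry "ripples" into the next stage.
public class RippleCarryAdder {
    /** Adds two 4-bit values given as bit arrays (index 0 = least significant bit).
        Returns 5 bits: the 4 sum bits plus the final carry-out. */
    static int[] add(int[] a, int[] b) {
        int[] result = new int[5];
        int carry = 0;                                   // C0, the carry-in to the first stage
        for (int i = 0; i < 4; i++) {
            int axorb = a[i] ^ b[i];
            result[i] = axorb ^ carry;                   // Si
            carry = (a[i] & b[i]) | (carry & axorb);     // Ci+1
        }
        result[4] = carry;                               // final carry-out
        return result;
    }

    public static void main(String[] args) {
        // 0110 (6) + 0111 (7) = 01101 (13); bits are listed least significant first below.
        int[] sum = add(new int[]{0, 1, 1, 0}, new int[]{1, 1, 1, 0});
        for (int i = 4; i >= 0; i--) System.out.print(sum[i]);
        System.out.println();                            // prints 01101
    }
}
```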
Propagation Delay in Full Adders
Real logic gates do not react instantaneously to the inputs, and therefore digital circuits have a maximum speed. Usually, the delay through a digital circuit is measured in gate-delays, as this allows the delay of a design to be calculated for different devices. AND and OR gates have a nominal delay of 1 gate-delay, and XOR gates have a delay of 2, because they are really made up of a combination of ANDs and ORs.
A full adder block has the following worst case propagation delays:
- From A_i or B_i to C_{i+1}: 4 gate-delays (XOR → AND → OR)
- From A_i or B_i to S_i: 4 gate-delays (XOR → XOR)
- From C_i to C_{i+1}: 2 gate-delays (AND → OR)
- From C_i to S_i: 2 gate-delays (XOR)
Because the carry-out of one stage is the next's input, the worst case propagation delay is then:
- 4 gate-delays from generating the first carry signal (A_0/B_0 → C_1).
- 2 gate-delays per intermediate stage (C_i → C_{i+1}).
- 2 gate-delays at the last stage to produce both the sum and carry-out outputs (C_{n-1} → C_n and S_{n-1}).
So for an n-bit adder, we have a total propagation delay, t_p, of:
This is linear in n, and for a 32-bit number, would take 65 cycles to complete the calculation. This is rather slow, and restricts the word length in our device somewhat. We would like to find ways to speed it up.
A fast method of adding numbers is called carry-lookahead. This method doesn't require the carry signal to propagate stage by stage, causing a bottleneck. Instead it uses additional logic to expedite the propagation and generation of carry information, allowing fast addition at the expense of more hardware requirements.
In a ripple adder, each stage compares the carry-in signal, C_i, with the inputs A_i and B_i and generates a carry-out signal C_{i+1} accordingly. In a carry-lookahead adder, we define two new functions.
The generate function, G_i, indicates whether that stage causes a carry-out signal C_{i+1} to be generated even if no carry-in signal exists. This occurs if both the addends contain a 1 in that bit: G_i = A_i · B_i.
The propagate function, P_i, indicates whether a carry-in to the stage is passed on to the carry-out of the stage. This occurs if either of the addends has a 1 in that bit: P_i = A_i + B_i.
Note that both these values can be calculated from the inputs in a constant time of a single gate delay. Now, the carry-out from a stage occurs if that stage generates a carry (G_i = 1) or there is a carry-in and the stage propagates the carry (P_i·C_i = 1): C_{i+1} = G_i + P_i·C_i.
The table below summarizes this:
We can extend the expression for the carry-out by substituting the expression for the carry-out of the previous stage: C_{i+1} = G_i + P_i·G_{i-1} + P_i·P_{i-1}·G_{i-2} + ... + P_i·P_{i-1}···P_1·P_0·C_0.
Note that this does not require the carry-out signals from the previous stages, so we don't have to wait for changes to ripple through the circuit. In fact, a given stage's carry signal can be computed once the propagate and generate signals are ready with only two more gate delays (one AND and one OR). Thus the carry-out for a given stage can be calculated in constant time, and therefore so can the sum.
| Operation | Required Data | Gate Delays |
| --- | --- | --- |
| Produce stage generate and propagate signals | Addends (A and B) | 1 |
| Produce stage carry-out signals, C_1 to C_n | P and G signals, and C_0 | 2 |
| Produce sum result, S | Carry signals and addends | 3 |
The S, P, and G signals are all generated by a circuit called a "partial full adder" (PFA), which is similar to a full adder.
For a slightly smaller circuit, the propagate signal can be taken as the output of the first XOR gate instead of using a dedicated OR gate, because if both A and B are asserted, the generate signal will force a carry. However, this simplification means that the propagate signal will take two gate delays to produce, rather than just one.
A carry lookahead adder then contains n PFAs and the logic to produce carries from the stage propagate and generate signals:
Two numbers can therefore be added in constant time, O(1), of just 6 gate delays, regardless of the length, n of the numbers. However, this requires AND and OR gates with up to n inputs. If logic gates are available with a limited number of inputs, trees will need to be constructed to compute these, and the overall computation time is logarithmic, O(ln(n)), which is still much better than the linear time for ripple adders.
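To make the generate/propagate bookkeeping concrete, here is a rough Java sketch (again not from the wikibook) that computes every carry of a 4-bit carry-lookahead stage directly from G_i, P_i and C_0, so that no carry has to wait for an earlier stage's carry output:

```java
// Sketch of the carry computation in a 4-bit carry-lookahead adder.
// Gi = Ai & Bi (generate), Pi = Ai | Bi (propagate), Ci+1 = Gi | (Pi & Ci),
// expanded so that every carry depends only on G, P and C0.
public class CarryLookahead {
    /** Returns the carries C0..C4 for 4-bit addends a and b (bit 0 = LSB) and carry-in c0. */
    static int[] carries(int[] a, int[] b, int c0) {
        int[] g = new int[4], p = new int[4];
        for (int i = 0; i < 4; i++) {
            g[i] = a[i] & b[i];   // stage generates a carry on its own
            p[i] = a[i] | b[i];   // stage passes an incoming carry along
        }
        int[] c = new int[5];
        c[0] = c0;
        c[1] = g[0] | (p[0] & c0);
        c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0);
        c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0);
        c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
                    | (p[3] & p[2] & p[1] & p[0] & c0);
        return c;
    }

    public static void main(String[] args) {
        // 1111 + 0001 with no carry-in: every stage propagates, so the carry-out C4 is 1.
        int[] c = carries(new int[]{1, 1, 1, 1}, new int[]{1, 0, 0, 0}, 0);
        System.out.println("carry-out C4 = " + c[4]);    // prints 1
        // The sum bits would then be Si = Ai ^ Bi ^ Ci, all computed in parallel.
    }
}
```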
Cascading Carry-Lookahead Adders
A basic carry-lookahead adder is very fast but has the disadvantage that it takes a very large amount of logic hardware to implement. In fact, the amount of hardware needed is approximately quadratic with n, and begins to get very complicated for n greater than 4.
Due to this, most CLAs are constructed out of "blocks" comprising 4-bit CLAs, which are in turn cascaded to produce a larger CLA.
A carry-save adder is a kind of adder with low propagation delay (critical path), but instead of adding two input numbers to a single sum output, it adds three input numbers to an output pair of numbers. When its two outputs are then summed by a traditional carry-lookahead or ripple-carry adder, we get the sum of all three inputs.
When adding three or more numbers together, a sequence of carry-save adders terminated by a single carry-lookahead adder provides much better propagation delays than a sequence of carry-lookahead adders. In particular, the propagation delay of a carry-save adder is not affected by the width of the vectors being added.
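As a rough illustration (not part of the original text), the three-inputs-to-two-outputs reduction can be written with plain bitwise operations; the full-adder wiring described next does exactly this at every bit position:

```java
// Sketch of a carry-save step: three input words are reduced to two output words
// (a "sum" word and a "carry" word) with no carry propagation between bit positions.
public class CarrySaveAdder {
    /** Reduces x + y + z to two words whose ordinary sum equals x + y + z. */
    static int[] reduce(int x, int y, int z) {
        int sum   = x ^ y ^ z;                               // per-bit sums (full adder S outputs)
        int carry = ((x & y) | (y & z) | (x & z)) << 1;      // per-bit carries, shifted into the next column
        return new int[]{sum, carry};
    }

    public static void main(String[] args) {
        int[] r = reduce(9, 5, 3);
        // One conventional (e.g. carry-lookahead) addition finishes the job.
        System.out.println(r[0] + r[1]);                     // prints 17 = 9 + 5 + 3
    }
}
```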
Carry-save adders are really completely parallel arrays of full adder circuits, with each bit of the three input vectors loaded into the corresponding full adder's A, B, and Cin inputs. Each full adder's output S is connected to the corresponding bit of one output, and its output Cout is connected to the next higher bit of the second output; the lowest bit of the second output is fed directly from the carry-save adder's Cin input. | 2026-01-28T06:10:20.609940 |
340,800 | 3.501245 | http://kanoonline.com/mb/index.php?option=com_smf&Itemid=36&topic=730.0 | The Characteristics of the Fulani
Ismail Iro, Ph.D.firstname.lastname@example.org
FUNDING FOR THIS STUDY WAS PROVIDED BY THE AFRICAN DEVELOPMENT FOUNDATION
WASHINGTON, DC. USA
Abstract and Introduction
Should Pastoral Fulani Sedentarize?
Characteristics of the Fulani
Fulani Herding System
Traditionalism Vs. Modernism: A Look at Fulani Methods of Livestock Disease Management
Scarcity of Water as an Impediment to Pastoral Fulani Development
Nomadic Education and Education for Nomadic Fulani
Grazing Reserve Development: A Panacea to the Intractable Strife Between Farmers and Herders
Where Modernism has Failed and Traditionalism has Thrived: A Look at Commercial Ranching and Fulani Herding
Livestock Transportation and Marketing in Nigeria
The Fulani Milk Maid and Problems of Dairying in Nigeria
Diffusion of Innovation: The Fulani Response to Livestock Improvement
This section looks at the characteristics of the Fulani, including their demography, marriage and marital status, pattern of population growth, and age distribution. The section also examines the implications of rapid population increase among the Fulani. The later part of the publication is devoted to the analyses of Fulani gender, household generation, dependency-ratio and labor-force, and governance.
The Fulani are a group of West African pastoralists. They move over vast areas and come across many cultures. Known by different names, the Fulani are called Peul in Wolof, Fula in Bambara, Felaata in Kanuri, and Fulani in Hausa. The word Fulbe was first used by the German writers to refer to the Fulani (de St Croix 1945).
Legend says the Fulani originated from the Arabian Peninsula (de St Croix 1945), and migrated south-west to Senegambia. From Senegambia, they moved eastward, crossing several Sahelian and Sudanian zones, to the Red Sea (Frantz 1981). The Fulani of Nigeria are a part of this migrant, ethnic population having common occupational and biogenetic characteristics. Light-skinned with curly hair, the Fulani have pointed noses, thin lips, and a slender stature (Stenning 1959).
Marriage and nuptiality
The Fulani are endogamous, marrying from cross- and parallel- cousins (Kooggal), or from clan members (Deetuki) (Ezeomah 1987). Endogamy is breaking rapidly, and the Fulani are increasingly having marital relationship with other ethnic groups, especially with the Hausawa with whom they share a common religion. Religious more than cultural differences are the main barrier to inter-ethnic marriages with the Fulani.
Marriages may be planned among families even before the birth of the children. That many marriages are arranged does not mean there are no marriages based on love and affection. The resting-season game, the Sharo, in which young suitors take whipping turns is an example of courtship based on selection and courage.
Fulani men are also polygamous, marrying about two wives in a lifetime. Every normal Fulani man or woman, including those who have delayed their wedding, is expected to marry. Celibacy is uncommon among the Fulani. A question in the questionnaire seeks to establish the marital status of adult members of the household. The finding reveals that about half of the respondents are married; the bulk of the singles are youths.
Since the age of marriage affects women's fertility (the number of children born per woman), which in turn affects population demography, a question about the age at first marriage of Fulani women is included in the schedules. The responses show that, by Western standards, the Fulani marry early. Put in the context of rural Nigeria, however, the age at first marriage among Fulani women is only slightly higher than for the non-Fulani.
Age at first marriage
Most Fulani men marry in their early twenties, and Fulani women marry in their middle to late teens. By the age of twenty-five, most women are married. Similarly, by the age of thirty, most Fulani men have had their first wedding.
The disruption of marriage through divorce is rare among the Fulani. As the data show, only about two percent of married people ever go through a divorce. The termination of marriage through death is slightly more common (three percent) than through divorce. Most couples remain together for the greater part of their lives. Since divorce and widowhood lower birth rates, Fulani women in continuous marriages have more children than those who are divorced or widowed. Discussions with male and female respondents who have been divorced indicate that remarriage is frequent and usually occurs within a year of separation. The proportion of single men and women in the marriageable age group is small, which helps maintain the relatively high population growth rate among the Fulani.
The Fulani seldom use artificial birth control. Fertility is relatively high because the number of births exceeds the number of deaths. The difference between net increase and net decrease determines the growth of a society. Fertility and mortality account for the "natural increase" in a community, although in mobile societies, migration also plays a key part. In the absence of migration data, this research relies on fertility and mortality records to estimate population growth of the Fulani.
Keeping mortality constant, the fertility level of the Fulani is linked to early marriages. A couple that marries early is likely to have more children than a couple that marries late. Most women become pregnant within the first year of their marriage and continue bearing children through the age of fifty. By the time a Fulani woman reaches her menopause, she would have given birth to five to seven children. Correlation analysis reveals that the spacing of birth is random. Women give birth to most of their children before reaching thirty years. The survival of the children depends on their health and nutritional status.
Morbidity and mortality
The variables used to estimate morbidity and mortality (longevity) in this sample of the Fulani are health status, number of live births, number of deaths in one year, and the age of the mothers. Education is not used as an index because the corresponding data for cross-tabulation have not been included in the research design. Using the average number of deaths among children born alive, the data indicate high infant mortality among the Fulani in the sample. Of the 6,471 children born alive, 1,260 (19.47%) have died before reaching their adolescence.
Survivability of the infants
One in ten Fulani children born alive will die before their first birthday, and one in five will not reach the age of six. Coupled with low life expectancy in rural areas, the chance that a Fulani child will live to the age of fifty is only forty-six percent. This percentage is lower than the national average, and much lower than in Sweden, where the corresponding figure was about ninety-five percent in 1980.
Health status and causes of deaths
Less than fifteen percent of rural Nigerians have access to medical services. Most hospitals, and nearly all specialists and university teaching hospitals, are located in cities or large villages. Rural areas have small clinics and dispensaries, but these are staffed by less qualified nurses and dispensary attendants. Drug supplies are inadequate and narrow in range; most of the drugs stocked are broad-spectrum, and because of bad storage and inventory practices they expire or become contaminated.
Even among rural dwellers, the pastoralists are disadvantaged in getting medical services. A staff member of a health management board interviewed for this research summarizes health care delivery to the pastoral Fulani in one word: non-existent. Yet, of all Nigerians, the Fulani are the most vulnerable to diseases and natural hazards. Their mobility exposes them to common colds and allergies associated with dust, weeds, and animals. Their unprotected bodies are exposed to bites or stings from bees, snakes, scorpions, mosquitoes, house flies, and tsetse flies. The Fulani drink water that is polluted with dirt and decomposing matter. The turbid and smelly water is also infested with visible and invisible worms and parasite larvae.
The Fulani are exposed to heat, rains, dust, winds, mist and dampness. While moving in the bush, the Fulani receive cuts from thorns, tree branches, uneven terrains, and protruding stones. Injuries also occur from falls, falling trees, accidental shootings by hunters, bites from wild and domesticated animals, and more seriously, fighting with competitors.
The interviews ask about the specific diseases afflicting the Fulani. Discussions with health officials reveal that the pastoral Fulani are plagued by diseases such as malaria, filariasis, dysentery, gangrene wounds, liver flukes, bilharziasis, asthma, rabies, sleeping sickness, hyperthermia, skin disorder, tuberculosis, constipation, and exhaustion. The commonest causes of deaths among the Fulani are similar to the major causes of fatality among non-pastoralists in the rural areas. These diseases fall into two categories, preventable and communicable. Malaria, caused by the protozoan plasmodium, is the most prevalent illness. Endemic in the tropics, it accounts for more than fifty percent of the deaths and disabilities.
Malaria is carried by the mosquitoes, an active biting insect to which the Fulani are exposed. Lush grass and stagnant pools of water, both essential to pastoralists, harbor the mosquitoes. Like the tsetse flies, the mosquitoes also feed on the blood of the animals. The Fulani seldom use insecticides or mosquito nets to fight off the insects or to reduce the harm they cause. Preventing mosquito bites is impossible during herding.
Like terrestrial insects, water-borne organisms infect the Fulani. Worms and unicellular organisms cause amoebic dysentery among the Fulani who have no clean drinking water. Air-borne diseases such as tuberculosis and cerebral spinal meningitis are also rampant, especially in congested sedentary camps. Meningitis has become an annual national epidemic, occurring at the peak of the hot-season.
Unless these afflictions become debilitating, the Fulani use their natural immunity, local herbs, and time to outlast the ailments. The problem arises when the pastoralists are in critical condition and need hospital attention; then the problems of accessibility and transportation surface. The Fulani point to the inadequacy of treatment centers, which, as the findings show, are several kilometers away. Nearly half of the respondents do not have easy access to health facilities. The Fulani are also some distance away from other health facilities in their settlements or in the places they stay during their migration. More than half of the Fulani live in areas without medical services. Only ten percent of the respondents report being near a clinic or a hospital.
As a result of poor access to medical services, only serious health problems are taken to the clinics. Mild cases are treated by local herbalists. Extreme or life-threatening conditions are referred to the general hospitals in cities. Doctors at Ahmadu Bello University Teaching Hospital report that most Fulani patients delay going to the hospital until their condition is critical. A medical staff member at Murtala Mohammed Hospital in Kano observed that many Fulani patients arrive with terminal conditions that could have been avoided with early treatment.
It is not only distance that keeps the Fulani from getting the medical care they need; cost is also a barrier. Sixty-one percent of the Fulani partly or fully pay their own medical costs.
At the hospital, the pastoralists waste much time because they do not know how to register and obtain a card. They cannot read the signs or tell which line to follow to see the doctor or to reach the pharmacy. Counter clerks mock the Fulani who do not speak English or the local language. When they see the opportunity, health care workers extort fees from the Fulani even where services are free. Some unscrupulous workers give patients half of the prescribed medications and sell the rest at the local pharmacy. Sometimes doctors in public hospitals send the Fulani to their own private clinics or to patent chemists for treatment or for the purchase of drugs at inflated prices.
The Fulani often refuse to follow drug prescriptions. They cannot understand the rationale for regimented medication and think that taking more of a medicine will provide a quicker cure. Patients stop their medications at the slightest improvement in their health. Some patients have become used to certain drugs and are unhappy with, or may even refuse, prescriptions that differ in color, shape, smell, or packaging from the drugs they know, especially if such prescriptions are given by a new health worker. There is also a belief that treatments that do not involve intramuscular injections are not effective, and the Fulani think poorly of doctors who do not prescribe injections.
As a result of the constraints in medical services, mortality is high among the Fulani, and this affects their population growth, although other non-medical factors also contribute. Even with the known fertility, mortality, and health status, it is difficult to obtain an accurate count of the Fulani population in this sample. The rapid shifts of homestead, trans-ethnic marriages, and suspicion increase the margin of error in estimating the population of Fulani.
The lack of ethnic statistics in Nigeria means that an accurate count of the Fulani is not available. Using information from community leaders, this research attempted to sample about ten percent of pastoral Fulani households in the sample sites. The sample figures were adjusted for over- and under-counts, then weighted by a factor of ten. Only 8.5 percent of the population is purely nomadic. This percentage corroborates information from interviews with community rulers, who confirm that most Fulani in Nigeria have settled. Only a few of Nigeria's roughly three million cattle-keeping people remain fully nomadic (Waters-Bayer and Taylor-Powell 1986).
The Fulani community is growing at slightly less than the national average population growth rate of 3.5 percent a year. Although the Fulani population is growing more slowly than the national population, the absolute increase is large. Even at the lower figure of 2.8 percent annual growth, the Fulani population will double in roughly twenty-five years.
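A quick back-of-the-envelope check of that doubling time (using the standard exponential-growth formula, not a figure taken from the study itself) is sketched below:

import math
r = 0.028                                        # assumed annual growth rate of 2.8%
doubling_time = math.log(2) / math.log(1 + r)    # about 25.1 years

At the national rate of 3.5 percent, the same formula gives a doubling time of roughly twenty years.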
Size of household
The average size of the household among this Fulani sample is 6.15. The figure compares to the 6.3 average household size for the Fulani in north-central Nigeria.
The average size of the household in this sample is higher than the figure widely quoted in the literature for nomadic pastoralists. However, it matches the one for a population at an advanced stage of sedentarization. A negative correlation coefficient (r= -0.89) is obtained when the increase in household size is compared with the frequency of change of settlement in one year (an indication of the level of sedentariness).
The implication of rapid population increase. A population increase will result in higher density, with far-reaching consequences for pastoral and rural sub-systems. Rapid population growth has a three-fold effect on the development of the Fulani. First, the population increase outgrows the food supply (Boserup 1965). Second, social welfare amenities in the rural areas deteriorate faster than they can be replaced or repaired. Third, education increases the demand for the specialized needs of the future generation of the Fulani.
Finding the age of a Fulani person is as difficult as estimating the population of the Fulani. The Fulani, especially the elderly, have no birth records. Only a few Fulani have birth certificates. Age determination depends on memory recollection. The Fulani can remember their ages only if they are born in years of memorable events such as war, drought, disease epidemic, reign of a famous ruler, or the arrival of the first car, train, or electricity (Peil 1982). Although there is little concern about age falsification, inaccuracies can still be expected from some respondents on answers about their age.
The demography of the pastoral Fulani resembles that of the sedentary rural people. The Fulani population is base-heavy, with a large proportion of household members in the youthful age group. The data show that 38.86 percent of the males and 39.57 percent of the females are fifteen years old or younger. About forty-eight percent of the Fulani in the sample are in their reproductive years (17-55 years).
Age and gender distribution of the household head
The Household Records Form, which probes the age and gender of the household head, indicates that the typical pastoral Fulani household is male-headed. The findings show that most pastoral Fulani household heads are in their thirties to mid-forties. The sharp drop in the percentage of household heads over fifty-five occurs because, around the age of fifty, fathers start becoming grandfathers and begin handing household responsibilities to their sons. The Fulani do not split the household immediately after the marriage of the first son. Headship of the household, defined by who is the most active provider for the family, passes to the youngest, most energetic man in the household, in this case the son.
Gender. In developing countries, the females slightly outnumber the males, although in some age groups the males dominate. The 1963 Nigerian census gives a sex-ratio of 102 males per 100 females. In this sample of the Fulani, however, even allowing for female under-reporting, the females are preponderant. The data suggest that the Fulani in the study areas have a near balanced sex-ratio of 3,779 males to 3,787 females, about 100 males for every 100.2 females. Again, this sex-ratio represents the average condition, which may vary in different age groups.
About a quarter of the Fulani in the sample live in single generation households, that is, where there is only a man and his wife (and non-uterine children). Many of these households also have one or two children from the wife or the husband's family who are adopted temporarily.
Dependency-ratio and labor-force
Slightly more than forty percent of the Fulani depend on the head of the household for nurturance and basic care. Although the dependents are those aged 0-15 and 65 and over, many in these age brackets are part of the undocumented, informal workforce that economists usually overlook. The working age group in this sample ranges from 8 to 55 years old, although not everyone in this age group works. The dependency-ratio would be much lower than forty percent if the economic contribution of this group were added. A low dependency-ratio means less burden on individual families, although often both the dependent and the independent population rely on the state for non-subsistence needs.
The Fulani live in a stratified society with a hierarchy of chieftaincy. Institutionalized political leadership exists, but it does not concentrate power in the hands of the elite. Administrative authority is a function of stable societies, especially those with predictable resources (Salzman 1980a). The Fulani who have become established in settlements have a cohesive political system.
In addition to having conventional authorities, the Fulani have a quasi-government system. Contrary to popular belief, the Fulani have identifiable leaders with full or partial decision-making authorities. At the village level, for example, the settled Fulani have the Sarkin Fulani, a title that has existed since the Fulani conquered Northern Nigeria. Among the pastoral Fulani, sociopolitical structure centers on a typology of leadership consisting of the Ardo (the chief or the lineage head) and the Lamido.
The Ardo is invaluable in Fulani oligarchy. He mediates between his people and the constituted authority. He works between the demands of the Fulani and the policies of the government, which often differ sharply from each other. His past role as tax collector, however, makes him unpopular among his people. With the repeal of the Jangali and the creation of states and local governments in Nigeria, the role of the ardo has become less defined. As the prominence of the ardo diminishes, so does the influence of his superior, the Lamido.
The Ardo is a subordinate of the Lamido, who is the clan head and the trustee of the famous Islamic Jihadist, Usmanu Danfodiyo. The Lamido governs his people and adjudicates the land according to the Shari'a laws. He acts as the spiritual head of the Fulani. His role as a learned man allows him to be the judge, the Imam, and the arbitrator. Like the Ardo, the Lamido has been stripped of his feudal powers and responsibilities by the Penal Code (the modern law). Among today's Fulani, the Lamido, chosen by the king-makers, is less feudalistic and aristocratic.
Consensus and compromise are the rules in king-making among the Fulani. Unlike most traditional African societies, where leadership is inherited, the Fulani community is more democratic in its leadership selection. Once elected, the voluntary, unsalaried Fulani leader enjoys unprecedented cooperation from his people. About three-quarters of the Fulani interviewed say the authority of the Fulani head supersedes that of the non-Fulani ward or even village heads. Although they favor autonomous decision-making, the Fulani rely on the kinship group for collective decision-making.
Kinship groups and socioeconomic relationships
The Fulani kinship represents an economic as well as a convivial unit, having common territory and occupation. The Fulani social structure consists of the ethnic group, clan, lineage, family, and Ruga (household).
The ethnic group. The ethnic group is the highest echelon and the conflation of the kinship groups. It embodies all members with a common origin, sharing a founding ancestor whose personage may or may not be known, or whose genealogical link may not be traced to individual members.
The clan. The clan is the sub-unit of the tribe, which anthropologists defined as the "collective descendants of a vaguely known historical ancestor" (Bonfiglioli 1993, 5). The clan members, by tradition, share mythical historical ancestry. Each clan consists of about a thousand to five thousand members. Genealogical ties among clan members are obscure.
The lineage. A clan consists of several lineage groups, although in language and territory, the distinction between the clan and the lineage is blurred. The members of a lineage, that is, descendants of a more recent male ancestor, have mutual obligations during attack, defense, or vengeance (Shanmugaratman 1992; and Bonfiglioli 1993). The lineage members, who have closer historical ancestry than the clan members, comprise five hundred to one thousand members.
The family. The family is a branch of the lineage group, and is the basic social as well as the smallest political unit organized around a patrilineal homestead. Made up of five to fifteen members, the agnatic family is created by marriages and births (Bonfiglioli 1993).
The Ruga. Within the families are compartments or households that eat at least one meal a day together. The Ruga, or homestead, is the domestic unit, consisting of a man, his wife or wives, unmarried children, and dependent parents. Each household represents a cattle-owning entity, headed by the eldest, most able-bodied member of the family.
To summarize, the Fulani are endogamous as well as polygamous. Celibacy is uncommon among the Fulani, who marry in their twenties. Divorce is also rare. As a result of polygamy and early marriages, the Fulani have high fertility. Despite high infant mortality, the population of the Fulani is growing fast, although slower than the national average. Household size is about six, with a near balanced sex-ratio. Age distribution is base-heavy, with children dominating. The Fulani are governed by a political structure consisting of the ethnic group, the clan, the lineage, the family, and the Ruga. Leadership among the Fulani is less aristocratic. The family is a herd-owning unit, united by common territory and occupation. Their herding system, described in the section that follows, involves frequent pastoral movement.
The section to follow will focus on the Fulani herding system, the primary occupation of pastoral Fulani. It will look into the herding tasks and the gender responsibilities. It will also examine the role of mobility in pastoral nomadism. A space is also devoted to the analyses of herd composition, species distribution, optimal herd size, and livestock population and distribution in Nigeria.
TO BE CONTINUED
Dr. Ismail Iro is a programmer and data analyst in Washington, D.C. | 2026-01-23T10:37:27.406695 |
1,114,011 | 3.533041 | http://phys.org/news/2013-04-schrodinger-equation.html | (Phys.org) —One of the cornerstones of quantum physics is the Schrödinger equation, which describes what a system of quantum objects such as atoms and subatomic particles will do in the future based on its current state. The classical analogies are Newton's second law and Hamiltonian mechanics, which predict what a classical system will do in the future given its current configuration. Although the Schrödinger equation was published in 1926, the authors of a new study explain that the equation's origins are still not fully appreciated by many physicists.
In a new paper published in PNAS, Wolfgang P. Schleich, et al., from institutions in Germany and the US, explain that physicists usually reach the Schrödinger equation using a mathematical recipe. In the new study, the scientists have shown that it's possible to obtain the Schrödinger equation from a simple mathematical identity, and found that the mathematics involved may help answer some of the fundamental questions regarding this important equation.
Although much of the paper involves complex mathematical equations, the physicists describe the question of the Schrödinger equation's origins in a poetic way:
"The birth of the time-dependent Schrödinger equation was perhaps not unlike the birth of a river. Often, it is difficult to locate uniquely its spring despite the fact that signs may officially mark its beginning. Usually, many bubbling brooks and streams merge suddenly to form a mighty river. In the case of quantum mechanics, there are so many convincing experimental results that many of the major textbooks do not really motivate the subject [of the Schrödinger equation's origins]. Instead, they often simply postulate the classical-to-quantum rules….The reason given is that 'it works.'"
Coauthor Marlan O. Scully, a physics professor at Texas A&M University, explains how physicists may use the Schrödinger equation throughout their careers, but many still lack a deeper understanding of the equation.
"Many physicists, maybe even most physicists, do not even think about the origins of the Schrödinger equation in the same sense that Schrödinger did," Scully told Phys.org. "We are often taught (see, for example, the classic book by Leonard Schiff, 'Quantum Mechanics') that energy is to be replaced by a time derivative and that momentum is to be replaced by a spatial derivative. And if you put this into a Hamiltonian for the classical dynamics of particles, you get the Schrödinger equation. It's too bad that we don't spend more time motivating and teaching a little bit of history to our students; but we don't and, as a consequence, many students don't know about the origins."
Scully added that understanding the history of both the science and the scientists involved can help in providing a deeper appreciation of the subject. In this way, the authors of the current paper are building on Schrödinger's own revolutionary discovery.
"Schrödinger was breaking new ground and did the heroic job of getting the right equation," Scully said. "How you get the right equation, is less important than getting it. He did such a wonderful job of then deriving the hydrogen atom wave function and much more. So did he understand what he had? You bet, he was really right on target. What we are trying to do is to understand more deeply the connection between classical and quantum mechanics by looking at things from different points of view, getting his result in different ways."
As the river analogy implies, there are many different ways to obtain the Schrödinger equation, with the most prominent one having been developed by Richard Feynman in 1948. But none of these approaches provides a satisfying explanation for one of the defining features of quantum mechanics: its linearity. Unlike the classical equations, which are nonlinear, the Schrödinger equation is linear. This linearity gives quantum mechanics some of its uniquely non-classical characteristics, such as the superposition of states.
In their paper, the physicists developed a new way to obtain the Schrödinger equation starting from a mathematical identity using classical statistical mechanics based on the Hamilton-Jacobi equation. To make the transition from the nonlinear classical wave equation to the linear Schrödinger equation—that is, from classical to quantum physics—the physicists made a few different choices regarding the amplitude of the wave and thereby linearized the nonlinear equation. Some of the choices resulted in a stronger coupling between the wave's amplitude and phase in comparison with the coupling in the classical equation.
"We have shown in a mathematical identity—the starting point of everything—that the choice of the coupling determines the nonlinearity or the linearity of the equation," Schleich, a physics professor at the University of Ulm, said. "In some wave equations, there is coupling between the amplitude and phase so that the phase determines the amplitude, but the amplitude does not determine the phase. In quantum mechanics, both amplitude and phase depend on each other, and this makes the quantum wave equation linear."
Because this coupling between amplitude and phase ensures the linearity of the equation, it is essentially what defines a quantum wave; for classical waves, the phase determines the amplitude but not vice versa, and so the wave equation is nonlinear.
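To make the amplitude-phase coupling concrete, consider the standard polar (Madelung) decomposition, written here as a sketch in LaTeX notation; this is the textbook route to the coupled real equations, not necessarily the specific mathematical identity used in the PNAS paper. Writing the wave function as \psi = A e^{iS/\hbar} and substituting it into the Schrödinger equation gives, after separating real and imaginary parts,

\[ \frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 A}{A} = 0, \qquad \frac{\partial (A^2)}{\partial t} + \nabla \cdot \left( A^2\, \frac{\nabla S}{m} \right) = 0. \]

Dropping the term proportional to \hbar^2 recovers the classical Hamilton-Jacobi equation plus a continuity equation: the phase S then drives the amplitude A, but A no longer feeds back into S. That one-way coupling is the classical, nonlinear situation described above, while the \hbar^2 term restores the two-way coupling that makes the quantum equation linear.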
"As we show in our paper, the Hamilton-Jacobi plus continuity logic leads to an equation which is very similar to the Schrödinger equation," Scully said. "But it's different and this difference is something that we consider important to understand. From one point of view, the extra term that comes into the nonlinear wave equation corresponding to classical physics (as opposed to the linear Schrödinger equation) shows that the classical equation is not linear and we cannot have superpositions of states. For example, we can't have right and left running waves adding to get standing waves because of this nonlinear term. It's when we have standing waves (left and right running wave solutions) that we most naturally get the eigenvalue solutions which we must, like the hydrogen atom eigenstates. So emphasizing linearity is very important."
The analysis also sheds some light on another old question regarding the Schrödinger equation: why does it involve an imaginary unit? In the past, physicists have debated whether the imaginary unit—which does not appear in classical equations—is a characteristic feature of quantum mechanics or whether it serves another purpose.
The results here suggest that the imaginary unit is not a characteristic quantum feature but is just a useful tool for combining two real equations into a single complex equation.
In the future, the physicists plan to extend their approach—which currently addresses single particles—to the phenomenon of entanglement, which involves multiple particles. They note that Schrödinger called entanglement the trait of quantum mechanics, and a better understanding of its origins could also reveal some interesting insight into the workings of the tiniest components of our world.
"We are presently looking at the problems from the point of view of current—how and to what extent can we regain quantum mechanics by relaxing the classical current idea and focus instead on a quantum-type current," Scully said. "From this perspective, we get into gauge invariance. There are lots of fun things that one can consider and we are trying to fit these together and see where each of these perspectives takes us. It is also fun to find out who has had ideas like this in the past and how all the ideas fit together to give us a deeper understanding of quantum mechanics. If our paper stimulates interest in this problem, it will have served its purpose."
More information: Wolfgang P. Schleich, et al. "Schrödinger equation revisited." PNAS Early Edition. DOI: 10.1073/pnas.1302475110 | 2026-02-04T11:17:13.849998 |
1,147,899 | 3.877995 | http://www.sciencefriday.com/blogs/08/02/2010/the-connection-between-prime-numbers-and-music.html?interest=1 | Prime numbers––those divisible only by themselves and one––have confounded mathematicians for centuries. Because mathematicians rely on patterns, the fact that primes occur at seemingly random intervals (2, 3, 5, 7, 11, 13…) makes them the Holy Grail of math. Many who studied the numbers burned out in their primes, fell into depressions, or attempted suicide.
The Music of the Primes, a companion to the BBC documentary The Story of Math, delves into the history of mathematicians’ struggle to understand the primes. In 300 B.C., Greek mathematician Euclid proved that there are an infinite number of primes, but could not find a way to predict when they would show up in a sequence.
At the beginning of the 19th century, German mathematician Carl Friedrich Gauss made a major breakthrough: instead of asking which numbers are prime, he asked how many there are. Using a graph like the one below, Gauss predicted the rate at which primes thin out as numbers become larger. Program host Marcus du Sautoy tells us that Gauss "heard the dominant theme of the music of the primes, but he couldn't prove it." But we must wait to understand the meaning of this statement.
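Gauss's observation was that the density of primes near a number N falls off like 1/ln N, so the count of primes up to N is roughly N/ln N (the prime number theorem, proved decades later). A small illustrative script (a sketch using a naive sieve, not anything taken from the documentary) compares the true count with the estimate:

import math

def prime_count(n):
    # Naive sieve of Eratosthenes; fine for small n.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

for n in (10**3, 10**4, 10**5):
    print(n, prime_count(n), round(n / math.log(n)))   # 168 vs 145, 1229 vs 1086, 9592 vs 8686

The estimate undercounts slightly, but it tracks the true count ever more closely in relative terms as N grows, which is the "rate of thinning" Gauss had in mind.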
About fifty years later, mathematician Bernhard Riemann took Gauss’ predictions one step further. Using the physics of waves of musical tones as a guide, Riemann came up with a way to give order to the distribution of the primes. For an explanation of the hypothesis, see the clip below from the documentary and read this article from Marcus du Sautoy.
Riemann’s Hypothesis was a revelation for mathematicians, but it remained just that: a hypothesis. He couldn’t prove it. Nevertheless, his parade of zeroes was a major breakthrough: Riemann had connected two disparate realms of mathematics––zeroes and primes. When Riemann died of tuberculosis at 39, his housekeeper burned all his papers, so we’ll never know how close he was to a proof. His hypothesis became one of the greatest unsolved mathematical mysteries.
In the 1940s, British mathematician and computer pioneer Alan Turing approached the problem in a new way. He tried to prove Riemann’s Hypothesis false by building a machine that would search for rogue zeros off the line. After World War II, Turing’s machine showed that the first 1,104 zeros were on the line, but then the machine broke down. So did Turing’s life; he was persecuted for his homosexuality and ultimately committed suicide.
A major breakthrough occurred in the 1970s at Princeton. Driving a red sports car, blaring the punk rock song “American Idiot,” Du Sautoy arrives at the university and interviews Hugh Montgomery, who had noticed that the scattered zeroes on the line seemed to repel one another. He consulted physicist Freeman Dyson, who recognized that the pattern was strangely similar to a matrix used to model the nucleus of uranium.
Here, the meaning of music of the primes finally begins to emerge. The energy levels of the nucleus of an atom are like musical notes, Du Sautoy says, and then plays several on his trumpet to illustrate. “As I blow more energy into it, the notes jump up by degrees,” he says. The energy levels of the nucleus space out, just as zeros on the line do, he explains. “The behavior of the fundamental building blocks of matter seemed to correspond to the fundamental behavior of the building blocks of maths.” This is the crescendo of the documentary (pun intended).
Despite this contribution, the riddle of the primes persists, and today mathematicians continue to devote vast amounts of time and computational power to the Riemann Hypothesis. Most believe it to be true, but no one has yet found a proof. A financier has offered $1 million to anyone who can crack the hypothesis. Du Sautoy believes that whoever does will make the primes sing.
The Music of the Primes is available on DVD here. | 2026-02-05T00:54:53.854270 |
1,051,601 | 3.723521 | http://www.earthmagazine.org/article/earliest-fossil-evidence-humans-southeast-asia | Earliest fossil evidence of humans in Southeast Asia?
Armand Salvador Mijares
Modern humans reached the islands of Southeast Asia by approximately 50,000 years ago, but our ancestors’ journey was not easy. Even during times of low sea level, a voyage to some of these islands would have required crossing open water, leaving many scientists to wonder how humans arrived on the most isolated islands. Now the story is growing more complicated: A group of archaeologists has discovered a 67,000-year-old foot bone that they say represents the earliest-known presence of humans in the northern Philippines and may be among the oldest-known traces of modern humans in all of Southeast Asia — that is, if the bone truly belongs to Homo sapiens. The bone’s small size and unusual features make it difficult to determine exactly which species of Homo it was — Homo sapiens, Homo floresiensis or something else?
In 2007, Armand Salvador Mijares of the University of the Philippines Diliman in Quezon City and colleagues uncovered the bone — a third metatarsal, one of the bones that makes up the middle part of the foot — while excavating Callao Cave on Luzon island. Nearly 3 meters beneath the cave’s floor, the team found the isolated foot bone encased in carbonized breccia. They dated the bone to a minimum age of 66,700 years ago using uranium-series radiometric dating. In a younger cave layer dated to 30,000 years ago, they recovered bone fragments of other animals, including a deer bone that contained cut marks — indirect evidence of human tools. In April and May 2009, the team found six additional bones with cut marks, Mijares says, but so far, no tools have been found.
Mijares and his colleagues recognized that the foot bone belonged to a primate, so they compared the metatarsal to the metatarsals of primates that inhabit Southeast Asia: macaque monkeys, gibbons, orangutans and Homo sapiens. They also included in the comparison Homo habilis, a species that lived in Africa 2.4 million to 1.4 million years ago.
The results were not clear cut, the team reported in the Journal of Human Evolution. They concluded that the Callao Cave bone belongs to the genus Homo, but it is unusual in several ways. The bone displays features — including odd curvatures and a narrow base — that are not seen in any species of Homo or in any other known hominin. And the bone is extremely small in its overall size and proportions. The team determined the bone is of an adult or adolescent, so the small size is not due to the individual being a child.
Although the Callao Cave bone is unique in many ways, the researchers offer several suggestions on the bone’s identity. First, the individual might have belonged to a population of pygmy Homo sapiens, similar in size to the small-statured Negritos that live in the Philippines today. Alternatively, the bone might have belonged to a member of a Homo erectus population. Homo erectus has been found in Southeast Asia, and some researchers think the species may have survived in the area until as recently as 70,000 years ago; however, very few Homo erectus foot bones have ever been found, making comparisons to the Callao Cave bone nearly impossible. Finally, the researchers point out that the bone is within the size range (and theoretical time range) of Homo floresiensis, or the “hobbit,” which has so far only been found on the island of Flores in Indonesia.
“It’s a compelling case that this is a bone that belongs to the genus Homo,” says Bill Jungers, a paleoanthropologist at Stony Brook University in New York who has studied Homo floresiensis. “Is it Homo sapiens? If I had to guess, probably yes,” he says. “I’d love to make the claim that this is an example of Homo floresiensis, but there just isn’t enough information.” Since the announcement of Homo floresiensis in 2004, anthropologists have debated whether it is a separate species or a malformed Homo sapiens. Finding a second population of Homo floresiensis would go a long way toward eliminating any lingering doubts that it is indeed a separate species, Jungers says.
Jeremy DeSilva, a functional morphologist at Boston University in Massachusetts, agrees that there is not enough information to make the claim that a second population of Homo floresiensis has been found. DeSilva is also hesitant to say whether this bone is even Homo. The bone’s unusual features are hard to explain away; thus, larger comparative analyses are still needed, he says.
This is what the team is working on now. Mijares’ co-author, Florent Détroit of the National Museum of Natural History in Paris, France, is conducting more analyses on the foot bone. Jungers has also offered the team access to his data on Homo floresiensis for further comparisons, Mijares says, something the team is now considering.
If the Callao Cave bone is indeed human, then “it’s exciting,” DeSilva says, and it adds to the scarce data regarding how humans colonized Southeast Asia. The earliest definitive evidence of modern humans in this region comes from 42,000-year-old bones found in Borneo and 47,000-year-old bones from Palawan, an island in the southwest Philippines. These islands would not have been difficult to get to during the glacial periods of the Late Pleistocene, when sea level was lower and land bridges connected many islands in Southeast Asia.
The island of Luzon, home to Callao Cave, is a different story. The island has never been connected to other landmasses via land bridges, Mijares and colleagues say, so humans would have had to cross the sea to get there. Such early ocean crossings may have been unintentional, as there’s no evidence of “boats, rafts or arks” from this period, Jungers says. Instead, he says, storms or tsunamis may have ripped chunks of land from the mainland — and any humans and other animals stuck on the floating debris may have been transported to the Philippines, Flores and other remote islands of Southeast Asia. A similar hypothesis has been used to explain how monkeys came to South America from Africa approximately 35 million years ago.
Finding more fossils from Callao Cave would help answer many of the open questions about the new foot bone, Jungers says. So far, Mijares’ team has not found any more human fossils in the cave, and they won’t be finding any more in the future. As of June 2009, the National Museum of the Philippines has claimed “exclusive rights” over the area and denied Mijares further access to the site, Mijares says. It is now up to the museum to continue the search. | 2026-02-03T11:38:36.205028 |
683,037 | 3.505557 | http://science.nasa.gov/science-news/science-at-nasa/2000/ast22sep_1/ | Interplanetary Fall
Today our planet joins two other worlds in the solar system where it is northern autumn.
Won't be long 'til summer time is through...
The Beach Boys, "All Summer Long"
September 22, 2000 -- Every year around this time thousands of penguins rejoice to see the Sun peep above the Antarctic horizon. The return of sunlight after nearly 6 months of chilly darkness means it's time to shed a few pounds of blubber, find a mate, and bask in the sunshine. Spring is in the air.
On the other end of the world, sun-dappled puddles of water are freezing at the north pole, while Arctic bears are pondering hibernation. The north polar Sun, circling downward on a horizon-skirting 360 degree spiral, will soon be gone.
What powerful force of nature can rouse the passions of penguins at one end of the world and put mighty bears to sleep at the other? It's the changing of the seasons from northern summer to fall -- a special date on the calendar that northerners call the autumnal equinox.
Above: This is an image of the Automated Astrophysical Site-Testing Observatory near the US South Pole Station captured less than one day before the 2000 September equinox. The soft-yellow dawn sky presages the coming of a 6 month-long day.
For the past six months our planet's north pole has been tilted toward the Sun, most directly on June 21st, which was the beginning of northern summer. But, as The Beach Boys pointed out in a popular tune, there's no such thing as an Endless Summer. Today (Sept. 22nd) at 1727 UT (1:27 pm EDT) our planet's "subsolar point" crossed the equator heading south. With it, spring began in the southern hemisphere and autumn in the north.
On days when the Sun is shining directly over the equator, daylight and darkness are of nearly equal length. The word equinox comes from a Latin word meaning "equal night."
Left: The red dot marks Earth's subsolar point (the location where the Sun is directly overhead) at noon Eastern Standard Time throughout the year. Equinoxes occur when the subsolar point crosses the equator, once in March (the Vernal Equinox) and again in September (the Autumnal Equinox). This animation is based on images generated by JPL's Solar System Simulator.
Contrary to the all-too-popular notion that Earth is closer to the Sun during summer and farther away during winter, seasons are not caused by the eccentricity of our planet's orbit. Indeed, during the hottest days of northern summer the Earth is at its greatest distance from the Sun.
The primary cause of seasonal extremes on Earth is the 23 degree tilt of our planet's spin axis. When the north pole is tilted toward the Sun, northern days are long and the weather is warm. Six months later, as the south pole tilts toward the Sun, the southern hemisphere takes its turn at summer.
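As a rough illustration (a simplified sketch that assumes a circular orbit and a 365.25-day year with the March equinox near day 80; the real subsolar track, as in the animation above, is slightly asymmetric because of orbital eccentricity), the latitude of the subsolar point through the year follows the tilt directly:

import math

def subsolar_latitude(day_of_year, tilt_deg=23.44):
    # Degrees north (+) or south (-) of the equator where the Sun is overhead at noon.
    return tilt_deg * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)

print(round(subsolar_latitude(80), 1))    # ~0    (March equinox)
print(round(subsolar_latitude(172), 1))   # ~+23  (June solstice)
print(round(subsolar_latitude(355), 1))   # ~-23  (December solstice)

When the returned latitude crosses zero heading south, that is the September equinox described above; the sign simply flips between hemispheres, which is why the seasons are always reversed.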
Seasons in the two hemispheres are always reversed. When it is summer in New York, it is winter in Sydney. On a spring day in Paris, autumn leaves are falling in Argentina.
While seasons on other planets may seem alien, they are defined just as they are on Earth. When the Sun shines down directly over a planet's equator -- that's an equinox. When one of the poles is tilted toward the Sun to its maximum extent -- that's a solstice. The equinoxes and solstices for 8 of the 9 planets are tabulated below. (Pluto is omitted, because we know so little about that distant world.)
(Table of equinox and solstice dates and spin-axis tilt in degrees for each planet; the table itself is not reproduced in this text version.)
The unfamiliar seasons of other worlds would strike any Earth-dweller as strange -- but some planets are more alien than others.
Take Mars for example. Mars has the highest orbital eccentricity of any planet except Mercury and Pluto. Its distance from the Sun varies between 1.64 and 1.36 AU over the Martian year. This large variation, combined with an axial tilt greater than Earth's, gives rise to a very peculiar seasonal change. When Mars is closest to the Sun, it is wintertime at the planet's north pole. Bone-chilling temperatures at that end of the planet plunge so low that carbon dioxide -- the main constituent of Mars's atmosphere -- freezes and falls to the ground. So much of the red planet's atmosphere turns to ice that the global atmospheric pressure drops by 25%! Mars's atmosphere is noticeably thinner during northern winter.
Martian seasons are peculiar by Earth-standards, but they pale in comparison to seasons on Uranus. Like Earth, Uranus follows an orbit that is nearly circular; it keeps the same distance from the Sun throughout its long year. But, Uranus's spin axis is tilted by a whopping 82 degrees! This gives rise to extreme 20-year-long seasons and unusual weather. For nearly a quarter of the Uranian year (equal to 84 Earth years), the Sun shines directly over each pole, leaving the other half of the planet enveloped by darkness.
Left: This dramatic time-lapse movie captured by NASA's Hubble Space Telescope shows seasonal changes on Uranus. Once considered one of the blander-looking planets, Uranus is now revealed as a dynamic world with the brightest clouds in the outer Solar System.
The northern hemisphere of Uranus is just now coming out of the grip of its decades-long winter. Sunlight, reaching some latitudes for the first time in years, warms the atmosphere and triggers gigantic springtime storms comparable in size to North America with temperatures of 300 degrees below zero. In the animation pictured left, bright clouds are probably made of crystals of methane, which condense as warm bubbles of gas percolate upwards through the atmosphere.
Mercury's seasons -- if they can be called that -- may be the most remarkable of all. Mercury rotates three times on its spin axis for each two orbits around the Sun. It is the only one of our solar system's planets or moons tidally locked in an orbital-to-rotational resonance with a ratio other than 1:1.
Mercury's weird rotation combined with the high eccentricity of its orbit would produce some very strange effects for an observer on that planet. At some longitudes, Mercurian skywatchers would see the Sun rise and gradually increase in apparent size as it slowly moved toward the zenith (i.e., the solstice!). At that point the Sun would stop, briefly reverse course, and stop again before resuming its path toward the horizon, all the while decreasing in apparent size. Observers at other points on Mercury's surface would see different but equally bizarre motions.
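A back-of-the-envelope sketch (using standard published values for Mercury's rotation period, orbital period, and eccentricity, which are assumptions here rather than figures from this article) shows why the Sun can briefly reverse course in Mercury's sky: near perihelion, the planet's orbital angular rate temporarily outpaces its spin.

import math

P_spin, P_orbit, e = 58.646, 87.969, 0.2056      # days, days, orbital eccentricity (assumed values)
spin_rate = 360.0 / P_spin                        # sidereal rotation rate, deg/day (~6.14)
mean_motion = 360.0 / P_orbit                     # average orbital rate, deg/day (~4.09)
perihelion_rate = mean_motion * math.sqrt((1 + e) / (1 - e) ** 3)   # orbital rate at perihelion (~6.35)
print(spin_rate < perihelion_rate)                # True: apparent solar motion briefly reverses

Because the orbital motion briefly wins out over the rotation near perihelion, the Sun appears to stall and backtrack, which is the zigzag described above; away from perihelion the spin dominates again and the Sun resumes its normal course.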
Temperature variations on Mercury, ranging from -200 C at night (winter) to 400 C during the day (summer), are the most extreme of any world we know.
Summer on Mercury is hot, but it's not the hottest spot in the solar system. The surface of Venus, warmed by a runaway greenhouse effect in Venus's carbon dioxide atmosphere, is even hotter at 470 C -- easily hot enough to melt lead. Because of Venus's thick cloud cover and circular orbit, seasonal changes in temperature are likely to be small. Indeed, of all the worlds in the solar system, ever-hot Venus might be the one with a truly Endless Summer. On the other hand, its choking acid-rich atmosphere and perpetual cloud cover would probably dampen enthusiasm for sun bathing.
There's no doubt that extraterrestrial seasons are harsh and uncomfortable. So, while you're lamenting the prospect of raking autumn leaves weekend after weekend for the next two months, take a quick mental tour of the solar system. Yard work on Earth might not seem so bad after all!
Web Links: Earth's Seasons -- A table of solstices and equinoxes from the US Naval Observatory
Production Editor: Dr. Tony Phillips
Media Contact: Steve Roy
Curator: Bryan Walls
Responsible NASA official: Ron Koczor | 2026-01-28T19:05:40.224005 |
303,206 | 3.823381 | http://www.merckmanuals.com/pethealth/dog_disorders_and_diseases/skin_disorders_of_dogs/allergies_in_dogs.html | Like people, dogs can be allergic to various substances, including plant particles and other substances in the air or substances in food. These substances are called allergens. Allergens are substances that, when inhaled or absorbed through the skin, respiratory tract, or gastrointestinal tract, stimulate histamine production, which results in inflammation.
Airborne Allergies (Atopy)
Fewer than 10% of dogs are thought to be genetically predisposed to become sensitized to allergens in the environment. Both male and female dogs can be allergic to materials in the air. Breeds predisposed to developing allergies include Chinese Shar-Peis, Wirehaired Fox Terriers, Golden Retrievers, Dalmatians, Boxers, Boston Terriers, Labrador Retrievers, Lhasa Apsos, Scottish Terriers, Shih Tzus, and West Highland White Terriers. However, any dog of any breed (or mixed breeds) can be allergic. The age of onset is generally between 6 months and 3 years. Signs are usually seasonal but may be seen all year. Itching is the most typical sign (see Skin Disorders of Dogs: Itching (Pruritus) in Dogs). The feet, face, ears, front legs, and abdomen are the most frequently affected areas, but scratching all over the body is common. Scratching can lead to secondary signs of wounds, scabbing, infection, hair loss, and scaling. Other signs of atopy include licking or chewing the paws and rubbing the face and eyes.
Allergies are identified by signs when other causes have been excluded. Allergy testing can be used to identify the offending allergens and to formulate a specific immunotherapy treatment program.
There are 3 therapeutic options: avoidance of the offending allergen(s), controlling the signs of itching, and immunotherapy (for example, an allergy vaccine). A good management plan requires the use of several different treatments, the understanding and reasonable expectations for response from the pet owner, and frequent progress evaluations so that the plan can be adjusted as needed.
Immunotherapy attempts to increase a dog's tolerance to environmental allergens. Vaccine preparation involves selection of individual allergens for a particular dog. The allergen selection is determined by matching the test results with the prominent allergens during the time of year when the dog has signs. Immunotherapy is best considered for dogs with problematic signs that occur for several months during the year. The dog must be cooperative enough to receive allergy injections. You may have to administer some injections yourself. Your veterinarian can provide training and most owners learn to administer the allergy injections very well, while others may need assistance from a capable friend or veterinary staff member. Your veterinarian will determine the frequency of the injections and the dosage given.
Treatment takes a longterm commitment. You must be willing to follow instructions accurately, be patient, and be able to communicate effectively with your veterinarian. Injections may initially increase signs. If this occurs, contact your veterinarian immediately. Improvement may not be visible for 6 months and a year of treatment may be required before you can tell if the immunotherapy is working. The best way to evaluate the treatment is to compare the degree of disease or discomfort between similar seasons. Anti-itch medication and antibiotics are often required during the initial phase of treatment.
Allergy shots improve the condition but do not cure the disease. Many animals may still require anti-itch medications during seasonal flare-ups.
Among pets, food allergies are less common than airborne allergies. Signs of food allergy are similar to those of airborne allergies, except that there is little variation in the intensity of itching from one season to another. The age of onset is variable. The distribution and intensity of itching vary between animals.
There is no reliable diagnostic test other than feeding a limited foodstuff (hypo-allergenic or elimination) diet and seeing if the itching resolves. Your veterinarian should be consulted to develop a specific test plan for your dog. The ideal food elimination diet should be balanced and nutritionally complete and not contain any ingredients that have been fed previously to your dog. Owners often do not understand that if any previously fed ingredient is present in the elimination diet, the dog may be allergic to that one ingredient and the diet trial will be a failure. The key point in any food elimination diet trial is that only novel food ingredients can be fed. This also includes treats and anything the dog eats besides its regular food.
The trial diet should be fed for up to 3 months. If marked or complete resolution in signs occurs during the elimination diet trial, food allergy can be suspected. To confirm that a food allergy exists and improvement was not just coincidental, the dog must be given the previously fed food ingredients and a relapse of signs must occur. The return of signs is usually between 1 hour and 14 days. Once a food allergy is confirmed, the elimination diet should be continued until signs disappear, which usually takes less than 14 days. At this point, previously fed individual ingredients should be added to the elimination diet for a period of up to 14 days. If signs reappear, the individual ingredient is considered a cause of the food allergy.
The foods dogs are most often allergic to include beef, chicken, eggs, corn, wheat, soy, and milk. Once the offending allergens are identified, control of the food allergy is by strict avoidance. Concurrent diseases may complicate the identification of underlying food allergies. Infrequently, a dog will react to new food allergens as it ages.
Last full review/revision July 2011 by Karen A. Moriello, DVM, DACVD; Patricia D. White, DVM, MS, DACVD; Michael W. Dryden, DVM, PhD; Carol S. Foil, DVM, MS, DACVD; William W. Hawkins, BS, DVM; Thomas R. Klei, PhD; John E. Lloyd, BS, PhD; Bernard Mignon, DVM, PhD, DEVPC; Wayne Rosenkrantz, DVM, DACVD; David Stiller, MS, PhD; Patricia A. Talcott, MS, DVM, PhD, DABVT; Alice Villalobos, DVM, DPNAP; Stephen D. White, DVM, DACVD | 2026-01-22T20:36:52.726225 |
914,259 | 4.131821 | http://scienceblogs.com/thoughtfulanimal/2011/03/08/defending-your-territory-it-pa/ | Welcome to the second installment of Animal Territoriality Week. Today, we’ll look at a case where differences in territory size can have implications for neuroanatomy. If you missed part 1 of Animal Territoriality week, check it out here.
Let’s say you have two very very closely related species. You might even call them congeneric, because they are from the same taxonomic genus. In most ways, these two species are very similar, but they differ behaviorally in some very big ways. Might those behavioral differences predict neurobiological differences?
The different species of the genus Microtus display the full range of mammalian mating systems: some species are entirely monogamous, with two individuals mating for life, while other species are fully polygamous, with males having multiple female mates.
The difference in mating style leads to a very important difference in spatial memory requirements: for monogamous species, males and females both tend to live their lives in areas of land roughly the same size. For polygamous species, however, males’ home ranges are much larger in spatial extent than the home range of the females (as well as the typical home ranges of the monogamous male voles). This makes sense: the polygamous male’s home range needs to include the smaller home ranges of multiple female mates.
Importantly, these behavioral differences in the polygamous voles are only seen in mature adults during the mating season. Juvenile male voles of these species do not have significantly larger home ranges than females, and adults do not have larger home ranges during the rest of the year. These species- and sex-related differences probably create an important selective pressure on spatial memory.
The first question to ask is whether the observed sex and species differences in the spatial extent of the home range result in differences in spatial memory, as you might hypothesize. And the basic answer is yes: under laboratory conditions, voles of polygamous species show strong sex differences in tests of spatial ability, with males outperforming females. Voles of monogamous species fail to show these sex differences under identical testing conditions.
If the sexual selection for ranging behavior has influenced the evolution of cognition (at least with respect to spatial knowledge), then it is possible that it has also influenced the evolution of the parts of the brain known to subserve spatial navigation or spatial memory.
The hippocampus is known to play an important role in spatial learning. Rodents with hippocampal lesions show impaired performance on spatial tasks. In addition, the relative size of the hippocampus is larger in birds whose food-caching locations are located within a larger area, compared with other birds who use different food-caching strategies.
To test the hypothetical relationship between ranging behavior and hippocampal volume, the researchers went out and captured 40 wild voles during breeding season – 10 male and 10 female pine voles, and 10 male and 10 female meadow voles – and compared the size of the hippocampus between sexes for each species. (It would not make sense to directly compare the species, but it would make sense to compare the sex differences in hippocampus size between species.)
On average, the hippocampi of the polygamous males were 3.2 cubic millimeters larger than those of the females. For the monogamous voles, the males' hippocampi were only 0.5 cubic millimeters larger than the females'. This aligns with the hypothesis, but comparing absolute differences between species and sexes isn't the proper comparison, since there are sex and species differences in total brain volume. When analyzing the ratio of hippocampal volume to total brain volume across sex and species, the differences remain statistically significant. The sex difference observed in hippocampal volume in the polygamous voles was significantly larger than that observed for the monogamous voles.
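To make the ratio-based comparison concrete, here is a minimal sketch in Python of how one might compute relative hippocampal volume and then look at the sex difference within each species. All volumes below are invented for illustration; they are not the measurements reported by Jacobs (1990).

```python
# Minimal sketch: compare sex differences in *relative* hippocampal volume
# between a polygamous and a monogamous vole species.
# All numbers are hypothetical, not data from the study.
import numpy as np
from scipy import stats

def relative_volume(hippocampus_mm3, brain_mm3):
    """Hippocampal volume as a fraction of total brain volume."""
    return np.asarray(hippocampus_mm3, dtype=float) / np.asarray(brain_mm3, dtype=float)

# Hypothetical per-animal volumes (mm^3): hippocampus and whole brain.
meadow_males   = relative_volume([28, 29, 27], [610, 600, 620])   # polygamous species
meadow_females = relative_volume([24, 25, 23], [560, 570, 555])
pine_males     = relative_volume([22, 21, 23], [520, 515, 530])   # monogamous species
pine_females   = relative_volume([22, 22, 21], [500, 505, 495])

# Sex difference within each species, on the relative (ratio) scale.
print(f"Meadow vole sex gap: {meadow_males.mean() - meadow_females.mean():.4f}")
print(f"Pine vole sex gap:   {pine_males.mean() - pine_females.mean():.4f}")

# Within-species test of the sex difference (two-sample t-test per species).
print(stats.ttest_ind(meadow_males, meadow_females))
print(stats.ttest_ind(pine_males, pine_females))
```

With realistic sample sizes, the key quantity is whether the male-female gap on the ratio scale is larger in the polygamous species than in the monogamous one, which is the comparison the paragraph above describes.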
One principle of brain evolution that is intuitively appealing is that larger neural mass is related to increased information processing. However, it is hard to empirically test this principle, because it is difficult to isolate neural structures and then to assess their functions. This was one study that was able to successfully validate the principle, at least for one neural structure (the hippocampus) that has repeatedly been shown to be involved in the processing of a specific type of information (spatial processing), in two different vole species.
It is important to note that, on the basis of these findings, it is not possible to conclusively determine whether the observed sex differences are determined biologically or whether experience results in larger hippocampi. Only a controlled laboratory study could address this question. However, the selective pressure created by mating style on spatial abilities, and the subsequent neurobiological differences observed, seem fairly well supported. The authors point out, and rightly so, that the presence or absence of potential effects of experience does not necessarily preclude an evolutionary explanation for the sex differences that they observed.
So, if you learned only one thing today: polygamous male voles have larger hippocampi, which allow them to maintain their larger territories, compared with monogamous males.
Jacobs, L. (1990). Evolution of Spatial Cognition: Sex-Specific Patterns of Spatial Behavior Predict Hippocampal Size Proceedings of the National Academy of Sciences, 87 (16), 6349-6352. DOI: 10.1073/pnas.87.16.6349 | 2026-02-01T13:07:19.504839 |
277,369 | 3.789804 | http://www.scholastic.com/teachers/lesson-plan/historical-fiction-wealth-interpretations | Historical Fiction: A Wealth of Interpretations
- Grades: 3–5
- contrast different points of view and note the change in perspective as the character's life experiences change.
- Two contrasting class sets of the Dear America books or My America books. You may want to try the following titles:
- Construction paper to be used as an "album" at the end of the unit
- Magazines, drawing paper, colored pencils, etc. for the students to use for their "albums"
Set Up and Prepare
- Venn diagrams to compare and contrast the two books (or the changes in the character's point of view)
- If you use the Dear America or My America books, set up literature circles for students to discuss the books. Make sure each member's role is clear.
- Set up a binder for students to respond to prompts or their roles in literature circles and keep all the information in one place.
- If you use George Washington's Socks, set up questions or prompts you would like the students to respond to.
Step 1: Assess prior knowledge. For example, in the two Dear America books noted above, make sure the students know about the Civil War. What were the issues? Why did they fight? This is perfect when studying the Civil War in social studies.
Step 2: Divide your students into literature circles. They should already have had experience with this type of literature learning. If you prefer teaching the full class, then use George Washington's Socks as your literature book.
Step 3: Hand out the Dear America books (or George Washington's Socks). Have the students discuss what they can predict from the illustrations alone.
Step 1: Either have the students read and respond to the books through literature circles or read and discuss the books as a class.
Step 2: Every other day, students respond in their binders to the book they are reading, either by doing their literature circle role or by responding to questions or prompts that the teacher provides.
Step 3: Have students use Venn diagrams to compare and contrast the two books or the changes in Matt's perception of the different "sides" in the American Revolution as the plot develops.
Step 4: As a culminating activity, students can write diary entries that would come after the end of their book. An alternate or additional activity can be creating a photo or drawn picture scrapbook reflecting what they have read. Since photography was used in the Civil War, they may choose to do this as photos or they can keep an artist's journal, using drawings. Their photos can be drawings, cut out pictures from magazines, computer pictures, or staged photos of their own. For each one, they need to have an entry, explaining what the photo or drawing shows. (This can be finished as homework.)
One fun extension for this activity is to use folk songs to show point of view. The most apparent one is "Yankee Doodle." Teach your students the song if they don't already know it. Then discuss how originally it was sung by the British to make fun of Americans, since a "doodle" meant a fool. However, the Americans took it over, changed the words, and made it into an anthem of sorts, turning the tables on the British.
- Literature Circle binder entries responding to the book read
- Culminating "picture" project
- Did the students' responses and discussion show that they understood how history is seen differently, depending on the point of view of the character?
- Did their written projects reflect their understanding of historical point of view?
- Were students engaged and focused during the work times?
- Were students able to see the differences between the two points of view in the diaries? (Dear America or My America)
- Did students see how Matt's perception of the soldiers fighting in the American Revolution changed once he actually participated in it? (George Washington's Socks)
- Students' success with Literature Circle binder entries
- Students' success with diary entries, using the point of view of the character
- Student success with artist's journals or photo albums relating to the story | 2026-01-22T10:58:17.806481 |
773,129 | 4.06138 | http://ci.pasadena.ca.us/Fire/Electrical_Devices_and_Appliances/ | Each year the Pasadena Fire Department responds to a significant number of fires and medical emergencies caused by electrical malfunction. Every year in the United States, more than 1,000 people are killed and thousands more injured in electrical fire or shock incidents. It is important to know how to use electrical appliances safely and how to recognize electrical hazards.
Most homes have two incoming voltages: 120 volts for lighting and appliance circuits and 240 volts for larger air conditioning and electric dryer circuits. When an appliance switch is turned on, electrical current flows through the wire, completing the electrical "circuit" and causing the appliance to operate. The amount of flowing current is called "amperage." Most lighting circuits in the home are 15 amp circuits; most electric dryers and air conditioners require larger 30 amp circuits. The amount of electrical power needed to make an appliance operate is called "wattage" and is a function of the amount of current flowing through the wire (amperage) and the pressure in the system (voltage). Mathematically speaking, volts x amps = watts. So, if we have a 120 volt system and a 15 amp current, we can flow a maximum of 120 x 15, or 1,800 watts, on a typical lighting or appliance circuit. When too many lights or appliances are attached to the electrical system, it will overload and overheat. This can cause the wire insulation to melt and ignite, resulting in an electrical fire.

The amount of electrical current flowing through wire is affected by resistance, which is measured in "ohms." Resistance causes increased heat in the wire. Heat is the byproduct that makes some appliances work, such as an iron, toaster, stove or furnace. Large current faces high resistance when moving through a small wire, which generates lots of heat. That's how an incandescent light bulb works: resistance through the light filament causes it to heat up, which gives off a bright light. Electrical resistance is also affected by the length of a wire. Operating an electrical hedge clipper with a long extension cord increases resistance and might cause the cord to overheat, melt or ignite. The same occurs if too many strands of Christmas lights are connected together.

The size of electrical wire depends on the amount of current required to operate a particular appliance. Wiring to the air conditioner, electric stove and electric dryer is much larger to handle the increased voltage (240 volts) and amperage (30 amps). Wiring is covered with a protective material called "insulation." Electrical circuits in homes are designed so that all components are compatible: the size of the wire, outlets and circuit breakers are designed for an anticipated electrical load. A circuit is said to be overloaded when too much current flows, causing heat buildup or wiring breakdown. When two bare wires touch, a "short circuit" occurs. This can lead to sparks and fire. Deteriorated insulation is one of the most frequent causes of short circuits.

A "circuit breaker" or "fuse" is a safety device designed to prevent accidental overloading of electrical circuits. It is set at a specific amperage; when that amperage is exceeded, it trips and shuts off the flow of electricity, stopping the circuit from continued overheating. When a fuse or circuit breaker trips, it is important to find the cause and correct it. Often, people will just reset the breaker or put in a larger fuse. NEVER USE OVERSIZED FUSES OR CIRCUIT BREAKERS. NEVER SUBSTITUTE A PENNY OR FOIL-WRAPPED FUSE. This could cause a fire!
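As a rough illustration of the volts x amps = watts rule described above, here is a minimal sketch in Python that checks whether a set of devices would overload a single 120 volt, 15 amp circuit. The appliance wattages are hypothetical examples, not values from the article.

```python
# Minimal sketch of the volts x amps = watts rule.
# Appliance wattages below are hypothetical, for illustration only.

CIRCUIT_VOLTS = 120   # typical lighting/appliance circuit voltage
CIRCUIT_AMPS = 15     # typical lighting/appliance breaker rating

def max_circuit_watts(volts: float, amps: float) -> float:
    """Power limit of a circuit: watts = volts x amps."""
    return volts * amps

def is_overloaded(appliance_watts, volts: float, amps: float) -> bool:
    """True if the combined load exceeds what the circuit can safely carry."""
    return sum(appliance_watts) > max_circuit_watts(volts, amps)

if __name__ == "__main__":
    loads = [1500, 300, 100]  # e.g. a space heater, a TV, some lamps (watts)
    limit = max_circuit_watts(CIRCUIT_VOLTS, CIRCUIT_AMPS)
    print(f"Circuit limit: {limit:.0f} W")   # 120 x 15 = 1800 W
    print(f"Total load:   {sum(loads):.0f} W")
    print("Overloaded!" if is_overloaded(loads, CIRCUIT_VOLTS, CIRCUIT_AMPS)
          else "Within limit.")
```

In this hypothetical case the total load (1,900 W) exceeds the 1,800 W limit, which is exactly the overload condition the paragraph above warns about.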
When a house is under construction, city inspectors visit to make sure the electrical system is in compliance with the City Building Code and the National Electrical Code. Only licensed electricians are permitted to install electrical systems. During home remodeling, when electrical circuits are added or changed, make sure to use a licensed electrician whose work complies with the electrical code. Add enough outlets in every room to avoid using multiple plugs or extension cords. Use a ground fault interrupter (G.F.I.) on circuits in the bathroom, or outdoors where water or moisture is present. A G.F.I. is a type of very sensitive circuit breaker and is required by the Pasadena Municipal Code. When choosing an electrical appliance, be sure it is approved by a safety-testing laboratory. This ensures that it has been constructed in accordance with nationally accepted electrical standards and has been evaluated for safety. Use the appliance only according to the manufacturer's specific instructions. If you touch an electrical appliance, wall switch or electrical cord while you are wet or standing in water, it will increase the chance of electrical shock. When using an extension cord, be sure it is designed to carry the intended load. Most cannot carry as much current as permanent wiring and tend to overheat. Do not use an extension cord in place of permanent wiring, especially if a tripping hazard exists or where there is high physical abuse, such as under a carpet. Keep electrical cords away from infants and toddlers, and use tamperproof inserts on wall outlets to prevent children from sticking objects into the outlets. The cord must be protected from damage. Do not run it around objects or hang it on a nail. Inspect it periodically for worn insulation and overall condition.
The potential for electrical shock or fire from an electrical appliance is very real, especially when safety recommendations are not followed. Before buying an appliance, look for the label of a recognized testing laboratory such as Underwriters Laboratory or Factory Mutual. Keep space heaters, stoves, irons and other heat-producing appliances away from furniture, curtains, bedding or towels. Also, give televisions, stereos and computers plenty of air-space so they won't overheat. Never use an appliance with a damaged cord, and be sure to use three-pronged electrical devices in three-pronged outlets. These outlets may not be available in older homes, so use a three-pronged adapter, and screw the tab onto the grounded outlet box cover. Never cut off or bend the grounding pin of the plug. If you have a polarized plug (with one side wider than the other), never file it down or try to make it reversible. Keep electrical cords out of the path of traffic. If you put cords under carpets or rugs, wires can be damaged and might result in fire. An electrical cord should never be wrapped around an appliance until the appliance has cooled. Because hair care equipment is often used in bathrooms near sinks and bathtubs, it is extremely important to be especially careful that the appliances do not come in contact with water. If one drops into water, do not touch it until you have pulled the wall plug. Protect young children by putting plastic inserts in receptacle outlets not in use to keep them from putting anything into outlets. Never put a kitchen knife or other metal object in a toaster to remove stuck bread or bagels unless it is unplugged and cooled. Install television and radio antennas where they cannot fall across power lines. Use caution when operating a tree-pruning device or using a metal ladder around power lines. Inspect appliances regularly to make sure they operate properly. If an appliance smells funny when in use, makes unusual sounds or the cord feels warm to touch, repair or replace the unit. Don't repair it yourself unless you are qualified. Keep appliances in a cool, dry place to prevent rusting.
When an electrical emergency occurs, there are several survival actions that can be taken. You should know how to trip the main circuit breaker at the electrical panel to turn off all power to the house. If an appliance smells funny or operates improperly, pull the plug if it can be done safely. If arcing, burning or smoking from an appliance occurs, turn off the power at the circuit breaker and CALL THE FIRE DEPARTMENT. Winds accompanying thunderstorms may knock down power lines or utility poles. Keep people away from the area, and call the fire department. If power lines come in contact with a vehicle, do not touch the lines or the vehicle. If people are inside, tell them to stay inside. If they try to exit, they may complete a grounded electrical circuit and be instantly killed. They must stay inside until the power is shut off by the utility company. If a serious electrical malfunction occurs in your home, school or workplace, treat it the same as a fire: notify others, activate the fire alarm and exit promptly. If you are familiar with the operation of a fire extinguisher, use only a "Class C" fire extinguisher on an electrical fire. | 2026-01-30T05:21:08.238078 |
878,272 | 3.510478 | http://www.safefuturesct.org/learn-about-abuse/what-is-domestic-abuse | Domestic abuse between spouses or intimate partners is when one person in a marital or intimate relationship tries to control the other person. The perpetrator uses fear and intimidation and may threaten to use or may actually use physical violence. Domestic abuse that includes physical violence is called domestic violence.
The victim of domestic abuse or domestic violence may be a man or a woman. Domestic abuse occurs in traditional heterosexual marriages, as well as in same-sex partnerships. The abuse may occur during a relationship, while the couple is breaking up, or after the relationship has ended.
Domestic abuse often escalates from threats and verbal abuse to physical violence. Domestic violence may even result in murder.
The key elements of domestic abuse are:
- humiliating the other person
- physical injury
Domestic abuse is not a result of losing control; domestic abuse is intentionally trying to control another person. The abuser is purposefully using verbal, nonverbal, or physical means to gain control over the other person.
In some cultures, control of women by men is accepted as the norm. This definition speaks from the orientation that control of intimate partners is domestic abuse within a culture where such control is not the norm. Today we see many cultures moving from the subordination of women to increased equality of women within relationships.
The types of domestic abuse are:
- physical abuse (domestic violence)
- verbal or nonverbal abuse (psychological abuse, mental abuse, emotional abuse)
- sexual abuse
- stalking or cyber-stalking
- economic abuse or financial abuse
- spiritual abuse
The divisions between these types of domestic abuse are somewhat fluid, but there is a strong differentiation between the various forms of physical abuse and the various types of verbal or nonverbal abuse.
Physical abuse is the use of physical force against another person in a way that ends up injuring the person, or puts the person at risk of being injured. Physical abuse ranges from physical restraint to murder. When someone talks of domestic violence, they are often referring to physical abuse of a spouse or intimate partner.
Physical assault or physical battering is a crime, whether it occurs inside a family or outside the family. The police are empowered to protect you from physical attack.
Physical abuse includes:
- pushing, throwing, kicking
- slapping, grabbing, hitting, punching, beating, tripping, battering, bruising, choking, shaking
- pinching, biting
- holding, restraining, confinement
- breaking bones
- assault with a weapon such as a knife or gun
Mental, psychological, or emotional abuse can be verbal or nonverbal. Verbal or nonverbal abuse of a spouse or intimate partner consists of more subtle actions or behaviors than physical abuse. While physical abuse might seem worse, the scars of verbal and emotional abuse are deep. Studies show that verbal or nonverbal abuse can be much more emotionally damaging than physical abuse.
Verbal or nonverbal abuse of a spouse or intimate partner may include:
- threatening or intimidating to gain compliance
- destruction of the victim’s personal property and possessions, or threats to do so
- violence to an object (such as a wall or piece of furniture) or pet, in the presence of the intended victim, as a way of instilling fear of further violence
- yelling or screaming
- constant harassment
- embarrassing, making fun of, or mocking the victim, either alone within the household, in public, or in front of family or friends
- criticizing or diminishing the victim’s accomplishments or goals
- not trusting the victim’s decision-making
- telling the victim that they are worthless on their own, without the abuser
- excessive possessiveness, isolation from friends and family
- excessive checking-up on the victim to make sure they are at home or where they said they would be
- saying hurtful things while under the influence of drugs or alcohol, and using the substance as an excuse to say the hurtful things
- blaming the victim for how the abuser acts or feels
- making the victim remain on the premises after a fight, or leaving them somewhere else after a fight, just to “teach them a lesson”
- making the victim feel that there is no way out of the relationship
Sexual abuse includes:
- sexual assault: forcing someone to participate in unwanted, unsafe, or degrading sexual activity
- sexual harassment: ridiculing another person to try to limit their sexuality or reproductive choices
- sexual exploitation (such as forcing someone to look at pornography, or forcing someone to participate in pornographic film-making)
Sexual abuse often is linked to physical abuse; they may occur together, or the sexual abuse may occur after a bout of physical abuse.
Stalking is harassing or threatening another person, especially in a way that haunts the person physically or emotionally in a repetitive and devious manner. Stalking of an intimate partner can take place during the relationship, with intense monitoring of the partner’s activities. Or stalking can take place after a partner or spouse has left the relationship. The stalker may be trying to get their partner back, or they may wish to harm their partner as punishment for their departure. Regardless of the fine details, the victim fears for their safety.
Stalking can take place at or near the victim’s home, near or in their workplace, on the way to the store or another destination, or on the Internet (cyber-stalking). Stalking can be on the phone, in person, or online. Stalkers may never show their face, or they may be everywhere, in person.
Stalkers employ a number of threatening tactics:
- repeated phone calls, sometimes with hang-ups
- following, tracking (possibly even with a global positioning device)
- finding the person through public records, online searching, or paid investigators
- watching with hidden cameras
- suddenly showing up where the victim is, at home, school, or work
- sending emails; communicating in chat rooms or with instant messaging (cyberstalking: see below)
- sending unwanted packages, cards, gifts, or letters
- monitoring the victim’s phone calls or computer-use
- contacting the victim’s friends, family, co-workers, or neighbors to find out about the victim
- going through the victim’s garbage
- threatening to hurt the victim or their family, friends, or pets
- damaging the victim’s home, car, or other property
Stalking is unpredictable and should always be considered dangerous. If someone is
- tracking you;
- contacting you when you do not wish to have contact;
- attempting to control you; or
- frightening you
then seek help immediately.
Cyber-stalking is the use of telecommunication technologies such as the Internet or email to stalk another person. Cyber-stalking may be an additional form of stalking, or it may be the only method the abuser employs. Cyber-stalking is deliberate, persistent, and personal.
Spamming with unsolicited email is different from cyber-stalking. Spam does not focus on the individual, as does cyber-stalking. The cyber-stalker methodically finds and contacts the victim. Much like spam of a sexual nature, a cyber-stalker’s message may be disturbing and inappropriate. Also like spam, you cannot stop the contact with a request. In fact, the more you protest or respond, the more rewarded the cyber-stalker feels. The best response to cyber-stalking is not to respond to the contact.
Cyber-stalking falls in a gray area of law enforcement. Enforcement of most state and federal stalking laws requires that the victim be directly threatened with an act of violence. Very few law enforcement agencies can act if the threat is only implied.
Regardless of whether you can get stalking laws enforced against cyber-stalking, you must treat cyber-stalking seriously and protect yourself. Cyber-stalking sometimes advances to real stalking and to physical violence.
Stalking can end in violence whether or not the stalker threatens violence. And stalking can turn into violence even if the stalker has no history of violence.
Women stalkers are just as likely to become violent as are male stalkers.
Those around the stalking victim are also in danger of being hurt. For instance, a parent, spouse, or bodyguard who makes the stalking victim unattainable may be hurt or killed as the stalker pursues the stalking victim.
Economic or financial abuse includes:
- withholding economic resources such as money or credit cards
- stealing from or defrauding a partner of money or assets
- exploiting the intimate partner’s resources for personal gain
- withholding physical resources such as food, clothes, necessary medications, or shelter from a partner
- preventing the spouse or intimate partner from working or choosing an occupation
Spiritual abuse includes:
- using the spouse’s or intimate partner’s religious or spiritual beliefs to manipulate them
- preventing the partner from practicing their religious or spiritual beliefs
- ridiculing the other person’s religious or spiritual beliefs
- forcing the children to be reared in a faith that the partner has not agreed to
Domestic violence often plays out in the workplace. For instance, a husband, wife, girlfriend, or boyfriend might make threatening phone calls to their intimate partner or ex-partner. Or the worker may show injuries from physical abuse at home.
If you witness a cluster of the following warning signs in the workplace, you can reasonably suspect domestic abuse:
- Bruises and other signs of impact on the skin, with the excuse of “accidents”
- Depression, crying
- Frequent and sudden absences
- Frequent lateness
- Frequent, harassing phone calls to the person while they are at work
- Fear of the partner, references to the partner’s anger
- Decreased productivity and attentiveness
- Isolation from friends and family
- Insufficient resources to live (money, credit cards, car)
If you do recognize signs of domestic abuse in a co-worker, talk to your Human Resources department. The Human Resources staff should be able to help the victim without your further involvement.
A strong predictor of domestic violence in adulthood is domestic violence in the household in which the person was reared. For instance, a child’s exposure to their father’s abuse of their mother is the strongest risk factor for transmitting domestic violence from one generation to the next. This cycle of domestic violence is difficult to break because parents have presented violence as the norm.
Individuals living with domestic violence in their households have learned that violence and mistreatment are the way to vent anger. Someone resorts to physical violence because
- they have solved their problems in the past with violence,
- they have effectively exerted control and power over others through violence, and
- no one has stopped them from being violent in the past.
Some immediate causes that can set off a bout of domestic abuse are:
- provocation by the intimate partner
- economic hardship, such as prolonged unemployment
Society contributes to domestic violence by not taking it seriously enough and by treating it as expected, normal, or deserved. Specifically, society perpetuates domestic abuse in the following ways.
- Police may not treat domestic abuse as a crime, but, rather, as a “domestic dispute”
- Courts may not award severe consequences, such as imprisonment or economic sanctions
- A community usually doesn’t ostracize domestic abusers
- Clergy or counselors may have the attitude that the relationship needs to be improved and that the relationship can work, given more time and effort
- People may have the attitude that the abuse is the fault of the victim, or that the abuse is a normal part of marriage or domestic partnerships
- Gender-role socialization and stereotypes condone abusive behavior by men
Community solutions may be inadequate, such that victims cannot get the help they need. For example, seeking refuge in a shelter may require a woman to leave her neighborhood, social support system, job, school, and childcare. In addition, teenagers are often not welcome at shelters, particularly teenage males. Teenage girls with children may have difficulty finding shelter because of their own age. And male victims of domestic violence have trouble finding shelters that will take them.
Domestic abuse is more common in low-income populations. Low-income victims may lack mobility and the financial resources to leave an abusive situation.
Domestic abuse knows no age or ethnic boundaries.
Domestic abuse can occur during a relationship or after a relationship has ended.
Most psychological, medical, and legal experts agree that the vast majority of physical abusers are men. However, women can also be the perpetrators of domestic violence.
The majority of stalkers are also men stalking women. But stalkers can also be women stalking men, men stalking men, or women stalking women.
To learn more, go to HelpGuide.Org. | 2026-01-31T23:46:51.678892 |
929,641 | 3.851274 | http://en.wikipedia.org/wiki/Restoration_(England) | The Restoration of the English monarchy began when the English, Scottish and Irish monarchies were all restored under Charles II after the Interregnum that followed the Wars of the Three Kingdoms. The term Restoration is used to describe both the actual event by which the monarchy was restored, and the period of several years afterwards in which a new political settlement was established. It is very often used to cover the whole reign of Charles II (1660–1685) and often the brief reign of his younger brother James II (1685-1688). In certain contexts it may be used to cover the whole period of the later Stuart monarchs as far as the death of Queen Anne and the accession of the Hanoverian George I in 1714; for example Restoration comedy typically encompasses works written as late as 1710.
The Protectorate, which followed the Commonwealth and preceded the English Restoration, might have continued if Oliver Cromwell's son Richard, who was made Lord Protector on his father's death, had been capable of carrying on his father's policies. Richard Cromwell's main weakness was that he did not have the confidence of the army. After seven months, an army faction known as the Wallingford House party removed him on 6 May 1659 and reinstalled the Rump Parliament. Charles Fleetwood was appointed a member of the Committee of Safety and of the Council of State, and one of the seven commissioners for the army. On 9 June 1659, he was nominated lord-general (commander-in-chief) of the army. However, his leadership was undermined in Parliament, which chose to disregard the army's authority in a similar fashion to the post-First Civil War Parliament. A royalist uprising was planned for 1 August 1659, but it was foiled. However, Sir George Booth gained control of Cheshire; Charles II hoped that with Spanish support he could effect a landing, but none was forthcoming. Booth held Cheshire until the end of August when he was defeated by General Lambert. The Commons, on 12 October 1659, cashiered General John Lambert and other officers, and installed Fleetwood as chief of a military council under the authority of the Speaker. The next day Lambert ordered that the doors of the House be shut and the members kept out. On 26 October a "Committee of Safety" was appointed, of which Fleetwood and Lambert were members. Lambert was appointed major-general of all the forces in England and Scotland, Fleetwood being general. The Committee of Safety sent Lambert with a large force to meet George Monck, who was in command of the English forces in Scotland, and either negotiate with him or force him to come to terms.
It was into this atmosphere that Monck, the governor of Scotland under the Cromwells, marched south with his army from Scotland. Lambert's army began to desert him, and he returned to London almost alone. Monck marched to London unopposed. The Presbyterian members, excluded in Pride's Purge of 1648, were recalled, and on 24 December the army restored the Long Parliament. Fleetwood was deprived of his command and ordered to appear before Parliament to answer for his conduct. On 3 March 1660, Lambert was sent to the Tower of London, from which he escaped a month later. He tried to rekindle the civil war in favour of the Commonwealth by issuing a proclamation calling on all supporters of the "Good Old Cause" to rally on the battlefield of Edgehill. But he was recaptured by Colonel Richard Ingoldsby, a participant in the regicide of Charles I who hoped to win a pardon by handing Lambert over to the new regime. Lambert was incarcerated and died in custody on Drake's Island in 1684; Ingoldsby was indeed pardoned.
Restoration of Charles II
On 4 April 1660, Charles II issued the Declaration of Breda, in which he made several promises in relation to the reclamation of the crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April. On 8 May it proclaimed that King Charles II had been the lawful monarch since the execution of Charles I on 30 January 1649. "Constitutionally, it was as if the last nineteen years had never happened." Charles returned from exile, leaving The Hague on 23 May and landing at Dover on 25 May. He entered London on 29 May, his birthday. To celebrate "his Majesty's Return to his Parliament", 29 May was made a public holiday, popularly known as Oak Apple Day. He was crowned at Westminster Abbey on 23 April 1661.
Some contemporaries described the Restoration as "a divinely ordained miracle. The sudden and unexpected deliverance from usurpation and tyranny was interpreted as a restoration of the natural and divine order". The Cavalier Parliament convened for the first time on 8 May 1661, and it would endure for over 17 years, finally being dissolved on 24 January 1679. Like its predecessor, it was overwhelmingly Royalist. It is also known as the Pensionary Parliament for the many pensions it granted to adherents of the King.
Many Royalist exiles returned and were rewarded. Prince Rupert of the Rhine returned to the service of England, became a member of the privy council, and was provided with an annuity. George Goring, 1st Earl of Norwich, returned to be the Captain of the King's guard and received a pension. Marmaduke Langdale returned and was made "Baron Langdale". William Cavendish, Marquess of Newcastle, returned and was able to regain the greater part of his estates. He was invested in 1666 with the Order of the Garter (which had been bestowed upon him in 1650), and was advanced to a dukedom on 16 March 1665.
Commonwealth regicides and rebels
The Indemnity and Oblivion Act, which became law on 29 August 1660, pardoned all past treason against the crown, but specifically excluded those involved in the trial and execution of Charles I. 31 of the 59 commissioners (judges) who had signed the death warrant in 1649 were living.
In the ensuing trials, twelve were condemned to death. The Fifth Monarchist Thomas Harrison, the first person found guilty of regicide, was the seventeenth of the 59 commissioners to sign the death warrant. He was the first regicide to be hanged, drawn and quartered because he was considered by the new government still to represent a real threat to the re-established order.
In October 1660, at Charing Cross or Tyburn, London, ten were publicly hanged, drawn and quartered: Thomas Harrison, John Jones, Adrian Scroope, John Carew, Thomas Scot, and Gregory Clement, who had signed the king's death warrant; the preacher Hugh Peters; Francis Hacker and Daniel Axtell, who commanded the guards at the king's trial and execution; and John Cooke, the solicitor who directed the prosecution.
Oliver Cromwell, Henry Ireton, Judge Thomas Pride, and Judge John Bradshaw were posthumously attainted for high treason. Because Parliament is a court, the highest in the land, a bill of attainder is a legislative act declaring a person guilty of treason or felony, in contrast to the regular judicial process of trial and conviction. In January 1661, the corpses of Cromwell, Ireton and Bradshaw were exhumed and hanged in chains at Tyburn.
In 1661 John Okey, one of the regicides who signed the death warrant of Charles I, was brought back from Holland along with Miles Corbet, friend and lawyer to Cromwell, and John Barkstead, former constable of the Tower of London. They were all imprisoned in the Tower. From there they were taken to Tyburn and hanged, drawn and quartered on 19 April 1662. A further 19 regicides were imprisoned for life.
John Lambert was not in London for the trial of Charles I. At the Restoration, he was found guilty of high treason and remained in custody in Guernsey for the rest of his life. Sir Henry Vane the Younger served on the Council of State during the Interregnum even though he refused to take the oath which expressed approbation (approval) of the King's execution. At the Restoration, after much debate in Parliament, he was exempted from the Indemnity and Oblivion Act. In 1662 he was tried for high treason, found guilty and beheaded on Tower Hill on 14 June 1662.
Regrant of certain Commonwealth titles
The Instrument of Government, the Protectorate's written constitution, gave the Lord Protector the King's power to grant titles of honour. Over 30 new knighthoods were granted under the Protectorate. These knighthoods passed into oblivion upon the Restoration of Charles II; however, many were regranted by the restored King.
Of the eleven Protectorate baronetcies, two had previously been granted by Charles I during the Civil War, but under Commonwealth legislation they were not recognised under the Protectorate (hence the Lord Protector's regranting of them); when that legislation passed into oblivion, these two baronets were again entitled to use the baronetcies granted by Charles I. Charles II regranted four more. Only one now continues: Sir Richard Thomas Willy, 14th baronet, is the direct successor of Sir Griffith Williams. Of the remaining Protectorate baronets, one, Sir William Ellis, was granted a knighthood by Charles II.
Edmund Dunch was created Baron Burnell of East Wittenham in April 1658, but this barony was not regranted. The male line failed in 1719 with the death of his grandson, also Edmund Dunch, so no one can lay claim to the title.
The one hereditary viscountcy Cromwell created for certain,[a] (making Charles Howard Viscount Howard of Morpeth and Baron Gilsland) continues to this day. In April 1661, Howard was created Earl of Carlisle, Viscount Howard of Morpeth, and Baron Dacre of Gillesland. The present Earl is a direct descendant of this Cromwellian creation and Restoration recreation.
Venner rebellion (January 1661)
On 6 January 1661, about 50 Fifth Monarchists, headed by a wine-cooper named Thomas Venner, tried to gain possession of London in the name of "King Jesus". Most were either killed or taken prisoner; on 19 and 21 January 1661, Venner and 10 others were hanged, drawn and quartered for high treason.
The Church of England was restored as the national Church in England, backed by the Clarendon Code and the Act of Uniformity 1662. People reportedly "pranced around May poles as a way of taunting the Presbyterians and Independents" and "burned copies of the Solemn League and Covenant".
Historian Roger Baker argues that the Restoration and Charles' coronation mark a reversal of the stringent Puritan morality, "as though the pendulum [of England's morality] swung from repression to licence more or less overnight." Theatres reopened after having been closed during the protectorship of Oliver Cromwell, Puritanism lost its momentum, and the bawdy "Restoration comedy" became a recognisable genre. In addition, women were allowed to perform on stage for the first time. In Scotland, Episcopacy was reinstated.
To celebrate the occasion and cement their diplomatic relations, the Dutch Republic presented Charles with the Dutch Gift, a fine collection of old master paintings, classical sculptures, furniture, and a yacht.
End of the Restoration
The Glorious Revolution ended the Restoration. The Glorious Revolution which overthrew King James II of England was propelled by a union of English Parliamentarians with the Dutch stadtholder William III of Orange-Nassau (William of Orange). William's successful invasion of England with a Dutch fleet and army led to his ascending of the English throne as William III of England jointly with his wife Mary II of England.
In April 1688, James re-issued the Declaration of Indulgence and ordered all Anglican clergymen to read it to their congregations. When seven Bishops, including the Archbishop of Canterbury, submitted a petition requesting the reconsideration of the King's religious policies, they were arrested and tried for seditious libel. On 30 June 1688, a group of seven Protestant nobles invited the Prince of Orange to come to England with an army; by September it became clear that William would invade England. When William arrived on 5 November 1688, James lost his nerve, declined to attack the invading Dutch and tried to flee to France. He was captured in Kent; later, he was released and placed under Dutch protective guard. Having no desire to make James a martyr, William, Prince of Orange, let him escape on 23 December. James was received in France by his cousin and ally, Louis XIV, who offered him a palace and a pension.
William convened a Convention Parliament to decide how to handle the situation. While the Parliament refused to depose James, they declared that James, having fled to France had effectively abdicated the throne, and that the throne was vacant. To fill this vacancy, James's daughter Mary was declared Queen; she was to rule jointly with her husband William, Prince of Orange, who would be king. The English Parliament passed the Bill of Rights of 1689 that denounced James for abusing his power. The abuses charged to James included the suspension of the Test Acts, the prosecution of the Seven Bishops for merely petitioning the crown, the establishment of a standing army, and the imposition of cruel punishments. The Bill also declared that henceforth, no Roman Catholic was permitted to ascend the English throne, nor could any English monarch marry a Roman Catholic.
- Restoration comedy
- Restoration literature
- Royal Society
- Rota Club
- Restoration spectacular
- Restoration style
- Restoration, novel by Rose Tremain, and the film based on it
- Samuel Pepys, whose diary is one of the primary historical sources for this period
- 17th century Britain
- Cromwell had intended to make Bulstrode Whitelocke a viscount, but it is not clear if he did so before he died
- CEE staff 2007, Restoration.
- EB staff 2012, Restoration.
- Yadav 2010.
- Keeble 2002, pp. 8–10.
- Hutton 2000, p. 121.
- Chisholm 1911, p. 108.
- House of Commons Journal Volume 8, 8 May 1660
- Harris 2005, p. 47.
- Pepys Diary 23 April 1661.
- House of Commons Journal Volume 8, 30 May 1660
- Jones 1978, p. 15.
- Clark 1953, p. 3.
- Harris 2005, pp. 52–53.
- Baker, Roger (1994). Drag: A History of Female Impersonation In The Performing Arts. New York City: NYU Press. p. 85. ISBN 0814712533.
- CEE staff (2007). "Restoration". The Columbia Electronic Encyclopedia (6th ed.). Columbia University Press. Retrieved April 2012.
- Chisholm, Hugh, ed. (1911). "Lambert, John". Encyclopædia Britannica 16 (11th ed.). Cambridge University Press. pp. 108–109.
- EB staff (2012). Restoration. Encyclopedia Britannica Online. Retrieved April 2012.
- Clark, Sir George (1953). The Later Stuarts 1660–1714 (2nd ed.). Oxford University Press. p. 3.
- Harris, Tim (2005). Restoration:Charles II and His Kingdoms 1660–1685. Allen Lane.
- Hutton, Ronald (2000). The British Republic 1649–1660 (2nd ed.). Macmillan. p. 121.
- Jones, J.R. (1978). Country and Court: England 1658–1714. Edward Arnold. p. 15.
- Keeble, N. H. (2002). The Restoration: England in the 1660s, History of Early Modern England Series. Oxford: Blackwell Publishers. ISBN 0-631-23617-1.
- Yadav, Alok (18 July 2010). "Historical Outline of Restoration and 18th-Century British Literature". Retrieved April 2012.
- Review of 'Revolution and Counter-Revolution in England, Ireland and Scotland 1658–60', by Brian Manning
- Chapter V. The Stewart Restoration By Sir Charles Harding Firth | 2026-02-01T18:07:24.465043 |
281,345 | 4.008784 | http://www.medcalc.org/manual/pairedttest.php | Paired samples t-test
The paired samples t-test is used to test the null hypothesis that the average of the differences between a series of paired observations is zero. Observations are paired when, for example, they are performed on the same samples or subjects.
Select the variables for sample 1 and sample 2, and a possible selection criterion for the data pairs. Variables and selection criteria can be chosen from the variables list.
Logarithmic transformation: if the data require a logarithmic transformation (e.g. when the data are positively skewed), select the Logarithmic transformation option.
The program displays the summary statistics of the two samples, followed by the mean of the differences between the paired observations and the standard deviation of these differences, followed by a 95% confidence interval for the mean difference.
Note that the two sample sizes will always be equal (only cases with data available for both variables are included).
Next the result of the null hypothesis test is displayed. If the calculated P-value is less than 0.05, the conclusion is that the mean difference between the paired observations is statistically significantly different from 0.
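As a rough illustration of the computation the program performs, here is a minimal sketch in Python using SciPy, with made-up paired measurements (nothing here is MedCalc output). It reports the mean difference, its 95% confidence interval, and the paired t-test P-value described above.

```python
# Minimal sketch of a paired samples t-test on hypothetical data.
import numpy as np
from scipy import stats

# Paired observations, e.g. a measurement before and after a treatment.
sample1 = np.array([5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 6.1])
sample2 = np.array([5.6, 5.0, 6.4, 5.4, 6.3, 5.2, 5.8, 6.5])

diff = sample2 - sample1
n = len(diff)
mean_diff = diff.mean()
se_diff = diff.std(ddof=1) / np.sqrt(n)

# 95% confidence interval for the mean difference.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean_diff - t_crit * se_diff, mean_diff + t_crit * se_diff)

# Null hypothesis: the mean of the paired differences is zero.
t_stat, p_value = stats.ttest_rel(sample2, sample1)

print(f"Mean difference: {mean_diff:.3f}  (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
# If P < 0.05, the mean difference is statistically significantly different from 0.
```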
If you selected the Log transformation option, the program performs the calculations on the logarithms of the observations, but reports the back-transformed summary statistics.
For the paired samples t-test, the mean difference and its 95% confidence interval are given on the log-transformed scale.
Next, the results of the t-test are transformed back and the interpretation is as follows: the back-transformed mean difference of the logs is the geometric mean of the ratio of paired values on the original scale (Altman, 1991). | 2026-01-22T12:25:45.870832 |
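Along the same lines, here is a minimal sketch of the log-transformed variant, again with invented, positively skewed data: the mean of the log differences is back-transformed to the geometric mean of the within-pair ratios, as described above.

```python
# Minimal sketch of the paired t-test on log-transformed data (hypothetical values).
import numpy as np
from scipy import stats

sample1 = np.array([1.2, 3.4, 0.8, 5.6, 2.1, 9.8, 1.5, 4.2])
sample2 = np.array([1.6, 4.1, 1.1, 6.9, 2.0, 13.5, 2.2, 5.0])

log_diff = np.log(sample2) - np.log(sample1)   # log of the within-pair ratio
n = len(log_diff)
mean_log_diff = log_diff.mean()
se = log_diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)

# Back-transform: geometric mean of the paired ratios, with its 95% CI.
geo_mean_ratio = np.exp(mean_log_diff)
ci = (np.exp(mean_log_diff - t_crit * se), np.exp(mean_log_diff + t_crit * se))

t_stat, p_value = stats.ttest_rel(np.log(sample2), np.log(sample1))

print(f"Geometric mean ratio: {geo_mean_ratio:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
```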
192,078 | 3.651299 | http://www.sciencedaily.com/releases/2010/11/101129131814.htm | Nov. 30, 2010 Gold nanoparticles, tiny pieces of gold so small that they can't be seen by the naked eye, are used in electronics, healthcare products and as pharmaceuticals to fight cancer. Despite their positive uses, the process to make the nanoparticles requires dangerous and extremely toxic chemicals. While the nanotechnology industry is expected to produce large quantities of nanoparticles in the near future, researchers have been worried about the environmental impact of the global nanotechnological revolution.
Now, a study by a University of Missouri research team, led by MU scientist Kattesh Katti, curators' professor of radiology and physics in the School of Medicine and the College of Arts and Science, senior research scientist at the University of Missouri Research Reactor and director of the Cancer Nanotechnology Platform, has found a method that could replace nearly all of the toxic chemicals required to make gold nanoparticles. The missing ingredient can be found in nearly every kitchen's spice cabinet -- cinnamon.
The usual method of creating gold nanoparticles utilizes harmful chemicals and acids that are not environmentally safe and contain toxic impurities. In the MU study, Katti and researchers Raghuraman Kannan, the Michael J and Sharon R. Bukstein Distinguished Faculty Scholar in Cancer Research, assistant professor of radiology and director of the Nanoparticle Production Core Facility; and Nripen Chanda, a research associate scientist, mixed gold salts with cinnamon and stirred the mixture in water to synthesize gold nanoparticles. The new process uses no electricity and utilizes no toxic agents.
"The procedure we have developed is non-toxic," Kannan said. "No chemicals are used in the generation of gold nanoparticles, except gold salts. It is a true 'green' process."
"From our work in green nanotechnology, it is clear that cinnamon -- and other species such as herbs, leaves and seeds -- will serve as a reservoir of phytochemicals and has the capability to convert metals into nanoparticles," Katti said. "Therefore, our approach to 'green' nanotechnology creates a renaissance symbolizing the indispensable role of Mother Nature in all future nanotechnological developments."
During the study, the researchers found that active chemicals in cinnamon are released when the nanoparticles are created. When these chemicals, known as phytochemicals, are combined with the gold nanoparticles, they can be used for cancer treatment. The phytochemicals can enter into cancer cells and assist in the destruction or imaging of cancer cells, Katti said.
"Our gold nanoparticles are not only ecologically and biologically benign, they also are biologically active against cancer cells," Katti said.
As the list of applications for nanotechnology grows in areas such as electronics, healthcare products and pharmaceuticals, the ecological implications of nanotechnology also grow. When considering the entire process from development to shipping to storage, creating gold nanoparticles with the current process can be incredibly harmful to the environment, Chanda said.
"On one hand, you are trying to create a new, useful technology. However, continuing to ignore the environmental effects is detrimental to the progress," Kannan said.
The study was published this fall in Pharmaceutical Research.
- Nripen Chanda, Ravi Shukla, Ajit Zambre, Swapna Mekapothula, Rajesh R. Kulkarni, Kavita Katti, Kiran Bhattacharyya, Genevieve M. Fent, Stan W. Casteel, Evan J. Boote, John A. Viator, Anandhi Upendran, Raghuraman Kannan, Kattesh V. Katti. An Effective Strategy for the Synthesis of Biocompatible Gold Nanoparticles Using Cinnamon Phytochemicals for Phantom CT Imaging and Photoacoustic Detection of Cancerous Cells. Pharmaceutical Research, 2010; DOI: 10.1007/s11095-010-0276-6
| 2026-01-21T05:08:17.743115 |
310,981 | 3.731918 | http://www.cdc.gov/violenceprevention/youthviolence/electronicaggression/ | Technology and Youth Violence
Young people are using media technology, including cell phones, personal data assistants, and the Internet, to communicate with others in the United States and throughout the world. Communication avenues, such as text messaging, chat rooms, and social networking websites (e.g., MySpace and Facebook), have allowed youth to easily develop relationships, some with people they have never met in person.
Media technology has many potential benefits for youth. It allows young people to communicate with family and friends on a regular basis. This technology also provides opportunities to make rewarding social connections for those teens and pre-teens who have difficulty developing friendships in traditional social settings or because of limited contact with same-aged peers. In addition, regular Internet access allows young people to quickly increase their knowledge on a wide variety of topics.
However, the explosion in communication tools and avenues does not come without possible risks. Youth can use electronic media to embarrass, harass or threaten their peers. Increasing numbers of teens and pre-teens are becoming victims of this new form of violence. Although many different terms (such as cyberbullying, Internet harassment, and Internet bullying) have been used to describe this type of violence, electronic aggression is the term that most accurately captures all types of violence that occur electronically. Like traditional forms of youth violence, electronic aggression is associated with emotional distress and conduct problems at school. In fact, recent research suggests that youth who are victimized electronically are also very likely to be victimized off-line (i.e., sexually harassed, psychologically or emotionally abused by a caregiver, witness to an assault with a weapon, or raped).1
The Centers for Disease Control and Prevention (CDC) convened a panel of experts to discuss issues related to the emerging public health problem of electronic aggression. The panel included representatives from research universities, public school systems, federal agencies, and nonprofit organizations. A special issue of the Journal of Adolescent Health summarizes the data and recommendations from this expert panel meeting.
The following resources provide additional information on electronic aggression, youth violence prevention, and safe schools.
- Electronic Media and Youth Violence: A CDC Issue Brief for Educators and Caregivers
- Electronic Media and Youth Violence: A CDC Research Brief for Researchers
- Journal of Adolescent Health
- Technology and Youth: Protecting your Child from Electronic Aggression
- Adolescent and School Health
- CDC Podcast on Electronic Aggression [Podcast 12:32]
- Safe Youth, Safe Schools
- Understanding Bullying—Fact Sheet
- Youth Violence Prevention
- Mitchell KJ, Finkelhor D, Wolak J, et al. Youth internet victimization in a broader victimization context. J Adolesc Health 2011;48:128–134.
| 2026-01-22T23:28:48.104302 |
234,823 | 3.889503 | http://www.merckmanuals.com/vet/print/integumentary_system/ticks/overview_of_ticks.html | Ticks are obligate ectoparasites of most types of terrestrial vertebrates virtually wherever these animals are found. Ticks are large mites and thus are arachnids, members of the subclass Acari. They are more closely related to spiders than to insects. The ~850 described species are exclusively blood-sucking in all feeding stages. Ticks transmit a greater variety of infectious organisms than any other group of arthropods and, worldwide, are second only to mosquitoes in terms of their public health and veterinary importance. Some of these agents are only slightly pathogenic to livestock but may cause disease in humans; others cause diseases in livestock that are of tremendous economic importance. In addition, ticks can harm their hosts directly by inducing toxicosis (eg, sweating sickness [see Sweating Sickness], tick paralysis [see Tick Paralysis] caused by salivary fluids containing toxins), skin wounds susceptible to secondary bacterial infections and screwworm infestations, and anemia and death. International movement of animals infected with the tick-transmitted blood parasites Theileria, Babesia, and Anaplasma spp and Ehrlichia (Cowdria) ruminantium is widely restricted.
Movement of tick-infested livestock over great distances, and introduction of livestock to exotic tick species and tickborne agents to which they have no immunity or innate resistance are important factors in the extensive distribution and prevalence of many tick species and tickborne disease agents. A number of introduced tick species thrive in the vast grazing and browsing environments established during recent centuries of human and livestock population explosions.
Two of the 3 families of ticks parasitize livestock: the Argasidae (argasids, “soft ticks”) and the Ixodidae (ixodids, “hard ticks”). Although they share certain basic properties, argasids and ixodids differ in many structural, behavioral, physiologic, ecologic, feeding, and reproductive patterns. Tropical and subtropical species may undergo 1, 2, or rarely 3 complete life cycles annually. In temperate zones, there is often 1 annual cycle; in northern regions and at higher elevations in temperate regions, at least 2–4 yr are required by most species. There are 4 developmental stages: egg, larva, nymph, and adult. All larvae have 3 pairs of legs; all nymphs and adults have 4. Adults have a distinctive genital and anal area on the ventral body surface. The foreleg tarsi of all ticks bear a unique sensory apparatus—Haller's organ—for sensing carbon dioxide, chemical stimuli (odor), temperature, humidity, etc. Pheromones stimulate group assembly, species recognition, mating, and host selection.
Certain tick species that parasitize livestock can survive several months, and occasionally a few years, without food if environmental conditions permit. Tick host preferences are usually limited to a certain genus, family, or order of vertebrates; however, certain ticks are exceptionally adaptable to a variety of hosts, so each species must be evaluated separately. The larvae and nymphs of most ixodids that parasitize livestock feed on small wildlife such as birds, rodents, small carnivores, or even lizards.
In the Argasidae, the leathery dorsal surface lacks a hard plate (scutum). Male and female argasids appear to be much alike, except for the larger size of the female and differences in external genitalia. The argasid capitulum (mouthparts) arises from the anterior of the body in larvae but from the ventral body surface in nymphs and adults.
In the Ixodidae, the male dorsal surface is covered by a scutum. The scutum of the ixodid female, nymph, and larva covers only the anterior half of the dorsal surface. The ixodid capitulum arises from the anterior end of the body in each developmental stage.
The world's argasid tick fauna comprises 185 species in 4 genera, namely Argas, Carios, Ornithodoros, and Otobius in the family Argasidae. The Argasidae are highly specialized for sheltering in protected niches or crevices in wood or rocks, or in host nests or roosts in burrows and caves. Some argasid species are known to survive for several years between feedings. Most of these leathery parasites inhabit tropical or warm temperate environments with long dry seasons. Hosts are those that either rest in large numbers near the argasid microhabitat, or return from time to time to rest or breed there.
An argasid population typically parasitizes only a single kind of vertebrate and inhabits its shelter area. Argasids use multiple hosts, ie, the larvae feed on one host and drop to the substrate to molt; the several nymphal instars each feed separately, drop, and molt; adults feed several times (but do not molt). Argasid nymphs and adults feed rapidly (usually 30–60 min). Larvae of some argasids also feed rapidly; others require several days to engorge fully. Adult argasids mate off the host several times; afterward, females deposit a few hundred eggs in several batches and feed between ovipositions.
Most of the 57 described Argas spp parasitize birds that breed in colonies in trees or against rock ledges; others parasitize cave-dwelling bats. A few feed on reptiles or wild mammals, and none on livestock. Several species have become important pests of domestic fowl and pigeons; among these are the vectors of Borrelia anserina (avian spirochetosis) and the rickettsia Anaplasma (Aegyptianella) pullorum (aegyptianellosis). Argas spp also cause tick paralysis, and many are vectors of a variety of arboviruses, some of which also infect humans.
Genus Carios includes 88 species, most of which are parasites of mammals, especially bats and rodents. Depending on the species, they inhabit dens or roosts of bats located in caves or tree holes or rodent burrows. Several species parasitize colonial nesting birds and dwell in the substrate or under stones and debris in ground-level bird colonies. Many of these ticks parasitize only a single host species or a group of closely-related hosts. However, some Carios ticks will feed on humans and domestic animals if the primary host is not available. C kelleyi, a tick associated with bats and bat habitats, has been reported to carry a novel spotted fever group Rickettsia and a relapsing fever spirochete closely related to Borrelia turicatae. The seabird tick C capensis has been shown to transmit West Nile virus to ducklings. The American C puertoricensis and C talaje are potential vectors of African swine fever virus.
The majority of the nearly 37 species belonging to the genus Ornithodoros inhabit animal burrows and lairs in hot, arid climates and feed on almost any potential host that enters their habitat. Larvae in this nidicolous genus do not feed, which may be related to the fact that these ticks dwell in burrows that may house hosts irregularly. A few species have adapted to living in crevices of walls and under fences where livestock are confined and also are pests of humans. Certain species are vectors of relapsing fever spirochetes (Borrelia spp) and African swine fever virus; some species cause toxicosis, and 1 species (O coriaceus) transmits a spirochete causing epizootic bovine abortion in the western USA. Numerous Ornithodoros-transmitted salivary toxins or arboviruses cause irritation or febrile illnesses in humans.
The unique argasid genus Otobius (see Ticks: Otobius spp) has 3 species, which do not feed in the adult stage. O megnini (spinose ear tick) is exceedingly specialized biologically and structurally. It infests the ear canals of pronghorn antelope, mountain sheep, and Virginia and mule deer in low rainfall biotopes of the western USA, Mexico, and western Canada. Cattle, horses, goats, sheep, dogs, various zoo animals, and humans are similarly infested. This well-concealed parasite has been transported with livestock to western South America, Galapagos, Cuba, Hawaii, India, Madagascar, and southeastern Africa. Notably, adults have nonfunctional mouthparts and remain nonfeeding on the ground but may survive for almost 2 yr. Females can deposit up to 1,500 eggs in a 2-wk period. Larvae and 2 nymphal instars feed for 2–4 mo, mostly in winter and spring. There can be 2 or more generations per year. Humans and other animals may have severe irritation from ear canal infestations, and heavily infested livestock lose condition during winter. Tick paralysis of hosts and secondary infections by larval screwworms are reported. O megnini is infected by the agents of Q fever, tularemia, Colorado tick fever, and Rocky Mountain spotted fever. Another species, O lagophilus, feeds on the heads of jackrabbits (hares) and rabbits in western USA.
The Ixodidae number >600 species, occupy many more habitats and niches than do argasids, and parasitize a greater number of vertebrates in a wider variety of environments. Most ixodid species have a 3-host life cycle; others have a 2-host cycle, and a few have a 1-host cycle. Each ixodid postembryonic developmental stage (larva, nymph, adult) feeds only once but for a period of several days. Males and females of most species that parasitize livestock mate while on the host, although some mate off the host on the ground or in burrows. Males take less food than females but remain longer on the host and can mate with several females. During inactive seasons, few or no females are found feeding, even though males may remain attached to the hosts. Such males may contribute to transmission of pathogens to new susceptible animals by serial interhost transfer. Larval and nymphal population activity generally peaks during the “off seasons” of adults, although in some species, there is overlap in the seasonal dynamics of immatures and adults.
The ixodid males, except those in the genus Ixodes, become sexually mature only after beginning to feed, after which they mate with a feeding female. Only after mating does the female become replete and proceed to develop eggs. She then detaches, drops from the host, and over a period of several days, deposits a single batch of many eggs on or near the ground, usually in crevices or under stones, leaf litter, or debris. Depending on species and quantity of female nourishment, the egg batch usually numbers 1,000–4,000 but may be >12,000. The female dies after ovipositing. Notably, ixodids (except 1- and 2-host species, which use vertebrate host animals as habitat for much of their life cycle) spend >90% of their lifetime off the host, a fact of utmost significance in planning control measures. The several-day feeding process progresses slowly; the balloon shape characteristic of engorged larvae, nymphs, and females develops only during the final half day of feeding and is followed by detaching. The dropping time at certain hours of the day or night is governed by a circadian rhythm closely associated with the activity cycle of the principal host.
It is also important, especially in understanding the epidemiology of tickborne pathogens, to know whether immatures of an ixodid species feed on the same host species as do the adults, or on smaller vertebrates. Where acceptable smaller-sized hosts are scarce, immatures of some ixodid species can feed on the same livestock hosts as adults; immatures of other species seldom or never do so.
The proximity of acceptable hosts, air temperature gradients, and atmospheric humidity during resting and questing periods are among the factors that regulate the development of each stage and, in the case of females, oviposition.
Most ixodids have a 3-host cycle. The recently hatched larvae quest for a suitable host, usually from vegetation, feed for several days, drop, and molt to nymphs, which repeat these activities and molt to adults. Of the 3-host species that parasitize livestock or dogs, a few have immatures and adults that parasitize the same kind of host; these often develop tremendous population densities. The success of ixodid species that require smaller-size hosts for immatures depends on the availability of those hosts in the livestock browsing and grazing grounds. The natural hazards inherent in the 3-host cycle have been compensated for by the benefits afforded adaptable tick species by animal husbandry practices. Only certain ixodids specific for herbivores have adapted to coexistence with livestock, and therein lies the answer to numerous livestock tick problems in Africa, where hosts for adults and immatures are abundant.
Some ixodids, especially those that parasitize wandering mammals (and also birds in certain cases) in inclement environments of the Old World, have developed a 2-host cycle in which larvae and nymphs feed on one host, and adults on another. As in 3-host species, both hosts may be different or may be the same species. Two-host parasites of livestock thrive in both inclement and clement environments and are difficult to control. This is especially true of 2-host species that feed in the ears and anal areas of livestock.
Among the most economically important ticks are several 1-host species. These parasites evolved together with herbivores that wandered in extensive ranges in the tropics (Rhipicephalus [Boophilus] spp, Dermacentor nitens, etc) or in temperate zones (D albipictus, Hyalomma scupense). Larvae, nymphs, and adults feed on a single animal until the mated, replete females drop to the ground to oviposit.
Each species has one or more favored feeding sites on the host, although in dense infestations, other areas of the host may be used. Some feed chiefly on the head, neck, shoulders, and escutcheon; others in the ears; others around the anus and under the tail; and some in the nasal passages. Other common feeding sites are the axillae, udder, male genitalia, and tail brush. Immatures and adults often have different preferred feeding sites. Attachment of the large, irritating Amblyomma spp is regulated by a male-produced aggregation-attachment pheromone, which ensures that the ticks attach at sites least vulnerable to grooming.
Last full review/revision July 2011 by Michael L. Levin, PhD | 2026-01-21T19:57:44.047967 |
229,050 | 3.605363 | http://en.m.wikipedia.org/wiki/Wendell_Phillips | Phillips was born in Boston, Massachusetts on November 29, 1811, to Sarah Walley and John Phillips, a successful lawyer, politician, and philanthropist. Phillips was schooled at Boston Latin School, and graduated from Harvard University in 1831. He went on to attend Harvard Law School, from which he graduated in 1833. In 1834, Phillips was admitted to the Massachusetts state bar, and in the same year, he opened a law practice in Boston. His professor of oratory was Edward T. Channing, a critic of flowery speakers such as Daniel Webster. Channing emphasized the value of plain speaking, a philosophy which Phillips took to heart.
On October 21, 1835, the Boston Female Anti-Slavery Society announced that George Thompson would be speaking. Pro-slavery forces posted nearly 500 notices of a $100 reward for the citizen who would first lay violent hands on him. Thompson canceled at the last minute, and William Lloyd Garrison, a newspaper writer who spoke openly against the wrongs of slavery, was quickly scheduled to speak in his place. A lynch mob formed, forcing Garrison to escape through the back of the hall and hide in a carpenter's shop. The mob soon found him, putting a noose around his neck to drag him away. Several strong men intervened and took him to the Leverett Street Jail. Phillips, watching from nearby Court Street, was a witness to the attempted lynching. After being converted to the abolitionist cause by Garrison in 1836, Phillips stopped practicing law in order to dedicate himself to the movement. He joined the American Anti-Slavery Society and frequently made speeches at its meetings. So highly regarded were his oratorical abilities that he was known as "abolition's Golden Trumpet". When Phillips joined the Massachusetts Anti-slavery Society, he horrified his family, who tried to have him committed to an insane asylum. Like many of his fellow abolitionists who honored the free produce movement, Phillips took pains to avoid cane sugar and wear no clothing made of cotton, since both were produced by the labor of Southern slaves.
It was Phillips's contention that racial injustice was the source of all of society's ills. Like Garrison, Phillips denounced the Constitution for tolerating slavery. He disagreed with the argument of abolitionist Lysander Spooner that slavery was unconstitutional, and more generally disputed Spooner's notion that any unjust law should be held legally void by judges.
In 1845, in an essay titled "No Union With Slaveholders", he argued for disunion:
The experience of the fifty years ... shows us the slaves trebling in numbers -- slaveholders monopolizing the offices and dictating the policy of the Government -- prostituting the strength and influence of the Nation to the support of slavery here and elsewhere – trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness. The trial of fifty years only proves that it is impossible for free and slave States to unite on any terms, without all becoming partners in the guilt and responsible for the sin of slavery. Why prolong the experiment? Let every honest man join in the outcry of the American Anti-Slavery Society. (Quoted in Ruchames, The Abolitionists pg. 196)
On December 7, 1837 in Boston's Faneuil Hall, Phillips' leadership and oratory established his preeminence within the abolitionist movement. Bostonians gathered at Faneuil Hall to discuss Elijah P. Lovejoy’s murder by a mob outside his abolitionist printing press in Alton, Illinois on November 7. Lovejoy died defending himself and his press from pro-slavery rioters who set fire to a warehouse storing his press and shot Lovejoy as he stepped outside to tip a ladder being used by the mob. His death engendered a national controversy between abolitionists and anti-abolitionists. At Faneuil Hall, Massachusetts attorney general James T. Austin defended the anti-abolitionist mob, comparing their actions to 1776 patriots who fought against the British. Deeply disgusted, Phillips spontaneously rebutted, praising Lovejoy’s actions as a defense of liberty. Inspired by Phillips’ eloquence and conviction, Garrison entered a partnership with him that came to define the beginning of the 1840s abolitionist movement.
On the eve of the Civil War, Phillips gave a speech at the New Bedford Lyceum in which he defended the Confederate States' right to secede: "A large body of people, sufficient to make a nation, have come to the conclusion that they will have a government of a certain form. Who denies them the right? Standing with the principles of '76 behind us, who can deny them the right? . . . I maintain on the principles of '76 that Abraham Lincoln has no right to a soldier in Fort Sumter. . . . You can never make such a war popular. . . . The North never will endorse such a war."
In 1860–1861 many abolitionists welcomed the formation of the Confederacy because it would end the Slave Power's stranglehold over the United States government. This position was rejected by nationalists like Abraham Lincoln, who insisted on holding the Union together while gradually ending slavery. Twelve days after the attack on Ft. Sumter, Phillips announced his "hearty and hot" support for the war. Disappointed with what he regarded as Lincoln's slow action, Phillips opposed his reelection in 1864, breaking with Garrison, who supported a candidate for the first time.
In the summer of 1862, Phillips' nephew, Samuel D. Phillips died at Port Royal, South Carolina where he had gone to take part in the so-called Port Royal Experiment to assist the slave population there in the transition to freedom.
After African Americans gained the right to vote under the 15th Amendment in 1870, Phillips switched his attention to other issues, such as women's rights, universal suffrage, temperance and the labor movement.
Phillips's philosophical ideal was mainly self-control of the animal, physical self by the human, rational mind, although he admired rash activists like Elijah Lovejoy and John Brown.
Historian Gilbert Osofsky has argued that Phillips's nationalism was shaped by a religious ideology derived from the European Enlightenment as expressed by Thomas Paine, Thomas Jefferson, James Madison, and Alexander Hamilton. The Puritan ideal of a Godly Commonwealth through a pursuit of Christian morality and justice, however, was the main influence on Phillips' nationalism. He favored fragmenting the American republic in order to destroy slavery, and he sought to amalgamate all the American races. Thus, it was the moral end which mattered most in Phillips' nationalism.
Equal rights for Native Americans
Phillips was also active in efforts to gain equal rights for Native Americans, arguing that the 14th Amendment also granted citizenship to Indians. He proposed that the Andrew Johnson administration create a cabinet-level post that would guarantee Indian rights. Phillips helped create the Massachusetts Indian Commission with Indian rights activist Helen Hunt Jackson and Massachusetts governor William Claflin.
Although publicly critical of President Ulysses S. Grant's drinking, he worked with Grant's second administration on the appointment of Indian agents. Phillips lobbied against military involvement in the settling of Native American problems on the Western frontier. He accused General Philip Sheridan of pursuing a policy of Indian extermination.
Public opinion turned against Native American advocates after the Battle of the Little Bighorn in July 1876, but Phillips continued to support the land claims of the Lakota (Sioux). During the 1870s Phillips arranged public forums for reformer Alfred B. Meacham and Indians affected by the country's "Indian removal" policy, including the Ponca chief, Standing Bear, and the Omaha writer and speaker, Susette LaFlesche Tibbles.
In 1904 Wendell Phillips High School in Chicago was named in Phillips' honor.
In July 1915 a monument was erected in Boston Public Garden to commemorate Phillips. Jonathan Harr's "A Civil Action" refers to the statue in recounting the reaction of Mark Phillips, a descendant of Wendell Phillips, to a legal victory in the case against W.R. Grace & Co. et al.
The Wendell Phillips Award, established in 1896, is bestowed annually upon a member of Tufts University's senior class.
The Wendell Phillips Prize at Harvard University is awarded to the best orator in the sophomore class.
The main building of the College of the Pacific at the University of the Pacific is named the Wendell Phillips Center.
- State Street Trust Company. Forty of Boston's historic houses. 1912.
- Phillips, Wendell. Review of Spooner's Essay on the Unconstitutionality of Slavery (1847).
- Brooklyn Daily Eagle, April 13, 1861, p. 2.
- Wendell Phillips Orator And Agitator, 1909 pg. 223
- Irving H. Bartlett. Wendell Phillips, Brahmin Radical (1962)
- Hinks, Peter P, John R. McKivigan, and R. Owen Williams. Encyclopedia of Antislavery and Abolition: Greenwood Milestones in African American History. Westport, Conn.: Greenwood Press, 2007
- Hofstadter, Richard. "Wendell Phillips: The Patrician as Agitator" in The American Political Tradition (1948)
- Osofsky, Gilbert. "Wendell Phillips and the Quest for a New American National Identity" Canadian Review of Studies in Nationalism 1973 1(1): 15-46. ISSN 0317-7904
- Ruchames, Louis, ed. The Abolitionists (1963), includes segment by Wendell Phillips, "No Union With Slaveholders", January 15, 1845.
- Stewart, James Brewer. Wendell Phillips: Liberty's Hero. Louisiana State U. Press, 1986. 356 pp.
- Stewart, James B. "Heroes, Villains, Liberty, and License: the Abolitionist Vision of Wendell Phillips" in Antislavery Reconsidered: New Perspectives on the Abolitionists (Louisiana State U. Press, 1979): 168-191.
- Spartacus Educational Biography
- Article from "Impeach Andrew Johnson"
- 'Toussaint L'Ouverture' A lecture by Wendell Phillips (1861)
- The Liberator Files, Items concerning Wendell Phillips from Horace Seldon's collection and summary of research of William Lloyd Garrison's The Liberator original copies at the Boston Public Library, Boston, Massachusetts.
- Letters, 1855, n.d.. Schlesinger Library, Radcliffe Institute, Harvard University. | 2026-01-21T18:00:38.003749 |
959,226 | 3.920129 | http://info@seaturtles.org/article.php?id=1484 | Boiling Point is our completely revised report about the threats to sea turtles from global warming and climate change. It was released during international negotiations in Copenhagen by Sea Turtle Restoration Project. See the summary below or download the full report in PDF format.
Climate change due to global warming is a triple whammy for sea turtles and their unique life cycle:
1. Rises in ocean levels mean that sandy beaches where sea turtles lay their eggs are getting submerged under waves and water. This prevents adult sea turtles from returning to the beaches where they hatched to repeat their ancient nesting ritual.
2. Hotter sand temperatures result in mostly female sea turtle hatchlings. Without enough males, the species cannot survive. And if the nest sand is much too hot, no eggs will hatch at all.
3. Changes to ocean currents, temperature and acidification are likely to throw sea turtles far off normal migrations and alter food availability and abundance.
If there is no action to curb greenhouse gas emissions, could out-of-control global warming lead to the submersion of the famous mermaid statue in Copenhagen's harbor and sea turtles summering in Denmark?
CLIMATE CHANGE TAKING A TOLL
Leatherback sea turtles have been named one of the Top 10 species most threatened by climate change in the U. S. in a report titled America's Hottest Species. Even now, we are beginning to see signs that increased global temperatures will have a devastating impact on sea turtles.
Conclusions and Recommendations
Climate change will have an impact on sea turtle populations and the people who share beaches and waters with them. This impact is magnified by the continued threats to sea turtles and human communities from industrial fishing, coastal development, and unsustainable direct harvest.
There are two main ways to reduce the impacts of climate change: reduce climate change emissions and strengthen the ability of endangered sea turtles and their ecosystems to survive climate change.
To reduce climate emissions, the U. S. and partner nations should lead the way to:
- Convince wealthy industrialized countries (listed in Annex I) to agree to at least 40 percent cuts in emissions domestically by 2020, by using green energy, sustainable transport and farming and cutting energy demand.
- Adopt a moratorium and phase-out of coal-fired power stations.
- Analyze and drastically reduce the global warming impacts of industrialized fisheries, particularly wasteful longline fisheries benefitting from fuel subsidies and government grants.
- Implement a carbon tax and 100% dividend, as defined by NASA scientist James Hansen as a mechanism for putting a price on carbon without raising money for government coffers. The idea is to tax carbon at source, then redistribute the revenue equally among taxpayers, so high carbon users are penalized while low carbon users are rewarded.
- Not allow cuts to be achieved by buying carbon credits from developing countries or by buying forest in developing countries to 'offset' ongoing emissions in the industrialized world.
- Fund, develop and promote increased energy efficiency.
- Commit rich countries to providing additional money for developing countries to grow in a clean way, and to cope with the floods, droughts and famines caused by climate change, while ensuring that this money is distributed fairly and transparently.
- Consider and protect the rights of indigenous people and communities in all climate change actions and policies, ensuring climate justice.
To strengthen the ability of endangered sea turtles to survive climate change, the U. S. should:
- Require analysis, regulation, avoidance, and mitigation of greenhouse gas and global warming impacts under existing environmental laws.
- Shift national endangered species conservation strategies to address the overarching threat of global warming.
- Develop new laws to reduce greenhouse gas emissions and mitigate global warming impacts on biodiversity.
- Analyze, regulate, avoid and mitigate greenhouse gas and global warming impacts of U. S. fisheries, and evaluate and prevent those impacts on sea turtles and other species, wasteful longline and shrimp trawl fisheries in particular.
- Respond to the TIRN petition to ban the import of swordfish that does not meet minimal U. S. fishing regulations.
- Establish critical habitat for the Western Pacific leatherback along the U. S. West Coast to protect key migratory and foraging habitat, which is required by law and has never been done.
- Designate North Pacific loggerheads (which nest in Japan but forage in Southern California and Baja) as a distinct population and strengthen their status from threatened to endangered under the Endangered Species Act.
- Designate Western North Atlantic loggerheads (which nest in Florida and Georgia) as a distinct population, strengthen their status from threatened to endangered under the Endangered Species Act, and increase protections in the loggerheads' key nesting beaches and marine habitats.
- Establish a permanent, year-round no-trawl marine reserve along the South Texas coast to ensure long-term survival of the Kemp's ridley sea turtle.
- Extend beach protections to include buffer zones in dune habitat to accommodate rise in sea levels.
- Increase protections for all critical sea turtle nesting, foraging and migratory habitat, including the implementation of marine protected areas and time/area closures.
- Reduce impacts of non-climate related threats, such as bycatch in industrial fishing gear, plastic bag pollution, and development on critical nesting beaches | 2026-02-02T03:42:51.372392 |
1,162,790 | 4.224081 | http://www.economist.com/node/8447539 | The Argus eyes of stargazing
Ever larger telescopes are planned to study the heavens in ever more detail, but with a twist
SIZE matters, at least in astronomy. Large telescopes are able to detect fainter objects than their smaller counterparts can because they gather more light. They also produce crisper images because they can resolve smaller details of the objects they are pointed at. Although space-based telescopes avoid distortion caused by the atmosphere and can thus discern finer details than their Earthbound equivalents, they are doomed to have small mirrors because rockets can carry only so much weight into space. So, if the true nature of the universe—including the composition of dark matter and dark energy, the two great known unknowns of cosmology—is to be elucidated, bigger land-based equipment is needed.
Nearer to home, big telescopes would also be able to look for Earth-sized planets around stars other than the sun. Today's machines have identified more than 200 “extrasolar” planets, but these are all rather bigger than Earth. Not only might a suitable large telescope locate Earth's cousins, but it should also be able to study their atmospheres. That, in turn, would give clues as to whether they harboured life, since a chemically unstable atmosphere (such as one rich in a reactive gas like oxygen) is evidence suggesting biochemical activity.
The problem is that big telescopes are hard to make. The crucial component, the mirror that gathers the light, is more liable to distortion the bigger it gets. The modern fashion, therefore, is to make telescope mirrors smaller, and then get them to collaborate.
Mine's bigger than yours
One way of doing this is to build a series of independent telescopes and point them all in the same direction. Such an arrangement provides a resolution (though not a light-gathering power) equivalent to a single mirror with a diameter equal to the distance between the two telescopes that are farthest apart in the array.
This technique has been applied for a while to radio telescopes. The problem with extending it to shorter wavelengths, such as visible light, is that the accuracy with which the instruments have to be pointed is related to the wavelength they are looking in. Radio astronomers, whose wavelengths are measured in metres, can afford to be sloppy. Optical astronomers, whose wavelengths are measured in billionths of a metre, cannot.
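To put a rough number on that difference, the diffraction limit scales directly with wavelength. Below is a minimal sketch using the standard Rayleigh criterion (theta is approximately 1.22 times wavelength over aperture); the wavelengths, aperture and baseline figures are illustrative assumptions, not numbers from the article.

```python
import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion), in arcseconds."""
    return math.degrees(1.22 * wavelength_m / aperture_m) * 3600

# Illustrative choices: metre-scale radio waves on a 100 m baseline,
# versus 550 nm visible light on a 10 m mirror.
print(f"radio, 1 m waves, 100 m baseline : {resolution_arcsec(1.0, 100.0):8.1f} arcsec")
print(f"visible, 550 nm, 10 m mirror     : {resolution_arcsec(550e-9, 10.0):8.4f} arcsec")
```

For the same aperture or baseline, the figures differ by roughly a factor of a million, which is why the pointing and path-matching tolerances that are forgiving at radio wavelengths become prohibitive at optical ones.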
The compromise today is to look at microwaves—the part of the spectrum that lies between radio waves and light. Astronomers in America, Europe and Japan are collaborating on the biggest microwave array to be built so far, the Atacama Large Millimetre Array now under construction in Chile. It will have up to 64 “mirrors” (actually dish-like antennae that are large versions of the sort of thing used to receive satellite television). The advantage of microwaves, other than the ease of building large telescopic arrays to look at them, is that they can pass through the interstellar dust that obscures much of the universe, and thus illuminate processes invisible to optical-wavelength astronomers.
Though optical astronomers cannot yet manage this array-building trick, they, too, have worked out how to benefit from making small mirrors collaborate. The difference is that they put the small mirrors together to form a big one inside a single instrument.
Doing that allows them to build very big mirrors indeed. The largest single-element optical telescope at the moment—confusingly called the Large Binocular Telescope, but each of its two mirrors is made as a single piece—has mirrors 8.4 metres across. The two Keck telescopes on Hawaii and the South African Large Telescope, which use mosaics of hexagonal sub-mirrors, have systems up to 11 metres across.
Even these, however, are minnows compared with what is being planned. America's National Science Foundation is now evaluating two competing designs: the Giant Magellan Telescope, some 24 metres across, and the self-explanatory Thirty Metre Telescope. A European Extremely Large Telescope is also on the drawing board. Late last month a draft design for this, with a mirror size of between 30 metres and 60 metres, was unveiled. It is based in part on what was known as the Overwhelmingly Large Telescope, an earlier venture that was abandoned after it turned out to have overwhelmingly large costs, as well.
All these telescopes will be built atop mountains in dry areas, to get as close to outer space as possible, and thus minimise the layer of air and water vapour between the sensors and the stars. Even then, achieving high resolution requires “adaptive optics”—electronic wizardry designed to undo the distorting effects of the remaining atmosphere above the instrument in question. The idea is to monitor a reference star and, by subtly adjusting the shape of the mirror, to keep this star in focus no matter what the weather. One innovation to be tested by the European Extremely Large Telescope would be to create an artificial reference star by firing a laser into the night sky. That would be a boon to those wishing to study parts of the sky that are normally void of such objects.
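The correction loop at the heart of adaptive optics can be pictured with a toy model: a sensor watches the reference star's apparent position, and the mirror is nudged against whatever error is measured, cycle after cycle. The sketch below is a deliberately crude one-axis stand-in with invented gains and noise scales, not a model of any real instrument.

```python
import random

random.seed(1)

atmosphere = 0.0   # true wavefront tilt, drifting randomly (arbitrary units)
correction = 0.0   # tilt currently applied by the deformable mirror
gain = 0.7         # fraction of the measured error removed per cycle (invented)

residuals = []
for cycle in range(1000):
    atmosphere += random.gauss(0.0, 0.05)  # the atmosphere drifts a little
    measured = atmosphere - correction     # reference star's apparent offset
    correction += gain * measured          # push the mirror against the error
    residuals.append(abs(atmosphere - correction))

print(f"mean residual with the loop on: {sum(residuals) / len(residuals):.3f}")
# With the loop off (correction held at 0), the drift accumulates unchecked.
```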
The new generation of part-work telescopes would operate in collaboration with space-based astronomy, of course. Just a few weeks ago NASA, America's space agency, announced that it would upgrade the Hubble space telescope. It also has plans for a new device, dubbed the James Webb space telescope and due to be launched in 2013. But modern ground-based telescopes can complement such observatories, often achieving more and costing less. Mountaintop astronomy is entering a new golden era. | 2026-02-05T06:55:12.279302 |
916,599 | 3.741937 | http://www.classicist.org/publications-and-bookshop/handbook/ionic-order/ | HANDBOOK OF THE CLASSICAL TRADITION
The Ionic Column
The Ionic column shown in the plate illustrates the Attic Base which is commonly used in the Ionic order. This base has an extra torus or "attic" above the lower torus. Both tori are separated by two fillets and a scotia.
As with the Doric order, the base is 1/2 D high and has an 8/6 D wide plinth. The plinth is 1/6 D high, or 1/3 the height of the base. A large torus sits above the plinth and has a fillet at its centerline. To draw the scotia, it is best to first construct the smaller torus with its flanking fillets and then place a swooping curve between the offset fillets of the two tori. Note that the upper fillet above the smaller torus is part of the shaft and not the base.
The shaft is divided into 24 semicircular hollows or flutes which are separated by fillets. Each flute is four times wider than the fillet. When drawn freehand, the flute is about 1/9 D.
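As a quick check on that freehand figure, one can compute the flute width implied by 24 four-to-one flute/fillet units around a circle of diameter D. This treats the shaft at the measured level as a full circle of diameter D, an approximation, since the shaft tapers.

```python
import math

D = 1.0
unit = math.pi * D / 24   # one flute-plus-fillet unit of the 24 around the shaft
flute = unit * 4 / 5      # the flute is four times as wide as the fillet
fillet = unit / 5

print(f"flute  = {flute:.4f} D  (about 1/{1 / flute:.1f} D)")  # ~0.105 D, i.e. ~1/9.5 D
print(f"fillet = {fillet:.4f} D")
```

The result, roughly 1/9.5 D, agrees with the "about 1/9 D" rule of thumb given above.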
The Ionic Capital is shown in front elevation, which shows the scroll volutes in elevation. From the top of the abacus to the bottom of the volutes is slightly more than 1/2 D.
Drawing of the capital is facilitated by the use of a dashed line on each side of the column centerline, spaced to the width of the lower diameter (D). From that line, the square abacus projects 1/18 D. The volutes below the abacus follow the geometry of the abacus and help visually mediate between the round echinus and astragal below. On the left side of the capital, a section through the capital at the centerline shows the horizontal channel of the volutes, which is straight in relation to the rounded elements.
The eye of the volute is centered down 1/3 D from the top of the abacus and across 1/2 D from the center line of the column; its diameter is 1/18 D. Drawing the volutes with precision can be accomplished by following the diagram on the succeeding plate. If the volutes are drawn by hand at a small scale, they can be drawn by creating a series of half circles which get progressively smaller as they move closer to the eye. There is a space of 2/3 D between each volute.
Horizontal dimensions for the echinus and astragal can be carried over from the section to construct these elements in elevation. The echinus almost always has the egg and dart or egg and talon ornamentation. There are generally three eggs visible. Sprigs of honeysuckle fill in the gap between the end eggs and the volute.
The Ionic Entablature
The Ionic entablature can be divided into 18 parts to derive the heights of its main components. The cornice is 7 parts (7/8 D), the frieze is 6 parts (6/8 D), and the architrave is 5 parts (5/8 D). The total height of the entablature is therefore 2 1/4 D.
The cornice projects out from the entablature 7/8 D, which is the same dimension as its height. Like the mutulary Doric Order, the cymatium consists of a large cyma recta followed by a smaller cyma reversa. The corona sits below the cymatium and should be undercut with a drip to soften the appearance of its underside and protect the building façade from water.
A large bed mold with an ovolo, dentils, and a cyma reversa sits below the corona. To determine the location of the ovolo in relation to the corona, one can first draw the dentils starting with the one centered above the column. Each dentil is 1/6 D high by 1/9 D wide. The space between dentils, or interdentil, is 1/18 D so that the width of the dentil plus the interdentil equals 1/6 D. The rightmost of the dentils is a double dentil without a space which indicates the dentil as it turns the corner. Above the projection of the fillet of the double dentil is the start of the ovolo.
The frieze is often straight but sometimes pulvinated, or bowed. A 60 degree equilateral triangle defines the pulvination and is shown as a dashed line.
The architrave has an upper and lower fascia. At the top of the upper fascia is a fillet and cyma reversa which projects from the architrave the same dimension as its width. The upper fascia is wider than the lower one. A small ovolo and fillet separate the two fascias.
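The proportional scheme above reduces to a few fractions of D, which can be restated and checked mechanically. The snippet below simply re-derives the figures already given in the text, including the dentil spacing.

```python
from fractions import Fraction

# The entablature is 18 parts of 1/8 D each, split 7 / 6 / 5 among
# cornice, frieze, and architrave.
PART = Fraction(1, 8)  # of D
for name, n in [("cornice", 7), ("frieze", 6), ("architrave", 5)]:
    print(f"{name:10s}: {n} parts = {n * PART} D")
print(f"total     : {18 * PART} D")  # 9/4 D, i.e. 2 1/4 D

# A dentil (1/9 D) plus an interdentil (1/18 D) should come to 1/6 D.
dentil, interdentil = Fraction(1, 9), Fraction(1, 18)
assert dentil + interdentil == Fraction(1, 6)
```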
The Ionic Volute Construction
The volute may be constructed at a large scale using the following steps; a numerical sketch of the same construction follows the steps:
1. Divide the height of the volute into 8 equal parts. From the top of the volute come down to the fifth division and draw a circle whose diameter is equal to the height of the division. The circle represents the eye of the volute, which has a diameter of 1/18 D.
2. Inscribe a square within the circle rotated 45 degrees. Divide the square into four equal quadrants as shown in the eye diagram. Divide each line which defines the quadrants into three equal divisions.
3. Label each point on the line with a number. Start with 1 at the upper right line and move counter clockwise around the square to number 4. Next move inside the square to points 5, 6, 7 and 8. Finally, label the inner divisions 9, 10, 11 and 12. Each point is where one places the compass point to construct the volute.
4. Starting with point 1 as the compass point, move the drawing lead directly vertical to the top of the volute and swing the compass 90 degrees counter clockwise until one reaches point 2. The compass point is then placed on point 2. From point 2, one moves the compass 90 degrees down to point 3. From point 3, the compass moves up to the right to point 4. This exercise is repeated until one reaches point 12. As one moves to point 12, the radii will become increasingly smaller. The final rotation from point 12 will bring the final arc into the eye.
5. To draw the fillet of the volute, mark a quarter division within each of the existing divisions on the eye diagram. The quarter point should be set toward the face of the circle. Starting with the quarter division between points 5 and 1, swing the compass to the quarter division between 2 and 6. Continue moving the arcs counter clockwise with the quarter division at point 12 and the center of the circle being the last compass point. | 2026-02-01T13:52:03.230605 |
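Because the construction is easier to verify numerically than to follow in prose, here is a minimal computational sketch of a 12-centre volute. The first radius (4.5 divisions) and the 1/3-division step per quarter turn are assumptions chosen to agree with the dimensions stated above (a volute height of 8 divisions and an eye diameter of 1/18 D), not a tracing of the plate's exact eye diagram.

```python
import math

D = 1.0
d = D / 18            # one "division": the diameter of the eye
r = 4.5 * d           # assumed first compass radius (volute top to eye centre)
step = d / 3          # assumed radius lost at each of the 12 quarter turns

cx, cy = 0.0, 0.0     # current compass point, starting at the eye centre
px, py = cx, cy + r   # drawing lead starts directly above, at the volute top
points = [(px, py)]

for arc in range(12):
    start = math.atan2(py - cy, px - cx)
    for t in range(1, 16):                    # sample one 90-degree sweep
        a = start - (math.pi / 2) * (t / 15)  # clockwise, curling inward
        points.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    px, py = points[-1]
    # Move the compass point along the line towards the arc's endpoint. The
    # new centre sits on that line, so consecutive arcs share a tangent.
    ux, uy = (px - cx) / r, (py - cy) / r
    cx, cy = cx + step * ux, cy + step * uy
    r -= step

print(f"final radius: {r / d:.2f} division(s)")  # 0.50: the curve meets the eye
```

The comment in the loop is the essential property the eye diagram secures: each new compass point lies on the line from the previous centre through the arc's endpoint, so the joined quarter circles read as one smooth spiral.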
551,190 | 3.676669 | http://www.reference.com/browse/lords-day | Modern Presbyterianism traces its institutional roots back to the Scottish Reformation. Local congregations are governed by Sessions made up of representatives of the congregation, a conciliar approach which is found at other levels of decision-making (Presbytery, Synod and General Assembly). Theoretically, there are no bishops in Presbyterianism; however, some groups in Eastern Europe, and in ecumenical groups, do have bishops. The office of elder is another distinctive mark of Presbyterianism: these are specially commissioned non-clergy who take part in local pastoral care and decision-making at all levels.
The roots of Presbyterianism lie in the European Reformation of the 16th century, with the example of John Calvin's Geneva being particularly influential. Most Reformed churches who trace their history back to Britain are either Presbyterian or Congregationalist in government. Presbyterian theology typically emphasizes the sovereignty of God, a high regard for the authority of the Bible, and an emphasis on the necessity of grace through faith in Christ.
In the twentieth century, some Presbyterians have played an important role in the Ecumenical Movement, including the World Council of Churches. Many Presbyterian denominations have found ways of working together with other Reformed denominations and Christians of other traditions, especially in the World Alliance of Reformed Churches. Some Presbyterian Churches have entered into unions with other churches, such as Congregationalists, Lutherans, Anglicans, and Methodists. However, others are more conservative, holding traditional interpretations of doctrines and shunning, for the most part, relations with non-Reformed bodies.
Presbyterian denominations derive their name from the Greek word presbuteros (πρεσβύτερος), which means "elder" (used of church elders in Acts 14:23, 20:17, and Titus 1:5).
Among the early church fathers, it was noted that the offices of elder and bishop were identical, and weren't differentiated until later, and that plurality of elders was the norm for church government. St. Jerome (347-420) "In Epistle Titus", vol. iv, said, "Elder is identical with bishop, and before parties multiplied under diabolical influence, Churches were governed by a council of elders." This observation was also made by Chrysostom (349-407) in "Homilia i, in Phil. i, 1" and Theodoret (393-457) in "Interpret ad. Phil. iii", 445.
Presbyterianism was first described in detail by Martin Bucer of Strasbourg, who believed that the early Christian church implemented presbyterian polity. The first modern implementation was by the Geneva church under the leadership of John Calvin in 1541.
The Glorious Revolution of 1688 and the Acts of Union 1707 between Scotland and England guaranteed the Church of Scotland's form of government. However, legislation by the United Kingdom parliament allowing patronage led to splits in the Church, notably the Disruption of 1843 which led to the formation of the Free Church of Scotland. Further splits took place, especially over theological issues, but most Presbyterians in Scotland were reunited by the 1929 union of the established Church of Scotland and the United Free Church of Scotland. In England, Presbyterianism was established in secret in 1572. Thomas Cartwright is thought to be the first Presbyterian in England. Cartwright's controversial lectures at Cambridge University condemning the episcopal hierarchy of the Elizabethan Church led to his being deprived of his post by Archbishop John Whitgift and to his emigration abroad. In 1647, by an act of the Long Parliament under the control of Puritans, the Church of England permitted Presbyterianism. The re-establishment of the monarchy in 1660 brought the return of Episcopal church government in England (and in Scotland for a short time); but the Presbyterian church in England continued in non-conformity, outside of the established church. By the 19th century many English Presbyterian congregations had become Unitarian in doctrine.
A number of new Presbyterian Churches were founded by Scottish immigrants to England in the 19th century and later. Following the 'Disruption' in 1843 many of those linked to the Church of Scotland eventually joined what became the Presbyterian Church of England in 1876. Some, that is Crown Court (Covent Garden, London), St Andrew's (Stepney, London) and Swallow Street (London), did not join the English denomination, which is why there are Church of Scotland congregations in England such as those at Crown Court, and St Columba's, Pont Street (Knightsbridge) in London.
In 1972, the Presbyterian Church of England (PCofE) united with the Congregational Church in England and Wales to form the United Reformed Church (URC). Among the congregations the PCofE brought to the URC were Tunley (Lancashire), Aston Tirrold (Oxfordshire) and John Knox Presbyterian Church, Stepney, London (now part of Stepney Meeting House URC) - these are among the sole survivors today of the English Presbyterian churches of the 17th century. The URC also has a presence in Scotland, mostly of former Congregationalist Churches. Two former Presbyterian congregations, St Columba's, Cambridge (founded in 1879), and St Columba's, Oxford (founded as a chaplaincy by the PCofE and the Church of Scotland in 1908 and as a congregation of the PCofE in 1929), continue as congregations of the URC and university chaplaincies of the Church of Scotland.
In recent years a number of smaller denominations adopting Presbyterian forms of church government have organised in England, including the International Presbyterian Church planted by evangelical theologian Francis Schaeffer of L'Abri Fellowship in the 1970s, and the Evangelical Presbyterian Church in England and Wales founded in the North of England in the late 1980s. In Wales, Presbyterianism is represented by the Presbyterian Church of Wales, which was originally composed largely of Calvinistic Methodists. In Ireland, Presbyterianism arrived with Scottish settlers in Ulster, who had been strongly encouraged to emigrate by James VI of Scotland, later James I of England. An estimated 100,000 Scottish Presbyterians moved to the northern counties of Ireland between 1607 and the Battle of the Boyne in 1690. The Presbytery of Ulster was formed separately from the established church, in 1642. Presbyterians, along with Roman Catholics in Ulster and the rest of Ireland, suffered under the discriminatory Penal Laws until they were revoked in the early 19th century. Presbyterianism is represented in Ireland by the Presbyterian Church in Ireland.
Even before Presbyterianism spread abroad from Scotland there were divisions in the larger Presbyterian family, some of which later rejoined only to separate again. In what some interpret as rueful self-reproach, some Presbyterians refer to the divided Presbyterian churches as the "Split P's".
In North America, because of past--or current--doctrinal differences, Presbyterian churches often overlap, with congregations of many different Presbyterian groups in any one place. The largest Presbyterian denomination in the United States is the Presbyterian Church (U.S.A.) (PC(USA)). Other Presbyterian bodies in the United States include the Presbyterian Church in America, the Orthodox Presbyterian Church, the Evangelical Presbyterian Church, the Reformed Presbyterian Church, the Bible Presbyterian Church, the Associate Reformed Presbyterian Church (ARP Synod), the Cumberland Presbyterian Church, the Westminster Presbyterian Church in the United States (WPCUS), and the Reformed Presbyterian Church in the United States (RPCUS). All the latter bodies, with perhaps the exception of the Cumberland Presbyterians, are theologically conservative and profess some degree of evangelicalism.
The territory around Charlotte, North Carolina is historically the greatest concentration of Presbyterianism in the Southern U.S., while an almost-identical geographic area around Pittsburgh, Pennsylvania contains probably the largest number of Presbyterians in the entire nation. With their members' traditional stress on higher education, the largest Presbyterian congregations can often be found in affluent, prestigious "uptown" suburbs of American cities.
The PC (USA), beginning with its predecessor bodies, has, in common with other so-called "mainline" Protestant denominations, experienced a significant decline in members in recent years; some estimates have placed that loss at nearly half in the last forty years.
In Canada, the largest Presbyterian denomination – and indeed the largest Protestant denomination – was the Presbyterian Church in Canada, formed in 1875 with the merger of four regional groups. In 1925, the United Church of Canada was formed with the Methodist Church, Canada, and the Congregational Union of Canada. A sizable minority of Canadian Presbyterians, primarily in southern Ontario but also throughout the entire nation, withdrew, and reconstituted themselves as a non-concurring continuing Presbyterian body. They regained use of the original name in 1939.
Presbyterianism arrived in Latin America in the 19th century. The biggest Presbyterian church is the Presbyterian Church of Brazil (Igreja Presbiteriana do Brasil), which has around five hundred thousand members. In total, there are more than one million Presbyterian members in all of Latin America. Some Latin Americans in North America are active in the Presbyterian Cursillo Movement. In Africa, the Presbyterian Church of East Africa, based in Kenya, is particularly strong, with 500 clergy and 4 million members. African Presbyterian churches often incorporate diaconal ministries including social services, emergency relief, and the operation of mission hospitals. A number of partnerships exist between presbyteries in Africa and the PC(USA), including specific connections with Lesotho, Malawi, South Africa, and Ghana. For example, the Lackawanna Presbytery, located in Northeastern Pennsylvania, has a partnership with a presbytery in Ghana. In South Korea, a congregation in Seoul, Myungsung Presbyterian Church, claims to be the largest Presbyterian Church in the world. Presbyterians are the largest Protestant denomination in that country, and there are many Korean Presbyterians in the United States, either with their own church sites or sharing space in pre-existing churches.
But prior to Mizoram, the Welsh Presbyterians (missionaries) started venturing into the north-east of India through the Khasi Hills (presently located within the state of Meghalaya in India) and established Presbyterian churches all over the Khasi Hills from the 1840s onwards. Hence there is a strong presence of Presbyterians in Shillong (the present capital of Meghalaya) and the areas adjoining it. The Welsh missionaries built their first church in Cherrapunji (aka Sohra) in 1846, which is also in Meghalaya and is renowned for being the wettest place on earth.
In Taiwan, the Presbyterian Church in Taiwan has been an important supporter of the use of Taiwanese languages (as opposed to Mandarin Chinese, which has become dominant since the Nationalists fled to the island) as a consequence of its advocacy of vernacular scriptures and worship services. In New Zealand, Presbyterianism is the dominant denomination in Otago and Southland due largely to the rich Scottish and to a lesser extent Ulster-Scots heritage in the region. The area around Christchurch, Canterbury, is dominated philosophically by the Anglican (Episcopalian) denomination.
Originally there were two branches of Presbyterianism in New Zealand, the northern Presbyterian church which existed in the North Island and the parts of the South Island north of the Waitaki River, and the Synod of Otago and Southland, founded by Free Church settlers in southern South Island. The two churches merged in 1901, forming what is now the Presbyterian Church of Aotearoa New Zealand.
In Australia, Presbyterianism is the fourth largest denomination of Christianity with nearly 720,000 Australians claiming to be Presbyterian in the 2001 Commonwealth Census. Presbyterian churches were founded in each colony, some with links to the Church of Scotland and others to the Free Church, including a number founded by John Dunmore Lang. Some of these bodies merged in the 1860s. In 1901 the churches linked to the Church of Scotland in each state joined together forming the Presbyterian Church of Australia but retaining their state assemblies.
In 1977, two thirds of the Presbyterian Church of Australia, along with the Congregational Union of Australia and the Methodist Church of Australasia, combined to form the Uniting Church in Australia. The majority of the other third did not join due to disagreement with the Uniting Church's liberal views, though a portion remained due to cultural attachment.
The Presbyterian Church of Vanuatu is the largest denomination in the country, with approximately one-third of the population of Vanuatu members of the church. The PCV (Presbyterian Church of Vanuatu) is headed by a moderator with offices in Port Vila. The PCV is particularly strong in the provinces of Tafea, Shefa, and Malampa. The Province of Sanma is mainly Presbyterian with a strong Roman Catholic minority in the Francophone areas of the province. There are some Presbyterian people, but no organised Presbyterian churches, in Penama and Torba, both of which are traditionally Anglican. Vanuatu is the only country in the South Pacific with a significant Presbyterian heritage and membership. The PCV is a founding member of the Vanuatu Christian Council (VCC). The PCV runs many primary schools and Onesua secondary school. Although the church has lost several members due to the encroachment of American fundamentalist sects, the church is still strong, especially in the rural villages. The PCV was taken to Vanuatu by missionaries from Scotland.
Presbyterians place great importance upon education and continuous study of the scriptures, theological writings, and understanding and interpretation of church doctrine embodied in several statements of faith and catechisms formally adopted by various branches of the church [often referred to as 'subordinate standards'; see Doctrine (below)]. It is generally considered that the point of such learning is to enable one to put one's faith into practice; many Presbyterians exhibit their faith in action as well as words, by generosity, hospitality, and the constant pursuit of social justice and reform, as well as proclaiming the gospel of Christ.
Ruling elders are usually laymen (and laywomen in some denominations) who are elected by the congregation and ordained to serve with the teaching elders, assuming responsibility for nurture and leadership of the congregation. Often, especially in larger congregations, the elders delegate the practicalities of buildings, finance, and temporal ministry to the needy in the congregation to a distinct group of officers (sometimes called deacons, which are ordained in some denominations). This group may variously be known as a 'Deacon Board', 'Board of Deacons' 'Diaconate', or 'Deacons' Court'.
Above the sessions exist presbyteries, which have area responsibilities. These are composed of teaching elders and ruling elders from each of the constituent congregations. The presbytery sends representatives to a broader regional or national assembly, generally known as the General Assembly, although an intermediate level of a synod sometimes exists. This congregation / presbytery / synod / general assembly schema is based on the historical structure of the larger Presbyterian churches, such as the Church of Scotland or the Presbyterian Church (USA) (PCUSA); some bodies, such as the Presbyterian Church in America and the Presbyterian Church in Ireland, skip one of the steps between congregation and General Assembly, and usually the step skipped is the Synod. The Church of Scotland has now abolished the Synod.
Presbyterian governance is practised by Presbyterian denominations and also by many other Reformed churches.
Presbyterianism is historically a confessional tradition, which means that the doctrines taught in the church are compared to a doctrinal standard. However, there has arisen a spectrum of approaches to "confessionalism." The manner of subscription, or the degree to which the official standards establish the actual doctrine of the church, turns out to be a practical matter. That is, the decisions rendered in ordination and in the courts of the church largely determine what the church means, representing the whole, by its adherence to the doctrinal standard.
Some Presbyterian traditions adopt only the Westminster Confession of Faith, as the doctrinal standard to which teaching elders are required to subscribe, in contrast to the Larger and Shorter catechisms, which are approved for use in instruction. Many Presbyterian denominations, especially in North America, have adopted all of the Westminster Standards as their standard of doctrine which is subordinate to the Bible. These documents are Calvinistic in their doctrinal orientation, although some versions of the Confession and the catechisms are more overtly Calvinist than some other, later American revisions. The Presbyterian Church in Canada retains the Westminster Confession of Faith in its original form, while admitting the historical period in which it was written should be understood when it is read.
The Westminster Confession is 'The principal subordinate standard of the Church of Scotland' (Articles Declaratory of the Constitution of the Church of Scotland II), but 'with due regard to liberty of opinion in points which do not enter into the substance of the Faith' (V). This formulation represents many years of struggle over the extent to which the confession reflects the Word of God and the struggle of conscience of those who came to believe it did not fully do so (e.g., William Robertson Smith). Some Presbyterian churches, such as the Free Church of Scotland, have no such 'conscience clause'. For more detail, see the article on the Church of Scotland.
The Presbyterian Church (USA) has adopted the Book of Confessions, which includes other Reformed confessions in addition to the Westminster documents. These other documents include ancient creedal statements (the Nicene Creed, the Apostles' Creed), 16th-century Reformed confessions (the Scots Confession, the Heidelberg Catechism, the Second Helvetic Confession, all of which were written before Calvinism had developed as a particular strand of Reformed doctrine), and 20th-century documents (the Theological Declaration of Barmen and the Confession of 1967).
The Presbyterian Church in Canada developed the confessional document Living Faith and retains it as a subordinate standard of the denomination. It is confessional in format, yet, like the Westminster Confession, it draws attention back to the original text of the Bible.
Presbyterians in Ireland who rejected Calvinism and the Westminster Confessions formed the Non-subscribing Presbyterian Church of Ireland.
Presbyterian denominations that trace their heritage to the British Isles usually organise their church services according to the principles in the Directory of Public Worship, developed by the Westminster Assembly in the 1640s. This directory documented Reformed worship practices and theology adopted and developed over the preceding century by British Puritans, initially guided by John Calvin and John Knox. It was enacted as law by the Scottish Parliament, and became one of the foundational documents of Presbyterian church legislation elsewhere.
Historically, the driving principle in the development of the standards of Presbyterian worship has been the regulative principle of worship, which specifies that, in worship, what is not commanded is forbidden.
Presbyterians traditionally have held the position that there are only two sacraments: Baptism and the Lord's Supper (Communion).
Over subsequent centuries, many Presbyterian churches modified these prescriptions by introducing non-biblical hymns, instrumental accompaniment, and ceremonial vestments into worship. Still, there is no set-in-stone "Presbyterian" worship style. Although there are set services for the "Lord's Day", one can find services ranging from low church (semi- or non-liturgical) to "high church" (highly liturgical, bordering on Lutheran and Episcopalian practice).
Confession of Faith: | 2026-01-26T22:02:48.091707 |
512,956 | 3.774597 | http://thexorb.com/Finance/Balance/BalanceSheet.aspx | What is a Balance Sheet? The balance sheet, or statement of financial position, is the financial statement that summarizes a company’s financial balances at a single point in time. Note that the balance sheet is not created over a period of time but rather at a single point in time. The balance sheet summarizes a company’s assets, liabilities, and shareholders’ equity. It provides investors with the knowledge of what a company owns versus what it owes. The balance sheet is an elaborated version of the accounting equation, which states that assets equal liabilities plus shareholders’ equity. In other words, assets minus liabilities is a company’s net worth, or shareholders’ equity.
Assets = Liabilities + Owner’s Equity
A balance sheet has three components: assets, liabilities, and owners’ equity. Each component is described below.
What are assets? Assets are anything of value that an organization owns, such as plant, land, and equipment. Assets can be either tangible or intangible. Tangible assets are anything that we can touch or feel, such as a company’s machinery. Intangible assets are anything that we cannot touch, such as goodwill and a knowledge base. Along with the tangible/intangible distinction, assets are presented on the balance sheet in two categories: current assets and long-term assets. Current assets are any assets that a company can convert to cash within twelve months; cash, accounts receivable, inventory, and raw materials are an organization’s current assets. Long-term assets are assets that cannot be converted into cash within twelve months; land, plant, and equipment are an organization’s long-term assets. Generally, the asset section starts with the most liquid assets.
Liabilities are anything that an organization owes. Similar to assets, liabilities can be classified into two types: current liabilities and long-term liabilities. Current liabilities can be paid off within twelve months; accounts payable, employee salaries, and sales taxes are considered current liabilities. Long-term liabilities are liabilities that require longer than twelve months to be paid off; mortgages and long-term loans are considered long-term liabilities.
Owners’ equity is the owners’ or shareholders’ residual claim on the business, equal to assets minus liabilities; it is also known as the net worth of a company. It usually consists of the amount the owners have funded to the company plus any retained earnings. Use the template above to construct a balance sheet.
Below is an example of a balance sheet created using the template above. You can use the balance sheet template above, print it, and turn it in as homework.
Balance Sheet Figure/Template
The balance sheet allows us to calculate various ratios, such as the debt-to-equity ratio, the debt ratio, and return on equity; a minimal worked sketch follows below. Use the ratio section within this site to calculate various types of ratios.
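To make the arithmetic concrete, here is a minimal sketch in Python (the figures and variable names are invented for illustration and do not come from this page):

```python
# Minimal balance-sheet sketch with invented figures.
current_assets = 40_000         # cash, accounts receivable, inventory
long_term_assets = 60_000       # land, plant, equipment
current_liabilities = 25_000    # accounts payable, salaries, sales taxes
long_term_liabilities = 35_000  # mortgage, long-term loans
net_income = 9_000              # used for return on equity

total_assets = current_assets + long_term_assets
total_liabilities = current_liabilities + long_term_liabilities
# Rearranged accounting equation: Owners' Equity = Assets - Liabilities
owners_equity = total_assets - total_liabilities

print(f"Owners' equity:   {owners_equity:,}")                       # 40,000
print(f"Debt ratio:       {total_liabilities / total_assets:.2f}")  # 0.60
print(f"Debt-to-equity:   {total_liabilities / owners_equity:.2f}") # 1.50
print(f"Return on equity: {net_income / owners_equity:.1%}")        # 22.5%
```

Because owners’ equity is derived from the accounting equation, the sheet balances by construction; the ratios then follow from simple division.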
Income Statement – Build an income statement, print it, and turn it in as homework.
Debt Ratio – Calculate debt ratio. Use the template to automatically calculate your homework.
Debt to equity ratio – calculate debt to equity ratio. Use the template to do your homework.
| 2026-01-26T06:50:35.300124 |
456,348 | 3.647426 | http://www.nss.org/settlement/nasa/spaceresvol4/lifesupport.html | The development of a controlled ecological life support system (CELSS) is necessary to enable the extended presence of humans in space, as on the Moon or on another planetary body. Over a long period, the provision of oxygen, water, and food, and protection from such inimical agents as radiation and temperature extremes, while maintaining the psychological health of the subjects, becomes prohibitively expensive if all supplies must be brought from Earth. Thus, some kind of a regenerative life support system within an enclosure or habitat must be established, thereby cutting the umbilicus to Mother Earth, but not irreversibly. This protective enclosure will enable the survival and growth of an assemblage of terrestrial species of microorganisms, plants, and animals. I envision that the nonterrestrial ecosystem will evolve through the sequential introduction of terrestrial and local materials, together with the appropriate living forms.
The principal constraints on life on the Moon are (1) a hard vacuum; (2) apparent lack of water; (3) lack of free oxygen; (4) paucity of hydrogen, carbon, and nitrogen; (5) intense radiation, periodically augmented by solar flares; (6) wide temperature fluctuations at extremes harmful to life; (7) a 2-week diurnal rhythm; and (8) gravity only 1/6 that on Earth. (Figure: a rendering of a 17th-century vision of life on the Moon.)
Chemistry and Nutrition
Lunar regolith as a cover for the habitat would shield the ecosystem from radiation and from both high and low temperatures. The enclosure would need to be airtight to contain a life-sustaining atmosphere of O2, CO2, N2, and H2O vapor. Presumably oxygen would be provided through the reductive processing of oxide ores, and the product water from a hydrogen reduction process could be used. With the establishment in the enclosure of photosynthesis by eucaryotes (algae and higher plants), reliance on ore processing for life-support oxygen would diminish, although that capacity should remain in place as a backup. Additional water, as needed, would continue to be provided externally, although the amounts would be small because, in a properly functioning CELSS, all water is recycled and appropriately treated to render it potable (free of infectious or toxic agents).
The lunar regolith could also provide elements implanted there by the solar wind. These elements include hydrogen and carbon, which could be used to manufacture water and carbon dioxide, and gaseous nitrogen (see box). However, the levels of these elements are low (100-150 ppm), and thus their recovery may prove to be uneconomical. In that case, they would have to be transported from Earth until the CELSS matured. Even then, more oxygen, carbon, and nitrogen would need to be introduced into the system periodically as the human and domestic animal population of the habitat increased or as the recycling process became imbalanced. As with oxygen, tanks of compressed carbon dioxide and nitrogen should be on hand to cope with such perturbations. Both air and water would need to be biologically and chemically monitored.
The Moon contains all of the "trace elements" known to be necessary for life; e.g., magnesium, manganese, cobalt, tin, iron, selenium, zinc, vanadium, and tungsten. The trace elements are absolutely critical to all species of life, largely as cofactors or catalysts in the enzymatic machinery. While their diminution would slow down the ecosystem, a sudden flush of certain trace elements known to be toxic above certain concentrations would be detrimental to some of the component species. I hope that the microbial flora which becomes established in the ecosystem will be able to minimize the extent of fluctuation of the trace elements through its collective adsorptive and metabolic functions. This issue will need to be considered in the selection of the microbial species for introduction into the CELSS.
Although no free water has been found on the Moon, its elemental constituents are abundant there. One constituent, oxygen, is the most abundant element on the Moon; some 45 percent of the mass of lunar surface rocks and soils is oxygen. The other constituent, hydrogen, is so scarce in the lunar interior that we cannot claim to have measured any in erupted lavas. Nevertheless, thanks to implantation of ions from the solar wind into the grains of soil on the lunar surface, there is enough hydrogen in a cubic meter of typical lunar regolith to yield more than 1-1/2 pints of water.
Similarly, carbon and nitrogen have also been implanted from the solar wind. The amount of nitrogen in a cubic meter of lunar regolith is similar to the amount of hydrogen: about 100 grams, or 3 percent of the nitrogen in a human body. The amount of carbon is twice that; the carbon beneath each square meter of the lunar surface is some 35 percent of the amount found tied up in living organisms per square meter of the Earth's surface.
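As a rough cross-check of these figures (a back-of-the-envelope sketch, not part of the original box, assuming the roughly 100 grams of hydrogen per cubic meter quoted above and simple water stoichiometry):

$$m_{\mathrm{H_2O}} = m_{\mathrm{H}} \times \frac{M_{\mathrm{H_2O}}}{M_{\mathrm{H_2}}} = 100\ \mathrm{g} \times \frac{18}{2} = 900\ \mathrm{g} \approx 0.9\ \mathrm{L} \approx 1.9\ \mathrm{US\ pints},$$

consistent with the "more than 1-1/2 pints" of water per cubic meter cited above.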
All of these elements except oxygen can be extracted from the lunar soil simply by heating it to a high temperature (> 1200 degrees C for carbon and nitrogen), and some oxygen comes off with the hydrogen. Thus, the problem of accessibility of hydrogen, carbon, and nitrogen reduces to one of the economics of heating substantial quantities of lunar soil, capturing the evolved gases, and separating the different gaseous components from each other. Although water could no doubt be extracted from martian soil at a much lower temperature and organic compounds could probably be extracted at a somewhat lower temperature from an asteroid that proved to be of carbonaceous chondrite composition, the Moon is much closer than Mars and its composition is much better known than that of any asteroid.
Collection of even a small fraction of the Moon's budget of hydrogen, carbon, nitrogen, phosphorus, sulfur, and other elements essential to life into a suitable environment on the Moon would support a substantial biosphere.
Taken from Larry A. Haskin, 1990, Water and Cheese From the Lunar Desert: Abundances and Accessibility of H, C, and N on the Moon, in The 2nd Conference on Lunar Bases and Space Activities of the 21st Century (in press), ed. W. W. Mendell (Houston: Lunar & Planetary Inst).
The surface of the Moon receives from the Sun lethal levels of high-energy electromagnetic radiation, frequently exacerbated by solar flares of varying duration. Without appropriate protection, no living creature, from microbe to man, could survive the onslaught of this radiation. It has been estimated that approximately 2 meters of regolith will absorb this radiation, thereby protecting the human and nonhuman occupants.
Just how much radiation will penetrate various protective shields still needs to be determined. Undoubtedly, radiation-induced mutations will occur; some will be lethal; others may be incapacitating; and still others may result in mutants better able to cope with the lunar environment than the parental organisms. Of particular concern is the likelihood of mutation among many of the microorganisms constituting the ecosystem, thereby endangering the cycling of the critical elements in the lunar CELSS.
With proper attire, humans can withstand, at least for short periods, temperature extremes as high as 50 degrees C and as low as -90 degrees C. The extremes on the Moon exceed these limits by a wide margin. Moreover, other species within the CELSS module would be either killed or suppressed by such extreme temperatures. Obviously, the temperature of a CELSS must be maintained within a moderate range, such as 15 to 45 degrees C, to enable the growth and reproduction of living forms. While many types of psychrophilic and thermophilic microorganisms abound on Earth, their introduction into the CELSS would be useless because neither the food crops to be grown therein nor the human beings harvesting those crops can withstand the temperatures they require.
Regulation of the temperature within a CELSS must take into account radiant energy from the Sun and the release of energy from biological and mechanical activity within the confines of the habitat. The former can be minimized by the protective blanket of regolith required for radiation protection. Efflux of heat from the interior of the habitat may require provision of active or radiative cooling.
Energy for Life
All living forms require an adequate food supply and a source of energy. Among animals and humans, energy is derived from the metabolism of various organic constituents of the diet; e.g., carbohydrates, lipids, proteins, and other nutrients. While many microorganisms gain energy (and carbon) from the oxidation of organic compounds, including methane, many also derive energy from the oxidation of reduced inorganic compounds; e.g., sulfides, ammonia, nitrites, ferrous iron (Fe²⁺), and hydrogen. Photosynthetic forms of life (some bacteria, the algae, and the higher plants) can convert photons of energy into chemical bond energy with the photolysis of water and the evolution of molecular oxygen. The energy thus realized is used in the synthesis of carbohydrate (from carbon dioxide and water), which the plants can further metabolize to meet their needs or which can be eaten by animals and humans to supply their energy needs.
It is clear that the requirements for food and energy are interrelated and that the various metabolic processes in the complex food chain affect the availability of nutrients to all the species. Light is a particularly important source of energy because the process of photosynthesis must go on in order for molecular oxygen, required by all but the anaerobic forms of life (principally bacteria), to be regenerated from water. It will probably be necessary, however, to regulate the light synchrony if crop production is to be successful, because 2 weeks of dark and 2 of light will not enable normal plant growth, though special strains might be developed. At the poles, perpetual sunlight could probably be obtained on selected mountains, thus enabling an Earth-like photocycle to be created by periodic blocking of the sunlight (see fig. 5, a diagram of lunar polar illumination and a polar lunar base module). Elsewhere, artificial light will have to be used to break up the 2-week night.
Investigations of the effects of reduced gravity (0.167 g on the Moon) on human physiology and performance and on fundamental life processes in general are being supported and conducted by NASA. Astronauts from the Apollo and other short-term space missions have experienced the well-known, and reversible, vestibular effect (motion sickness) and cephalic shift of body fluids (facial puffiness, head congestion, orthostatic intolerance, and diminution of leg girth). (See figure 6.) Longer-term weightlessness, as experienced by the Skylab astronauts and the Salyut cosmonauts, is more complex, resulting in cardiovascular impairment, atrophy of muscle, reduction in bone mass through osteoporosis (loss of calcium), hematologic changes (leading to immunosuppression and diminished red blood cell mass), neuroendocrine perturbations, and other pathophysiological changes. Some of these effects may be minimized by routine exercise, and apparently all are reversible, in time, upon return to 1 g. The response of plants, microorganisms, and "lower forms" of animal life to micro- or zero gravity has been investigated on Skylab, on Space Shuttle missions (see fig. 7, a photo of seedlings grown on Spacelab 1), and in simulations on Earth, during parabolic flight of aircraft or in a "clinostat." Unless humans and other life forms can adapt to zero g, or to low g as on the Moon or Mars, it may be necessary to provide rotating habitats to achieve the desired gravitational force, as many authorities have long proposed.
Selection of Species for an Ecosystem
The greatest challenge in the ultimate establishment of a true space habitat is the creation of a functioning, reliable ecosystem, free, insofar as possible, of pathogens, noxious plants, venomous insects, etc. Moreover, the integrity of a working ecosystem would need to be preserved and its functions monitored regularly.
While it is easy to propose the inclusion of particular species of bacteria, fungi, algae, and higher plants, each of which performs a particular biochemical function in the recycling of nutrients, no rationale exists by which one can predict which particular combination of species would be compatible under the conditions extant in the lunar environment. Considerably more research must be done on closed and semi-closed ecosystems before the organisms for a lunar CELSS are selected (see fig. 8, a photo of the zeoponics plant growth center at NASA's Johnson Space Center). Conceivably, any number of combinations of species may be found to work well. One point must be stressed: more than one species of organism must be selected to carry out each particular function. Thus, several species of photosynthetic, nitrogen-fixing, nitrifying, or sulfur-oxidizing bacteria must be included. Likewise, a number of species of algae and higher plants, each with the common characteristic of being photosynthetic, must be introduced into the community. Such redundancy, which exists to a vast degree on Earth, provides a kind of buffer in case some of the species lose their niches in the ecosystem and die.
Some Special Aspects of Life in the CELSS
In the first place, the smaller the CELSS, the more magnified becomes any biological, chemical, or physical aberration. No doubt human occupants of the first CELSS module would need more help from the outside at the beginning than they would later on, when numerous connected modules were in place.
The dieback of some support species may well occur from time to time. This would need to be monitored, so that appropriate measures could be taken to reintroduce another strain of the lost species or to introduce an entirely different species with comparable biochemical properties. Reasons for loss of a species in an artificial ecosystem include (1) mutation, (2) temporary tie-up of a critical nutrient, (3) a flush of toxic ions or compounds, (4) malfunction of the temperature control system, (5) a sudden shift in the synchrony of food cycling, and (6) infection or toxemia. The last reason may be particularly troublesome since it will not be possible to assure the exclusion of infectious or toxic microorganisms from the habitat. Many members of the body's normal microflora are opportunistic and can, under certain circumstances, cause disease. Also, plant diseases may emerge if care is not exercised in the initial entry of seeds or of other materials which may contain plant pathogens.
Any animals (e.g., chickens, goats, dwarf pigs) ultimately selected for the space habitat should have been raised in a pathogen-free environment on Earth and tested thoroughly for the presence of any microbial pathogens before their introduction. Gnotobiotic ("germfree," devoid of a microflora) animals, however, should not be considered because upon exposure to the nonsterile environment of the CELSS they would undoubtedly die of overwhelming infections; such animals have very immature immune systems.
Humans chosen to occupy the CELSS should be protected against certain infectious diseases (e.g., poliomyelitis, measles, whooping cough, and typhoid fever) and bacterial toxemias (e.g., tetanus, diphtheria, and perhaps botulism) by the administration of appropriate vaccines and toxoids. While it would not be possible to assure the total lack of serious pathogenic microorganisms in the human inhabitants, all candidates should be checked microbiologically to assess their carrier state.
The very potent tool of genetic engineering no doubt will be useful in establishing strains of microorganisms or plants with special properties, making it possible to introduce (1) better food crops, (2) organisms with special metabolic functions, and (3) disease-resistant plants. While this treatise is directed at a lunar ecological system, it is worth noting that a laboratory in low Earth orbit or one in a modified external tank placed in orbit by a Space Shuttle offers certain advantages over the lunar environment as a place to establish and study space ecosystems for application elsewhere, as on Mars, where water and carbon dioxide exist in relative abundance.
| 2026-01-25T07:21:06.955239 |
695,637 | 3.610841 | http://www.earth-policy.org/books/pb4/pb4ch8_ss2 | "We can cut carbon emissions by one third by replacing fossil fuels with renewable energy sources for electricity and heat production." –Lester R. Brown, Janet Larsen, Jonathan G. Dorn, and Frances Moore, Time for Plan B: Cutting Carbon Emissions 80 Percent by 2020
Chapter 8. Restoring the Earth: Protecting and Restoring Forests
Since 1990, the earth’s forest cover has shrunk by more than 7 million hectares each year, with annual losses of 13 million hectares in developing countries and regrowth of almost 6 million hectares in industrial countries. Protecting the earth’s nearly 4 billion hectares of remaining forests and replanting those already lost are both essential for restoring the earth’s health—the foundation for the new economy. Reducing rainfall runoff and the associated soil erosion and flooding, recycling rainfall inland, and restoring aquifer recharge depend on both forest protection and reforestation. 3
There is a vast unrealized potential in all countries to lessen the demands that are shrinking the earth’s forest cover. In industrial nations the greatest opportunity lies in reducing the quantity of wood used to make paper; in developing countries, it depends on reducing fuelwood use.
The use of paper, perhaps more than any other single product, reflects the throwaway mentality that evolved during the last century. There is an enormous possibility for reducing paper use simply by replacing facial tissues, paper napkins, disposable diapers, and paper shopping bags with reusable cloth alternatives.
First we reduce paper use, then we recycle as much as possible. The rates of paper recycling in the top 10 paper-producing countries range widely, from Canada and China on the low end, recycling just over a third of the paper they use, to Japan and Germany on the higher end, each at close to 70 percent, and South Korea recycling an impressive 85 percent. The United States, the world’s largest paper consumer, is far behind the leaders, but it has raised the share of paper recycled from roughly one fifth in 1980 to 55 percent in 2007. If every country recycled as much of its paper as South Korea does, the amount of wood pulp used to produce paper worldwide would drop by one third. 4
The largest single demand on trees—fuelwood—accounts for just over half of all wood removed from the world’s forests. Some international aid agencies, including the U.S. Agency for International Development (AID), are sponsoring fuelwood efficiency projects. One of AID’s more promising projects is the distribution of 780,000 highly efficient cookstoves in Kenya that not only use far less wood than a traditional stove but also pollute less. 5
Kenya is also the site of a project sponsored by Solar Cookers International, whose inexpensive cookers, made from cardboard and aluminum foil, cost $10 each. Requiring less than two hours of sunshine to cook a complete meal, they can greatly reduce firewood use at little cost and save women valuable time by freeing them from traveling long distances to gather wood. The cookers can also be used to pasteurize water, thus saving lives. 6
Over the longer term, developing alternative energy sources is the key to reducing forest pressure in developing countries. Replacing firewood with solar thermal cookers or even with electric hotplates powered by wind, geothermal, or solar thermal energy will lighten the load on forests.
Despite the high ecological and economic value to society of intact forests, only about 290 million hectares of global forest area are legally protected from logging. An additional 1.4 billion hectares are economically unavailable for harvesting because of geographic inaccessibility or low-value wood. Of the remaining area thus far not protected, 665 million hectares are virtually undisturbed by humans and nearly 900 million hectares are semi-natural and not in plantations. 7
There are two basic approaches to timber harvesting. One is clearcutting. This practice, often preferred by logging companies, is environmentally devastating, leaving eroded soil and silted streams, rivers, and irrigation reservoirs in its wake. The alternative is simply to cut only mature trees on a selective basis, leaving the forest intact. This ensures that forest productivity can be maintained in perpetuity. The World Bank has recently begun to systematically consider funding sustainable forestry projects. In 1997 the Bank joined forces with the World Wide Fund for Nature to form the Alliance for Forest Conservation and Sustainable Use. By the end of 2005 they had helped designate 56 million hectares of new forest protected areas and certify 32 million hectares of forest as being harvested sustainably. That year the Alliance also announced a goal of reducing global net deforestation to zero by 2020. 8
Several forest product certification programs let environmentally conscious consumers know about the management practices in the forest where wood products originate. The most rigorous international program, certified by a group of nongovernmental organizations, is the Forest Stewardship Council (FSC). Some 114 million hectares of forests in 82 countries are certified by FSC-accredited bodies as responsibly managed. Among the leaders in FSC-certified forest area are Canada, with 27 million hectares, followed by Russia, the United States, Sweden, Poland, and Brazil. 9
Forest plantations can reduce pressures on the earth’s remaining forests as long as they do not replace old-growth forest. As of 2005, the world had 205 million hectares in forest plantations, almost one third as much as the 700 million hectares planted in grain. Tree plantations produce mostly wood for paper mills or for wood reconstitution mills. Increasingly, reconstituted wood is substituted for natural wood as the world lumber and construction industries adapt to a shrinking supply of large logs from natural forests. 10
Production of roundwood (logs) on plantations is estimated at 432 million cubic meters per year, accounting for 12 percent of world wood production. Six countries account for 60 percent of tree plantations. China, which has little original forest remaining, is by far the largest, with 54 million hectares. India and the United States follow, with 17 million hectares each. Russia, Canada, and Sweden are close behind. As tree farming expands, it is starting to shift geographically to the moist tropics. In contrast to grain yields, which tend to rise with distance from the equator and with longer summer growing days, yields from tree plantations are higher with the year-round growing conditions found closer to the equator. 11
In eastern Canada, for example, the average hectare of forest plantation produces 4 cubic meters of wood per year. In the southeastern United States, the yield is 10 cubic meters. But in Brazil, newer plantations may be getting close to 40 cubic meters. While corn yields in the United States are nearly triple those in Brazil, timber yields are the reverse, favoring Brazil by nearly four to one. 12
Plantations can sometimes be profitably established on already deforested and often degraded land. But they can also come at the expense of existing forests. And there is competition with agriculture, since land that is suitable for crops is also good for growing trees. Since fast-growing plantations require abundant moisture, water scarcity is another constraint.
Nonetheless, the U.N. Food and Agriculture Organization (FAO) projects that as plantation area expands and yields rise, the harvest could more than double during the next three decades. It is entirely conceivable that plantations could one day satisfy most of the world’s demand for industrial wood, thus helping protect the world’s remaining forests. 13
Historically, some highly erodible agricultural land in industrial countries was reforested by natural regrowth. Such is the case for New England in the United States. Settled early and cleared by Europeans, this geographically rugged region suffered from cropland productivity losses because soils were thin and the land was rocky, sloping, and vulnerable to erosion. As highly productive farmland opened up in the Midwest and the Great Plains during the nineteenth century, pressures on New England farmland lessened, permitting cropped land to return to forest. New England’s forest cover has increased from a low of roughly one third two centuries ago to four fifths today, slowly regaining its original health and diversity. 14
A somewhat similar situation exists now in parts of the former Soviet Union and in several East European countries. As centrally planned agriculture was replaced by market-based agriculture in the early 1990s, unprofitable marginal land was abandoned. Precise figures are difficult to come by, but millions of hectares of low-quality farmland there are now returning to forest. 15
South Korea is in many ways a reforestation model for the rest of the world. When the Korean War ended, half a century ago, the mountainous country was largely deforested. Beginning around 1960, under the dedicated leadership of President Park Chung Hee, the South Korean government launched a national reforestation effort. Relying on the formation of village cooperatives, hundreds of thousands of people were mobilized to dig trenches and to create terraces for supporting trees on barren mountains. Se-Kyung Chong, researcher at the Korea Forest Research Institute, writes, “The result was a seemingly miraculous rebirth of forests from barren land.” 16
Today forests cover 65 percent of the country, an area of roughly 6 million hectares. Driving across South Korea in November 2000, I found it gratifying to see the luxuriant stands of trees on mountains that a generation ago were bare. We can reforest the earth! 17
In Turkey, a mountainous country largely deforested over the millennia, a leading environmental group, TEMA (Türkiye Erozyonla Mücadele, Ağaçlandırma), has made reforestation its principal activity. Founded by two prominent Turkish businessmen, Hayrettin Karaca and Nihat Gökyiğit, TEMA launched a 10-billion-acorn campaign in 1998 to restore tree cover and reduce runoff and soil erosion. Since then, 850 million oak acorns have been planted. The program is also raising national awareness of the services that forests provide. 18
Reed Funk, professor of plant biology at Rutgers University, believes the vast areas of deforested land can be used to grow trillions of trees bred for food (mostly nuts), fuel, and other purposes. Funk sees nuts used to supplement meat as a source of high-quality protein in developing-country diets. 19
In Niger, farmers faced with severe drought and desertification in the 1980s began leaving some emerging acacia tree seedlings in their fields as they prepared the land for crops. As the trees matured they slowed wind speeds, thus reducing soil erosion. The acacia, a legume, fixes nitrogen, thereby enriching the soil and helping to raise crop yields. During the dry season, the leaves and pods provide fodder for livestock. The trees also supply firewood. 20
This approach of leaving 20–150 tree seedlings per hectare to mature on some 3 million hectares has revitalized farming communities in Niger. Assuming an average of 40 trees per hectare reaching maturity, this comes to 120 million trees. This practice also has been central to reclaiming 250,000 hectares of abandoned cropland. The key to this success story was the shift in tree ownership from the state to individual farmers, giving them the responsibility for protecting the trees. 21
Shifting subsidies from building logging roads to planting trees would help protect forest cover worldwide. The World Bank has the administrative capacity to lead an international program that would emulate South Korea’s success in blanketing mountains and hills with trees.
In addition, FAO and the bilateral aid agencies can work with individual farmers in national agroforestry programs to integrate trees wherever possible into agricultural operations. Well-chosen, well-placed trees provide shade, serve as windbreaks to check soil erosion, and can fix nitrogen, which reduces the need for fertilizer.
Reducing wood use by developing more-efficient wood stoves and alternative cooking fuels, systematically recycling paper, and banning the use of throwaway paper products all lighten pressure on the earth’s forests. But a global reforestation effort is unlikely to succeed unless it is accompanied by the stabilization of population. With such an integrated plan, coordinated country by country, the earth’s forests can be restored.
3. U.N. Food and Agriculture Organization (FAO), The State of the World’s Forests 2009 (Rome: 2009), pp. 109–15.
4. FAO, ForesSTAT, electronic database, at faostat.fao.org, updated 12 January 2009, using five-year averages; U.S. Environmental Protection Agency (EPA), Municipal Solid Waste in the United States: 2007 Facts and Figures (Washington, DC: 2008), p. 102.
5. FAO, op. cit. note 3, p. 129; Daniel M. Kammen, “From Energy Efficiency to Social Utility: Lessons from Cookstove Design, Dissemination, and Use,” in José Goldemberg and Thomas B. Johansson, Energy as an Instrument for Socio-Economic Development (New York: U.N. Development Programme, 1995).
6. Kevin Porter, “Final Kakuma Evaluation: Solar Cookers Filled a Critical Gap,” in Solar Cookers International, Solar Cooker Review, vol. 10, no. 2 (November 2004); “Breakthrough in Kenyan Refugee Camps,” at solarcooking.org/kakuma-m.htm, viewed 30 July 2007.
7. FAO, Agriculture: Towards 2015/30, Technical Interim Report (Geneva: Economic and Social Department, 2000), pp. 156–57.
8. Alliance for Forest Conservation and Sustainable Use, “WWF/World Bank Forest Alliance Launches Ambitious Program to Reduce Deforestation and Curb Illegal Logging,” press release (New York: World Bank/WWF, 25 May 2005); WWF/World Bank Global Forest Alliance, Annual Report 2005 (Gland, Switzerland, and Washington, DC: December 2006), p. 31.
9. Forest Stewardship Council (FSC), Forest Stewardship Council: News & Notes, vol. 7, issue 6 (July 2009); FSC, “Global FSC Certificates: Type and Distribution (March 2009),” PowerPoint Presentation, at www.fsc.org, June 2009.
10. A. Del Lungo, J. Ball, and J. Carle, Global Planted Forests Thematic Study: Results and Analysis (Rome: FAO Forestry Department, December 2006), p. 13; U.S. Department of Agriculture (USDA), Production, Supply and Distribution, electronic database, at www.fas.usda.gov/psdonline, updated 9 April 2009.
11. R. James and A. Del Lungo, “Comparisons of Estimates of ‘High Value’ Wood With Estimates of Total Forest Plantation Production,” in FAO, The Potential for Fast-Growing Commercial Forest Plantations to Supply High Value Roundwood (Rome: Forestry Department, February 2005), p. 24; plantation area in “Table 4. Total Planted Forest Area: Productive and Protective—61 Sampled Countries,” in Del Lungo, Ball, and Carle, op. cit. note 10, pp. 66–70.
12. Ashley T. Mattoon, “Paper Forests,” World Watch, March/April 1998, pp. 20–28; USDA, op. cit. note 10.
13. FAO, op. cit. note 7, p. 185; Chris Brown and D. J. Mead, eds., “Future Production from Forest Plantations,” Forest Plantation Thematic Paper (Rome: FAO, 2001), p. 9.
14. M. Davis et al., “New England—Acadian Forests,” in Taylor H. Ricketts et al., eds., Terrestrial Ecoregions of North America: A Conservation Assessment (Washington, DC: Island Press, 1999); David R. Foster, “Harvard Forest: Addressing Major Issues in Policy Debates and in the Understanding of Ecosystem Process and Pattern,” LTER Network News: The Newsletter of the Long Term Ecological Network, spring/summer 1996; U.S. Forest Service, “2006 Forest Health Highlights,” various state sheets, at fhm.fs.fed.us, viewed 2 August 2007.
15. C. Csaki, “Agricultural Reforms in Central and Eastern Europe and the Former Soviet Union: Status and Perspectives,” Agricultural Economics, vol. 22 (2000), pp. 37–54; Igor Shvytov, Agriculturally Induced Environmental Problems in Russia, Discussion Paper No. 17 (Halle, Germany: Institute of Agricultural Development in Central and Eastern Europe, 1998), p. 13.
16. Se-Kyung Chong, “Anmyeon-do Recreation Forest: A Millennium of Management,” in Patrick B. Durst et al., In Search of Excellence: Exemplary Forest Management in Asia and the Pacific, Asia-Pacific Forestry Commission (Bangkok: FAO Regional Office for Asia and the Pacific, 2005), pp. 251–59.
18. Turkish Foundation for Combating Soil Erosion, at english.tema.org.tr, viewed 31 July 2007.
19. Reed Funk, letter to author, 9 August 2005.
20. U.S. Embassy, Niamey, Niger, “Niger: Greener Now Than 30 Years Ago,” reporting cable circulated following national FRAME workshop, October 2006; Chris Reij, “More Success Stories in Africa’s Drylands Than Often Assumed,” presentation at Network of Farmers’ and Agricultural Producers’ Organisations of West Africa Forum on Food Sovereignty, 7–10 November 2006.
21. U.S. Embassy, op. cit. note 20; Reij, op. cit. note 20.
Copyright © 2009 Earth Policy Institute | 2026-01-28T23:41:08.385137 |
63,190 | 3.704797 | http://econlib.org/library/YPDBooks/Lalor/llCy894.html | Cyclopædia of Political Science, Political Economy, and the Political History of the United States
RECIPROCITY is a relation between two independent powers, such that the citizens of each are guaranteed certain commercial privileges at the hands of the other. Up to the middle of the present century the term referred almost exclusively to the grant of privileges to foreign shipping. The earlier English policy had been very illiberal in this respect, carrying out the principles of Cromwell's navigation act, and of the colonial system of the last century. But as time went on, it became more important for England to extend her carrying trade in foreign lands than to monopolize it in her own; and in the early part of this century, under the influence of statesmen like Huskisson, reciprocity treaties were concluded with the leading maritime powers, by which each of the contracting parties admitted the other's ships in its ports to the same privileges as its own in the matter of the international carrying trade. This system aroused much opposition at different times in England; and in the United States was strongly opposed by Webster; but it soon became the prevailing one.
—The commercial treaties of earlier times aimed at securing special privileges and discriminating rates of duty. The one most commonly referred to as a type of them all is the Methuen treaty of 1703 between England and Portugal, by which England made special rates for Portuguese wines, and Portugal removed her prohibition of the import of English woolens. The same general principles, but applied with far sounder judgment of political and social needs, appear in the series of German treaties beginning with that between Prussia and Hesse in 1828, culminating with the establishment of the Zollverein, and ending with the treaty between the Zollverein and Austria in 1853.
—The treaty between England and France in 1860 was the beginning of a new order of things. Preceding treaties had been dictated by special reasons of social policy: this was intended and understood as an attempt in the direction of free trade. France had an almost prohibitive tariff; Napoleon wished to reduce it, but in the existing state of public opinion dared not do so without the appearance of international co-operation. He had in view the general development of French commerce, but he wished to be able to show definite advantages to distinct interests. The treaty with England, arranged in 1860 by Chevalier and Cobden, was the first result of this policy. The English tariff was already on a revenue basis; yet in return for the important French concessions it was still further reduced on French articles of export. But what distinguished this treaty from preceding ones was the fact that these reductions were not bargained for as special and exclusive privileges. This treaty was intended to become part of a system; it was contemplated that both England and France would make similar treaties with other nations, and in view of this it was provided, that in case either of the contracting powers should subsequently grant to a third power conditions more favorable in any respect, the other should have the benefit of such conditions. This provision constitutes what is known as the most favored nations clause; it was incorporated in subsequent treaties, as had occasionally been done in previous treaties, and soon became the important element in them; for by it a special concession made in favor of any one nation at once inured to the benefit of all who had similar treaties. It is this provision that distinguishes the modern European reciprocity system, and has caused that system to work so strongly in favor of free trade.
—The gain to the commerce of France and England was so great that other nations hastened to secure the same advantages. Similar treaties with France or England were made by Belgium in 1861, Prussia in 1862, Italy and Spain in 1863, Switzerland in 1864, and by most of the other European states in 1865 and 1866. Even Russia ultimately secured at the hands of some of the powers the benefit of the most favored nations clause, though without much reciprocity on her part. Within ten years the system seemed to be firmly established all over Europe, and to insure steady progress in the direction of free trade. (For certain special statistics, see Leone Levi in Journ. of Stat. Soc., 40, 1; for discussion of principles, a work entitled "Letters on Commercial Treaties," etc., "by a disciple of Richard Cobden.")
—But several circumstances combined to stop this progress, and to a certain extent unsettle the system. The first of these was the downfall of Napoleon III. He had not only started the system, but had by his strong influence done more to extend it than most people were aware of. It had never been really popular in the sense of calling forth general enthusiasm. It savored too much of bargaining, too little of principle. And it was rendered less popular than ever by wars like that of 1870, which intensified the opposition of national feeling, and substituted a spirit of embittered rivalry for one of mutual help. This acted against the reciprocity system in a variety of ways. Increased military expenditure demanded larger revenue; and nations chafed under treaty restrictions which hampered them in raising this revenue. The commercial treaties looked toward free trade; but national pride and the constant possibility of war led men to demand a protective system. While men's minds were in this state came the crisis of 1873; and public feeling was only too ready to attribute the hard times which followed to the one tangible grievance of foreign competition, and to seek to be rid of this grievance in all possible ways.
—The diplomatists were mainly free traders; and it was some time before they understood the strength of the feelings they had to contend against. The failure of the English negotiators in 1876 to obtain some expected concessions from France, began to reveal the true state of the case. The termination in the same year, by the action of Italy, of the French-Italian treaty, and the rejection by France of a proposed compromise treaty in 1877, were equally significant. Of still greater importance was Bismarck's change of attitude in 1878. Ever since the year 1818 the Prussian government leaned toward a free trade policy, much more so than any other great power except England. In 1862 their steps in support of the reciprocity system had been bold in the extreme. Now, such a change on the part of Prussia, as well as France and Italy, rendered the future of the system extremely doubtful.
—To understand the negotiations which followed, we must observe that in the application of these treaties of commerce, two different courses had been pursued by different states. One group of states, headed by England and Prussia, had no sooner made a concession to a single nation, than they modified their whole tariff in accordance with it, so that all nations, even those outside of the system, at once had the benefit of the change. Another group, represented by France, left their general tariff unchanged, but in the collection made a deduction of that amount in favor of nations having the benefit of a treaty. Spain went so far in this direction as to have two tariffs, the lower for "most favored nations," the higher for all others.
—As long as the statesmen on both sides were animated by common aims, this distinction made very little difference. But when it became a matter of international bickering the nations of the first group found themselves at a great disadvantage. "What special privileges are you offering us under the treaty?" French negotiators constantly asked of the representatives of those nations which had reduced their general tariff. To this question there was no thoroughly available reply; and it was this diplomatic helplessness that led to the "fair trade" agitation in England, and to a full discussion of certain points in the theory of reciprocity into which we can not here enter. (Westminster Rev., 112, 1; Contemp., 35, 269; Nineteenth Cent., 5, 638, 992; 6, 179; Fawcett, "Free Trade and Protection," last chapter.)
—In the year 1881 a number of French treaties were about to expire; and it was felt that a critical point had come in the history of the system. After some difficulties, particularly in connection with the Italian and Swiss treaties, they were nearly all renewed on the basis of increased duties on either side. The treaty with England was not renewed, but a special act was passed placing England on the footing of the most favored nations. On the whole, it may be said that the continuance of the system has been secured, but its efficiency in the direction of free trade destroyed.
—The United States has never been in any way connected with the system. At the time of its adoption and growth, American tendencies were all in the direction of increased duties. Our reciprocity treaties have all belonged to the earlier type of special arrangements. By far the most important of them was the one with Canada, proclaimed Sept. 11, 1854, and terminated March 17, 1866, on notice given by the United States one year previous. By the terms of this treaty food products of all kinds, nearly all raw materials, and some half-manufactured articles, were allowed to pass free from one country to the other. The dissatisfaction with the treaty arose from the owners of mines, timber, etc., in the United States, who found the price of their products kept down by Canadian competition. A memorial in favor of its renewal was presented to the United States government by the national board of trade in 1873, but without calling forth vigorous general support.
—A similar treaty was concluded with Hawaii in the summer of 1876, for the benefit of certain business interests of the Pacific states, particularly the sugar refiners. It was severely criticised by Secretary Sherman, after having been in operation about two years; but it now seems to have accomplished what was expected of it. The position of the United States government on the subject of commercial treaties is illustrated by the fact, that, when the Hawaiian authorities attempted to negotiate a similar treaty with Germany in 1879, they were checked by an intimation from the United States that the value of those privileges lay largely in their exclusiveness, and that the treaty must guarantee the United States exclusive rights.
—In the years succeeding the exhibition of 1876, strong efforts were made by French exporters to secure reciprocity privileges from the United States. It was hoped that if France would place America on the basis of the most favored nations, America would lower its duties on French wines and silks. In spite of the repeated efforts of the French manufacturers' agent to secure public sentiment in its favor, the subject was never officially taken up.
ARTHUR T. HADLEY.
| 2026-01-19T06:59:38.689333 |
359,331 | 3.571367 | http://en.wikipedia.org/wiki/Frequentist_interpretation_of_probability |
The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, e.g. the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning.
The shift from the classical view to the frequentist view represents a paradigm shift in the progression of statistical thought. This school is often associated with the names of Jerzy Neyman and Egon Pearson, who described the logic of statistical hypothesis testing. Other influential figures of the frequentist school include John Venn, R.A. Fisher, and Richard von Mises.
In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.
Thus, if $n_t$ is the total number of trials and $n_x$ is the number of trials where the event $x$ occurred, the probability of the event occurring will be approximated by the relative frequency as follows:

$$P(x) \approx \frac{n_x}{n_t}.$$
Clearly, as the number of trials is increased, one might expect the relative frequency to become a better approximation of a "true frequency".
A controversial claim of the frequentist approach is that in the "long run," as the number of trials approaches infinity, the relative frequency will converge exactly to the true probability:

$$P(x) = \lim_{n_t \to \infty} \frac{n_x}{n_t}.$$
Such a limit is possible only in theory (e.g., counting the relative fraction of even numbers less than $n_t$: one may easily compute the limit, $1/2$). This conflicts with the standard claim that the frequency interpretation is somehow more "objective" than other theories of probability.
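As an illustration (a minimal simulation sketch, not part of the article), one can watch the relative frequency of a well-defined event approach its probability as the number of trials grows; here the event is rolling a six with a fair die, whose true probability is 1/6:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Event x: rolling a 6 with a fair die; true probability is 1/6 ≈ 0.1667.
for n_t in (10, 100, 10_000, 1_000_000):      # total number of trials
    # n_x: number of trials in which the event occurred
    n_x = sum(1 for _ in range(n_t) if random.randint(1, 6) == 6)
    print(f"n_t = {n_t:>9,}   n_x/n_t = {n_x / n_t:.4f}")
# The relative frequencies scatter around 1/6 and tighten as n_t grows,
# but any finite run only approximates the limit claimed above.
```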
The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several, and, historically, the earliest to challenge the classical interpretation. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.
As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to be misinterpreted, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.
There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.
the probable is that which for the most part happens
It was given explicit statement by Robert Leslie Ellis in "On the Foundations of the Theory of Probabilities" read on 14 February 1842, (and much later again in "Remarks on the Fundamental Principles of the Theory of Probabilities"). Antoine Augustin Cournot presented the same conception in 1843, in Exposition de la théorie des chances et des probabilités.
Perhaps the first elaborate and systematic exposition was by John Venn, in The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability (published editions in 1866, 1876, 1888).
- 3....we may broadly distinguish two main attitudes. One takes probability as 'a degree of rational belief', or some similar idea...the second defines probability in terms of frequencies of occurrence of events, or by relative proportions in 'populations' or 'collectives'; (p. 101)
- 12. It might be thought that the differences between the frequentists and the non-frequentists (if I may call them such) are largely due to the differences of the domains which they purport to cover. (p. 104)
- I assert that this is not so ... The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not. [emphasis in original]
Alternative views
The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book argument. Propensity probability is an alternative physicalist approach.
- The Frequency theory Chapter 5; discussed in Donald Gillies, Philosophical theories of probability (2000), Psychology Press. ISBN 9780415182751, p. 88.
- von Mises, Richard (1939) Probability, Statistics, and Truth (in German) (English translation, 1981: Dover Publications; 2 Revised edition. ISBN 0486242145) (p.14)
- William Feller (1957), An Introduction to Probability Theory and Its Applications, Vol. 1, page 4
- Keynes, John Maynard; A Treatise on Probability (1921), Chapter VIII “The Frequency Theory of Probability”.
- Rhetoric Bk 1 Ch 2; discussed in J. Franklin, The Science of Conjecture: Evidence and Probability Before Pascal (2001), The Johns Hopkins University Press. ISBN 0801865697 , p. 110.
- Ellis, Robert Leslie (1843) “On the Foundations of the Theory of Probabilities”, Transactions of the Cambridge Philosophical Society vol 8
- Ellis, Robert Leslie (1854) “Remarks on the Fundamental Principles of the Theory of Probabilities”, Transactions of the Cambridge Philosophical Society vol 9
- Cournot, Antoine Augustin (1843) Exposition de la théorie des chances et des probabilités. L. Hachette, Paris. archive.org
- Venn, John (1888) The Logic of Chance, 3rd Edition archive.org. Full title: The Logic of Chance: An essay on the foundations and province of the theory of probability, with especial reference to its logical bearings and its application to Moral and Social Science, and to Statistics, Macmillan & Co, London
- Earliest Known Uses of Some of the Words of Probability & Statistics
- Kendall, Maurice George (1949). "On the Reconciliation of Theories of Probability". Biometrika (Biometrika Trust) 36 (1/2): 101–116. doi:10.1093/biomet/36.1-2.101. JSTOR 2332534.
- P W Bridgman, The Logic of Modern Physics, 1927
- Alonzo Church, The Concept of a Random Sequence, 1940
- Harald Cramér, Mathematical Methods of Statistics, 1946
- William Feller, An introduction to Probability Theory and its Applications, 1957
- P Martin-Löf, On the Concept of a Random Sequence, 1966
- Richard von Mises, Probability, Statistics, and Truth, 1939 (German original 1928)
- Jerzy Neyman, First Course in Probability and Statistics, 1950
- Hans Reichenbach, The Theory of Probability, 1949 (German original 1935)
- Bertrand Russell, Human Knowledge, 1948
- Friedman, C. (1999). "The Frequency Interpretation in Probability". Advances in Applied Mathematics 23 (3): 234–174. doi:10.1006/aama.1999.0653. | 2026-01-23T17:34:39.824474 |
1,020,972 | 3.668582 | http://sciencenetlinks.com/lessons/risks-and-benefits/ | To assess and weigh the risks and benefits associated with innovations in science and technology.
This lesson provides students with an opportunity to further their understanding of the risks and benefits associated with innovations in science and technology. Using the case study approach, students examine two examples of technological innovations and the risks and benefits associated with them.
Many important personal and social decisions are made based on perceptions of risks and benefits. Analyzing risk entails looking at probabilities of events and at how bad the events would be if they were to happen. Students need to learn that comparing risks is difficult because people vary greatly in their perceptions of risk, which tends to be influenced by such matters as whether the risk is gradual or instantaneous (global warming versus plane crashes), how much control people think they have over the risk (cigarette smoking versus being struck by lightning), and how the risk is expressed (the number of people affected versus the proportion affected.) (Benchmarks for Science Literacy, p. 52.)
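One way to make the "probabilities and how bad the events would be" idea concrete for students is a small expected-loss comparison; the hazards and numbers in the sketch below are invented purely for illustration and are not drawn from the lesson's sources.

```python
# Hypothetical hazards: (annual probability of the event, cost if it happens).
# Expected annual loss = probability * consequence. Comparing the two shows how
# a rare-but-severe risk can outweigh a frequent-but-mild one, even though the
# severe event is far less likely to occur in any given year.
hazards = {
    "frequent, mild hazard": (0.10, 2_000),     # e.g., a minor fender-bender
    "rare, severe hazard":   (0.001, 500_000),  # e.g., a house fire
}

for name, (probability, cost) in hazards.items():
    expected_loss = probability * cost
    print(f"{name}: expected annual loss = ${expected_loss:,.0f}")
```

The same arithmetic also helps explain why perceptions diverge from the calculation: it treats a gradual loss and an instantaneous loss of equal expected size as equivalent, while people generally do not.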
Case studies can provide an effective way to examine issues related to how society responds to the promise or threat of technological change—whether by adopting new technologies or curtailing the use of existing ones. However, teachers must be careful to avoid turning the case studies into occasions for promoting a particular point of view. People tend to hold very strong opinions on the use of technologies. The teacher's job is not to provide students with the "right" answers about technology but to see to it that students know what questions to ask. (Benchmarks for Science Literacy, p. 56.)
Refer students to the Risks and Benefits student esheet, which will direct them to Cell Phones and Driving to learn about how cell phones might be affecting safety on the road. The resource will pose the following questions, which should be discussed with the entire class:
- Consider this statement: "Cell phones don't cause accidents. It's just that people who use cell phones a lot tend to be bad drivers." Does Sondhi's research support this claim? Why or why not?
- Consider this statement: "People should be allowed to use only hands-free phones when driving." Does Sondhi's research support this claim? Why or why not?
- Design an experiment that would test the effects of cell phones on driving further. What would you measure? How would you measure it? How would you check to see if either of the above arguments were valid?
- Do you think there should be any laws regulating the use of cell phones while driving? How far should they go? What kind of evidence do you feel is necessary to justify these laws?
- What is Risk?
- What is Risk Assessment?
- What are the Goals of Risk Assessment?
- What is the Procedure for Performing a Risk Assessment?
- How Do I Estimate Risk?
- What is the Point of Doing a Risk Assessment?
- What is Risk Management?
- How Do You Combine Risk Assessment with Risk Management?
Once students have finished exploring the above resource, they should continue using the student esheet to go to Report of the Presidential Commission on the Space Shuttle Challenger Accident, which provides an analysis of the mechanical and administrative causes of the accident. As students read the resource, they should consider the risks and benefits of human spaceflight and how public perceptions of these may have changed after the Challenger accident.
Now that students have been exposed to some of the pros and cons of technology, give students 10 to 15 minutes to write down on a sheet of paper their thoughts on the following questions. Then discuss students' answers as a class. (There can be many correct answers to these questions. The goal of the questions is to stimulate critical and creative thinking about technology. Some of the possible answers were taken from "Professors calculate monetary, statistical value of human life," an article on the Daily Princetonian website about the monetary value of a human life.)
- What are some causes of device failures and how can these be prevented? (Many examples of mechanical stress or design oversight should be mentioned. For example, the 1986 Space Shuttle Challenger explosion was caused by exhaust flames leaking through a booster rocket and into an external fuel tank. Several mechanical flaws—including blowholes forming in the sealing putty, and O-ring erosion and rotation—caused this leakage. While the prevention of any of these flaws would have prevented the explosion, all mechanical flaws were independently corrected and redundant measures added—e.g., multiple O-rings, airtight adhesive in place of putty.)
- Why are perfect (or near-perfect) systems seldom observed? (Costs are prohibitive, and near-perfect designs may be nearly impossible to implement in practice. Even if costs and material capabilities pose no problem, unintended consequences unobserved in testing may occur. Studies have calculated that the average person values his or her life at approximately $1.54 million; engineers know they cannot spend that amount of money on each technological advance, so they accept imperfect designs.)
- Why do some negative effects of technology cause a greater societal uproar than others, independent of the number of people affected or the severity of the consequence(s)? (Societal uproar is worse if: a negative outcome occurs instantaneously versus gradually; the perceived level of control is low over a negative event—e.g., people like to feel in control of circumstances; the shock value of the effect is high—e.g., a gruesome explosion will cause more uproar than chronic heart disease; the technology involves moral or ethical issues—e.g., personal convictions against genetically modified organisms would amplify one's response to an accident involving them; or society is not familiar with technology—e.g., a scientist or engineer might respond less severely to an accident than a less tech-savvy person.)
- How does the non-scientist population view scientists and engineers? How does a non-scientist population view scientists' perception of their work? (Non-scientists can view scientists with a sense of awe or even trepidation. The "mad scientist" stereotype might cause non-scientists to assume scientists are all knowing about their field, or entirely benevolent in its technological implementation. In actuality, scientists discover and elucidate the laws of the universe and engineers harness them for human benefit, but neither can forecast all the consequences. Engineers in particular need to trade off costs and benefits in implementing a technology—the absence of risk is nonexistent.)
Using the Risks and Benefits esheet, students should write a brief essay that explains the risks and benefits of one of these technologies:
- Gene technology
- Transgenic crops
- Food irradiation
- Heart drugs
Use the Risks and Benefits Essay Rubric to assess student essays.
H1N1 Flu: Are Parents Underestimating Risk to Kids? can be used to explore public perceptions of risks and benefits of science and technology. | 2026-02-03T00:33:15.682814 |
211,998 | 3.924288 | http://thegoodhuman.com/2011/01/16/what-is-desertification-and-why-is-it-important-to-know-about/ | Dear EarthTalk: Can you explain what desertification is and why it is an important environmental issue?
Desertification is the degradation of land in already dry parts of the globe that results from various factors, including natural climate changes as well as human activity. As the name connotes it is the expansion of desert-like conditions which render useless land that was once biologically and/or economically productive. According to the United Nations’ Convention to Combat Desertification, the phenomenon occurs in drylands (arid, semi-arid and dry sub-humid areas) on all continents except Antarctica and affects the livelihoods of millions of people, including a large proportion of the world’s poor.
Drylands constitute about 40 percent of the world’s total land area, and are home to some two billion people – a third of human population. Water scarcity in existing drylands makes it difficult for plants, animals and humans to thrive there; desertification makes it impossible, forcing those affected to flee to more hospitable lands, whether they are welcome or not. The United Nations estimates that 10-20 percent of the world’s drylands are already degraded to the point where desertification is an imminent threat.
While global warming and the resulting intensification of fresh water scarcity is the most serious factor in converting drylands into deserts, population pressure and lack of proper land use planning only serve to make matters worse. In Sub-Saharan Africa, one of the regions most vulnerable to desertification, severe droughts already lead to major food and health crises once every three decades or so on average; environmentalists and planners worry that human-induced warming and other factors will increase the frequency of such debilitating droughts and lead to even more problems with desertification there. The African Union is working to muster international support for the creation of a Green Wall, a forested green belt, to help hold back the Sahara desert.
Other governments are also taking steps to keep desertification in check. China is working to create a 2,800-mile forest belt that will not only block the fast advancing sands of the Gobi desert but serve as a carbon sink, as well, to absorb greenhouse gas emissions. And Algerian leaders are optimistic that the recent creation of a 600,000 acre national park will head off a looming desertification crisis there.
Desertification is also a problem right here in the United States, mostly a result of overgrazing by farm animals and poorly designed irrigation schemes across especially vulnerable parts of Texas, New Mexico and Arizona. Some 40 percent of the continental U.S. is dry enough to be at risk for desertification.
Historians point to the Dust Bowl of the 1930s as proof positive of America’s susceptibility to such problems. Lessons learned then led to the creation of the Soil Conservation Service (now called the Natural Resources Conservation Service) to teach farmers and other landowners agricultural practices that reduce soil loss and maintain biological diversity around agricultural operations. In spite of such efforts, desertification still plagues parts of the U.S. today. The hope today is that global warming won’t tip us to the point where we have to learn some hard lessons all over again.
SEND YOUR ENVIRONMENTAL QUESTIONS TO: EarthTalk, c/o E The Environmental Magazine, P.O. Box 5098, Westport, CT 06881; email@example.com. E is a nonprofit publication. Subscribe: www.emagazine.com/subscribe; Request a Free Trial Issue: www.emagazine.com/trial. | 2026-01-21T12:05:35.586345 |
988,374 | 3.604468 | http://muslimheritage.com/article/ukhaidir-palace-720-800-ce | About 100 miles south-west of Baghdad is Ukhaidar palace, one of the most preserved palaces of the Muslim world. It is unique in its architectural wealth and incorporated some of the key innovations that greatly impacted the development of Muslim as well as non-Muslim architecture.
|Figure 1. The main gate showing its Pishtaq form, probably the first use in Islam.|
Lying in the desert about 100 miles south-west of Baghdad is Ukhaidir palace, one of the best-preserved palaces of the Muslim world. The palace is unique in its architectural wealth that incorporated some of the key innovations that greatly impacted the development of Muslim as well as non-Muslim architecture. Because of this, the palace has attracted much academic interest, particularly from German, French and British archaeologists and architects. However, these numerous studies failed to provide a proper explanation for the circumstances surrounding the foundation of the palace due to the absence of any text or in situ inscription to help establish a definite date and a conclusive story.
Based on architectural evidence, researchers put the date of the construction of the palace at between 720 and 800 CE. This 80-year period provokes a serious historical problem, since within this period the Umayyad dynasty gave way to the Abbasids, who gained power in the year 750 CE. These events raise questions as to whether the palace is Umayyad or Abbasid. First, there are indications which suggest that the palace is Umayyad, built before 750 CE. Features such as the presence of a few semi-circular arches, the limited use of squinches to half-domes, and the use of corner slabs to support the scalloped dome are all indicators pointing to the Umayyads, whose architecture continued to use these elements before the architectural revolution of the Abbasids introduced the pointed arch, the expanded use of dome squinches and many other features. The Umayyads also had the habit of living in desert palaces. But who could have built Ukhaidir during these troubled times, when the Umayyads were waging wars against the Kharijites and the Abbasids? Additionally, most princes and wealthy Umayyads had known residences.
|Figure 2. The fluted dome supported on triangular slabs bridging the corner of the square is an early version of erecting domes on a square bay.|
Similarly, early Abbasid rulers can be ruled out as Al-Saffah, the founder of the dynasty, for example, lived in his palace beside the Persian city of Anbar (about 45 miles west of Baghdad) and died there in 754. His successor Al-Mansur at first lived in his palace between Kufa and the old Persian town of Hira, and later settled in his capital Baghdad .
This leaves us with two remaining theories. Creswell's theory proposes that the construction was due to the nephew of Al-Mansur, Isa ibn Musa (d. 783/4), who received large sums of money from the Caliph to prevent him making a claim to the throne. Isa was somehow promised the Caliphate after Al-Mansur who later changed his mind in favour of his son Al-Mahdi. Isa was expelled from his governorship of Kufa in 778 and was made to renounce his claim to the Caliphate. It is reported that on his expulsion he returned to his estates and used to visit Kufa every Friday for the congregational prayer . However, some scholars raised doubts about this thesis arguing that the two way journey of 200 km from Ukhaidir to Kufa through the desert could not have been an easy undertaking for a man who is known to have been a permanent invalid .
|Figure 3. Ukhaidir general plan|
The final possible patron of Ukhaidir suggested was Isa Ibn Ali ibn Abdullah, another nephew of Caliph Al-Mansur. According to the Arab historian Yaqut, Isa Ibn Ali built a palace named Qasr al- Muqatil on the site of a ruined pre-Islamic castle. Al-Muqatil palace was located on the road between Kufa and Al-Anbar, a location which is sufficiently close to Ukhaidir .
"Here then we have direct evidence for a palace with the qualifications we are looking for: the timing is correct, and a man of such importance must have had the resources and what he hadn't, he made up for by employing building material for the most part already laid out for him" .
Palace Plan and Architecture
The palace consists of two fortified enclosures. On the outer enclosure there is a 17 meters high robust rampart made from limestone slabs and mortar strengthened by corner and intermediary towers alternating with pairs of blind arches on pilasters. A series of portcullises were fitted along it and even on the supporting towers. These defensive arrangements, which must have added extra protection, were known to the Romans but this was perhaps their first use in Islam. Another defensive scheme employed in Ukhaidir was the use of arrow- slits, which were served by the wall walk (once vaulted). From these slits defenders of the palace could fire their arrows without being exposed to the enemy. The gateways were also fitted with slits in their vaults for dropping flaming missiles, as well as lateral grooves for letting down the portcullis.
|Figure 4. The pointed barrel vault and arches of the great hall|
The enclosure was pierced, in the centre of each side, with gateways flanked by quarter round towers. On the main gate, in the northern side of the wall, we find the earliest appearance of the arched portal, set within a rectangular frame rising above the walls (pishtaq) . This gateway leads into a pointed barrel-vaulted hall made of seven transverse arches. The vault incorporated slits defending the entrance from the room above. Through the entrance hall one progresses into a square chamber covered by a fluted dome, the first of its type in Iraq, borne across the corners on flat triangular slabs forming a bridge on the corners of the square . This is followed by a narrow corridor and a set of internal curtain walls protecting the palace proper. Following this corridor, towards the west, one reaches a larger open space extending the whole length of the palace. The existence of an annexe building in the centre of this court and the remains of feeding troughs raises the speculation that it was used for keeping horses and possibly horse riding activities.
The palace complex consists of series of functional units; the great hall, a mosque, a court of honour, audience halls and four domestic compounds called Bayts. Continuing from the domed square room there is the great hall. It is a large pointed barrel-vaulted hall of 7 meters width, 15.5 meters length and 10.5 meters height, with arched recesses to the left and right leading to side chambers which were probably store rooms. West of the great hall and to the right of the main entrance, there is the palace mosque, a hall consisting of a single aisle of five arches raised on cylindrical piers made of limestone and mortar. Above the arcade was a single barrel vault which had a corrugated surface, and along the ridge were coffers of diverse shape. The ends of the vaults were fused to the side walls by means of fluted half-domes with squinches of the same form inserted at the transitions. The architects and masons of Ukhaidir introduced, for the first time, an elaborate technique based on the construction of elliptical (pointed) barrel vaults with bricks in similar technique to building a wall which therefore made the way vaults were built considerably easier. The old tradition consisted of the use of a mixture of mortar, small stones and debris laid out on wooden base. Such method required a lot of wood not available in this arid region and building took a considerable time to finish as masons had to wait for the vault to dry to move the scaffolding to another part of the building. This new technique, likely to have been introduced through Persian and Mesopotamian Muslims, provided a good solution to these problems. Further innovation appearing in the vault of the mosque was the use of decorated flat arches to support the vault, a technique which was later transformed into the so called rib vaulting which revolutionised vault construction.
|Figure 5. Court of honour showing the pishtag gate of the Audience Hall, the well in the centre, and the blind arch decoration on the walls|
The centre of the palace is occupied by the court of honour, a large court decorated with blind arcades, incorporating in its top sections brick decoration of geometric patterns. The northern side of the court is a 6-metre-high wall with arcades on round engaged columns; above it is a second-storey block with blind pointed arches having cusped archivolts, and rising still further is a set-back wall crowned by a parapet frieze of recessed niches.
|Figure 6. The northern side of the Court of Honour, showing the upper floor|
The south side once incorporated a vaulted iwan framed by a rectangular frontispiece in the form of pishtaq. Behind it, there was a long chamber for private audience connecting with a square room fitted with four doors, one in the centre of each side; this must have been the throne room.
The four living compounds "Bayts" were self-contained units consisting of a courtyard flanked by two symmetrical built sections (north-south) containing three rooms each. They were distributed in pairs on the east and west sides of the court of honour, but kept in isolation from its ceremonial function by a tunnel-vaulted corridor encircling both the court and the audience hall.
Architectural Merits of the Palace
In many respects, Ukhaidir has been regarded as an important workshop where many elements of what was to become known as Muslim architecture were elaborated and developed upon. Below is a summary of key developments introduced in Ukhaidir and their impact on Muslim and non-Muslim architecture.
|Figure 7. The vault of the mosque portico showing the innovative flattened arches used as both decorative and support for the vault|
The first of these innovations is the transverse arch. The Ukhaidir example is definitely the first recorded example of its Muslim use. Through Muslim examples, this type of arch reached Europe where it was used over the aisles in Romanesque architecture. These were semicircular arches thrown from each pier of the arcade to the wall of the aisle in similar fashion to that of the Mosque. To support the connecting point, at which the arch sprung from the pier, a flat pilaster was added to the pier. Later these transverse arches were thrown over the nave, providing greater safety and durability. The first recorded example is found in the church of St. Felice e Fortunato at Vicenza (985 CE). In a later stage the transverse arch represented a fundamental structural step in the process of development of the Gothic style. It led to the adoption of ribbed vaulting which progressively enabled the vaulting of the nave and the evolution of the compound pier. The form of the pier abandoned the single or cylindrical shape and became compound with an engaged member carrying up to the spring of the transverse arch.
The second feature is the use of pointed arches and vaults. Such forms were to dominate much of the architecture of the Abbasids and later periods. More importantly, their introduction into Europe caused a revolution in the way buildings were constructed there. The semi-circular arcading and vaulting posed some static problems in covering large and irregular spaces. The pointed arch resolved the difficulty of achieving level crowns in the arches of the vault, allowing the vault to become suitable for any ground plan. The pointed arch did not reach Europe until the late 11th century, when it was first employed at Monte Cassino (1080). Later, the pointed arch and vault were the cornerstones for the development of the famous Gothic style of architecture.
The vaulting system of Ukhaidir is also revealing. The use of the transverse arches and groin vaults which occur eight times in the Palace, sometimes as supporting arches, showed that Muslims were innovators in vaulting techniques. The vaults of the main hall of the palace and its mosque provide evidence of the early Muslim use of the so called rib vaulting. This initial technique later matured in the Ribat of Sussa (821-822) . Great Romanesque and Gothic cathedrals would not have been constructed the way we admire today if it was not for the introduction of rib vaulting.
In addition to the above, one can mention elements of defensive architecture that were later to play an important role in the construction of Muslim and non-Muslim military architecture. The introduction of loopholes (arrow-slits), which were used to allow bowmen (standing or kneeling) to fire their arrows, left their influence on later Muslim fortifications. Europe did not realise their potential until the 12th century, when they were introduced by returning Crusaders. They first appeared in London in 1130.
The use of a portcullis is another defensive element that became popular across Muslim defensive structures and again appears in Europe after the Crusades.
Finally we may note that the masons of Ukhaidir made a daring innovation which was to have a permanent consequence in Islamic architecture. The decoration of a top section of blind arches of the Court of Honour, which was based on laying the bricks in mosaic manner upright and sideways to form patterns, developed into a technique spreading across the Muslim world especially Persia where it became known as hazarbaf, or "thousand twistings".
Creswell, K.A.C. (1989), ‘A Short Account of Early Muslim Architecture', Revised and supplemented by James, W. Allan, Scolar Press, p.261.
Jairazbhoy, R.A. (1972), ‘An Outline of Islamic Architecture', Asia Publishing House, Bombay, London and New York, p.55.
Creswell, p.261
Creswell K.A.C. (1958), ‘A short account of early Muslim Architecture', Penguin Books, pp.201-3.
Jairazbhoy, R.A. (1972) op cit., p.55.
Bell Gertrude (1914), ‘Palace and Mosque at Ukhaidir', pp.163-168
Jairazbhoy, R.A. (1972) op cit., p.55.
Also called cataracta, portcullises were usually suspended by iron rings so that they could be dropped behind the enemy, imprisoning him.
Michell, M. et al. (eds.) (1978), ‘Architecture of the Islamic World', Thames and Hudson, London, p.
Jairazbhoy, R.A. (1972), op, cit, p.58.
Symbolically the pointed arch embraced the divine dominance of the ordinary arch/ dome but the upward stilting pointed to the divine sky. Christians used it for this symbolism too, and in some cases it was combined with lobed arches to signify the theme of the Trinity.
See our article on Ribat of Soussa, Muslim invention of rib vaulting?
Jairazbhoy, R.A. (1972), op, cit., p.58. | 2026-02-02T13:18:15.907342 |
951,056 | 3.573454 | http://www.npwrc.usgs.gov/resource/plants/explant/cirsvulg.htm | Northern Prairie Wildlife Research Center
Cirsium vulgare (Savi.) Tenore (Cirsium lanceolatum)
Bull thistle, spear thistle (Asteraceae)
Current level of impact
Known locations in RMNP: Moraine Park, Club Lake Trail. Common in open meadows and ponderosa pine savannas on the east side of the park. Found in some late-succession communities.
Assessment: An intermediate number of patchy distributed populations. When added together, all populations would cover an estimated area less than 5 hectares.
Origin: Introduced from Europe but native to Asia.
Geographic distribution: Widely established in North America. Bull thistle has been introduced many times as a seed contaminant. Scattered throughout Colorado.
Ecological distribution: Meadows, fields, roadsides, and other disturbed sites. A weed of pastures and montane areas, found at elevations up to 9000'.
Soils: On rich, rather moist soils. Common on calcareous soils, or those rich in bases.
Biennial forb, reproduces only by seeds and plants die after they set seed. A true biennial in that it produces a rosette in the first year, and in the second year it flowers and dies. Flowers July to October.
Seed production: A single inflorescence may produce up to 250 seeds; the number of inflorescences per plant varies, but individuals with more than 60 are not uncommon. Average fruit production is nearly 4000.
Seed germination: Studies in coastal dune populations and in British populations have produced little evidence of seeds persisting in soil from year to year.
Seed dispersal: Seeds possess a hairy pappus, and are well suited for wind dispersal.
Germination: Germination commonly takes place in autumn. In one study, bull thistle germination was not inhibited by dense cover, however, subsequent seedling survival was reduced.
Highly competitive weed. Studies have found that the spread of bull thistle is favored by trampling and soil disturbance. In Yosemite Valley, the heaviest infestations were in areas that were heavily used by visitors. Digging by pocket gophers may also be an important disturbance that favors the spread of bull thistle.
Bull thistle will not withstand cultivation. Studies in Yosemite show that mechanically cutting the thistles at the soil surface is an effective method of control. Infested fields should be mowed before seeds have a chance to ripen. A program that involves cutting should be maintained for at least 4 years.
Chemical: 2,4-D can be used to control bull thistle. 2,4-D should be sprayed while plants are still in rosette stage because plants become resistant as flower stalk is produced. If plants are too large before 2,4-D is applied, mow areas to prevent seed production and spray 2,4-D to inhibit regrowth (Lorenzi and Jeffrey 1987).
Forcella, F. and H. Wood. 1986. Demography and control of Cirsium vulgare (Savi) Ten. in relation to grazing. Weed Research 26:199-206.
Jong, T.J. de and P.G.L. Klinkhamer. 1988. Population ecology of the biennials Cirsium vulgare and Cynoglossum officinale in a coastal sand-dune area. Journal of Ecology 76:366-382.
Klinkhamer, P.G.L. and T.J. de Jong. The importance of small-scale disturbance for seedling establishment in Cirsium vulgare and Cynoglossum officinale. Journal of Ecology 76:403-413.
Lorenzi, H.J. and L.S. Jeffrey. 1987. Weeds of the United States and Their Control. Van Nostrand Reinhold Company, New York. 355 pp.
McCarty, R.J. and J.L. Hattling. 1975. Effects of herbicides or mowing on musk thistle seed production. Weed Research 15:363-367.
Randall, J.M. 1988. Population dynamics of bull thistle, Cirsium vulgare, in Yosemite Valley. In Proceedings of the Symposium on Exotic Pest Plants. U.S. Dept. of Interior, Washington D.C. 261-277.
217,650 | 3.63295 | http://eprints.bournemouth.ac.uk/17386/ | Green, I. D., 2011. Trace metals in the soil-plant system and beyond. In: Metallophyte Plants: A theatened and unrealised resource hidden in our biodiversity, 8-10 February 2011, University of Western Australia. (Unpublished)
Metals in the soil-plant system and beyond. Metallophytes can play an important role in remediating contaminated land either by decreasing soil metal levels via phytoextraction or through restoring a vegetation community to prevent soil erosion or re-establish ecosystem function/services. As even non-hyperaccumulating metallophytes can accumulate considerable concentrations of metals in their tissues, the ecological consequences of growing plants with enhanced metal concentrations require consideration. Metals accumulated by metallophytes can potentially be transferred to higher trophic levels through consumption of the root/shoot tissue or sap, through consumption of the seed or through consumption of litter. Invertebrates are most likely to consume root/shoot tissues or sap, but many are able to discern high metal levels in their food, which in turn has a strong antifeedant effect. Indeed, it is hypothesised that hyperaccumulation may have a function in plant defence, but non-hyperaccumulating metallophytes may also benefit from reduced herbivory due to the metal content of their tissues. Although this may generally be effective in restricting the transfer of accumulated metals to higher trophic levels, there are arthropod herbivores that feed despite high metal concentrations. These species can in turn accumulate very high levels of metals from their metallophyte hosts. Consequently, there is the potential for critical pathways to be formed that may transfer high concentrations of metals from the soil to higher trophic levels, resulting in secondary toxicity. Granivory is the most likely pathway through which vertebrate animals such as birds and small mammals can be exposed to metals and hence secondary toxicity. Most plants effectively exclude metallic elements from their seeds, but metallophytes can contain high levels of toxic metals such as cadmium. Carnivorous & omnivorous vertebrates may also be exposed by consuming contaminated herbivorous invertebrates and vertebrates. The senescence or death of plants will return metals to the soil in the resulting litter. Contaminated litter can induce toxicity in invertebrate detritivores, effectively excluding them from contaminated ecosystems with the result that litter builds up, severely curtailing key ecosystem functions/services. Thus, the use of metallophytes in remediation/restoration can negatively affect the fauna on or adjacent to the site. Unfortunately, the scale of the potential problem is not yet clear.
|Item Type:||Conference or Workshop Item (Lecture)|
|Subjects:||Science > Biology and Botany|
|Group:||School of Applied Sciences|
|Deposited By:||Dr. Iain Green|
|Deposited On:||17 Feb 2011 14:16|
|Last Modified:||07 Mar 2013 15:42|
| 2026-01-21T14:04:42.656176 |
500,802 | 3.776644 | http://learningtogive.org/faithgroups/phil_in_america/stewardship.asp | A “steward” is someone entrusted with the responsibility of caring for certain possessions, gifts, or other valuables, valuables which the steward usually does not own. While the original meaning of steward and stewardship was primarily economic—care of a master's property or household—it is the religious interpretation of stewardship that has animated philanthropic giving and service for centuries. The religious notion of stewardship holds that everything we have—and even the earth we inhabit—belongs to God, and while we are permitted to use it we must take care to use it well, and perhaps even to improve it.
The Christian principle of stewardship, as articulated by Calvin and Wesley, has long been the primary foundation on which religious institutions promote giving and voluntary service among their members. But this religious idea has also been adopted by non-religious philanthropists such as Andrew Carnegie. And stewardship has also become the central value of the modern environmental movement, one of the more vibrant parts of the voluntary sector in the United States and elsewhere. These secular uses of the religious idea of stewardship illustrate one way in which religious values have predominated throughout the history of the western philanthropic tradition.
In classical Greece, the word most closely associated with what we call “stewardship” was oeconomia (for steward, oeconomicus ). It meant the person charged with care of a household—this term is also the root for what we now call “economics.” “Household” meant real property, the land and the improvements on it, but stewardship often extended to protection of a family as well.
“Steward” appeared for the first time in English about a thousand years ago. According to the Oxford English Dictionary , in this early use a steward was “an official who controls the domestic affairs of a household, supervising the service of his master's table, directing the domestics, and regulating household expenditures… (19XX, p. XX).” The original written form of steward is stigweard . Stig meant a domestic building of some sort; weard meant guard—the word from which “warden” has also come down to us. In these uses, it is clear that the property over which the steward has responsibility is not the steward's property, but the master's. In the medieval context in which the word originated, the steward's sole focus of loyalty was to his master. When that master was in fact the king of the realm, the king's steward became an important and powerful official.
The meaning of “stewardship” in the Judeo-Christian religious tradition adds a spiritual dimension while retaining this original economic dimension. Stewardship, especially in the Christian tradition, has become a spiritual metaphor, a pious responsibility, and a weekly religious practice. This is the meaning of the term for many Americans.
The idea of stewardship has a long ecclesiastical history. The spiritual metaphor of stewardship is drawn in Judaism and Christianity first from the Torah or Old Testament (Leviticus, for example). In Christianity, in particular, the word “steward” carries a weighty heritage, and there are key references to Christians as God's “stewards” throughout the New Testament. The apostle Paul identified himself as a steward: “This is how one should regard us,” he wrote to the Corinthians, “as servants of Christ and stewards of the mysteries of God (I Corinthians 4:2).” The apostle Peter exhorted his fellow Christians: “As each has received a gift, employ it for one another, as good stewards of God's varied grace. Like good stewards of the manifold grace of God, serve one another with whatever gift each of you has received (I Peter 4:10).”
The “Parable of the Talents” (Matthew 25:14-30 and Luke 19:11-27) is usually interpreted as a lesson in stewardship, as a call to make positive use of the gifts—even monetary ones—that God provides. In this parable, a master gives different amounts of gold (“talents” were gold coins) to several servants for safekeeping, and the servants who use the gold to procure more gold for return to the master are rewarded as “wise and faithful stewards,” (Matthew 25:23—sometimes translated as “good and faithful servants”) while the servant who simply hid the gold away is punished.
The medieval church also perpetuated and extended the idea of stewardship, “based on the recognition that all gifts come from God and must be used to his glory, and applying equally to all types of gifts, whether of money, time or talents (Maquarrie 1967, p. 333).” The church claimed the right to be the mediator between God and humankind in matters of human claims to use God's property.
In the late 19 th century, especially in America, as the Christian churches professionalized their fund raising appeals to meet their growing budgetary needs, the term “stewardship” became widely used as a virtual euphemism for “giving money” or “tithing” (Conway 1995). Stewardship in this narrower usage did have theological origins, however, especially in Protestantism. John Calvin made this connection between the spiritual and practical philanthropic responsibilities of stewardship: “Let this, therefore, be our rule for generosity and beneficence: We are the stewards of everything God has conferred on us by which we are able to help our neighbor, and are required to render account of our stewardship (1950, III: VII: 5, p. 695).”
Even more significant is John Wesley's famous explication of the Christian as “good steward,” which provides a religious rationale for philanthropic responsibility. Wesley, the founder of Methodism, gave a sermon in Newcastle, England, in May 1768, entitled “The Good Steward.” In this sermon, Wesley addresses three points: 1) that we are God's stewards; 2) that our stewardship ends when we die; and 3) that we may expect to account for what we have done or failed to do as stewards. Wesley says that we have been entrusted with “our souls, our bodies, our goods, and whatever other talents we have received. Of all these,” he observes, “it is certain we are only stewards (1991, p. 421).” Among the goods over which God has given us stewardship, Wesley declares that God “has committed to our charge that precious talent which contains all the rest, money… Indeed, [money] is unspeakably precious, if we are ‘wise and faithful stewards' of it; if we employ every part of it for such purposes as our blessed Lord has commanded us to do (p. 422).”
Today, the larger religious meanings of stewardship—e.g., as a general philanthropic responsibility to take care of God's world—are often submerged in what becomes a dreary burden of annual pledge appeals in churches. Yet the people in the pews are the backbone of American philanthropy, considering that one out of every two philanthropic dollars is given to a religious institution. Moreover, the general religious notion of stewardship as more than just an obligation to tithe, as an obligation (owed to God) to “do good,” as Cotton Mather preached in early America (see Bremner 1988, p. 12), has remained.
In fact, this general meaning of stewardship has been adopted by philanthropists, social reformers, and activists—some religious, some not—who think of and describe their work as fulfilling an obligation of “stewardship,” an obligation owed either to God, to previous generations, or even to a specific individual. Many famous philanthropists in America turned to the doctrine of stewardship to explain their motives and justify their philanthropy. John D. Rockefeller and William Penn believed that wealth was a gift from God entrusted to certain people for proper management, like the master's property was entrusted to his steward. Andrew Carnegie ( 1962) espoused an explicitly secular “gospel of wealth” that mirrored the religious gospel of stewardship; he believed that along with great wealth came a great obligation to use that wealth for the public good. Recent studies by Paul Schervish, et al. (1994), suggest that this sort of gospel continues to motivate wealthy benefactors. They analyzed the “narratives” offered by wealthy philanthropists to describe what guides their giving, and identified a “moral stewardship model” that recurs in many of these narratives.
Perhaps the most prominent contemporary use of the idea of stewardship by philanthropic actors is in the modern environmental movement. The need for responsible “stewardship of the earth” is a common refrain heard from both environmental philosophers and movement activists (Nash 1989). Environmental stewardship involves the recognition of our responsibility to maintain, and perhaps improve, the natural world which we have inherited and which we will bequeath to future generations. Stewardship is the opposite of both neglect and abuse of the environment. It requires, at least, the preservation and proper management of “natural resources” (following the original economic meaning of stewardship); but more than this, it usually requires the restoration of over-exploited nature as well.
The modern notion of environmental stewardship is typically presented as a wholly secular one; only occasionally are the religious origins of the idea acknowledged. Among those who do trace these religious origins, there has been considerable debate over whether the Judeo-Christian responsibility to be a “steward of God's creation” does, in fact, encourage environmental stewardship (Passmore 1974). When God gives man “dominion” over nature (Genesis 1: 26-28), does this justify total human control to use nature for human needs, or does having “dominion” really mean being a “steward” or “caretaker” of God's creation, to “till it and keep it (Genesis 2: 15)?” Many religious environmentalists, and proponents of what has been called “eco-theology,” have tried to advance the second interpretation, that God's natural resources are entrusted to human stewards for preservation and improvement rather than exploitation (Moody 2002). However, this religious environmental ethic has been further criticized for being too “anthropocentric” in conceptualizing man as a separate steward managing nature rather than man as a part of nature (see Dubos 1972).
Overall, stewardship in modern usage is a powerful philanthropic concept, deeply moral in its concern for the well-being of others and rooted in ancient and enduring religious and social values, but also economically grounded in its concern for wise investment and management of our resources. Stewardship is about more than maintenance, it is about visionary management. Stewards have temporary control over, but not ownership of, an inheritance. They also have an obligation to manage this inheritance in such a way that it can be passed along even better and stronger than it was when they received it. There is a profound implication of trust in the idea of stewardship—“steward,” “trustee,” and “curator” are in many ways comparable terms.
Being a steward means accepting responsibility, but fulfilling this responsibility is never easy. Stewards lack an adequate job description or work plan. Stewards must decide what, exactly, they are stewards of. Should all parts of the inheritance be preserved? What parts should be improved upon or, perhaps, discarded? And modern stewards may have a variety of “masters.” These can be individuals, institutions, or generations. Nevertheless, it is the desire to fulfill these difficult responsibilities of stewardship that motivates a great deal of the activity we call philanthropy.

Bibliography
Bremner, Robert. 1988. American Philanthropy . Second Edition. Chicago: University of Chicago Press.
Calvin, John. 1950. Institutes of the Christian Religion . Edited by John T. McNeill. Translated by Ford Lewis Battles. Philadelphia: Westminster Press.
Carnegie, Andrew. 1962. The Gospel of Wealth and Other Timely Essays . Edited by Edward C. Kirkland. Cambridge, MA: Harvard University Press.
Conway, Daniel. 1995. “Faith Versus Money: Conflicting Views of Stewardship and Fundraising in the Church.” New Directions for Philanthropic Fundraising 7 (Spring): 71-77.
Dubos, René. 1972. A God Within. New York: Charles Scribner's Sons.
Maquarrie, John. 1967. Dictionary of Christian Ethics . Philadelphia: Westminster Press.
Moody, Michael. 2002. “Caring for Creation: Environmental Advocacy by Mainline Protestant Organizations.” Pp. 237-264 in The Quiet Hand of God: Faith-Based Activism and the Public Role of Mainline Protestantism. Edited by Robert Wuthnow and John Evans. Berkeley: University of California Press.
Nash, Roderick F. 1989. The Rights of Nature: A History of Environmental Ethics . Madison, WI: University of Wisconsin Press.
Oxford English Dictionary . 19XX. Oxford: Oxford University Press.
Passmore, John. 1974. Man's Responsibility for Nature: Ecological Problems and Western Traditions. New York: Charles Scribner's Sons.
Schervish, Paul G., Platon E. Coutsoukis, and Ethan Lewis. 1994. Gospels of Wealth: How the Rich Portray Their Lives . Westport, CT: Praeger Publishers.
The Bible. 1971. Revised Standard Version. New York: American Bible Society.
Wesley, John. 1991. “The Good Steward.” Pp. XXX-XXX in John Wesley's Sermons: An Anthology . Edited by Albert C. Outler and Richard P. Heitzenrater. Nashville, TN: Abingdon Press.
Learning to Give wishes to express appreciation to Dr. Dwight Burlingame, editor/author of Philanthropy in America, a Comprehensive Historical Encyclopedia of Philanthropy Information, and the publisher ABC-CLIO for graciously sharing this resource information with Learning to Give. The complete encyclopedia may be purchased through ABC-CLIO. | 2026-01-26T01:54:38.759934 |
66,804 | 4.120847 | http://en.wikipedia.org/wiki/Thrust | ||This article needs additional citations for verification. (April 2009)|
Thrust is a reaction force described quantitatively by Newton's second and third laws. When a system expels or accelerates mass in one direction, the accelerated mass will cause a force of equal magnitude but opposite direction on that system. The force applied on a surface in a direction perpendicular or normal to the surface is called thrust.
A fixed-wing aircraft generates forward thrust when air is pushed in the direction opposite to flight. This can be done in several ways including by the spinning blades of a propeller, or a rotating fan pushing air out from the back of a jet engine, or by ejecting hot gases from a rocket engine. The forward thrust is proportional to the mass of the airstream multiplied by the difference in velocity of the airstream. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or using a thrust reverser on a jet engine. Rotary wing aircraft and thrust-vectoring V/STOL aircraft use engine thrust to support the weight of the aircraft, and vector this thrust fore and aft to control forward speed.
Birds normally achieve thrust during flight by flapping their wings.
A motorboat generates thrust (or reverse thrust) when the propellers are turned to accelerate water backwards (or forwards). The resulting thrust pushes the boat in the opposite direction to the sum of the momentum change in the water flowing through the propeller.
A rocket is propelled forward by a thrust force equal in magnitude, but opposite in direction, to the time-rate of momentum change of the exhaust gas accelerated from the combustion chamber through the rocket engine nozzle. This is the exhaust velocity with respect to the rocket, times the time-rate at which the mass is expelled, or in mathematical terms:

$$T = v \frac{dm}{dt}$$

where:
- T is the thrust generated (force)
- dm/dt is the rate of change of mass with respect to time (mass flow rate of exhaust);
- v is the speed of the exhaust gases measured relative to the rocket.
For vertical launch of a rocket the initial thrust must be more than the weight.
Each of the three Space Shuttle Main Engines could produce a thrust of 1.8 MN, and each of the Space Shuttle's two Solid Rocket Boosters 14.7 MN, together 29.4 MN. Compare with the mass at lift-off of 2,040,000 kg, hence a weight of 20 MN.
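As a quick check using only the figures quoted above (and taking g ≈ 9.81 m/s²), the total lift-off thrust comfortably exceeds the weight:

```latex
W = mg \approx 2.04\times10^{6}\,\mathrm{kg}\times 9.81\,\mathrm{m\,s^{-2}} \approx 20\ \mathrm{MN},
\qquad
T_{\text{total}} = 3(1.8\ \mathrm{MN}) + 2(14.7\ \mathrm{MN}) = 34.8\ \mathrm{MN} > W.
```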
By contrast, the simplified Aid for EVA Rescue (SAFER) has 24 thrusters of 3.56 N each.
In the air-breathing category, the AMT-USA AT-180 jet engine developed for radio-controlled aircraft produces 90 N (20 lbf) of thrust. The GE90-115B engine fitted on the Boeing 777-300ER, recognized by the Guinness Book of World Records as the "World's Most Powerful Commercial Jet Engine," has a thrust of 569 kN (127,900 lbf).
Thrust to power
The power needed to generate thrust and the force of the thrust can be related in a non-linear way. In general, $P^2 \propto T^3$. The proportionality constant varies, and can be solved for a uniform flow:

$$P^2 = \frac{T^3}{2 \rho A}$$

Note that these calculations are only valid when the incoming air is accelerated from a standstill - for example when hovering.
The inverse of the proportionality constant, the "efficiency" of an otherwise-perfect thruster, is proportional to the area of the cross section of the propelled volume of fluid ($A$) and the density of the fluid ($\rho$). This helps to explain why moving through water is easier and why aircraft have much larger propellers than watercraft do.
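A short numerical sketch of the relation above (the thrust value and disc areas are made-up example numbers) shows how a larger swept area, or a denser fluid, lowers the ideal power needed to produce the same static thrust:

```python
import math

def static_power(thrust_n, area_m2, density=1.225):
    """Ideal power (W) to produce a given static thrust with a uniform flow,
    from P = sqrt(T^3 / (2 * rho * A)); a real thruster always needs more."""
    return math.sqrt(thrust_n**3 / (2.0 * density * area_m2))

thrust = 10_000.0  # N, arbitrary example value
for area in (2.0, 20.0, 80.0):  # m^2: small propeller vs. large rotor disc
    print(f"A = {area:5.1f} m^2 -> P = {static_power(thrust, area) / 1000:6.1f} kW")
```

Re-running the same numbers with the density of water (about 1000 kg/m³) in place of air gives a far smaller power figure, which is the sense in which moving through water is "easier" for a thruster of a given size.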
Thrust to propulsive power
A very common question is how to contrast the thrust rating of a jet engine with the power rating of a piston engine. Such a comparison is difficult, as these quantities are not equivalent. A piston engine does not move the aircraft by itself (the propeller does that), so piston engines are usually rated by how much power they deliver to the propeller. Except for changes in temperature and air pressure, this quantity depends basically on the throttle setting.
Now, a jet engine has no propeller. So let's find out the propulsive power of a jet engine from its thrust. Power is the force (F) it takes to move something over some distance (d) divided by the time (t) it takes to move that distance:

$$P = \frac{F \cdot d}{t}$$
In case of a rocket or a jet aircraft, the force is exactly the thrust produced by the engine. If the rocket or aircraft is moving at about a constant speed, then distance divided by time is just speed, so power is thrust times speed:

$$P = T \cdot v$$
This formula looks very surprising, but it is correct: the propulsive power (or power available ) of a jet engine increases with its speed. If the speed is zero, then the propulsive power is zero. If a jet aircraft is at full throttle but is tied to a very strong tree with a very strong chain, then the jet engine produces no propulsive power. It certainly transfers a lot of power around, but all that is wasted. Compare that to a piston engine. The combination piston engine–propeller also has a propulsive power with exactly the same formula, and it will also be zero at zero speed –- but that is for the engine–propeller set. The engine alone will continue to produce its rated power at a constant rate, whether the aircraft is moving or not.
Now, imagine the strong chain is broken, and the jet and the piston aircraft start to move. At low speeds:
The piston engine will have constant 100% power, and the propeller's thrust will vary with speed
The jet engine will have constant 100% thrust, and the engine's power will vary with speed
This shows why one cannot compare the rated power of a piston engine with the propulsive power of a jet engine – these are different quantities (even if the name "power" is the same). There isn't any useful power measurement in a jet engine that compares directly to a piston engine rated power. However, instead of comparing engine performance, the gross aircraft performance as complete systems can be compared using first principle definitions of power, force and work with the requisite considerations of constantly changing effects like drag and the mass (of the fuel) in both systems. There is of course an implicit relationship between thrust and their engines. Thrust specific fuel consumption is a useful measure for comparing engines.
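The contrast can be made numerical with invented engine figures: a jet held at constant thrust delivers propulsive power P = T·v that grows with speed, while a piston engine held at constant shaft power drives a propeller whose available thrust falls roughly as T = P/v.

```python
# Hypothetical engines, chosen only to illustrate the trend:
jet_thrust = 20_000.0        # N of constant thrust from a jet engine
piston_power = 1_500_000.0   # W of constant shaft power to an ideal propeller

for v in (50.0, 100.0, 200.0, 300.0):    # airspeed in m/s
    jet_power = jet_thrust * v            # propulsive power of the jet, P = T * v
    prop_thrust = piston_power / v        # thrust from the ideal propeller, T = P / v
    print(f"v = {v:5.0f} m/s   jet P = {jet_power / 1e6:4.1f} MW   prop T = {prop_thrust / 1e3:5.1f} kN")
```

The jet's "power" and the piston engine's "power" in such a table are different quantities, which is exactly why quoting a single power number does not allow a direct comparison.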
See also
- Aerodynamic force
- Astern propulsion
- Gimballed thrust, the most common thrust system in modern rockets
- Stream thrust averaging
- Thrust-to-weight ratio
- Thrust vectoring
- Tractive effort | 2026-01-19T08:13:22.497203 |
475,031 | 3.731592 | http://www.conservationmagazine.org/2008/07/circuitous-routes/ | Circuit theory guides wildlife corridor design
Brad McRae was studying cougar populations when he ran into a problem. He knew the populations’ genetic makeup, and he had good maps of the cats’ habitat. But he couldn’t explain how the landscape influenced the way the animals—and, in turn, their genes—traveled between groups.
The search for a solution led McRae, now a biologist at the National Center for Ecological Analysis and Synthesis, to his past life as an electrical engineer. He had a hunch that the way animals travel through a landscape might be similar to how electricity moves across circuits. If that were the case, circuit theory would help explain how genes disperse.
Big cats and tiny circuits don’t have much in common, but McRae was on to something. It turns out circuit theory can predict gene flow better than biologists’ standard methods. Even better, it could be an important new tool for designing wildlife corridors, which help animal populations stay connected when human development encroaches on their habitat.
To understand how McRae’s method works, think of two patches of habitat as nodes in an electrical circuit. Animals and plants (or, at the root of it, their genes) are the currents that flow between the nodes. To get from node to node, electric currents take as many paths as possible. As McRae’s reasoning goes, so do genes in the wild. If there’s only one resistor—a single path—the current won’t flow as well. Better for there to be several resistors wired in parallel. In the parlance of conservation, the resistors are corridors, the links between islands of prime habitat. They help plants and animals avoid the problems that can affect small populations, in part by helping them move in search of food and mates. “It’s like a power grid,” says McRae. “If you have more corridors, the network is more robust. Then, if you lose one, the whole thing doesn’t go down.”
But identifying the best corridors to preserve has been a huge challenge, and it’s hard to predict the path any one organism will travel. Meanwhile, the dollars needed to protect corridors are scarce, leaving little room for guesswork.
To hear McRae tell it, though, the approaches now guiding corridor design are too narrow. Traditionally, wildlife managers have relied on a method of analysis called least-cost path (LCP). To predict how organisms might travel, LCP divides a landscape into a collection of cells, then estimates how difficult—or costly—it would be to move between them. Highways and other barriers carry high costs, while intact habitats have low costs. But LCP has at least one serious limitation: it assumes organisms will choose an optimal, least-costly path.
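To make the least-cost idea concrete, here is a minimal sketch of Dijkstra's algorithm finding one cheapest route across a small grid of per-cell movement costs; the grid values are invented, not taken from any real landscape or from any corridor-design software:

```python
# Minimal least-cost-path (LCP) sketch: Dijkstra's algorithm over a grid of
# per-cell movement costs. Higher numbers stand in for barriers such as
# highways, lower numbers for intact habitat.
import heapq

COST = [
    [1, 1, 5, 1],
    [1, 9, 5, 1],
    [1, 9, 1, 1],
    [1, 1, 1, 1],
]

def least_cost(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    best = {start: grid[start[0]][start[1]]}
    queue = [(best[start], start)]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(queue, (new_cost, (nr, nc)))
    return float("inf")

print(least_cost(COST, (0, 0), (3, 3)))  # cost of the single "optimal" route
```

The sketch returns exactly one best route and ignores every alternative, which is the assumption McRae objects to.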
McRae argues that this doesn’t reflect how genes flow in the wild, where organisms usually have many paths to choose from. “LCP models don’t consider how alternative routes might contribute to connectivity,” he said.
McRae’s model does. It sees the landscape as a conductive surface and then calculates all possible pathways between the patches of good habitat. As a result, connectivity between those patches is affected by all corridors between them, not just the most obvious ones. And a test of the model, published in PNAS, showed it could explain gene flow patterns more accurately than the most popular conventional models, including LCP.
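By contrast, a circuit-theory view credits every route at once. The sketch below is a toy graph, not McRae's Circuitscape software: habitat patches are nodes, corridors are resistors, and the effective resistance between two patches is computed from the graph Laplacian. Adding a second, parallel corridor lowers the resistance, i.e., raises the modeled connectivity:

```python
# Toy circuit-theory connectivity sketch: habitat patches are nodes, corridors
# are resistors. Effective resistance between two patches drops when parallel
# corridors exist, which is the intuition behind resistance-based connectivity.
import numpy as np

def effective_resistance(n_nodes, edges, a, b):
    """edges: list of (i, j, resistance). Returns R_eff between nodes a and b."""
    laplacian = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r                      # conductance of this corridor
        laplacian[i, i] += g
        laplacian[j, j] += g
        laplacian[i, j] -= g
        laplacian[j, i] -= g
    l_pinv = np.linalg.pinv(laplacian)   # Moore-Penrose pseudoinverse
    e = np.zeros(n_nodes)
    e[a], e[b] = 1.0, -1.0
    return float(e @ l_pinv @ e)

# One corridor between patches 0 and 1 (resistance 10) ...
print(effective_resistance(2, [(0, 1, 10.0)], 0, 1))                               # 10.0
# ... versus the same corridor plus a longer alternative route via patch 2.
print(effective_resistance(3, [(0, 1, 10.0), (0, 2, 15.0), (2, 1, 15.0)], 0, 1))   # 7.5
```

Even though the detour through patch 2 is "worse" than the direct corridor, its mere existence improves the modeled connectivity, whereas a least-cost analysis would ignore it entirely.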
McRae and his collaborators are now using circuit theory to pinpoint critical linkages in landscapes, which they say could help make populations more robust when faced with habitat change or catastrophes such as wildfires. | 2026-01-25T14:59:08.720261 |
392,594 | 3.970504 | http://www.calvin.edu/academic/cas/gpa/textbk02.htm | Background: This is a chapter from a middle school geography textbook published in the midst of the war. It reviews Nazi actions after 1933 and summarizes the Nazi claim that Germany needed more space. This was a major element of Nazi propaganda. Late in the war, the Nazis were still planning what to do with the colonies they intended to regain.
The source: Reinhard Müller, Deutschland. Sechster Teil (Munich and Berlin: R. Oldenbourg Verlag, 1943), pp. 116-130.
People without Space
People and Living Space
[Table: births per 1,000 population, by year]
Despite the great decrease in birth rate, the German people, with a population density of 133.5 per square kilometer, remains a crowded people. Other peoples with a much smaller population density still have large colonial holdings that can accept their surplus population. Although it is true that the Four Year Plan has guaranteed our food supply and raw material needs, we lack the abundance that other nations have because of their colonies. [Here there is a pie chart that shows that England and its colonies are 27% of the earth, Russia 16%, the U.S.A. 7%, and Germany 0.6%.] Since we do not want to be a dying people, our goal is to increase our birth rate. But for a growing population we need space if we do not again want to see large amounts of German blood emigrating to other nations, as was the case before the World War. Each year, a large number of German emigrants left for foreign lands.
[Table: number of emigrants by year: 625,968; 1,342,423; 529,875; 279,645; 92,161; 567,293; 137,212]
Regaining our colonies will make it possible to send some of our surplus population to them. The great majority of settlers, however, will join those in the East who have returned to us and will find in the wide areas of the East a new home where German land can be put to German uses.
The growing German population of the last century led to increasing emigration, which drew German emigrants above all to the United States. In this "promised land for German emigrants," however, establishing German colonies was impossible. The German scholars and scientists, missionaries and merchants, farmers and craftsmen were working in distant lands under foreign flags for other peoples, among whom they were guests, and they mostly lost their German nature.
In the middle of the nineteenth century, energetic exploration began that above all was directed toward Africa. Among the leaders in the exploration of the “Dark Continent” were brave German explorers, who gradually awakened the interest of broad circles in Germany. Among them were Barth, Rohlfs, Nachtigal, and Weißmann.
In contrast to the other great colonial powers that divided the world, Germany began thinking late about gaining colonies. At the last minute, thanks to the intrepid work of Adolf Lüderitz in German Southwest Africa, Gustav Nachtigal in German Togo and German Cameroon, and Carl Peter in German East Africa, we gained colonies. The South Pacific Protectorate followed. All of these colonies were gained legally through treaties, not through theft, as was the case of some of the colonies of other nations.
The German colonies together covered nearly 3 million square kilometers, nearly six times the size of the German Reich before the World War (540,000 square kilometers), and had about 14 million inhabitants. The population was large enough to provide a work force, but small enough to leave sufficient room for white settlers. Thus, the colonies were able to absorb at least some of our emigrants before the war.
In only thirty years, Germany was able to develop our colonies to a significant extent. One measure is the foreign trade of the colonies, which rose from 102 million marks in 1903 to 520 million marks in 1913.
The outbreak of the war put an end to state support for developing the economic infrastructure of our colonies. Only shortly before 1914 had the colonies begun to produce a profit, and they would have been a valuable source of raw materials, food, and luxury items for us. They would also have given young Germans a place to turn their gaze. The development of the colonies, however, was stopped by the outbreak of the World War.
Wilson's 14 Points expressly stated that the Allies would not take control of Germany's colonies, but rather that Germany needed its colonies as a source of raw materials from the tropics and as areas for surplus population. Point 5 promised "a free, generous, and impartial resolution of all colonial claims." But Wilson's 14 Points were so distorted in the Dictates of Versailles that nothing of them was left. Without considering Germany's claims, its colonies were removed and declared mandates. In order to provide a facade of legality, lies were invented about our colonies, accusing Germany of unfit and incompetent administration, and of military imperialism. The blooming plantations, the great achievements in reducing disease, the many native schools, the good German transportation system, and German order clearly refute these lies. After the World War, these charges were laughed at even in the enemy nations. The Daily News wrote on 9 June 1926: "Little in the Treaty of Versailles was more unscrupulous than the moral excuses used to exclude Germany from the ranks of the colonial powers." The Manchester Guardian wrote on 15 June 1926: "The seizure of Germany's colonies and their division among the victorious Allies holds a prime position in the foolishness, treachery, and hypocrisy of the Treaty of Versailles." We must therefore resist those who wish to slander our colonial pioneers and denigrate their achievements. Our entire nation demands a formal apology for these lies about our colonies.
The World War rudely interrupted the development of our colonies and destroyed everything we had built. Our colonial troops put up legendary resistance, though they were not trained against an outside threat. Our colonies were stolen from us after the World War. But Germany has not given up its claim on the colonies. Point 3 of the [Nazi] party program stated: "We demand land and territories (colonies) to feed our people and accept our surplus population." At the Reich Party Rally of Honor on 9 September 1936, the Führer said in his remarks on the Four Year Plan: "Regardless, however, Germany cannot surrender its colonial demands. The right to life of the German nation is just as great as that of other nations." Although the American delegation to the peace conference explicitly recognized Germany's right to colonies, we were robbed of our colonial possessions without a "popular referendum" or the agreement of the natives. Germany would not have feared a referendum, for the loyalty of the natives to their German protectors was proven by their behavior during the World War. Since the colonies were acquired legally and since the enemy powers have not kept the promises they made before the Armistice, for reasons of simple justice we have a right to the return of the colonies.
We must demand the return of our colonies not only for reasons of honor and equality, but also for economic reasons and to provide space for emigration. Germany today does not have the ability to meet its needs for food and raw materials from its own holdings, as do the major colonial powers. And colonies would be a market for our industrial products and give German colonial pioneers and settlers the opportunity to develop large and yet unused parts of the earth. “Colonies are not an expression of imperialist thinking for the German nation, no outward sign of power and assertion, but rather they are a necessity of life.” Our demands for their return show the way from the Greater German Reich of the present to the larger Germany of the future.
Germany is in the heart of Europe. It is the bridge between the land to the East and the peninsulas and islands to the north, west and south. It is the intermediary between the cultures of the east and the west. The North Sea and the Baltic Seas are the way to the northern lands, the Alpine rivers open the way to sunny Italy. Germany shares the characteristics of all the surrounding land areas. It has always had more or less close relations with the areas around it. Its climate is not uniform. It is influenced both by the Atlantic Ocean and the Gulf Stream to the west and the great land masses to the east. Its rivers flow into the sea to the north, and therefore provide access to the seas. The Danube provides access to the countries in the southeast of Europe and to the Black Sea. The Mediterranean coast is only 80 kilometers from its southern border.
The German people has always had close cultural and economic relations with its neighbors. Its role as a cultural intermediary is evident from the beginning. Especially during the Middle Ages, the influence extended far into the east. At the invitation of foreign counts, thousands and thousands of Germans moved east, bringing German culture as far as the Volga River. As a result of its central position, Germany also had an influence to the west in the 16th and 17th centuries, and to some degree in the 19th as well. Its central position made Germany the dominant merchant power in Europe during the Hansa period. In this age of transportation, important rail and air routes run through Germany. They connect the east to the west and the north to the south, making Germany one of the most important nations in the modern transportation system. The central position also makes Germany a leader in world commerce.
As a consequence of its central location, Germany has long borders and has more neighbors than any other nation. That also leads to close political contacts with its neighbors. Its back is nowhere free. The danger of encirclement was always great. Many times its western neighbors succeeded in forcing Germany into a two-front war. Since Europe’s important transportation routes pass through Germany, its neighbors have always had an interest in keeping Germany disunited, since that gave them a stronger influence than they would have over a united Germany.
The World War showed how grave a danger Germany's central position is. Germany was surrounded by enemies. All access to the seas and to neutral countries was blocked, which allowed for a successful hunger blockade. Though the German army fought bravely for four years against a multitude of foes, the lack of food and raw materials finally weakened the homeland and forced the nation to capitulate. However, the enemy's hopes of a fragmented Reich were not fulfilled.
Surrounded as it is by other nations, Germany cannot lead a comfortable life. We Germans must be constantly alert to deal with the challenges brought about by our central location. Only a strong and united people can do that. The National Socialist government drew the necessary conclusions from Germany's unfavorable central location, and took the steps necessary to ensure that Germany will never again find itself in a situation similar to that of the end of the World War.
Germany borders on the Baltic and North Seas. The North Sea provides access to the oceans. The North Sea Coast, however, is considerably shorter than the Baltic Coast. The Baltic Sea provides access to all the other countries bordering on it, and it is difficult to interfere with shipping on it. The North Sea, however, can be blockaded by England. This is particularly evident from the English blockade during the World War. To keep Germany from the world’s oceans, England did everything it could to bring the opposing coast in the hands of smaller nations. It brought about the separation of Belgium and the Netherlands into two states, which it could influence since both depended on England to protect their large colonial possessions. This also removed the mouth of the Rhine from German control, and reduced the length of the German coast. The British influence on Norway was just as great, since it needed British help to protect its large merchant fleet. England was thus able to completely block Germany’s access to the seas. The leadership of National Socialist Germany, however, understood how to wage the present war in a way that rendered England’s blockade ineffective, and made it impossible to block Germany’s routes to the seas.
Germany’s access to the seas is weakened by the fact that it is on two almost entirely separated seas, the North and the Baltic Seas. The route between the two is not in German, but in Danish hands. Germany attempted to remedy this by building the Kiel Canal. The canal was placed under international control after the World War, which considerably reduced its value. Here too the National Socialist government broke the chains and freed the canal from foreign influence.
After German soldiers had fought against a host of enemies for four years, Germany had to lay down its arms at the end of 1918. Two German ministers signed the so-called "Peace Treaty of Versailles" on 28 June 1919 in the Hall of Mirrors at Versailles. In reality, it was an imposed peace. Hate and envy wanted to weaken and destroy Germany. Without regard to those living there, large parts of Germany were taken from it. Austria, too, lost territory, particularly the Sudetenland, and was prohibited from uniting with Germany. The theft of German blood and territory was intended to weaken the center, Europe's heart, and destroy Germany as a great power.
At the cost of the Reich, the neighboring countries were given new advantages. The borders were often so senseless that cities lost their surrounding hinterlands. Important railroad stations were taken from the Reich. Roads and train tracks ran alternately through German and foreign territory. Normally river borders run along the middle of the river, but the East Prussian Weisel River border ran along its German bank, with the result that five German villages became Polish territory, with no bridge at all connecting them to the rest of Polish territory. Many such errors in drawing the borders were made.
Besides the loss in blood and territory, Germany also suffered enormous economic losses through the dictates of Versailles. Germany received no compensation for the government possessions it lost (estates, forests, railway tracks, etc.). All merchant ships larger than 1600 tons and half of all those between 1000 and 1600 tons had to be surrendered. Although the English hunger blockade was not lifted, Germany was compelled to deliver large quantities of livestock and large amounts of coal annually to France, Belgium, and Italy. Germany also had to deliver large amounts of coke, benzene, tar, dyes, medicines, and other chemical products. The enemy powers took over all long-distance cables. With the surrendered territories we lost the iron mines of Lorraine, the potash and oil of Alsace, and numerous coal, zinc, and lead mines in Upper Silesia. Important industrial areas were lost with Lorraine and Upper Silesia. In northwest Schleswig Holstein and the eastern regions we lost important agricultural regions. Their loss reduced our ability to grow sufficient food.
The restrictions the enemy put on our army and navy limited our government’s power, forcing us to national impotence.
After years of struggle, our Führer Adolf Hitler took power on 30 January 1933. The National Socialist German Workers’ Party (NSDAP) became the political movement of the entire nation. In place of political fragmentation, there came a united front of the entire people. The goal of the Führer’s policies was to eliminate the dictate of Versailles, to restore Germany’s honor, and to strengthen the Reich.
Systematic policies encouraged German agriculture and guaranteed sufficient food supplies. Unemployment was ended through a comprehensive job creation program. Political equality was gained by encouraging generous living and working conditions.
The Reich left the League of Nations, freeing itself of foreign ties that hindered its new strength. The united front of its enemies did not dare to resist Germany by hindering the return of the Saar in March 1935. Soon after, the Führer declared the military autonomy of the Reich and introduced universal military service. All available means were used to build the army into a powerful force on which the nation can rely. Under the lead of Reich Marshal Hermann Göring, the Luftwaffe was built up. In the spring of 1936, a year later, German troops could return to the Rhineland. The demilitarized zone along the western border of the Reich no longer existed. Germany could once again protect its borders. On 30 January 1937, the Führer formally withdrew the signature under Paragraph 231 of the dictate of Versailles.
Strong powers stood against the Reich after the Führer's takeover of power. But a new encirclement was impossible. Italy, which had also been disappointed by the Treaty of Versailles, was Germany's important partner. The relationship between Italy and Germany improved from year to year, and was deepened by visits by Mussolini to Germany and by the Führer to Italy. The common interests of both "young states" over against those of the presumed winners of Versailles strengthened the Berlin-Rome Axis, which shattered the ring of encirclement. The Axis partners supported Nationalist Spain in its struggle for freedom. Germany and Italy recognized General Franco's government on 18 November 1936. The victory of young Spain relieved the pressure on Germany's western border.
On 25 November 1936, the German-Japanese Anti-Komintern Treaty brought the young, advancing Japan into the Axis. It too was struggling for its right to life. Italy signed the Anti-Komintern Treaty on 6 November 1937 and followed Germany's example by withdrawing from the League of Nations on 11 November. On 27 September 1940, Germany, Italy and Japan signed the Three Power Treaty, and were later joined by Hungary, Rumania, Slovakia, Bulgaria, and Croatia. The dictate of Versailles was now rendered void. The ring around Germany could no longer be closed.
Until the National Socialist takeover, the German economy followed the principles of economic liberalism, which held that a nation’s economy could develop irrespective of its natural economic foundations. A smoothly functioning world economy would provide nations poor in raw materials with what they needed, industrial nations would receive foodstuffs, and agricultural nations would receive industrial goods. A world empire like Britain’s could in fact do this, because it controlled everything that it needed. It could endure the destruction of its agricultural class in favor of a one-sided industrialization. Germany, however, did not have such resources at its disposal, and became dependent both politically and economically on foreign powers.
Our development into an industrial state dependent on world markets was intensified by the economic losses resulting from the dictates of Versailles. The dependence on world capital, controlled by Jewry, became intolerable. After seeming prosperity, there were 7 million unemployed in 1932. The economy threatened to collapse.
At the moment of greatest need, Adolf Hitler came to power and freed the state and the people step by step from the chains of Versailles. He won the independence of Germany's economy. The foundations of Germany's economy are now German labor and German soil. Each worker is in the right place, and German soil and its farmers are fully used. The state and our citizens are free of foreign influence. The final goal is to secure Germany's food supplies and raw materials, from which German military freedom grows.
If the National Socialist economic plan was to be successful in reviving the German economy, all participants in economic life had to be convinced of National Socialist economic thinking. In the economy too, the guiding principle had to be: “The common good comes before the individual good.” There could no longer be conflict between the various branches and forces of the economy. “All elements of the nation serve a higher goal, the prosperity of the nation and the well-being of the people. That is also true of the economy.” (Robert Ley) All branches of the economy must be cared for. Workers and employers must be treated fairly, the economy must be managed, and the economic enterprises must be distributed around German territory. The “Reich Agricultural Trust” and the “German Workers’ Front” organize all those involved in the economy, both workers and managers. Together they organize the entire factory, realizing one of the cardinal goals of the NSDAP: overcoming class struggle. National Socialist economic thinking condemns profit for the sake of profit. The success of an undertaking is evaluated solely on its contribution to the whole nation. “The goal of any economy must be to strengthen the people’s struggle for existence in its battles with foreign powers and inner conflicts.” (Rosenberg)
The principle of putting the economy in service of the entire people is made possible by a wide-ranging economic plan, as evidenced by the Four Year Plan. This vast plan incorporates all German economic capacities. It aims to guarantee our needs for raw materials and foodstuffs such that they are secure even during emergencies. "In the long run, it is intolerable to be dependent on the success or failure of each year's harvest." (Adolf Hitler) Through increasing agricultural production, the plan seeks to ensure food supplies. It seeks to intensify the use of our territory, and expand mining. German scientists created new materials that could replace old raw materials, and new factories use raw materials that formerly could not be used. The Four Year Plan has to a large extent made the German economy free of foreign countries.
The Four Year Plan demanded the full energies of all Germans. The population policies of past decades made the shortage of workers evident. It was necessary to take measures to “maintain the ability of working Germans.” The Office for Popular Health of the German Workers Front and the German medical community helped by inspecting factories and providing medical care for workers. Children in schools and members of the Hitler Youth also received medical examinations, as did small children through the “Mother and Child” organization. In addition, the purity of blood and the health of the German people were encouraged through the “Law for the Protection of German Blood and German Honor” and the “Law to Prevent Hereditary Illness.” “A higher level of humanity depends not on the state, but on the race that creates that state.” (Adolf Hitler) The state is only a means to an end, and this end is in preserving and advancing the race. The goal of the state is therefore to create a healthy and growing people.
A people with no children is a dying people. The National Socialist government took on the task of maintaining and increasing the German population, and had an educational impact on the whole nation in this regard. Each German must understand that he is nothing, the people everything. Healthy children are the pride of each family. Since not every citizen will have children, each couple must have an average of four children to ensure the continuation of the people. The state supports families with numerous healthy children and gives them every kind of protection. Marriage loans make it easier for young people to marry. Child support, reductions in income tax, reductions in school fees for additional children, and reduced railway ticket prices reduce the burden on families with many children. Building healthy homes and developments at the edge of cities encourages more healthy children. Families with their own home on their own ground, no matter how small it may be, generally have more children than families that live in rented apartments.
Insurance also encourages National Socialist population policies.
By 1938 National Socialist foreign policy had reached the point where the Führer could finally reorganize Central Europe. The Western powers claimed the Mediterranean, and Germany's alliance with Italy guaranteed its freedom of commerce. Now it was time for the longed-for union with Austria. Early on the morning of 12 March 1938, German troops marched into Austria, requested by Minister Seiß-Inquart, the only minister remaining in office after the resignation of Schuschnigg's government. On 13 March, Austrian and German law declared: "Austria is a state of the German Reich." On 15 March, the Führer declared in Vienna: "The old Eastern Reaches of the German people from now on is the newest bulwark of the German Reich." That was the geopolitical task of the Eastern Reaches in the Greater German Reich.
This union eliminated “one of the great injustices of the Treaties of Versailles and St. Germain.” The development of central Europe took a major step forward. Germany now bordered directly on Italy. The door to southeastern Europe was in German hands. An important pillar of the encirclement plan was broken. The Danube and Alpine regions became an important connection between the Reich and the nations to the southeast. The economic relations between Germany and those nations will become close.
The incorporation of Austria resulted in borders that were a great danger to the Reich, particularly the border with Czechoslovakia, which made itself a lackey of the encirclement policies of the Western Powers. Also, the spiritual, political and economic pressure on the 3 1/2 million Germans grew, becoming over time unbearable.
A solution to the Sudeten German question became increasingly urgent. First, however, it was necessary to protect our western border to keep the Reich safe from attack from that direction. The Führer ordered the building of the West Wall, an insurmountable defensive barrier. This enormous endeavor required huge amounts of material, labor, and money. The German coast was protected by new fortifications and our strong, battle-ready navy.
As the Czech terror against the Germans in the Sudetenland grew, the Führer mobilized the German military. The Führer, Mussolini, Chamberlain, and Daladier met in Munich, with the result that German troops marched into the Sudetenland. The Sudeten area was occupied by German troops at the beginning of October 1938. They were received by the population with jubilation. Again an injustice was eliminated, and a piece of German territory returned to the Reich. There were now over 80 million Germans within its borders.
The Czechoslovakian state did not change its hostile position toward Germany after the return of the Sudetenland, and continued its persecution of Germans remaining in its territory. Its army was a threat to the security of the Reich, particularly with regard to air attacks.
The Führer drew the correct conclusion from this geopolitical situation. The collapse of the Czechoslovakian state made matters easier for him. After Slovakia put itself under the protection of the German Reich, President Hacha also put the fate of the Czech people in the hands of the Führer on 14 March 1939. German troops marched in on 15 March, and on 16 March the Führer announced in Prague the establishment of the Protectorate of Bohemia and Moravia.
Both lands had belonged to the German Reich for a millennium. Now they were again a part of it. German culture and German ability gave these lands their nature, and will continue to do so. The Germans living in pockets of the Protectorate became citizens of the German Reich.
Under the pressure of new political conditions, Lithuania voluntarily returned the Memel District to the Greater German Reich in March 1939. This area, too, is free of the foreign rule that its brave inhabitants long endured. On 23 March, German troops entered Memel, and the Führer himself welcomed its citizens back to the Reich.
Peacefully and without bloodshed, the Führer had remedied a large part of the injustice of Versailles. But it proved impossible to work things out peacefully with Poland. Poland felt it had the support of England and France, and was not willing to negotiate a settlement of the Danzig Corridor question. In the war forced upon us, Poland was defeated in an 18-day campaign. It was completely dissolved, after about 60,000 ethnic Germans had fallen victim to Polish incitement and murder. Danzig was freed and returned with West Prussia to the German state. Posen too again became part of the German Reich. It became the Gau Wartheland after the incorporation of neighboring areas with a Germanic population. Germans will here establish order in place of the “Polish economy.” The Führer ordered the resettlement of ethnic Germans from the Baltic areas, the Cholm-Lublin district, and from Wolhynien, Galicia, Buchenland, Bessarabia, Dobrudscha, and Lithuania. These ethnic Germans in the Wartegau and in Danzig-West Prussia will bring the land to new prosperity, and form a living wall to protect the German east. The Poles in these areas were resettled to other remaining Polish districts, and the Russians brought their citizens back from the German districts. As a result, the borders between Germanic and Slavic regions will in the future be clear.
The eastern part of Poland went to the Soviet Union under the border and friendship treaty. Both states had their areas of influence, and agreed to settle all Eastern European questions between themselves. The remainder of Poland was incorporated into the German Reich as the Generalgouvernement. After the war with the Soviet Union began, and large areas were occupied by our troops, Eastern Galicia and Lemberg were also added to the German Reich and made part of the Generalgouvernement.
Germany has now created a new order in the East. Together with Italy, it has also reordered southeastern Europe. Now it is engaged in a decisive battle with England and France and their vassals. Great battles have occurred, great victories won. Norway, the Netherlands, Belgium, Luxembourg, and France are in German hands. Alsace and Lorraine have been reincorporated into the German Reich. Since summer 1940 [sic], we have also been fighting Soviet Russia, which in violation of all the treaties threatened Germany. The outcome is not yet clear. Two worlds are facing each other: One is the world of the young rising states, the other the world of the old declining states and Bolshevism. The German people have an unshakable will for victory. Their faith in their Führer Adolf Hitler is also unshakable. A united Germany will win the victory, and finish its fight for a new order in Europe.
| 2026-01-24T06:18:22.113073 |
518,258 | 3.534932 | http://www.absoluteastronomy.com/topics/Applied_Behavior_Analysis | Applied behavior analysis
Applied behavior analysis (ABA) is a science that involves using modern behavioral learning theory to modify behaviors. Behavior analysts reject the use of hypothetical constructs and focus on the observable relationship of behavior to the environment. By functionally assessing the relationship between a targeted behavior and the environment, the methods of ABA can be used to change that behavior. Research in applied behavior analysis ranges from behavioral intervention methods to basic research that investigates the rules by which humans adapt and maintain behavior.
Areas of application
ABA-based interventions are best known for treating people with developmental disabilities, most notably autism spectrum disorders. However, applied behavior analysis contributes to a full range of areas including: AIDS prevention, conservation of natural resources, education, gerontology, health and exercise, industrial safety, language acquisition, littering, medical procedures, parenting, seatbelt use, severe mental disorders, sports, and zoo management and care of animals.
ABA is defined as the science in which the principles of the analysis of behavior are applied systematically to improve socially significant behavior, and in which experimentation is used to identify the variables responsible for change in behavior. It is one of the three fields of behavior analysis; the other two are behaviorism, or the philosophy of the science, and the experimental analysis of behavior, or basic experimental research.
Baer, Wolf, and Risley's 1968 article is still used as the standard description of ABA. It describes the seven dimensions of ABA: application; a focus on behavior; the use of analysis; and its technological, conceptually systematic, effective, and general approach.
Baer, Wolf, and Risley's seven dimensions are:
- Applied: ABA focuses on areas that are of social significance. In doing this, behavior scientists must take into consideration more than just the short-term behavior change, but also look at how behavior changes can affect the consumer, those who are close to the consumer, and how any change will affect the interactions between the two.
- Behavioral: ABA must be behavioral, i.e.: behavior itself must change, not just what the consumer says about the behavior. It is not the goal of the behavior scientists to get their consumers to stop complaining about behavior problems, but rather to change the problem behavior itself. In addition, behavior must be objectively measured. A behavior scientist cannot resort to the measurement of non-behavioral substitutes.
- Analytic: The behavior scientist can demonstrate believable control over the behavior that is being changed. In the lab, this has been easy as the researcher can start and stop the behavior at will. However, in the applied situation, this is not always as easy, nor ethical, to do. According to Baer, Wolf, and Risley, this difficulty should not stop a science from upholding the strength of its principles. As such, they referred to two designs that are best used in applied settings to demonstrate control and maintain ethical standards. These are the reversal and multiple baseline designs. The reversal design is one in which the behavior of choice is measured prior to any intervention. Once the pattern appears stable, an intervention is introduced, and behavior is measured. If there is a change in behavior, measurement continues until the new pattern of behavior appears stable. Then, the intervention is removed, or reduced, and the behavior is measured to see if it changes again. If the behavior scientist truly has demonstrated control of the behavior with the intervention, the behavior of interest should change with intervention changes.
- Technological: This means that if any other researcher were to read a description of the study, that researcher would be able to "replicate the application with the same results." This means that the description must be very detailed and clear. Ambiguous descriptions do not qualify. Cooper et al. describe a good check for the technological characteristic: "have a person trained in applied behavior analysis carefully read the description and then act out the procedure in detail. If the person makes any mistakes, adds any operations, omits any steps, or has to ask any questions to clarify the written description then the description is not sufficiently technological and requires improvement."
- Conceptually Systematic: A defining characteristic concerns the interventions utilized: research must be conceptually systematic, using only procedures, and interpreting the results of those procedures, in terms of the principles from which they were derived.
- Effective: The application of these techniques must improve the behavior under investigation. Specifically, it is not the theoretical importance of the variable but rather its practical importance (social importance) that is essential.
- Generality: Behavior change should last over time, carry over to different environments, and spread to other behaviors not directly treated by the intervention. In addition, continued change in a specified behavior after the intervention for that behavior has been withdrawn is also an example of generality.
In 2005, Heward et al. added their belief that the following five characteristics should be added:
- Accountable: Direct and frequent measurement enables analysts to detect their success and failures to make changes in an effort to increase successes while decreasing failures. ABA is a scientific approach in which analysts may guess but then critically test ideas, rather than "guess and guess again." This constant revision of techniques, commitment to effectiveness and analysis of results leads to an accountable science.
- Public: Applied behavior analysis is completely visible and public. This means that there are no explanations that cannot be observed. There are no mystical, metaphysical explanations, hidden treatment, or magic. Thus, ABA produces results whose explanations are available to all of the public.
- Doable: ABA has a pragmatic element in that implementors of interventions can consist of a variety of individuals, from teachers to the participants themselves. This does not mean that ABA requires one simply to learn a few procedures, but with the proper planning, it can effectively be implemented by most everyone willing to invest the effort.
- Empowering: ABA provides tools to practitioners that allow them to effectively change behavior. By constantly providing visual feedback to the practitioner on the results of the intervention, this feature of ABA allows clinicians to assess their skill level and builds confidence in their technology.
- Optimistic: According to several leading authors, practitioners skilled in behavior analysis have genuine cause to be optimistic for the following reasons:
- Individual behavior is largely determined by learning and cumulative effects of the environment, which itself is manipulable
- Direct and continuous measurements enable practitioners to detect small improvements in performance that might have otherwise been missed
- The more a practitioner uses behavioral techniques with positive outcomes, the more optimistic they will become about future prospects of success
- The literature provides many examples of successfully teaching individuals previously considered unteachable.
Behavior is the activity of living organisms. Human behavior is the entire gamut of what people do, including thinking and feeling. Whether something qualifies as behavior can be determined by applying the Dead Man's test: if a dead man can do it, it is not behavior; if a dead man cannot do it, then it is behavior.
Behavior is that portion of an organism's interaction with its environment that is characterized by detectable displacement in space through time of some part of the organism and that results in a measurable change in at least one aspect of the environment. Often, the term behavior is used to reference a larger class of responses that share physical dimensions or function. In this instance, the term response indicates a single instance of that behavior. If a group of responses have the same function, this group can be classified as a response class. Finally, when discussing a person's collection of behavior, the term repertoire is used. It can either pertain specifically to a set of response classes that are relevant to a particular situation, or it can refer to every behavior that a person can do.
Operant behavior is that which is selected by its consequences. The conditioning of operant behavior is the result of reinforcement and punishment. Operant conditioning applies to voluntary responses, which an organism performs deliberately to produce a desirable outcome. The term operant emphasizes this point: the organism operates on its environment to produce some desirable result. For example, operant conditioning is at work when we learn that toiling industriously can bring about a raise or that studying hard results in good grades.
All organisms respond in predictable ways to certain stimuli. These stimulus–response relations are called reflexes. The response component of the reflex is called respondent behavior. It is defined as behavior which is elicited by antecedent stimuli. Respondent conditioning (also called classical conditioning) is learning in which new stimuli acquire the ability to elicit respondents. This is done through stimulus–stimulus pairing; for example, the stimulus (smell of food) can elicit a person's salivation. By pairing that stimulus (smell) with another stimulus (e.g., a light), the second stimulus can obtain the function of the first stimulus, given that the predictive relationship between the two stimuli is maintained.
The environment is the entire constellation of circumstances in which an organism exists. This includes events both inside and outside of an organism, but only real physical events are included. The environment consists of stimuli. A stimulus is an "energy change that affects an organism through its receptor cells."
A stimulus can be described:
- Formally, by its physical features.
- Temporally, by when it occurs with respect to the behavior.
- Functionally, by its effect on behavior.
Reinforcement is the most important principle of behavior and a key element of most behavior change programs. It is the process by which behavior is strengthened: if a behavior is followed closely in time by a stimulus and the future frequency of that behavior increases as a result, reinforcement has occurred. If the addition of a stimulus following the behavior serves as a reinforcer, this is termed positive reinforcement; if the removal of a stimulus serves as a reinforcer, this is termed negative reinforcement. There are multiple schedules of reinforcement that affect the future probability of behavior.
Punishment is a process by which a consequence that immediately follows a behavior decreases the future frequency of that behavior. Like reinforcement, a stimulus can be added (positive punishment) or removed (negative punishment). Broadly, there are three types of punishment: presentation of aversive stimuli, response cost, and time out. Punishment in practice can often result in unwanted side effects, and as such is typically used only after reinforcement-only procedures have failed to work. Unwanted side effects can include an increase in other unwanted behavior as well as a decrease in desired behaviors. Punishment is also associated in certain cases with an increased likelihood of aggression by the person. Other potential unwanted effects include escape and avoidance, emotional behavior, and behavioral contrast.
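The positive/negative and reinforcement/punishment distinctions combine into a simple two-by-two classification. Here is a minimal Python sketch; the function name, labels, and example scenarios are mine for illustration, not standard ABA tooling:

```python
# Classify an operant consequence along two axes:
#   stimulus_added: True if a stimulus was presented, False if it was removed
#   behavior_increased: True if the behavior's future frequency went up
def classify_consequence(stimulus_added: bool, behavior_increased: bool) -> str:
    kind = "reinforcement" if behavior_increased else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

# Invented examples:
print(classify_consequence(True, True))    # praise follows homework -> positive reinforcement
print(classify_consequence(False, True))   # chore removed after compliance -> negative reinforcement
print(classify_consequence(True, False))   # reprimand follows hitting -> positive punishment
print(classify_consequence(False, False))  # toy removed after tantrum -> negative punishment
```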
Extinction is the technical term to describe the procedure of withholding/discontinuing reinforcement of a previously reinforced behavior, resulting in the decrease of that behavior. The behavior is then said to be extinguished (Cooper et al.). Extinction procedures are often preferred over punishment procedures, which are frequently deemed unethical and in many states prohibited. Nonetheless, extinction procedures must be implemented with the utmost care by professionals, as they are generally associated with extinction bursts. An extinction burst is the temporary increase in the frequency, intensity, and/or duration of the behavior targeted for extinction. Other characteristics of an extinction burst include a) extinction-produced aggression, the occurrence of an emotional response to an extinction procedure often manifested as aggression; and b) extinction-induced response variability, the occurrence of novel behaviors that did not typically occur prior to the extinction procedure. These novel behaviors are a core component of shaping procedures.
Discriminated operant and three-term contingency
In addition to a relation being made between behavior and its consequences, operant conditioning also establishes relations between antecedent conditions and behaviors. This differs from the S–R formulation (If-A-then-B), replacing it with an AB-because-of-C formulation; in other words, the relation between a behavior (B) and its context (A) exists because of consequences (C). More specifically, the relationship between A and B is established by prior consequences that have occurred in similar contexts. This antecedent–behavior–consequence contingency is termed the three-term contingency. A behavior which occurs more frequently in the presence of an antecedent condition than in its absence is called a discriminated operant. The antecedent stimulus is called a discriminative stimulus (SD). The fact that the discriminated operant occurs only in the presence of the discriminative stimulus is an illustration of stimulus control. More recently, behavior analysts have been focusing on conditions that occur prior to the circumstances surrounding the current behavior of concern and that increase the likelihood of the behavior occurring or not occurring. These conditions have been referred to variously as "setting events," "establishing operations," and "motivating operations" by various researchers in their publications.
B.F. Skinner's classification system of behavior analysis has been applied to treatment of a host of communication disorders. Skinner's system includes:
- Tact – stimulus control as it enters the verbal domain; a verbal response evoked by a particular object or event, or by a property of an object or event.
- Mand – behavior under the control of motivating operations that is directly reinforced by the listener.
- Intraverbals – verbal behavior under the control of other verbal behavior.
- Autoclitics – verbal responses that modify the effect of the primary verbal operants on the listener.
For assessment of verbal behavior from Skinner's system, see the Assessment of Basic Language and Learning Skills (ABLLS), an educational tool used frequently with applied behavior analysis to measure the basic linguistic and functional skills of an individual with developmental delays or disabilities.
When measuring behavior, there are both dimensions of behavior and quantifiable measures of behavior. In applied behavior analysis, the quantifiable measures are a derivative of the dimensions. These dimensions are repeatability, temporal extent, and temporal locus.
Repeatability: response classes occur repeatedly throughout time, i.e., how many times the behavior occurs.
- Count is the number of occurrences in behavior.
- Rate/frequency is the number of instances of behavior per unit of time.
- Celeration is the measure of how the rate changes over time.
Temporal extent: this dimension indicates that each instance of behavior occupies some amount of time, i.e., how long the behavior occurs.
- Duration is the amount of time in which the behavior occurs.
Temporal locus: each instance of behavior occurs at a specific point in time, i.e., when the behavior occurs.
- Response latency is the measure of elapsed time between the onset of a stimulus and the initiation of the response.
- Interresponse time is the amount of time that occurs between two consecutive instances of a response class.
Derivative measures are unrelated to specific dimensions:
- Percentage is the ratio formed by combining the same dimensional quantities.
- Trials-to-criterion are the number of response opportunities needed to achieve a predetermined level of performance.
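To show how these quantities relate, here is a small Python sketch that derives several of the measures above from one observation session; the session length, stimulus time, and response times are invented for illustration:

```python
# Derive common ABA measures from one observation session.
# All numbers are invented; times are seconds from the start of the session.
SESSION_LENGTH = 600.0                      # 10-minute observation
STIMULUS_ONSET = 5.0                        # when the instruction was given
# Each response is (onset, offset) in seconds.
RESPONSES = [(12.0, 15.5), (80.0, 86.0), (140.0, 141.0), (300.0, 312.0)]

count = len(RESPONSES)                                        # repeatability
rate_per_min = count / (SESSION_LENGTH / 60.0)                # responses per minute
total_duration = sum(off - on for on, off in RESPONSES)       # temporal extent
latency = RESPONSES[0][0] - STIMULUS_ONSET                    # stimulus -> first response
interresponse_times = [                                       # temporal locus
    RESPONSES[i + 1][0] - RESPONSES[i][1] for i in range(count - 1)
]

print(f"count={count}, rate={rate_per_min:.1f}/min, duration={total_duration:.1f}s")
print(f"latency={latency:.1f}s, IRTs={interresponse_times}")
```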
In applied behavior analysis, all experiments should include the following:
- At least one participant
- At least one behavior (dependent variable)
- At least one setting
- A system for measuring the behavior and ongoing visual analysis of data
- At least one treatment or intervention condition
- Manipulation of the independent variable so that its effects on the dependent variable can be determined
- An intervention that will benefit the participant in some way
Functional behavior assessment (FBA)
Functional assessment of behavior provides hypotheses about the relationships between specific environmental events and behaviors. Decades of research has established that both desirable and undesirable behaviors are learned through interactions with the social and physical environment. FBA is used to identify the type and source of reinforcement for challenging behaviors as the basis for intervention efforts designed to decrease the occurrence of these behaviors.
Functions of behavior
The function of a behavior can be thought of as the purpose a behavior serves for a person. All behaviors serve a purpose. All behavior is communication.
Problem behaviors can serve the following functions for an individual:
- Pain or physical discomfort, e.g., toothache, stomach pain, fever.
- Access to attention, e.g., a child throws a toy in order to get mom's attention. (If this maladaptive behavior results in mom looking at the child and giving him lots of attention, even if she is saying "NO," he will be more likely to engage in the same behavior in the future to get mom's attention.)
- Access to escape, e.g., mom tells the child "Go clean up" and the child runs to the kitchen because s/he does not want to complete the task.
- Access to automatic reinforcement, e.g., the child flaps (a stereotypic, repetitive movement) in order to release feelings (excitement, frustration, etc.).
- Access to tangibles (e.g., activities, toys, edibles, etc.), e.g., the child hits mom because s/he wants the toy mom is holding.
- Sensory input, e.g., the child crashes into furniture or pushes people to gain sensory input.
- Setting factors, e.g., environmental or personal factors such as a large classroom, not sleeping the night before, or loud noises.
These can be remembered by the acronym SMEATArS.
We can describe behaviors in various ways, such as tantrums, noncompliance, inattention, aggression, etc.; however, all behavior can be classified as serving one or more of the functions above.
Function is identified in an FBA by identifying the type and source of reinforcement for the behavior of interest. Those reinforcers might be positive or negative social reinforcers provided by someone who interacts with the person, or automatic reinforcers produced directly by the behavior itself.
- Positive reinforcement – social positive reinforcement (attention), tangible reinforcement, and automatic positive reinforcement.
- Negative reinforcement – social negative reinforcement (escape), automatic negative reinforcement.
Function versus topography
Behaviors may look different but can serve the same function, and likewise behavior that looks the same may serve multiple functions. What the behavior looks like often reveals little useful information about the conditions that account for it. However, identifying the conditions that account for a behavior suggests what conditions need to be altered to change the behavior. Therefore, assessment of the function of a behavior can yield useful information with respect to intervention strategies that are likely to be effective.
FBA methods can be classified into three types:
- Functional (experimental) analysis
- Descriptive assessment
- Indirect assessment
Functional (experimental) analysis
A functional analysis is one in which antecedents and consequences are manipulated to indicate their separate effects on the behavior of interest. This type of arrangement is often called synthetic because it is not conducted in a naturally occurring context. However, research indicates that functional analysis done in a natural environment can yield similar or better results.
A standard functional analysis normally has four conditions (three test conditions and one control):
- Contingent attention
- Contingent escape
- Alone
- Control condition
While the above four conditions are the most widely used functional analysis experimental conditions, using the basic methodology of functional analysis (and experimental analysis in general) it is possible to arrange any combination of antecedents and consequences for behavior to determine what effect, if any, they have on a behavior.
- Advantages – it has the ability to yield a clear demonstration of the variable(s) that relate to the occurrence of a problem behavior. It serves as the standard of scientific evidence by which other assessment alternatives are evaluated. It represents the method most often used in research on the assessment and treatment of problem behavior.
- Limitations – the assessment process may temporarily strengthen or increase the undesirable behavior to gravely unacceptable levels or result in the behavior acquiring new unpleasant functions. Some behaviors may not be amenable to functional analysis (e.g., those that, albeit serious, occur infrequently). Functional analyses conducted in contrived settings may not detect the variable that accounts for the behavior's occurrence in the natural environment.
Descriptive assessment
As with functional analysis, descriptive functional behavior assessment utilizes direct observation of behavior; unlike functional analysis, however, observations are made under naturally occurring conditions. Therefore, descriptive assessments involve observation of the problem behavior in relation to events that are not arranged in a systematic manner.
There are three variations of descriptive assessment:
- ABC (antecedent–behavior–consequence) continuous recording – observer records occurrences of targeted behavior and selected environmental events in the natural routine.
- ABC narrative recording – data are collected only when behaviors of interest are observed, and the recording encompasses any events that immediately precede and follow the target behavior.
- Scatterplots – a procedure for recording the extent to which a target behavior occurs more often at particular times than others.
Indirect assessment
This method uses structured interviews, checklists, rating scales, or questionnaires to obtain information from persons who are familiar with the person exhibiting the behavior to identify possible conditions or events in the natural environment that correlate with the problem behavior. These methods are called "indirect" because they do not involve direct observation of the behavior, but rather solicit information based on others' recollections of the behavior.
- Advantages – some can provide a useful source of information in guiding subsequent, more objective assessments, and contribute to the development of hypotheses about variables that might occasion or maintain the behaviors of concern.
- Limitations – informants may not have accurate and unbiased recall of behavior and the conditions under which it occurred.
Conducting an FBA
Given the strengths and limitations of the different FBA procedures, FBA can best be viewed as a four-step process:
- The gathering of information via indirect and descriptive assessment.
- Interpretation of information from indirect and descriptive assessment and formulation of a hypothesis about the purpose of problem behavior.
- Testing of the hypothesis using a functional analysis.
- Developing intervention options based on the function of problem behavior.
Task analysis
Task analysis is a process in which a task is analyzed into its component parts so that those parts can be taught through the use of chaining: forward chaining, backward chaining, and total task presentation. Task analysis has been used in organizational behavior management, a behavior analytic approach to changing organizations. Behavioral scripts often emerge from a task analysis. Bergan conducted a task analysis of the behavioral consultation relationship and Thomas Kratochwill developed a training program based on teaching Bergan's skills. A similar approach was used for the development of microskills training for counselors. Ivey would later call this "behaviorist" phase a very productive one, and the skills-based approach came to dominate counselor training during 1970–90. Task analysis was also used in determining the skills needed to access a career. In education, Engelmann (1968) used task analysis as part of the methods to design the Direct Instruction curriculum.
Chaining
The skill to be learned is broken down into small units for easy learning. For example, a person learning to brush teeth independently may start with learning to unscrew the toothpaste cap. Once they have learned this, the next step may be squeezing the tube, etc.
For problem behavior, chains can also be analyzed and the chain can be disrupted to prevent the problem behavior. Some behavior therapies, such as dialectical behavior therapy, make extensive use of behavior chain analysis.
Prompting
A prompt is a cue or assistance to encourage the desired response from an individual. Prompts are often categorized into a prompt hierarchy from most intrusive to least intrusive. There is some controversy about what is considered most intrusive: physically intrusive versus hardest prompt to fade (i.e., verbal). In an errorless learning approach, prompts are given in a most-to-least sequence and faded systematically to ensure the individual experiences a high level of success. There may be instances in which a least-to-most prompt method is preferred. Prompts are faded systematically and as quickly as possible to avoid prompt dependency. The goal of teaching with prompts is to fade them towards independence, so that no prompts are needed for the individual to perform the desired behavior.
Types of prompts:
- Verbal prompts: Utilizing a vocalization to indicate the desired response.
- Visual prompts: A visual cue or picture.
- Gestural prompts: Utilizing a physical gesture to indicate the desired response.
- Positional prompt: The target item is placed closer to the individual.
- Modeling: Modeling the desired response for the student. This type of prompt is best suited for individuals who learn through imitation and can attend to a model.
- Physical prompts: Physically manipulating the individual to produce the desired response. There are many degrees of physical prompts. The most intrusive being hand-over-hand, and the least intrusive being a slight tap to initiate movement.
This is not an exhaustive list of all possible prompts. When using prompts to systematically teach a skill, not all prompts need to be used in the hierarchy; prompts are chosen based on which ones are most effective for a particular individual.
The overall goal is for an individual to eventually not need prompts. As an individual gains mastery of a skill at a particular prompt level, the prompt is faded to a less intrusive prompt. This ensures that the individual does not become overly dependent on a particular prompt when learning a new behavior or skill.
Thinning a reinforcement schedule
Thinning is often confused with fading. Fading refers to a prompt being removed, whereas thinning refers to the spacing of a reinforcement schedule getting larger. Some support exists that a 30% decrease in reinforcement can be an efficient way to thin. Schedule thinning is often an important and neglected issue in contingency management and token economy systems, especially when developed by unqualified practitioners (see professional practice of behavior analysis).
Generalization
Generalization is the expansion of a student's performance ability beyond the initial conditions set for acquisition of a skill. Generalization can occur across people, places, and materials used for teaching. Once a skill is learned in one setting, with a particular instructor, and with specific materials, it is taught in more general settings with more variation from the initial acquisition phase. For example, if a student has successfully mastered learning colors at the table, the teacher may take the student around the house or around his school to generalize the skill in these more natural environments with other materials. Behavior analysts have spent a considerable amount of time studying factors that lead to generalization.
Shaping
Shaping involves gradually modifying the existing behavior into the desired behavior. If the student engages with a dog by hitting it, then he or she could have their behavior shaped by reinforcing interactions in which he or she touches the dog more gently. Over many interactions, successful shaping would replace the hitting behavior with patting or other gentler behavior. Shaping is based on a behavior analyst's thorough knowledge of operant conditioning principles and extinction. Recent efforts to teach shaping have used simulated computer tasks.
One teaching technique found to be effective with some students, particularly children, is the use of video modeling (the use of taped sequences as exemplars of behavior). It can be used by therapists to assist in the acquisition of both verbal and motor responses, in some cases for long chains of behavior.
Interventions based on an FBA
Critical to behavior analytic interventions is the concept of a systematic behavioral case formulation with a functional behavioral assessment or analysis at the core. This approach should apply a behavior analytic theory of change (see Behavioral change theories). The formulation should include a thorough functional assessment, a skills assessment, a sequential analysis (behavior chain analysis), an ecological assessment, a look at existing evidence-based behavioral models for the problem behavior (such as Fordyce's model of chronic pain), and then a treatment plan based on how environmental factors influence behavior. Some argue that behavior analytic case formulation can be improved with an assessment of rules and rule-governed behavior. Some of the interventions that result from this type of conceptualization involve training specific communication skills to replace the problem behaviors, as well as specific setting, antecedent, behavior, and consequence strategies.
Efficacy in autism
ABA-based techniques are often used to treat autism, so much so that ABA itself is often mistakenly considered to be a therapy for autism. ABA for autism may be limited by diagnostic severity and IQ. The most influential and widely cited review of the literature regarding efficacy of treatments for autism is the National Research Council's book Educating Children with Autism (2001), which concluded that ABA was the best research-supported and most effective treatment for the main characteristics of autism. Some critics claimed that the NRC's report was an inside job by behavior analysts, but there were no board certified behavior analysts on the panel (which did include physicians, speech pathologists, educators, psychologists, and others). Recent reviews of the efficacy of ABA-based techniques in autism include:
- A 2007 clinical report of the American Academy of Pediatrics concluded that the benefit of ABA-based interventions in autism spectrum disorders (ASDs) "has been well documented" and that "children who receive early intensive behavioral treatment have been shown to make substantial, sustained gains in IQ, language, academic performance, and adaptive behavior as well as some measures of social behavior."
- Researchers from the MIND Institute published an evidence-based review of comprehensive treatment approaches in 2008. On the basis of "the strength of the findings from the four best-designed, controlled studies," they were of the opinion that one ABA-based approach (the Lovaas technique created by Ole Ivar Lovaas) is "well-established" for improving intellectual performance of young children with ASD.
- A 2009 review of psycho-educational interventions for children with autism whose mean age was six years or less at intake found that five high-quality ("Level 1" or "Level 2") studies assessed ABA-based treatments. On the basis of these and other studies, the author concluded that ABA is "well-established" and is "demonstrated effective in enhancing global functioning in pre-school children with autism when treatment is intensive and carried out by trained therapists."
- A 2009 paper included a descriptive analysis, an effect size analysis, and a meta-analysis of 13 reports published from 1987–2007 of early intensive behavioral intervention (EIBI, a form of ABA-based treatment with origins in the Lovaas technique) for autism. It determined that EIBI's effect sizes were "generally positive" for IQ, adaptive behavior, expressive language, and receptive language. The paper did note limitations of its findings, including the lack of published comparisons between EIBI and other "empirically validated treatment programs."
- In a 2009 systematic review of 11 studies published from 1987–2007, the researchers wrote "there is strong evidence that EIBI is effective for some, but not all, children with autism spectrum disorders, and there is wide variability in response to treatment." Furthermore, any improvements are likely to be greatest in the first year of intervention.
- A 2009 meta-analysis of nine studies published from 1987–2007 concluded that EIBI has a "large" effect on full-scale intelligence and a "moderate" effect on adaptive behavior in autistic children.
- In 2011, investigators from Vanderbilt University under contract with the Agency for Healthcare Research and Quality performed a comprehensive review of the scientific literature on ABA-based and other therapies for autism spectrum disorders; the ABA-based therapies included the UCLA/Lovaas method and the Early Start Denver Model. They concluded that "both approaches were associated with ... improvements in cognitive performance, language skills, and adaptive behavior skills." However, they also concluded that "the strength of evidence ... is low," "many children continue to display prominent areas of impairment," "subgroups may account for a majority of the change," there is "little evidence of practical effectiveness or feasibility beyond research studies," and the published studies "used small samples, different treatment approaches and duration, and different outcome measurements."
A 2009 systematic review and meta-analysis by Spreckley and Boyd of four 2000–2007 studies (involving a total of 76 children) came to different conclusions than the aforementioned reviews. Spreckley and Boyd reported that applied behavior intervention (ABI), another name for EIBI, did not significantly improve outcomes compared with standard care of preschool children with ASD in the areas of cognitive outcome, expressive language, receptive language, and adaptive behavior. In a letter to the editor, however, authors of the four studies meta-analyzed claimed that Spreckley and Boyd had misinterpreted one study comparing two forms of ABI with each other as a comparison of ABI with standard care, which erroneously decreased the observed efficacy of ABI. Furthermore, the four studies' authors raised the possibility that Spreckley and Boyd had excluded some other studies unnecessarily, and that including such studies could have led to a more favorable evaluation of ABI. Spreckley, Boyd, and the four studies' authors did agree that large multi-site randomized trials are needed to improve the understanding of ABA's efficacy in autism.
See also
- Behavioral activation
- Educational psychology
- Professional practice of behavior analysis
- Parent Management Training
- Behavior analysis of child development
- Behavior therapy
Applied behavior analysts publish in many journals. Some of the ones considered core journals to behavior analysis are:
- Journal of Applied Behavior Analysis
- Journal of the Experimental Analysis of Behavior
- Journal of Organizational Behavior Management
- Journal of Behavioral Education
- Journal of the Analysis of Verbal Behavior
- The Behavior Analyst Today BAO
- The Behavior Analyst
- The Journal of Speech-Language Pathology and Applied Behavior Analysis BAO
- Journal of Early and Intensive Behavioral Interventions BAO
- The International Journal of Behavioral Consultation and Therapy BAO
- The Journal of Behavioral Assessment and Intervention in Children BAO
- The Behavioral Development Bulletin BAO
- The Journal of Precision Teaching and Standard Celeration
- Behavior and Social Issues http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/bsi/index
- Journal of Behavior Analysis of Sports, Health, Fitness, and Behavioral Medicine BAO
- Journal of Behavior Analysis of Offender and Victim: Treatment and Prevention BAO
- Behavioral Health and Medicine BAO
- Behavior Therapy | 2026-01-26T09:03:28.984745 |
431,725 | 3.812633 | http://www.sciencedaily.com/releases/2012/06/120620113137.htm | June 20, 2012 In a unique field experiment, ten research groups from nine different countries have studied the ecological status of 100 streams across Europe. This was the first study to make extensive use of leaf-litter breakdown as an assessment method. The findings of the study -- in which Eawag played a key role -- are reported in the latest issue of Science.
To assess the condition of rivers and streams, environmental scientists generally measure variables such as temperature, acidity and nutrient concentrations. They also determine the composition of the benthic macroinvertebrate community -- insect larvae and other small streambed organisms. This method was originally developed to assess water pollution caused by wastewater. But today, surface waters are exposed to a more complex range of stressors: freshwater ecosystems may be severely impaired by bank reinforcement, weirs and altered flow regimes as well as by cocktails of chemical pollutants, invasive species and the effects of climate change.
According to biologist Professor Mark Gessner, this means that the existing measures are no longer adequate for assessing an ecosystem as a whole: "Just as a patient can be ill without having a temperature, rivers and streams with clean water can still have a lot of other problems as ecosystems." Also crucial to ecosystem health, he points out, is the functioning of processes which are characteristic of natural systems -- an aspect which has been neglected to date in the assessment of surface waters.
Gessner and his colleagues therefore tested a new method based on one such process -- the breakdown of leaf litter. "Litter input is the main source for stream food webs and is of major importance for whole-system metabolism," says Gessner, who carried out the investigations with the research group he formerly led at Eawag (he now works at the Leibniz Institute of Freshwater Ecology and Inland Fisheries and at the TU Berlin). Leaf litter is broken down largely by microscopic fungi -- some of which are noted for their bizarrely shaped spores -- and by benthic macroinvertebrates.
The researchers deployed mesh bags filled with oak and alder leaves in 100 streams in France, the UK, Ireland, Poland, Portugal, Romania, Spain, Sweden and Switzerland. They then determined how long it took for half of the leaf litter to be broken down -- a measure analogous to the half-life of radioactive elements. In some of the streams, they also determined the number and diversity of benthic macroinvertebrate species, as well as concentrations of phosphate and inorganic nitrogen compounds.
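For readers unfamiliar with the half-life analogy: assuming a simple exponential-decay model of litter mass (an illustrative assumption, not a method stated in the article), the breakdown rate constant follows directly from the measured half-life. A minimal Haskell sketch:

-- mass(t) = m0 * exp (-k * t), so the half-life satisfies k = ln 2 / halfLife.
breakdownRate :: Double -> Double
breakdownRate halfLifeDays = log 2 / halfLifeDays

main :: IO ()
main = print (breakdownRate 30)  -- hypothetical 30-day half-life: k of about 0.023 per day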
It was found that low-nutrient waters contain few organisms which can make efficient use of litter resources. Conditions in heavily enriched waters are likewise unfavourable for organisms of this kind. Breakdown rates were thus low in both cases. With intermediate nutrient concentrations, however, no correlation was observed between concentrations, benthic macroinvertebrates and breakdown rates. Accelerated litter breakdown may thus indicate impairments caused by nutrients in cases where conventional methods would suggest good water quality -- i.e. where nutrient concentrations are relatively low.
Gessner believes that this method has significant potential: "Just 'taking the patient's temperature' is certainly no longer a reliable way of checking the health of streams and rivers in Europe. Modern assessment of freshwater ecosystems calls for the kind of differential diagnosis which we take for granted in medicine -- identifying the underlying causes of symptoms on the basis of additional criteria. Here, processes such as litter breakdown can make an important contribution."
The above story is reprinted from materials provided by EAWAG: Swiss Federal Institute of Aquatic Science and Technology.
- G. Woodward, M. O. Gessner, P. S. Giller, V. Gulis, S. Hladyz, A. Lecerf, B. Malmqvist, B. G. McKie, S. D. Tiegs, H. Cariss, M. Dobson, A. Elosegi, V. Ferreira, M. A. S. Graca, T. Fleituch, J. O. Lacoursiere, M. Nistorescu, J. Pozo, G. Risnoveanu, M. Schindler, A. Vadineanu, L. B.- M. Vought, E. Chauvet. Continental-Scale Effects of Nutrient Pollution on Stream Ecosystem Functioning. Science, 2012; 336 (6087): 1438 DOI: 10.1126/science.1219534
| 2026-01-24T21:38:20.252340 |
614,624 | 3.933819 | http://www.heritage.nf.ca/exploration/karluk.html | The Karluk Disaster
In the summer of 1913, the wooden-hulled Karluk departed Canada for the western Arctic. On board were 10 scientists, 13 crewmembers, four Inuit hunters, one seamstress, her two children, and one passenger. Of these, 11 never returned; the rest were not heard from again until September 1914. During their 13-month exile, expedition members survived for seven months amid the drifting and inhospitable Arctic ice floes before establishing camp on an uninhabited island hundreds of miles north of Siberia.
The Karluk, ca. 1913-1914.
The wooden-hulled Karluk departed Canada for the western Arctic in June 1913. It became solidly trapped in sea ice in August and never reached its destination. Most crew and passengers spent the next 13 months stranded in the Arctic, where 11 men died.
From Fitzhugh Green, Bob Bartlett Master Mariner (New York: G.P. Putnam's Sons, 1929) 152.
Many expedition members lacked any experience in Arctic travel and likely owed their lives to the knowledge and leadership skills of Karluk captain and polar explorer Bob Bartlett. Under his guidance, survivors built a camp on the ice, weathered the long Arctic night, and travelled 150 miles by dog sledge to find land. From there, Bartlett and one other team member completed a perilous 700-mile journey to the Bering Strait, where they searched for a vessel to save their stranded crewmates.
Canadian Arctic Expedition
After returning to Brigus from the 1913 spring seal hunt, Bartlett received a telegram from Canadian explorer Vilhjalmur Stefansson asking him to captain the Karluk, flagship of the government-backed Canadian Arctic Expedition. The vessel's mission was to take a crew of geologists, anthropologists, meteorologists and other scientists north of the Yukon to Herschel Island, where they would establish a base and survey the region's flora, fauna, mineral deposits, and other characteristics. The party was also to search for any new land masses north of Alaska. It was the largest scientific expedition into the north to date and Ottawa hoped it would also help assert Canada's sovereignty over the Arctic islands.
Bartlett breaks for tea, pre-1929.
Bartlett was concerned the Karluk would not be able to navigate through the dangerous Arctic ice floes. The ship later sank after sea ice punched a hole in its side on 10 January 1914.
From Fitzhugh Green, Bob Bartlett Master Mariner (New York: G.P. Putnam's Sons, 1929) 25.
Although Bartlett agreed to captain the Karluk, he was concerned about the vessel's ability to navigate through the dangerous Arctic waters. Instead of a new steel-hulled icebreaker, the government acquired for the expedition an old and underpowered wooden barkentine that had been converted into a whaling vessel in 1899. Workers reinforced the vessel with crossbeams and extra sheathing, and Bartlett accepted the mission under the assumption that he would not spend the winter in Arctic waters.
The vessel departed British Columbia on 17 June 1913, but encountered heavy sea ice less than two months later. By 13 August, winds and the water's movement caused the ice to close in and freeze around the Karluk, which became solidly trapped about 225 miles northwest of Alaska. The vessel drifted helplessly with the pack ice, unable to free itself and reach Herschel Island.
The ice, however, stopped moving for a few days in mid-September and Stefansson decided to leave the ship with five other men, 14 dogs, and two sledges to hunt caribou. The group prepared for a 10-day expedition, hoping the ship's static position would allow them to return safely. Just two days after their departure on 20 September, strong winds moved the Karluk rapidly to the west, making it impossible for Stefansson and his team to find the vessel. Instead, they travelled south by dog sledge and eventually reached Alaska.
Scientists on board the Karluk, 1913.
Anthropologist Diamond Jenness and magnetician/meteorologist William Laird McKinlay were both members of the Canadian Arctic Expedition.
Photograph by Curtis and Miller. Courtesy of Library and Archives Canada (C-086412), Ottawa, Ontario.
The Karluk, meanwhile, drifted for months amid the unpredictable Arctic floes until the ice punched a large hole in its side on 10 January 1914. Fortunately, Bartlett had prepared for the sinking weeks earlier when he ordered crewmembers to build igloos on the ice and transfer to its surface most of the ship's food, fuel, and other supplies. As the Karluk slowly sank, expedition members removed all remaining supplies and then abandoned ship. Bartlett stayed onboard until the last possible moment, playing dozens of records on the ship's Victrola. At about 3:30 p.m. on 11 January, he placed Chopin's “Funeral March” on the turntable, stepped onto the ice, and watched the Karluk disappear below the water.
The Karluk sank during the middle of the Arctic night, which lasts from about mid-November until the end of January. Although Bartlett realized the expedition's best chance of survival was to find land, he did not want to take expedition members – many of whom had no experience in Arctic travel – across the ice in the dark. Fortunately, the expedition had enough food and fuel to live off for months; it also had several igloos to serve as shelter. Bartlett planned to remain at what he called Shipwreck Camp until the light returned in February, at which point he and all other expedition members would travel by dog sledge to Wrangel Island in the south.
Four men in the group, however, disagreed with Bartlett's plan and decided to travel south on their own. At their request, Bartlett equipped them with a sledge, dogs, and enough supplies to last 50 days. In return, the men wrote Bartlett a letter absolving him of any responsibility for their decision. The four left Shipwreck Camp in late January and were never heard from again.
Map of Shipwreck Camp, ca. 1914-1916.
After the Karluk sank on 11 January 1914, its crew and passengers lived in igloos on the Arctic ice for more than a month. Bartlett named the site Shipwreck Camp.
Illustration by William Laird McKinlay. From Robert A. Bartlett, The Last Voyage of the Karluk: Flagship of Vilhjalmar Stefansson's Canadian Arctic Expedition of 1913-16 (Toronto: McClelland, Goodchild & Stewart, 1916).
In the meantime, Bartlett decided to send small teams of men across the ice to establish a chain of supply caches toward Wrangel Island. The first group, which consisted of the Karluk's first and second mates as well as two crewmen, departed on 20 January. Alongside stowing supplies, the men were also to look for Herald Island, which Bartlett believed lay about 50 miles south of Shipwreck Camp. Although the men eventually reached the island, they never left – likely because of running ice and open water. Other expedition members came searching for them about a week later, but falsely assumed they had fallen through the ice and died. It was not until 1929 that a passing vessel found their skeletons on the island.
Once the sunlight returned and supply caches had been established, Bartlett and all remaining expedition members left Shipwreck Camp on 19 February and began travelling by dog sledge to Wrangel Island. The group, which now numbered 17, took with it 12 dogs, three sledges, and enough supplies to last 60 days. Its members reached Wrangel Island on 12 March after travelling approximately 100 miles through the ice and cold.
Seven-Hundred-Mile Sledge Journey
Six days after arriving, Bartlett and one other expedition member, Inuit hunter Kataktovick, departed on a perilous 700-mile sledge journey to Siberia and then to the Bering Strait for help. Although Bartlett had originally planned to take all expedition members to Siberia, many were in a severely weakened condition and unable to make such a difficult and dangerous trip.
“From now on,” wrote Bartlett in The Karluk's Last Voyage, “our journey became a never-ending series of struggles to get around or across lanes of open water – leads, as they are called – the most exasperating and treacherous of all Arctic travelling” (179).
View from a sledge, ca. 1908-09.
Bartlett and Kataktovick departed Wrangel Island and travelled by dog sledge to seek help for the Karluk castaways.
From Robert Peary, The North Pole (London: Hodder and Stoughton, 1910) 80.
In early April, the two men reached a small Inuit village in Siberia where residents gave them food, a bed, and helped mend their clothing and dog harnesses. Although Bartlett and Kataktovick had already crossed 200 miles in less than three weeks, they only stayed in the village for two nights before beginning the long journey to the Bering Strait. This time, they travelled across land, stopping at a few small villages along the way for food, rest, and sometimes to acquire a new sledge dog.
By the end of April, the two men reached East Cape (also known as Cape Dezhnev) on the Bering Strait, where Bartlett searched for a vessel that would take him to the closest wireless station in Alaska. Most ships, however, did not begin sailing from East Cape until late spring and it was not until 21 May that Bartlett departed aboard the Herman. He arrived at St. Michael, Alaska on 28 May and wired government officials in Ottawa of the Wrangel Island castaways. Bartlett also began searching for a vessel that would take him into the Arctic to rescue survivors, but he first had to recover from the severe swelling in his legs and feet that made it almost impossible for him to walk.
Although Bartlett departed for Wrangel Island aboard the American vessel Bear on 13 July, it was the American schooner King and Winge that rescued survivors from Wrangel Island on 7 September 1914 – almost eight months after the Karluk sank. Bartlett was reunited with his fellow survivors the following day, after the King and Winge encountered the Bear. Three men, however, died on the island and the remaining 11 expedition members survived by digging for roots and hunting duck, seals, walrus, and other animals.
Youngest Karluk survivor, 1914.
Three-year-old Mugpie (also Mukpie) was the youngest member of the Karluk, which sank in the Arctic on 11 January 1914. She and 20 others survived the disaster, while 11 men died.
Photograph by Curtis and Miller. Courtesy of Library and Archives Canada (PA-105139), Ottawa, Ontario.
Although an admiralty commission later criticized Bartlett for agreeing to take the Karluk into the Arctic and allowing a group of four to travel south on their own, the press and public celebrated him as a hero. The Royal Geographic Society gave him an award for outstanding bravery and many survivors credited him with saving their lives, particularly William Laird McKinlay, who later wrote: “there was for me only one real hero in the whole [Karluk] story – Bob Bartlett. Honest, fearless, reliable, loyal, everything a man should be” (Niven 366).
Article by Jenny Higgins. ©2008, Newfoundland and Labrador Heritage Web Site. | 2026-01-27T19:11:26.324143 |
1,079,377 | 3.955187 | http://www.sciencedaily.com/releases/2011/03/110321134617.htm | A new study by Baylor University geology researchers shows that Native Americans' land use centuries ago produced a widespread impact on the eastern North American landscape and floodplain development several hundred years prior to the arrival of major European settlements.
The study appears online in the journal Geology.
Researchers credit early colonial land-use practices, such as deforestation, plowing, and damming, with influencing present-day hydrological systems across eastern North America. Previous studies suggested that Native Americans' land use in eastern North America initially caused the change in hydrological systems; however, little direct evidence had been provided until now.
The Baylor study found that pre-European so-called "natural" floodplains have a history of prehistoric indigenous land use, and thus colonial-era Europeans were not the first people to have an impact on the hydrologic systems of eastern North America. The study also found that prehistoric small-scale agricultural societies caused widespread ecological change and increased sedimentation in hydrologic systems during the Medieval Climate Anomaly-Little Ice Age, which occurred about 700 to 1,000 years ago.
"These are two very important findings," said Gary Stinchcomb, a Baylor doctoral candidate who conducted the study. "The findings conclusively demonstrate that Native Americans in eastern North America impacted their environment well before the arrival of Europeans. Through their agricultural practices, Native Americans increased soil erosion and sediment yields to the Delaware River basin."
The Baylor researchers found that prehistoric people decreased forest cover to reorient their settlements and intensify corn production. They also contributed to increased sedimentation in valley bottoms about 700 to 1,000 years ago, much earlier than previously thought. The findings suggest that prehistoric land use was the initial cause of increased sedimentation in the valley bottoms, and sedimentation was later amplified by wetter and stormier conditions.
To conduct the study, the Baylor researchers took samples from several different spots along the Delaware River Valley. Landforms were mapped based on relative elevations to Delaware River base flow and archaeological excavations assessed the presence of human habitation. The Baylor researchers then used a site-specific geoarchaeological approach and a regional synthesis of previous research to test the hypothesis that the indigenous population had a widespread impact on terrestrial sedimentation in eastern North America.
"This study provides some of the most significant evidence yet that Native Americans impacted the land to a much greater degree than previously thought," said Dr. Steve Driese, professor and chair of Baylor's department of geology, College of Arts and Sciences, who co-authored the study. "It confirms that Native American populations had widespread effects on sedimentation."
| 2026-02-03T21:41:33.217098 |
841,903 | 4.437966 | http://en.m.wikibooks.org/wiki/Haskell/A_Miscellany_of_Types | So far the only basic types we have looked at are Integers and Lists, although passing mention was made of strings and characters. This section will cover the major basic types.
An important thing to understand about the built-in types of Haskell is that they are not particularly special. From the compiler's point of view they are special because processors have special operations to do things like integer addition, but from the programmer's point of view they are just ordinary types with ordinary functions that follow exactly the same rules as any other type. Hence this section contains very little in the way of explanation on how you use these types or what the rules are, because for the most part you already know it: if it works for integers then it works for strings, booleans and floats.
Haskell has two types for integer numbers: Int and Integer.
"Integer" is an arbitrary precision type: it will hold any number no matter how big, up to the limit of your machine's memory. That is why "factorial 1000" gives you the right answer. This means you never have arithmetic overflows. On the other hand it also means your arithmetic is relatively slow. Lisp users may recognise the "bignum" type here.
"Int" is the more common 32 or 64 bit integer. Implementations vary, although it is guaranteed to be at least 30 bits.
Haskell also has two types for floating point: Float and Double. These behave like the corresponding types in C.
All the usual operators are there, along with some extras.
- Aside: Haskell numeric types are tied together in a complicated hierarchy of classes, which will be described later. The purpose of this page is to give you enough to do ordinary arithmetic without tripping over the type system.
There are three "raise to the power" operators, which work differently and take different argument types (see the examples after this list):
- (**) – Takes two floating point numbers and uses logarithms to compute the power.
- (^^) – Takes a fractional number (i.e. a floating point or a ratio, of which more later) and raises it to a positive or negative integer power.
- (^) – Takes any numerical type and raises it to a non-negative integer power.
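A small sketch of all three operators (the exact printed values in the floating point cases depend on your platform):

a :: Double
a = 2.0 ** 0.5     -- (**): floating-point base and exponent; roughly 1.414

b :: Double
b = 2.0 ^^ (-3)    -- (^^): fractional base, integer exponent; 0.125

c :: Integer
c = 2 ^ 100        -- (^): works at any numeric type, exponent must be >= 0

main :: IO ()
main = mapM_ putStrLn [show a, show b, show c]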
Conversion from an integer type (Int or Integer) to anything else is done by "fromIntegral". The target type is inferred automatically. So for example:
n :: Integer
n = 6

x :: Float
x = fromIntegral n

m :: Int
m = 7

y :: Double
y = fromIntegral m
will define x to be 6.0 and y to be 7.0.
Division of integers is a little complicated. If you use the ordinary "/" operator on integers then you will get an error message (although the expression "4/3" does work, because numeric literals in Haskell are overloaded: the 4 and 3 there can stand directly for floating point values). Instead, integer division is done using a collection of named operators.
Haskell has a neat trick with operators: you can take any function that takes two arguments and use it like an operator by enclosing the name in back-ticks. So the following two lines mean exactly the same:
d = 7 `div` 3
d = div 7 3
With that in mind, here are the integer division operators:
- quot – Returns the quotient of the two numbers. This is the result of division which is then truncated towards zero.
- rem – Returns the remainder from the quotient.
- div – Similar to "quot", but is rounded down towards minus infinity.
- mod – Returns the modulus of the two numbers. This is similar to the remainder, but has different rules when "div" returns a negative number.
Provided y is not zero, the following two equations will always hold:
(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x
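The two families only disagree when the operands have mixed signs; a quick sketch:

main :: IO ()
main = do
  print ((-7) `quot` 2, (-7) `rem` 2)  -- (-3,-1): quot truncates toward zero
  print ((-7) `div` 2, (-7) `mod` 2)   -- (-4,1): div rounds toward minus infinity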
Just as you can convert a function with two arguments into an operator, you can convert an operator into a function with two arguments: just put it in parentheses. So the following two lines mean the same thing:
(+) 3 4
3 + 4
This can also be done with any "incomplete" operator application:
(3+) 4
(+4) 3 -- not 3 (+4)
Haskell has a boolean type Bool, with two values: True and False. There are also two operators defined on the Bool type: && and ||.
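Both operators short-circuit, meaning the right operand is only evaluated if it is needed to decide the result. A small sketch:

main :: IO ()
main = do
  print (True && False)      -- False
  print (True || undefined)  -- True: the right-hand side is never evaluated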
Numbers, characters and strings can be compared using the usual comparison operators to produce a Bool value:
- == – Equal.
- /= – Not equal.
- <= – Less than or equal.
- >= – Greater than or equal.
- < – Less than.
- > – Greater than.
(When you learn about type classes, note that the equality operators (==, /=) are part of the type class Eq, and the comparison operators (<=, >=, <, >) are part of the type class Ord.)
Haskell has an "if-then-else" clause, but because Haskell is a functional language it is more akin to the "? :" operator of C: rather than "doing" either the "then" clause or the "else" clause, the whole expression evaluates to one of them. For example, the factorial function could be written
factorial n = if n <= 0 then 1 else n * factorial (n-1)
The syntax of an if expression is:
if <condition> then <true-value> else <false-value>
If the condition is True then the result of the "if" is the true-value, otherwise it is the false-value. | 2026-01-31T08:54:19.088611 |
757,459 | 3.887592 | http://nrich.maths.org/453/solution?nomenu=1 |
The simplest tree graph consists of one line with two vertices, one at each end.
If a new line is added it must connect to one and only one of the existing vertices.
If the new line connected to no vertices, the tree graph would not be connected, as the new line's vertices could not be reached from the existing vertices.
If each end of the new line connects to a vertex the graph will have a circuit and will not be a tree graph.
So every new line added will join on to one existing vertex and create a new vertex at its end. This adds one line and one vertex to the tree graph making no change to the difference between the numbers of edges and vertices. So the difference remains constant at what it originally was.
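To illustrate the invariant just proved, here is a small Haskell sketch of my own (assuming a rose-tree representation in which each node holds a list of its subtrees):

-- A vertex together with its subtrees; each child contributes one edge.
data Tree = Node [Tree]

vertices :: Tree -> Int
vertices (Node ts) = 1 + sum (map vertices ts)

edges :: Tree -> Int
edges (Node ts) = length ts + sum (map edges ts)

-- For any tree built this way, vertices t == edges t + 1.
main :: IO ()
main = do
  let t = Node [Node [], Node [Node []]]
  print (vertices t, edges t)  -- (4,3)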
Marcos's solution is a subtle variation on the above method:
Every finite tree graph has at least one vertex that belongs to exactly one edge (an end vertex).
Proof. If there wasn't at least one such vertex, we could keep moving around the graph indefinitely, and as there is a finite number of edges this would mean that there is a cycle, counter to the definition of a tree.
Take one such end vertex and its respective edge. So far we have 2 vertices and 1 edge. The difference is 1.
(*) Add to this an adjacent edge (this adds one edge and one vertex). The difference is still one.
Generally, carrying out this step (*) an arbitrary number of times until we add the final edge will still result in a difference of 1 (as the step was in no way linked to the fact that the previous edge was the starting one).
Hence, the number of vertices is one more than the number of edges. | 2026-01-29T23:08:15.964071 |
968,721 | 3.780617 | http://blogs.msdn.com/b/rezanour/archive/2011/05/19/math-primer-vectors-i-points-vs-vectors.aspx | Some math and physics libraries contain separate data types to represent points and vectors. I’ve seen many people become confused by this, since more commonly packages use vectors exclusively for everything. What’s going on here? Why have 2 separate types? Well, the answer is quite simple. Strictly speaking, a point is not a vector and a vector is not a point! We’ll come back to why many packages, XNA included, use vectors to represent points and why that can make sense later.
A point is a location in an n-dimensional space. It is usually represented in what we call Cartesian coordinates, such as (3, 5, 23). You can think of this as being the absolute location of the point. Because locations are specified using real numbers, we say that points in an n-dimensional space belong to Rn; for 3-dimensional space that is R3.
Points are quite limiting in the types of mathematical operations they support. For instance, it doesn’t make sense to add 2 locations together. Think about that for a second. What does it mean if I add together the locations of a pencil and a book? It doesn’t really mean anything. You also can’t multiply a location by another location. That just doesn’t make sense.
A vector is an element from a vector space, and can be thought of as a relative displacement. Vectors do not have location, but instead describe how to get from one location to another. Therefore, they are directly related to points in that they describe the displacement required to go from one point (or location) to another. For example, if you have a point A, and you add a vector to it, you end up at some new point B. Similarly, if we subtract that displacement from B, we end up at A again. Like points, vectors are also commonly written using Cartesian coordinates. However, it is important to realize that the coordinates, such as (3, 4, 5) do not represent a location, but rather how much displacement you have along each axis.
Unlike points, we can add vectors together. For example, when we say something like “move forward 3 feet, then move to your right 3 feet”, that’s 2 displacements concatenated together. Two vectors are considered equal if their directions and magnitudes (or length) are the same. Since they have no location, we can look at 2 equal vectors side by side, something we couldn’t do with points, since any two equal points would be directly on top of each other (see Figure 2).
There are several operations we can do directly on vectors. We can add 2 vectors together, as we saw above. We can subtract 2 vectors, for instance if you move forward 3 units, then back 2 units. We can multiply a vector by a scalar, which just scales the magnitude of the vector. We can normalize a vector, which maintains its direction but scales its magnitude to 1. There are also several forms of multiplication we can perform between vectors. We'll cover all of these in a bit more depth in the next installment of this series.
Implementation of Points as Vectors
So, now that we know that points are different from vectors, why do many libraries implement points as vectors? Surely this can't be, since we just discovered that we can add vectors but can't add points, amongst other differences, right? Well, the answer is somewhat complicated. Technically, both points and vectors can be represented by the same data on the computer, which in the case of 3D is 3 floating or double precision values. However, the operations that are allowed for each type vary. Instead of implementing two nearly identical types, with just a different set of operators on them, most packages opt to leave the interpretation of the data up to the user, and just use a single implementation which can perform all the operations. This can lead to many algorithmic bugs if users aren't aware of how they're using them, but it's a common tradeoff nonetheless.
In some packages, they use a single 4-element vector to represent both concepts, using a 1 in the last element (usually called w) for points, and a 0 for vectors. This actually helps loosely enforce the rules above, since subtracting two points would give 1 – 1 = 0 in the w field, which makes it a vector. And adding a vector to a point would leave 1, making it a point. Adding two points would make 2 in the w field, which is invalid as expected. However, this scheme uses considerably more memory (33% more in the case of 3D) with little real world benefit.
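To make the type-level distinction concrete, here is a minimal Haskell sketch (hypothetical types of my own, not any particular library's API) in which the compiler accepts point-plus-vector and point-minus-point but offers no way to add two Points:

-- Distinct types for locations and displacements (3D, Double components).
data Point  = Point  Double Double Double deriving Show
data Vector = Vector Double Double Double deriving Show

-- Displacing a location by a vector yields a new location.
offset :: Point -> Vector -> Point
offset (Point x y z) (Vector dx dy dz) = Point (x + dx) (y + dy) (z + dz)

-- The difference of two locations is a displacement.
diff :: Point -> Point -> Vector
diff (Point x1 y1 z1) (Point x2 y2 z2) = Vector (x1 - x2) (y1 - y2) (z1 - z2)

main :: IO ()
main = do
  let a = Point 3 5 23
      b = offset a (Vector 0 0 2)
  print (diff b a)  -- Vector 0.0 0.0 2.0; there is no Point-plus-Point operation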
So, ultimately, as long as we mentally consider our points as locations and our vectors as displacements in our calculations, then we can at least use the same data structure for both, saving on space and code duplication. We just need to be careful that our algorithms don’t do any mathematically invalid operations on these types, which the compiler won’t catch for us. In order to use a vector as a point, we treat it as the relative displacement from the origin, 0. This will give the exact same Cartesian coordinates as the point would have had, now in the form of a vector. | 2026-02-02T06:50:44.759185 |
854,756 | 3.950744 | http://www.free-online-private-pilot-ground-school.com/auxiliary-aircraft-systems.html | Auxiliary Aircraft Systems
Airplanes are equipped with either a 14- or 28-volt direct-current electrical system. A basic airplane electrical system consists of the following components:
- Master/battery switch
- Alternator/generator switch
- Bus bar, fuses, and circuit breakers
- Voltage regulator
- Associated electrical wiring
Engine-driven alternators or generators supply electric current to the electrical system. They also maintain a sufficient electrical charge in the battery. Electrical energy stored in a battery provides a source of electrical power for starting the engine and a limited supply of electrical power for use in the event the alternator or generator fails.
Most direct current generators will not produce a sufficient amount of electrical current at low engine r.p.m. to operate the entire electrical system. Therefore, during operations at low engine r.p.m., the electrical needs must be drawn from the battery, which can quickly be depleted.
Alternators have several advantages over generators.
Alternators produce sufficient current to operate the entire electrical system, even at slower engine speeds, by producing alternating current, which is converted to direct current. The electrical output of an alternator is more constant throughout a wide range of engine speeds.
Some airplanes have receptacles to which an external ground power unit (GPU) may be connected to provide electrical energy for starting. These are very useful, especially during cold weather starting. Follow the manufacturer's recommendations for engine starting using a GPU.
The electrical system is turned on or off with a master switch. Turning the master switch to the ON position provides electrical energy to all the electrical equipment circuits with the exception of the ignition system. Equipment that commonly uses the electrical system for its source of energy includes:
- Position lights
- Anticollision lights
- Landing lights
- Taxi lights
- Interior cabin lights
- Instrument lights
- Radio equipment
- Turn indicator
- Fuel gauges
- Electric fuel pump
- Stall warning system
- Pitot heat
- Starting motor
Many airplanes are equipped with a battery switch that controls the electrical power to the airplane in a manner similar to the master switch. In addition, an alternator switch is installed which permits the pilot to exclude the alternator from the electrical system in the event of alternator failure.
Figure 1: On this master switch, the left half is for the alternator and the right half is for the battery.
With the alternator half of the switch in the OFF position, the entire electrical load is placed on the battery. Therefore, all nonessential electrical equipment should be turned off to conserve battery power.
A bus bar is used as a terminal in the airplane electrical system to connect the main electrical system to the equipment using electricity as a source of power. This simplifies the wiring system and provides a common point from which voltage can be distributed throughout the system.
Figure 2: Electrical system schematic.
Fuses or circuit breakers are used in the electrical system to protect the circuits and equipment from electrical overload. Spare fuses of the proper amperage limit should be carried in the airplane to replace defective or blown fuses. Circuit breakers have the same function as a fuse but can be manually reset, rather than replaced, if an overload condition occurs in the electrical system.
Placards at the fuse or circuit breaker panel identify the circuit by name and show the amperage limit.
An ammeter is used to monitor the performance of the airplane electrical system. The ammeter shows if the alternator/generator is producing an adequate supply of electrical power. It also indicates whether or not the battery is receiving an electrical charge.
Ammeters are designed with the zero point in the center of the face and a negative or positive indication on either side.
Figure 3: Ammeter and loadmeter.
When the pointer of the ammeter on the left is on the plus side, it shows the charging rate of the battery. A minus indication means more current is being drawn from the battery than is being replaced. A full-scale minus deflection indicates a malfunction of the alternator/generator. A full-scale positive deflection indicates a malfunction of the regulator. In either case, consult the AFM or POH for appropriate action to be taken.
Not all airplanes are equipped with an ammeter. Some have a warning light that, when lighted, indicates a discharge in the system caused by a generator/alternator malfunction. Refer to the AFM or POH for appropriate action to be taken.
Another electrical monitoring indicator is a loadmeter. This type of gauge, illustrated on the right in figure 3, has a scale beginning with zero and shows the load being placed on the alternator/generator. The loadmeter reflects the total percentage of the load placed on the generating capacity of the electrical system by the electrical accessories and battery. When all electrical components are turned off, it reflects only the amount of charging current demanded by the battery.
A voltage regulator controls the rate of charge to the battery by stabilizing the generator or alternator electrical output. The generator/alternator voltage output should be higher than the battery voltage. For example, a 12-volt battery would be fed by a generator/alternator system of approximately 14 volts. The difference in voltage keeps the battery charged.
There are multiple applications for hydraulic use in airplanes, depending on the complexity of the airplane.
For example, hydraulics are often used on small airplanes to operate wheel brakes, retractable landing gear, and some constant-speed propellers. On large airplanes, hydraulics are used for flight control surfaces, wing flaps, spoilers, and other systems.
A basic hydraulic system consists of a reservoir, pump (either hand, electric, or engine driven), a filter to keep the fluid clean, selector valve to control the direction of flow, relief valve to relieve excess pressure, and an actuator.
The hydraulic fluid is pumped through the system to an actuator or servo. Servos can be either single-acting or double-acting servos based on the needs of the system.
A servo is a cylinder with a piston inside that turns fluid power into work and creates the power needed to move an aircraft system or flight control. In a single-acting servo, fluid is applied to only one side of the piston, so power is provided in one direction only; in a double-acting servo, fluid can be applied to either side. The selector valve allows the fluid direction to be controlled. This is necessary for operations such as the extension and retraction of landing gear, where the fluid must work in two different directions. The relief valve provides an outlet for the system in the event of excessive fluid pressure. Each system incorporates different components to meet the individual needs of different aircraft.
A mineral-based fluid is the most widely used type for small airplanes. This type of hydraulic fluid, which is a kerosene-like petroleum product, has good lubricating properties, as well as additives to inhibit foaming and prevent the formation of corrosion. It is quite stable chemically, has very little viscosity change with temperature, and is dyed for identification. Since several types of hydraulic fluids are commonly used, make sure your airplane is serviced with the type specified by the manufacturer. Refer to the AFM, POH, or the Maintenance Manual.
Figure 4: Basic hydraulic system.
The landing gear forms the principal support of the airplane on the surface. The most common type of landing gear consists of wheels, but airplanes can also be equipped with floats for water operations, or skis for landing on snow.
Figure 5: The landing gear supports the airplane during the takeoff run, landing, taxiing, and when parked.
The landing gear on small airplanes consists of three wheels - two main wheels, one located on each side of the fuselage, and a third wheel, positioned either at the front or rear of the airplane. Landing gear employing a rear-mounted wheel is called a conventional landing gear. Airplanes with conventional landing gear are often referred to as tailwheel airplanes. When the third wheel is located on the nose, it is called a nosewheel, and the design is referred to as a tricycle gear. A steerable nosewheel or tailwheel permits the airplane to be controlled throughout all operations while on the ground.
Tricycle landing gear airplanes
A tricycle gear airplane has three main advantages:
- It allows more forceful application of the brakes during landings at high speeds without resulting in the airplane nosing over.
- It permits better forward visibility for the pilot during takeoff, landing, and taxiing.
- It tends to prevent ground looping (swerving) by providing more directional stability during ground operation since the airplane's center of gravity (CG) is forward of the main wheels. The forward CG, therefore, tends to keep the airplane moving forward in a straight line rather than ground looping.
Nosewheels are either steerable or castering. Steerable nosewheels are linked to the rudders by cables or rods, while castering nosewheels are free to swivel. In both cases, you steer the airplane using the rudder pedals.
However, airplanes with a castering nosewheel may require you to combine the use of the rudder pedals with independent use of the brakes.
Tailwheel landing gear airplanes
On tailwheel airplanes, two main wheels, which are attached to the airframe ahead of its center of gravity, support most of the weight of the structure, while a tailwheel at the very back of the fuselage provides a third point of support. This arrangement allows adequate ground clearance for a larger propeller and is more desirable for operations on unimproved fields.
Figure 6: Tailwheel landing gear.
The main drawback with the tailwheel landing gear is that the center of gravity is behind the main gear. This makes directional control more difficult while on the ground. If you allow the airplane to swerve while rolling on the ground at a speed below that at which the rudder has sufficient control, the center of gravity will attempt to get ahead of the main gear. This may cause the airplane to ground loop.
Another disadvantage for tailwheel airplanes is the lack of good forward visibility when the tailwheel is on or near the surface. Because of the associated hazards, specific training is required in tailwheel airplanes.
Fixed and retractable landing gear
Landing gear can also be classified as either fixed or retractable. A fixed gear always remains extended and has the advantage of simplicity combined with low maintenance. A retractable gear is designed to streamline the airplane by allowing the landing gear to be stowed inside the structure during cruising flight.
Figure 7: Fixed and retractable gear airplanes.
Airplane brakes are located on the main wheels and are applied by either a hand control or by foot pedals (toe or heel). Foot pedals operate independently and allow for differential braking. During ground operations, differential braking can supplement nosewheel/tailwheel steering.
Autopilots are designed to control the aircraft and help reduce the pilot's workload. The limitations of the autopilot depend on the complexity of the system. The common features available on an autopilot are altitude and heading hold. More advanced systems may include a vertical speed and/or indicated airspeed hold mode. Most autopilot systems are coupled to navigational aids.
An autopilot system consists of servos that actuate the flight controls. The number and location of these servos depends on the complexity of the system. For example, a single-axis autopilot controls the aircraft about the longitudinal axis and a servo actuates the ailerons. A three-axis autopilot controls the aircraft about the longitudinal, lateral, and vertical axes; and three different servos actuate the ailerons, the elevator, and the rudder.
The autopilot system also incorporates a disconnect safety feature to automatically or manually disengage the system. Autopilots can also be manually overridden. Because autopilot systems differ widely in their operation, refer to the autopilot operating instructions in the AFM or POH.
Ice control systems
Ice control systems installed on aircraft consist of anti-ice and de-ice equipment. Anti-icing equipment is designed to prevent the formation of ice, while de-icing equipment is designed to remove ice once it has formed. Ice control systems protect the leading edge of wing and tail surfaces, pitot and static port openings, fuel tank vents, stall warning devices, windshields, and propeller blades. Ice detection lighting may also be installed on some airplanes to determine the extent of structural icing during night flights. Since many airplanes are not certified for flight in icing conditions, refer to the AFM or POH for details.
Airfoil ice control
Inflatable de-icing boots consist of a rubber sheet bonded to the leading edge of the airfoil. When ice builds up on the leading edge, an engine-driven pneumatic pump inflates the rubber boots. Some turboprop aircraft divert engine bleed air to the wing to inflate the rubber boots. Upon inflation, the ice is cracked and should fall off the leading edge of the wing. De-icing boots are controlled from the cockpit by a switch and can be operated in a single cycle or allowed to cycle at automatic, timed intervals. It is important that de-icing boots are used in accordance with the manufacturer's recommendations. If they are allowed to cycle too often, ice can form over the contour of the boot and render the boots ineffective.
Figure 8: De-icing boots on the leading edge of the wing.
Many de-icing boot systems use the instrument system suction gauge and a pneumatic pressure gauge to indicate proper boot operation. These gauges have range markings that indicate the operating limits for boot operation. Some systems may also incorporate an annunciator light to indicate proper boot operation.
Proper maintenance and care of de-icing boots is important for continued operation of this system. They need to be carefully inspected prior to a flight.
Another type of leading edge protection is the thermal anti-ice system installed on airplanes with turbine engines. This system is designed to prevent the buildup of ice by directing hot air from the compressor section of the engine to the leading edge surfaces. The system is activated prior to entering icing conditions. The hot air heats the leading edge sufficiently to prevent the formation of ice.
An alternate type of leading edge protection that is not as common as thermal anti-ice and de-icing boots is known as a weeping wing. The weeping-wing design uses small holes located in the leading edge of the wing. A chemical mixture is pumped to the leading edge and weeps out through the holes to prevent the formation and buildup of ice.
Windscreen ice control
There are two main types of windscreen anti-ice systems. The first system directs a flow of alcohol to the windscreen. By using it early enough, the alcohol will prevent ice from building up on the windshield.
The rate of alcohol flow can be controlled by a dial in the cockpit according to procedures recommended by the airplane manufacturer.
Another effective type of anti-icing equipment is electric heating. Small wires or other conductive material is embedded in the windscreen.
The heater can be turned on by a switch in the cockpit, at which time electrical current is passed across the shield through the wires to provide sufficient heat to prevent the formation of ice on the windscreen. The electrical current can cause compass deviation errors; in some cases, as much as 40°. The heated windscreen should only be used during flight. Do not leave it on during ground operations, as it can overheat and cause damage to the windscreen.
Propeller ice control
Propellers are protected from icing by use of alcohol or electrically heated elements. Some propellers are equipped with a discharge nozzle that is pointed toward the root of the blade. Alcohol is discharged from the nozzles, and centrifugal force makes the alcohol flow down the leading edge of the blade. This prevents ice from forming on the leading edge of the propeller.
Propellers can also be fitted with propeller anti-ice boots. The propeller boot is divided into two sections—the inboard and the outboard sections. The boots are grooved to help direct the flow of alcohol, and they are also imbedded with electrical wires that carry current for heating the propeller. The prop anti-ice system can be monitored for proper operation by monitoring the prop anti-ice ammeter. During the preflight inspection, check the propeller boots for proper operation. If a boot fails to heat one blade, an unequal blade loading can result, and may cause severe propeller vibration.
Figure 9: Prop ammeter and anti-ice boots.
Other ice control systems
Pitot and static ports, fuel vents, stall-warning sensors, and other optional equipment may be heated by electrical elements. Operational checks of the electrically heated systems should be performed in accordance with the AFM or POH.
Operation of aircraft anti-icing and de-icing systems should be checked prior to encountering icing conditions. Encounters with structural ice require immediate remedial action. Anti-icing and de-icing equipment is not intended to sustain long-term flight in icing conditions.
When an airplane is flown at a high altitude, it consumes less fuel for a given airspeed than it does for the same speed at a lower altitude. In other words, the airplane is more efficient at a high altitude. In addition, bad weather and turbulence may be avoided by flying in the relatively smooth air above the storms. Because of the advantages of flying at high altitudes, many modern general aviation-type airplanes are being designed to operate in that environment. It is important that pilots transitioning to such sophisticated equipment be familiar with at least the basic operating principles.
A cabin pressurization system accomplishes several functions in providing adequate passenger comfort and safety. It maintains a cabin pressure altitude of approximately 8,000 feet at the maximum designed cruising altitude of the airplane, and prevents rapid changes of cabin altitude that may be uncomfortable or cause injury to passengers and crew. In addition, the pressurization system permits a reasonably fast exchange of air from the inside to the outside of the cabin. This is necessary to eliminate odors and to remove stale air.
Figure 10: Standard atmospheric pressure chart.
Pressurization of the airplane cabin is an accepted method of protecting occupants against the effects of hypoxia. Within a pressurized cabin, occupants can be transported comfortably and safely for long periods of time, particularly if the cabin altitude is maintained at 8,000 feet or below, where the use of oxygen equipment is not required. The flight crew in this type of airplane must be aware of the danger of accidental loss of cabin pressure and must be prepared to deal with such an emergency whenever it occurs.
In the typical pressurization system, the cabin, flight compartment, and baggage compartments are incorporated into a sealed unit that is capable of containing air under a pressure higher than outside atmospheric pressure. On aircraft powered by turbine engines, bleed air from the engine compressor section is used to pressurize the cabin. Superchargers may be used on older model turbine powered airplanes to pump air into the sealed fuselage. Piston-powered airplanes may use air supplied from each engine turbocharger through a sonic venturi (flow limiter). Air is released from the fuselage by a device called an outflow valve. The outflow valve, by regulating the air exit, provides a constant inflow of air to the pressurized area.
Figure 11: High performance airplane pressurization system.
To understand the operating principles of pressurization and air-conditioning systems, it is necessary to become familiar with some of the related terms and definitions, such as:
- Aircraft altitude—the actual height above sea level at which the airplane is flying.
- Ambient temperature—the temperature in the area immediately surrounding the airplane.
- Ambient pressure—the pressure in the area immediately surrounding the airplane.
- Cabin altitude—used to express cabin pressure in terms of equivalent altitude above sea level.
- Differential pressure—the difference in pressure between the pressure acting on one side of a wall and the pressure acting on the other side of the wall. In aircraft air-conditioning and pressurizing systems, it is the difference between cabin pressure and atmospheric pressure.
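To make the last definition concrete, here is a rough worked example using approximate standard-atmosphere values (the altitudes and pressures below are illustrative assumptions, not figures from this page). For an airplane cruising at 35,000 feet while maintaining a cabin altitude of 8,000 feet:

\[
\Delta P = P_{\text{cabin}} - P_{\text{ambient}} \approx 10.9\ \text{psi} - 3.5\ \text{psi} \approx 7.4\ \text{psi},
\]

a value on the order of the maximum differential pressure for which many pressurized airplanes are designed.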
The cabin pressure control system provides cabin pressure regulation, pressure relief, vacuum relief, and the means for selecting the desired cabin altitude in the isobaric and differential range. In addition, dumping of the cabin pressure is a function of the pressure control system. A cabin pressure regulator, an outflow valve, and a safety valve are used to accomplish these functions.
The cabin pressure regulator controls cabin pressure to a selected value in the isobaric range and limits cabin pressure to a preset differential value in the differential range. When the airplane reaches the altitude at which the difference between the pressure inside and outside the cabin is equal to the highest differential pressure for which the fuselage structure is designed, a further increase in airplane altitude will result in a corresponding increase in cabin altitude. Differential control is used to prevent the maximum differential pressure, for which the fuselage was designed, from being exceeded. This differential pressure is determined by the structural strength of the cabin and often by the relationship of the cabin size to the probable areas of rupture, such as window areas and doors.
The cabin air pressure safety valve is a combination pressure relief, vacuum relief, and dump valve. The pressure relief valve prevents cabin pressure from exceeding a predetermined differential pressure above ambient pressure. The vacuum relief prevents ambient pressure from exceeding cabin pressure by allowing external air to enter the cabin when ambient pressure exceeds cabin pressure. The cockpit control switch actuates the dump valve. When this switch is positioned to ram, a solenoid valve opens, causing the valve to dump cabin air to atmosphere.
The degree of pressurization and the operating altitude of the aircraft are limited by several critical design factors. Primarily the fuselage is designed to withstand a particular maximum cabin differential pressure.
Several instruments are used in conjunction with the pressurization controller. The cabin differential pressure gauge indicates the difference between inside and outside pressure. This gauge should be monitored to assure that the cabin does not exceed the maximum allowable differential pressure. A cabin altimeter is also provided as a check on the performance of the system.
In some cases, these two instruments are combined into one. A third instrument indicates the cabin rate of climb or descent. A cabin rate-of-climb instrument and a cabin altimeter are illustrated in Figure 12.
Figure 12: Cabin pressurization instruments.
Decompression is defined as the inability of the airplane's pressurization system to maintain its designed pressure differential. This can be caused by a malfunction in the pressurization system or structural damage to the airplane. Physiologically, decompressions fall into two categories; they are:
- Explosive Decompression—Explosive decompression is defined as a change in cabin pressure faster than the lungs can decompress; therefore, it is possible that lung damage may occur. Normally, the time required to release air from the lungs without restrictions, such as masks, is 0.2 seconds. Most authorities consider any decompression that occurs in less than 0.5 seconds as explosive and potentially dangerous.
- Rapid Decompression—Rapid decompression is defined as a change in cabin pressure where the lungs can decompress faster than the cabin; therefore, there is no likelihood of lung damage.
During an explosive decompression, there may be noise, and for a split second, one may feel dazed. The cabin air will fill with fog, dust, or flying debris. Fog occurs due to the rapid drop in temperature and the change of relative humidity. Normally, the ears clear automatically. Air will rush from the mouth and nose due to the escape of air from the lungs, and may be noticed by some individuals.
The primary danger of decompression is hypoxia.
Unless proper utilization of oxygen equipment is accomplished quickly, unconsciousness may occur in a very short time. The period of useful consciousness is considerably shortened when a person is subjected to a rapid decompression. This is due to the rapid reduction of pressure on the body: oxygen in the lungs is exhaled rapidly. This in effect reduces the partial pressure of oxygen in the blood and therefore reduces the pilot's effective performance time by one-third to one-fourth its normal time. For this reason, the oxygen mask should be worn when flying at very high altitudes (35,000 feet or higher). It is recommended that the crewmembers select the 100 percent oxygen setting on the oxygen regulator at high altitude if the airplane is equipped with a demand or pressure demand oxygen system.
Another hazard is being tossed or blown out of the airplane if near an opening. For this reason, individuals near openings should wear safety harnesses or seatbelts at all times when the airplane is pressurized and they are seated.
Another potential hazard during high altitude decompressions is the possibility of evolved gas decompression sicknesses. Exposure to wind blasts and extremely cold temperatures are other hazards one might have to face.
Rapid descent from altitude is necessary if these problems are to be minimized. Automatic visual and aural warning systems are included in the equipment of all pressurized airplanes.
Most high altitude airplanes come equipped with some type of fixed oxygen installation. If the airplane does not have a fixed installation, portable oxygen equipment must be readily accessible during flight. The portable equipment usually consists of a container, regulator, mask outlet, and pressure gauge. Aircraft oxygen is usually stored in high pressure system containers of 1,800 to 2,200 pounds per square inch (p.s.i.). When the ambient temperature surrounding an oxygen cylinder decreases, pressure within that cylinder will decrease because pressure varies directly with temperature if the volume of a gas remains constant. If a drop in indicated pressure on a supplemental oxygen cylinder is noted, there is no reason to suspect depletion of the oxygen supply; the indicated pressure has simply dropped because the containers are stored in an unheated area of the aircraft.
High pressure oxygen containers should be marked with the p.s.i. tolerance (i.e., 1,800 p.s.i.) before filling the container to that pressure. The containers should be supplied with aviation oxygen only, which is 100 percent pure oxygen. Industrial oxygen is not intended for breathing and may contain impurities, and medical oxygen contains water vapor that can freeze in the regulator when exposed to cold temperatures. To assure safety, the oxygen system should be inspected and serviced periodically.
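The pressure-temperature relationship described above can be illustrated with a short worked example (the specific temperatures and pressures are assumptions chosen only for illustration). For a rigid cylinder, pressure is proportional to absolute temperature:

\[
\frac{P_1}{T_1} = \frac{P_2}{T_2}
\quad\Rightarrow\quad
P_2 = P_1\,\frac{T_2}{T_1} = 1{,}800\ \text{p.s.i.} \times \frac{264\ \text{K}}{294\ \text{K}} \approx 1{,}616\ \text{p.s.i.},
\]

so a cylinder filled to 1,800 p.s.i. at about 21 °C (294 K) that cools to about -9 °C (264 K) will indicate roughly 1,616 p.s.i. even though no oxygen has been consumed.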
An oxygen system consists of a mask and a regulator that supplies a flow of oxygen dependent upon cabin altitude. Regulators approved for use up to 40,000 feet are designed to provide zero percent cylinder oxygen and 100 percent cabin air at cabin altitudes of 8,000 feet or less, with the ratio changing to 100 percent oxygen and zero percent cabin air at approximately 34,000 feet cabin altitude. Regulators approved up to 45,000 feet are designed to provide 40 percent cylinder oxygen and 60 percent cabin air at lower altitudes, with the ratio changing to 100 percent at the higher altitude.
Pilots should avoid flying above 10,000 feet without oxygen during the day and above 8,000 feet at night.
Figure 13: Oxygen system regulator.
Pilots should be aware of the danger of fire when using oxygen. Materials that are nearly fireproof in ordinary air may be susceptible to burning in oxygen. Oils and greases may catch fire if exposed to oxygen, and cannot be used for sealing the valves and fittings of oxygen equipment. Smoking during any kind of oxygen equipment use is prohibited. Before each flight, the pilot should thoroughly inspect and test all oxygen equipment. The inspection should include a thorough examination of the aircraft oxygen equipment, including available supply, an operational check of the system, and assurance that the supplemental oxygen is readily accessible. The inspection should be accomplished with clean hands and should include a visual inspection of the mask and tubing for tears, cracks, or deterioration; the regulator for valve and lever condition and positions; oxygen quantity; and the location and functioning of oxygen pressure gauges, flow indicators and connections. The mask should be donned and the system should be tested. After any oxygen use, verify that all components and valves are shut off.
There are numerous types of oxygen masks in use that vary in design detail. It would be impractical to discuss all of the types on this page. It is important that the masks used be compatible with the particular oxygen system involved. Crew masks are fitted to the user's face with a minimum of leakage. Crew masks usually contain a microphone. Most masks are the oronasal type, which covers only the mouth and nose.
Passenger masks may be simple, cup-shaped rubber moldings sufficiently flexible to obviate individual fitting. They may have a simple elastic head strap or the passenger may hold them to the face.
All oxygen masks should be kept clean. This reduces the danger of infection and prolongs the life of the mask. To clean the mask, wash it with a mild soap and water solution and rinse it with clear water. If a microphone is installed, use a clean swab, instead of running water, to wipe off the soapy solution. The mask should also be disinfected. A gauze pad that has been soaked in a water solution of Merthiolate can be used to swab out the mask. This solution should contain one-fifth teaspoon of Merthiolate per quart of water.
Wipe the mask with a clean cloth and air dry.
Diluter demand oxygen systems
Diluter demand oxygen systems supply oxygen only when the user inhales through the mask. An automix lever allows the regulators to automatically mix cabin air and oxygen or supply 100 percent oxygen, depending on the altitude. The demand mask provides a tight seal over the face to prevent dilution with outside air and can be used safely up to 40,000 feet. A pilot who has a beard or mustache should be sure it is trimmed in a manner that will not interfere with the sealing of the oxygen mask. The fit of the mask around the beard or mustache should be checked on the ground for proper sealing.
Pressure demand oxygen systems
Pressure demand oxygen systems are similar to diluter demand oxygen equipment, except that oxygen is supplied to the mask under pressure at cabin altitudes above 34,000 feet. Pressure demand regulators also create airtight and oxygen-tight seals, but they also provide a positive pressure application of oxygen to the mask face piece that allows the user´s lungs to be pressurized with oxygen. This feature makes pressure demand regulators safe at altitudes above 40,000 feet.
Some systems may have a pressure demand mask with the regulator attached directly to the mask, rather than mounted on the instrument panel or other area within the flight deck. The mask-mounted regulator eliminates the problem of a long hose that must be purged of air before 100 percent oxygen begins flowing into the mask.
Continuous flow oxygen systems
Continuous flow oxygen systems are usually provided for passengers. The passenger mask typically has a reservoir bag, which collects oxygen from the continuous flow oxygen system during the time when the mask user is exhaling. The oxygen collected in the reservoir bag allows a higher aspiratory flow rate during the inhalation cycle, which reduces the amount of air dilution. Ambient air is added to the supplied oxygen during inhalation after the reservoir bag oxygen supply is depleted. The exhaled air is released to the cabin.
Figure 14: Continuous flow mask and rebreather bag.
Servicing of oxygen systems
Certain precautions should be observed whenever aircraft oxygen systems are to be serviced. Before servicing any aircraft with oxygen, consult the specific aircraft service manual to determine the type of equipment required and procedures to be used. Oxygen system servicing should be accomplished only when the aircraft is located outside of the hangars. Personal cleanliness and good housekeeping are imperative when working with oxygen. Oxygen under pressure may ignite spontaneously when brought into contact with petroleum products.
Service people should be certain to wash dirt, oil, and grease (including lip salves and hair oil) from their hands before working around oxygen equipment. It is also essential that clothing and tools are free of oil, grease, and dirt. Aircraft with permanently installed oxygen tanks usually require two persons to accomplish servicing of the system. One should be stationed at the service equipment control valves, and the other stationed where he or she can observe the aircraft system pressure gauges. Oxygen system servicing is not recommended during aircraft fueling operations or while other work is performed that could provide a source of ignition. Oxygen system servicing while passengers are on board the aircraft is not recommended.
| 2026-01-31T14:10:07.547084 |
449,083 | 4.009328 | http://www.sherpaguides.com/georgia/flint_river/wildnotes/index.html | The Natural Georgia Series: The Flint River
Rivers are among the most human-impacted ecological systems in the world. As people colonize a continent they tend to follow river corridors, building settlements along the way. As commerce develops, rivers become important transportation routes. To aid river traffic, channels are dredged and cleared. As cities grow, rivers are used both as a source of water and to treat municipal and industrial wastes. Rivers receive runoff from storm sewers and disturbed areas. Reservoirs and levees are constructed to control floods and store water for later use. All of these activities contribute to the degradation of natural river habitat, alteration of river flows, and pollution of river water. The net effect of human activity has been a loss of native river flora and fauna. For example, in North America over 20 percent of freshwater snails and fishes, over 30 percent of crayfishes, and almost 50 percent of mussels are in danger of extinction. In a recent survey, only 2 percent of North America's total river miles were rated as being in a high-quality natural condition. Scientists and river managers are recognizing that rivers can be restored. Critical to the recovery of river health is the reestablishment of seasonal floods and conservation or restoration of streamside forests and wetlands. Once these components are reestablished, a river's natural capacity for self-purification will result in recovery of water quality and river habitat. However, when a species is lost, it cannot be recovered.
The Flint River of Georgia is a river that still maintains high-quality river habitat, unpolluted waters, and an abundance of native species in some sections. Even though the river begins in urban Atlanta and crosses an agricultural landscape, streamside forests, swamps, and unimpeded river flow allow natural self-purification to occur. The Flint River and areas adjacent provide habitat for many interesting and unusual plants and animals.
Historically, 29 species of freshwater mussels were found in the Flint River and its tributaries, making it one of the most diverse rivers for freshwater mussels in the state of Georgia. In the early 1990s, surveys found that 22 species of freshwater mussels still occurred in the Flint River and its tributaries.
Freshwater mussels are sedentary creatures that live on river bottoms or burrow into river sediments. They are filter feeders; they feed by drawing water into their bodies with siphons and capture organic particles and microscopic algae with special structures on their gills. Freshwater mussels have an unusual life cycle that depends on freshwater fish. Once mussel eggs are fertilized, immature mussels, called glochidia, are released into river water where they must find a fish host. They attach to the gills or other body surfaces of fish, where development occurs. They then release from the fish and settle to the river bottom where they mature. Some mussels produce elaborate lures to attract fish hosts and many produce large numbers of glochidia.
One of the largest freshwater mussels in the Flint River is the washboard (Megalonaias nervosa), which may grow to a length of 8 inches. The washboard has a dark brown almost round shell. Its name comes from the large ridges on the outside of the shell which resemble the ridges on an antique washboard. The inside of the shell is iridescent white to bluish-white. The washboard is a large river species and prefers deeper midchannel areas with swift current. It is often found in areas with sand or limestone rock bottoms. The washboard is still abundant in the Flint River.
The seeps, springs, and caves that are found in the karst topography of south Georgia provide habitat for a unique and diverse faunal community. Underground and cave-dwelling species are highly endemic, or limited in their geographical distribution. A survey of obligate cave fauna of the United States determined that over 60 percent of these species were restricted in distribution to only one county, with some species found only in a single cave or spring.
These subterranean faunal communities function in an ecological system with a significantly restricted energy input. For most of these systems, primary energy inputs come from nutrients and detritus transported by infiltrating groundwater. Some researchers have also suggested that inputs from bat guano deposited in caves and bacteria growing in the subterranean waters are important subsidies for the food web of these ecosystems. Much remains to be learned about the ecology of these subterranean systems, but we do know that they are very sensitive and vulnerable to impacts such as pollutants and sediments that infiltrate groundwater and altered hydrology.
The most specialized subterranean creatures are known as troglobites and spend their entire lives underground. Species that comprise this group often exhibit extreme morphological adaptations to their environment, such as lack of pigmentation, reduction or loss of eyes, and development of accessory sensory structures. Two of the most well-known troglobites from southwest Georgia include the Georgia blind cave salamander (Haideotriton wallacei) and the Dougherty Plain cave crayfish (Cambarus cryptodytes).
The Georgia blind cave salamander was discovered in 1939 by an engineer with the Dougherty County water system when a specimen was lifted to the top of a 200 foot well. These salamanders range in size from 1 to 3 inches and vary in color from a translucent pale pink to white. Other adaptations to their unusual aquatic environment are bright red feathery external gills and a finned tail.
The Dougherty Plain cave crayfish is another subterranean species that was discovered from a well. Found in Jackson County, FL in 1941, this crayfish has subsequently been reported from caves in southwest Georgia and neighboring parts of the Florida panhandle. It shares the typical subterranean adaptations of lack of pigmentation and reduced eyes.
The net-spinning caddisflies (Family Hydropsychidae) are among the most abundant type of caddisfly in the rivers of Georgia. One hundred and forty-five species occur in North America, although it is not known how many of these occur in the Flint River. The immature or larval form of a net-spinning caddisfly resembles a small caterpillar and lives in the water. They have a rounded armored head, three pairs of legs, plates on the back of the first three body segments, and a pair of hook-like tails. At maturity the larvae range from 0.5 to 1 inch in length.
Net-spinning caddisflies produce silk, which they use to construct elaborate tent-like homes called retreats on the surface of rocks or wood. In the Flint River, boulders and pieces of submerged wood may be covered with hundreds of retreats. Net-spinning caddisflies are filter-feeders and use nets woven into their retreats to capture small animals or organic particles from river water. They use brush-like mouthparts to remove food captured in their nets. Each species of net-spinning caddisfly produces a unique retreat and capture net. When the larvae complete their growth, they withdraw into their retreat and gradually develop wings. The adult caddisfly swims to the surface and emerges from the water.
The caddisfly resembles a small moth and its wings are folded tent-like over its body when resting. Adult caddisflies live for only a few days. During this time they mate and the female deposits eggs in the water to complete the life cycle.
Dobsonfly larvae (Corydalus cornutus) are abundant in the Flint River but seldom observed. Growing to a length of 3 inches, they have large square heads with distinct yellow and brown markings and large strong jaws. They have three pairs of legs and a muscular body with long, lateral filaments. Dobsonfly larvae, also called hellgrammites or 'gator-fleas, live beneath rocks and boulders in swiftly flowing sections of a river. They can be found in abundance in the shoals of the Flint River or its tributaries.
Dobsonfly larvae are predators. They use their strong jaws to capture other smaller aquatic animals. Their jaws are also used for defense and if handled carelessly a dobsonfly larva can deliver a severe pinch.
In the Flint River, dobsonfly larvae require a year to complete their larval growth. They mature in early summer, crawl to the stream bank, and burrow into moist soil. Over the next few weeks they gradually develop wings and emerge from the soil to find a mate. The adults do not feed and are clumsy fliers, spending most of their time on the streambank or in vegetation. Following mating, females deposit eggs in masses on vegetation or on structures overhanging the river. Egg masses are chalky white and circular, about 1 inch in diameter. They are commonly observed in early summer. Dobsonfly larvae are a preferred food of shoal bass and can be used as bait.
Burrowing mayflies of the family Ephemeridae are common in silty or muddy-bottomed, slow-moving streams across North America. In the Flint River, the genus Hexagenia is the most common representative of this group. Hexagenia mayflies spend most of their lives as an immature form or nymph. The nymph has two long tusks, 3 pairs of legs, and feathery gills on its body. The front legs are enlarged with bristles and spikes used for digging into the stream bottom. They make shallow burrows in muddy areas or near the river margin. They beat their feathery gills to circulate water and dissolved oxygen to their burrows. Hexagenia nymphs feed on silt and mud, deriving their nutrition from microbes and decaying organic matter. Hexagenia nymphs can be very abundant in areas where rivers have been impounded.
Hexagenia mayflies are known for their enormous hatches in early summer. The nymphs are sensitive to water temperature and when river water warms to the right level they swim to the surface and emerge as winged adults. In some areas of the lower Flint tens of thousands of mayflies emerge at once. They are often referred to as "willow flies" because the adults rest on willows and other trees growing next to the river. Bass and other game fish recognize these hatches and feed aggressively on mayflies that fall off vegetation and into the water. The adult mayflies are attracted to street and building lights and can become a nuisance on road surfaces and parking lots.
The adults do not feed and live for only a few days. Males form circular mating swarms, flying up and down to attract a mate. Once mating occurs the females deposit eggs directly on the water surface. The eggs sink to the river bottom and hatch, starting the next generation of burrowing mayflies.
The Halloween darter (Percina sp.) was discovered by researchers in the early 1990s. It is found only in the Apalachicola, Chattahoochee, and Flint rivers. It is very abundant in shoals of the upper and lower Flint and some of its tributaries. The Halloween darter has not been 'named,' a process that involves a formal description of the species published in the scientific literature. However, researchers at the University of Georgia are working on a formal species description.
The Halloween darter is a small fish, with adults ranging in size from 2 to 4 inches. Its body is banded with dark bars and blotches against a bronze background. The Halloween darter received its common name because males and females develop a bright orange band on their front fins during breeding season in the late spring. Also, all of their other fins are banded with a bright orange wash. Halloween darters live exclusively in swift-flowing shoals and riffles. They use their front fins to rest on the stream bottom. Halloween darters are predators and feed on small aquatic invertebrates that live in shoal areas.
The striped bass (Morone saxatilis) occurs throughout the lower Flint River. It has a long narrow body that is silver to white with a series of dark stripes running from head to tail. The head is small with a large mouth and jutting lower jaw. In the Flint River, these fish can attain a large size; 10- to 15-pound individuals are not unusual, and occasionally an individual over 30 pounds is observed.
Striped bass have been adversely affected by humans. In the 1800s millions of pounds of striped bass were commercially harvested annually, resulting in population declines. Striped bass are naturally migratory; they spawn in fresh water in late spring and then migrate to the ocean for the rest of the year. The development of reservoirs prevented annual migration, and landlocked populations developed. However, landlocked striped bass do not attain the large size (up to 100 pounds) reported for ocean-migrating individuals. Striped bass are intolerant of warm water temperatures common in southeastern rivers. In the Flint River, they are dependent on blue holes and other cool water springs where they congregate in large numbers during the warm summer months. Striped bass are predators feeding primarily on shad and other schooling fishes.
The Gulf sturgeon (Acipenser oxyrhynchus desotoi) is one of the larger freshwater fishes. In the 1800s, sturgeon greater than 10 feet in length and weighing up to 800 pounds were commonly harvested from southeastern rivers. Although large individuals are still encountered, they are not abundant.
Like the striped bass, the Gulf sturgeon has been adversely affected by humans. Overharvesting in the 1800s caused severe population declines. The Gulf sturgeon was harvested both for meat and for its eggs, an ingredient in caviar. Gulf sturgeon live in shallow waters of the Gulf of Mexico, but they migrate into rivers in the early spring to spawn. The construction of locks and dams on rivers interferes with migration, further reducing Gulf sturgeon numbers. The Gulf sturgeon once spawned in the lower Flint River, but the construction of the Woodruff Lock and Dam at Lake Seminole in the 1950s stopped the annual migration. Gulf sturgeon still occur in the Apalachicola River but are rare.
Gulf sturgeon are striking fish. They have a long, V-shaped snout and flat head. The mouth is in the bottom of the head and has a set of barbels in front. The body is covered with leathery skin and a row of triangular plates line the sides of the body. Gulf sturgeon are bottom feeders. They use their snouts to dig through soft mud where they find a variety of worms, mollusks, crabs, and insect larvae. They do not feed during their annual spawning migration in rivers. Because of their large size, Gulf sturgeon have few predators and can be long-lived (up to 100 years). They mature slowly and require 7 to 12 years to reach breeding age.
Barbour's map turtle (Graptemys barbouri) is one of the most attractive aquatic turtles in the Flint River. They have olive green shells and their heads and legs are patterned with bright yellow and green markings resembling a road map. They have keel-like spines running across the top of their shell and additional spines on the shell near their tail. Male Barbour's map turtles are much smaller than females and are more brightly patterned. Females have strong jaws, which they use to crush snails and mussels, their preferred food sources. Males have a more diverse diet that includes aquatic insects and small snails.
The Barbour's map turtle is found only in the Apalachicola, Chattahoochee, and Flint river systems. They are especially abundant in the lower Flint and its tributaries and prefer areas where shoals alternate with deeper areas (5 to 12 feet). Like many aquatic turtles, Barbour's map turtles bask on logs overhanging the stream. They are noted for their shyness, quickly dropping into the water at the first sign of disturbance.
Barbour's map turtles breed from April through July and females deposit more than one clutch of eggs per season. Preferred nesting sites are sandy beaches and sand bars. Barbour's map turtles prefer clean water and populations decline in polluted areas. Since they prefer swiftly flowing areas, the development of reservoirs also contributes to population declines.
The alligator snapping turtle (Macroclemys temminckii) is the largest freshwater turtle in North America. Adults often weigh more than 100 pounds and a few individuals weighing 200 pounds have been observed. Alligator snappers are primitive-looking creatures. They have a huge, rough-textured head with a down-turned beak. Their eyes are surrounded by fleshy "eyelashes." The shell has three parallel ridges running from front to back.
Alligator snapping turtles are found in deeper areas of the Flint River and its tributaries. They also occur in swamps and ponds on the floodplain of the Flint River. They are predators and scavengers and feed on a wide variety of animals and plants including fish, aquatic invertebrates, other turtles, acorns, briar roots, and wild grapes. They also have a pink worm-like lure in their mouth. They sit quietly on the bottom with their mouth open and the lure gently wiggling in the current. When a fish comes to investigate, they snap their jaws shut, quickly capturing their prey. They are the only turtle known to possess a lure.
Alligator snapping turtles are declining in the Flint River and throughout the Southeast. The major cause of the decline is overharvesting. Turtle meat is considered a delicacy and is also used for turtle soup. Alligator snapping turtles are long-lived and require 11 to 13 years to reach maturity. Females only lay one clutch of eggs per year and mortality of hatchlings from predators is high. Thus, alligator snapping turtle populations cannot survive high rates of harvest. Georgia considers alligator snapping turtles a threatened species and it is illegal to possess or sell them.
The brown water snake (Nerodia taxispilota) is a common water snake in the Flint River. They have brown bodies with rectangular dark brown blotches down the middle of the back. They also have dark brown blotches on the sides of the body. Their body color can range from light brown to almost black. Mature adults range from 3 to 5 feet in length. They have heavy bodies with large triangular heads and are often mistaken for cottonmouths. Brown water snakes can be distinguished from cottonmouths by the lack of facial pits and lack of a narrow light line on the face. Unlike cottonmouths, brown water snakes are not venomous. Like most snakes, brown water snakes avoid humans and generally flee if disturbed. If cornered or carelessly handled they can deliver a painful bite and secrete a foul-smelling musk. Brown water snakes are excellent climbers and can often be observed in tree branches overhanging the Flint River.
Brown water snakes eat fish and have a preference for catfish, but they will also capture other bottom-dwelling fish and frogs. Breeding occurs in late spring and females give birth to 15 to 50 young in late summer or early autumn. Brown water snakes prefer slow-moving sections of the Flint River but can also be found in sloughs, forested wetlands, and freshwater marshes.
One of the most colorful breeding birds in North America, the prothonotary warbler (Protonotaria citrea) inhabits swamps and wetland forests throughout its range. While its breeding range extends as far north as the Coastal Plain of the mid-Atlantic states and river valleys of the Midwest, this bright yellow songbird is far more common in bottomland hardwood forests and swamps of the southeastern United States, including the Flint River. The "sweet-sweet-sweet" song of this bird is synonymous with springtime in forested wetlands of the region.
The bird also has a unique place in our legal and political history. Alger Hiss's testimony of his excitement at seeing this species along the Potomac revealed him to have committed perjury when testifying before the House Un-American Activities Committee. Hiss had repeatedly denied knowing Whittaker Chambers, an ex-communist who accused him of espionage. Chambers's testimony included details of Hiss's personal life that only a friend could have known, such as Hiss's sighting of the warbler along the Potomac. Hiss's independent verification of his prothonotary warbler sighting was taken as proof that Chambers was indeed telling the truth. One of Hiss's most visible and fervent adversaries in this interrogation was Richard Nixon.
Wintering in mangroves along the coasts of Mexico and Central America, the prothonotary warbler is one of the earliest migrants in the spring, usually arriving on the Gulf coast by mid-March. The only eastern wood-warbler that nests in tree cavities, prothonotaries are almost always found nesting in trees in standing or slowly moving water. When learning to fly, fledglings that fall in the water have been observed to swim significant distances. Habitat destruction poses the major threat to this species, both on its breeding and wintering grounds. As a result this species is on the watch list of Partners in Flight, a multiagency international bird conservation group.
Cryptic even to its discoverer, the northern rough-winged swallow (Stelgidopteryx serripennis) was first documented by John James Audubon in 1838 when he collected what he believed to be bank swallows. Only later upon closer observation of his specimens did he realize that this was a distinct species. Although relatively common in suitable habitat, this bird is still easily overlooked. The ends of the primary feathers lack terminal barbules and in males even bend up in tiny hooks. This trait gives the bird its name, but it remains unclear what purpose this adaptation serves.
Although the northern rough-winged swallow is found in a variety of open habitats throughout its range, it is closely associated with rivers. They nest in burrows in eroded banks, particularly bluffs, along the river. There is debate as to whether they excavate their own burrow or utilize existing burrows excavated by other birds such as kingfishers or even small mammals. While apparently capable of digging their own burrows, a majority of birds do use existing burrows, and scarcity of burrows is believed to be a limiting factor in their reproductive success.
After raising their young to the fledgling stage (approximately 20 days), young birds take an initial flight and rarely return to their burrow again. Feeding at lower altitudes than most swallows, northern rough-wingeds are commonly seen hawking insects over the Flint River, sometimes dipping their bill down to take prey from the water's surface. Although inconspicuous, they are a fascinating species to observe on and around the river.
Writing of his discovery of this plant in the shoals of the Savannah River at Augusta in the 1770s, the famous early American naturalist William Bartram stated "nothing in vegetable nature was more pleasing than the odoriferous Pancratium fluitans, which alone possesses the little rocky islets which just appear above the water." A true habitat specialist, the shoals spider lily, which has been reclassified with the Latin name Hymenocallis coronaria, is found in the shallow, fast-flowing water of river shoals along the fall line in South Carolina, Georgia, and Alabama.
The odoriferous quality to which Bartram refers plays an important role in the plant's natural history. Blooming from mid-May through early June, the large, showy white flowers open in the late afternoon and emit a pleasant fragrance throughout the night that wanes the next day as the blossom withers. This fragrance and the light color of the flower attract a species of sphinx moth, which pollinates the lily on nocturnal flights across the shoals. Another unique adaptation to their habitat is the plant's production of pecan-sized seeds that sink, rather than float like other species of spider lily. The seeds' lack of buoyancy allows them to sink quickly into a crevice between the rocks where they germinate in the shoals instead of floating into deeper water where they would not survive.
Found in Georgia in isolated populations along the Savannah, Chattahoochee, and Flint rivers, much of this plant's habitat has been submerged by the construction of reservoirs. Listed as endangered by the state of Georgia, threats to remaining populations include siltation, abnormally lowered water levels from drought and overwithdrawal, and collecting by humans.
Ranging from North Carolina to Louisiana, this plant is the only epiphytic (growing on trees) orchid north of central Florida. Most commonly found growing on live oak or magnolia trees, the greenfly orchid (Epidendrum conopseum) is not parasitic; it uses the tree only as a substrate on which to grow. Greenfly orchids prefer habitats with moist microclimates, such as swamps and riparian hammocks.
Blooming from mid to late summer, this plant's pale green flowers resemble the shape of a housefly, thus its name. Like many of the light colored tropical orchids, the greenfly is pollinated by moths and is fragrant only during the evening and nighttime hours. Listed as unusual by the state of Georgia, this orchid is sometimes overlooked because of its tendency to grow mixed with other moisture-loving epiphytic plants such as resurrection fern and ball moss.
Corkwood (Leitneria floridana) is a relict species. According to the fossil pollen record, it was once common throughout the Southeast, but now it is relatively rare. Corkwood is a shrubby, small tree sometimes growing in dense clonal thickets. Found in a variety of wetland habitats, it is exceptionally flood tolerant and is also tolerant of brackish conditions. This species occurs in only a few counties in south Georgia, with one population found in the Chickasawhatchee Swamp.
Wood from this tree is exceptionally light, having the lowest specific gravity of any tree in North America. It is more common on the Gulf coast of Florida and Texas, although still rare there. Because of its light weight and buoyancy, wood from the corkwood tree was used by local fishermen to make floats for their nets.
One of the most unusual ecological communities in the state of Georgia is the Atlantic white cedar (Chamaecyparis thyoides) swamp. While this community is usually found in peat bogs in New England and the mid-Atlantic or in frequently inundated river swamps along the Gulf coast of Florida and Alabama, Georgia's Atlantic white cedar communities occur on drier sandy terraces along western tributaries of the Flint in the fall line sandhills. They represent the most disjunct occurrence of this community type and the only examples in the state of Georgia. Generally, Atlantic white cedar decreases in abundance with increasing distance from the coast, and the communities along the Flint are the farthest from the coast anywhere in the species' range.
Atlantic white cedar is a straight tree typically found in dense stands, with lower branches dead due to lack of light. Its leaves are scale-like, overlapping, and often completely cover the stems. Unique assemblages of shrubby and herbaceous vegetation are often associated with Atlantic white cedar and the extreme conditions in which it thrives. These swamps also provide important habitat for wildlife, especially birds. Surveys in the Great Dismal Swamp in North Carolina showed Atlantic white cedar to be valuable breeding habitat for a variety of species, as well as important wintering habitat for some species. The larvae of one butterfly, Hessel's hairstreak, feed exclusively on Atlantic white cedar.
Much more common at the time of European settlement than it is now, Atlantic white cedar was logged extensively over the last two centuries. Because its regeneration depends on the interplay of disturbances such as light- to medium-intensity fire and flooding, altered hydrology, fire suppression, and poor logging practices are the principal threats to Atlantic white cedar swamps.
While cultivated ornamental hybrids from southeast Asia have long been a favorite of southern gardeners, fewer people are familiar with the diversity and beauty of our native azaleas (Rhododendron sp.). Of the 15 species of azaleas native to the eastern United States, 8 are found in the Flint River watershed. Native azaleas range from more common species that are relatively widespread geographically, such as the Piedmont azalea (R. canescens), to rare species with restricted ranges, such as the plumleaf azalea (R. prunifolium). Extremely variable in color within a given species, native azaleas are best identified by season of bloom, habitat preference, fragrance, and geographic range.
For the most part, native azaleas are found in wooded sites with moist soils, such as riparian hardwood forests, although some species, such as Alabama azalea (R. alabamense) and Oconee azalea (R. flammeum) prefer drier sites. Depending on the species, native azaleas bloom from early spring through late summer. Some species are quite fragrant and they are a valuable source of nectar for both butterflies and hummingbirds. One of the best places to view native azaleas is at Callaway Gardens near Pine Mountain.
Primary threats to native azaleas in the wild include habitat destruction and removal by collectors. Native azaleas should never be collected from the wild: transplants usually fail, and several nurseries in the Southeast now offer a variety of native azalea species and even hybrids.
As the Flint enters the Dougherty Plain physiographic province, the character of its riparian forests changes dramatically. As the river cuts into the underlying limestone, the floodplain becomes narrow, especially compared with the broad floodplain swamps found just below the fall line. Rather than the frequently flooded bottomland hardwood forests characteristic of these wider floodplains, the lower Flint and its tributaries are flanked by a narrow band of more floristically diverse hardwood hammocks.
The diversity of woody plant species in these hammocks rivals and even surpasses that of southern Appalachian cove hardwood communities. This diversity is a result of calcareous soils, variable hydrology, a position in the landscape that provides a refuge from frequent fire, and the confluence of species characteristic of the Coastal Plain with those of more northern affinities at the southern limit of their ranges. Riparian hammocks also feature many unique and rare nonwoody plants, such as several species of trillium, greenfly orchid, atamasco lily, and claytonia. This community has great value as habitat for wildlife, with mast-producing trees such as oaks and hickories providing an important food source. These forests also serve as riparian corridors, facilitating movement of wildlife through the landscape and providing important habitat for breeding and migratory birds, reptiles, and amphibians.
One of the world's hardiest palms, needle palm (Rhapidophyllum hystrix) has become a favorite of collectors around the world for its ability to withstand cold. Although visible damage may occur when temperatures fall between 0 and 10 degrees Fahrenheit, plants have reportedly recovered from temperatures as low as -20 degrees Fahrenheit. While its tolerance to cold has resulted in plantings that range from the northern United States to Europe, this plant's natural range extends from southeastern South Carolina westward across the Coastal Plain to southeastern Mississippi.
This shrubby palm gets its name from the rather stout, sharp needles that spiral outward from its short, fibrous trunk. Usually found in moist, hardwood-dominated forests, needle palm can be found in the Flint River basin in the Chickasawhatchee Swamp and in Magnolia Swamp. This palm has an extremely slow growth rate, only reaching 4 to 6 feet at maturity. Many seeds are infertile and viable seeds may take up to two years to germinate. This low reproductive potential and habitat destruction are the primary threats to this increasingly rare species.
Some of the special features found along the Flint and its tributaries are relatively high-quality mesic hardwood forests. A few of the more undisturbed examples of these forests in the Flint Basin contain a very rare species of trillium, the relict trillium (Trillium reliquum). Listed as endangered by both the U.S. Fish and Wildlife Service and the state of Georgia, relict trillium ranges from South Carolina to southeastern Alabama in three general groupings of populations. The plant is believed to have once been more widespread, with these three groups representing relicts of its original distribution, thus the common name. This plant was described as a new species in 1975.
The trillium genus dates back 30 million years and contains 38 species, all occurring in the Northern Hemisphere. Trilliums are found in temperate forests of the eastern U.S., the Pacific Northwest, and eastern Asia. Throughout its range, relict trillium is typically found close to streams, creeks, and rivers along the fall line in habitat with high relative humidity. The plant grows low to the ground, producing a single flower that ranges in color from dark purple to yellow. The flower emerges from a single whorl of three leaves that are mottled with blotches of color that vary from dark green to silver.
Trilliums have an intriguing method of seed dispersal and are known as myrmecochorous, or ant-dispersed, plants. The fruit drops to the ground, splitting open and exposing the seeds. Attached to the seeds are elaiosomes, appendages rich in lipids as well as chemicals that mimic the "smell" of ant prey. Ants carry this package away, consume the elaiosome, and deposit the hard seed in their mound, where they protect it from being eaten by other creatures.
Primary threats to relict trillium are habitat destruction caused by clearing of forest for agriculture or pine plantations and competition from exotic species, particularly Japanese honeysuckle. Recent research has also shown that exotic fire ants harm trillium reproduction both by displacing the native ant species that disperse seeds and by moving seeds into unsuitable trillium habitat.
Red buckeye (Aesculus pavia) is a shrub or small understory tree that may reach heights of 20 to 30 feet. It is found in mesic woodlands, along streams and creeks, and in parts of floodplains that seldom flood or flood for short periods of time. The tree has broad, flat palmate compound leaves, usually with five leaflets. Its flowers, which appear in the spring as early as March, are a showy red and borne in groups on panicles 6 to 10 inches long. Red buckeye is becoming popular as an ornamental and is known to attract ruby-throated hummingbirds, which are the primary pollinator of this plant.
Red buckeye is native to the Coastal Plain and lower Piedmont from North Carolina through Texas and is also found north through southern Illinois and Missouri in the western part of its range. Early settlers made a soap substitute from its gummy roots, and Native Americans used the crushed branches of this and other buckeyes to drug fish and make them easier to catch.
| 2026-01-25T04:29:31.193159 |
964,869 | 4.379124 | http://www.learner.org/courses/envsci/unit/text.php?unit=12&secNum=6 | Unit 12: Earth's Changing Climate // Section 6: Present Warming and the Role of CO2
There is clear evidence from many sources that the planet is heating up today and that the pace of warming may be increasing. Earth has been in a relatively warm interglacial phase, the Holocene epoch, since the last ice age ended roughly 10,000 years ago. Over the past thousand years average global temperatures have varied by less than one degree Celsius, even during the so-called "Little Ice Age," a cool phase from the mid-fourteenth through the mid-nineteenth centuries, during which Europe and North America experienced bitterly cold winters and widespread crop failures.
Over the past 150 years, however, global average surface temperatures have risen, increasing by 0.6°C +/- 0.2°C during the 20th century. This increase is unusual because of its magnitude and the rate at which it has taken place. Nearly every region of the globe has experienced some degree of warming in recent decades, with the largest effects at high latitudes in the Northern Hemisphere. In Alaska, for example, temperatures have risen three times faster than the global average over the past 30 years. The 1990s were the warmest decade of the 20th century, with 1998 the hottest year since instrumental record-keeping began a century ago, and the ten warmest years on record have all occurred since 1990 (Fig. 9).
Figure 9. Global temperature record
Source: Courtesy Phil Jones. © Climatic Research Unit, University of East Anglia and the U.K. Met Office Hadley Centre.
As temperatures rise, snow cover, sea ice, and mountain glaciers are melting. One piece of evidence for a warming world is the fact that tropical glaciers are melting around the globe. Temperatures at high altitudes near the equator are very stable and do not usually fluctuate much between summer and winter, so the fact that glaciers are retreating in areas like Tanzania, Peru, Bolivia, and Tibet indicates that temperatures are rising worldwide. Ice core samples from these glaciers show that this level of melting has not occurred for thousands of years and therefore is not part of any natural cycle of climate variability. Paleoclimatologist Lonnie Thompson of Ohio State University, who has studied tropical glaciers in South America, Asia, and Africa, predicts that glaciers will disappear from Kilimanjaro in Tanzania and Quelccaya in Peru by 2020.
“The fact that every tropical glacier is retreating is our warning that the system is changing.”
Lonnie Thompson, Ohio State University
Rising global temperatures are raising sea levels due to melting ice and thermal expansion of warming ocean waters. Global average sea levels rose between 0.12 and 0.22 meters during the 20th century, and global ocean heat content increased. Scientists also believe that rising temperatures are altering precipitation patterns in many parts of the Northern Hemisphere (footnote 7).
Because the climate system involves complex interactions between oceans, ecosystems, and the atmosphere, scientists have been working for several decades to develop and refine General Circulation Models (also known as Global Climate Models), or GCMs, highly detailed models typically run on supercomputers that simulate how changes in specific parameters alter larger climate patterns. The largest and most complex type of GCMs are coupled atmosphere-ocean models, which link together three-dimensional models of the atmosphere and the ocean to study how these systems impact each other. Organizations operating GCMs include the National Aeronautics and Space Administration (NASA)'s Goddard Institute for Space Studies and the United Kingdom's Hadley Centre for Climate Prediction and Research (Fig. 10).
Figure 10. Hadley Centre GCM projection
Source: © Crown copyright 2006, data supplied by the Met Office.
Researchers constantly refine GCMs as they learn more about specific components that feed into the models, such as the conditions under which clouds form or how various types of aerosols scatter light. However, predictions of future climate change by existing models carry a high degree of uncertainty, in part because there is no observational record of a climate with atmospheric CO2 concentrations as high as today's against which the models can be tested.
Modeling climate trends is complicated because the climate system contains numerous feedbacks that can either magnify or constrain trends. For example, frozen tundra contains ancient carbon and methane deposits; warmer temperatures may create a positive feedback by melting frozen ground and releasing CO2 and methane, which cause further warming. Conversely, rising temperatures that increase cloud formation and thereby reduce the amount of incoming solar radiation represent a negative feedback. One source of uncertainty in climate modeling is the possibility that the climate system may contain feedbacks that have not yet been observed and therefore are not represented in existing GCMs.
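To make the role of feedbacks concrete, the following is a minimal zero-dimensional energy-balance sketch, not one of the GCMs described above: a constant radiative forcing warms the surface, and a net feedback parameter determines how strongly that warming is damped. The forcing value, feedback values, and ocean heat capacity used here are illustrative assumptions, not results from any published model.

```python
import numpy as np

# Minimal zero-dimensional energy-balance model (illustrative sketch only).
# C * dT/dt = F - lambda_ * T
#   F        : radiative forcing, W/m^2 (held constant here)
#   lambda_  : net feedback parameter, W/m^2 per K (smaller => more net positive feedback)
#   C        : effective heat capacity of an ocean mixed layer, J/m^2 per K

def equilibrium_warming(forcing, feedback):
    """Equilibrium warming T_eq = F / lambda, in kelvin."""
    return forcing / feedback

def simulate(forcing, feedback, heat_capacity=4.2e8, years=200, dt=0.1):
    """Step the energy-balance equation forward in time with a simple Euler scheme."""
    seconds_per_year = 3.15e7
    steps = int(years / dt)
    temps = np.zeros(steps)
    for i in range(1, steps):
        imbalance = forcing - feedback * temps[i - 1]  # W/m^2 at top of atmosphere
        temps[i] = temps[i - 1] + imbalance * dt * seconds_per_year / heat_capacity
    return temps

if __name__ == "__main__":
    forcing = 3.7  # roughly the forcing from doubled CO2, W/m^2
    for lam in (2.4, 1.2):  # illustrative net feedback parameters, W/m^2 per K
        transient = simulate(forcing, lam, years=20)[-1]
        eq = equilibrium_warming(forcing, lam)
        print(f"lambda = {lam}: ~{transient:.1f} K after 20 years, "
              f"~{eq:.1f} K at equilibrium")
```

A smaller feedback parameter (more net positive feedback, for instance from thawing permafrost releasing CO2 and methane) raises the equilibrium warming, while a larger one (a stronger negative feedback, such as increased cloud reflection) lowers it; the gap between the 20-year and equilibrium values also hints at the delay introduced by the ocean's heat capacity.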
Scientific evidence, including modeling results, indicates that rising atmospheric concentrations of CO2 and other GHGs from human activity are driving the current warming trend. As the previous sections showed, prior to the industrial era atmospheric CO2 concentrations had not risen above 300 parts per million for several hundred thousand years. But since the mid-18th century CO2 levels have risen steadily.
In 2007 the Intergovernmental Panel on Climate Change (IPCC), an international organization of climate experts created in 1988 to assess evidence of climate change and make recommendations to national governments, reported that CO2 levels had increased from about 280 ppm before the industrial era to 379 ppm in 2005. The present CO2 concentration is higher than any levels over at least the past 420,000 years and is likely the highest level in the past 20 million years. During the same time span, atmospheric methane concentrations rose from 715 parts per billion (ppb) to 1,774 ppb and N2O concentrations increased from 270 ppb to 319 ppb (footnote 8).
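As a rough check on what these concentration numbers mean in forcing terms, the snippet below applies a commonly cited simplified expression for CO2 radiative forcing, about 5.35 ln(C/C0) W/m^2 (Myhre et al., 1998). The calculation is a back-of-the-envelope illustration, not a value taken from the IPCC report itself.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C / C0) in W/m^2
    (logarithmic fit from Myhre et al., 1998); C0 is the pre-industrial level."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 379 ppm: {co2_forcing(379):.2f} W/m^2")  # roughly 1.6 W/m^2
print(f"280 -> 560 ppm: {co2_forcing(560):.2f} W/m^2")  # a doubling, roughly 3.7 W/m^2
```

The roughly 1.6 W/m^2 this yields for the rise from 280 to 379 ppm is close to the CO2 forcing estimates the IPCC reports for 2005, and the doubled-CO2 value of about 3.7 W/m^2 is the benchmark forcing often used in climate modeling.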
Do these rising GHG concentrations explain the unprecedented warming that has taken place over the past century? To answer this question scientists have used climate models to simulate climate responses to natural and anthropogenic forcings. The best matches between predicted and observed temperature trends occur when these studies simulate both natural forcings (such as variations in solar radiation levels and volcanic eruptions) and anthropogenic forcings (GHG and aerosol emissions) (Fig. 11). Taking these findings and the strength of various forcings into account, the IPCC stated in 2007 that Earth's climate was unequivocally warming and that most of the warming observed since the mid-20th century was "very likely" (meaning a probability of more than 90 percent) due to the observed increase in anthropogenic GHG emissions (footnote 9).
Figure 11. Comparison of modeled and observed temperature rise since 1860
Source: © Intergovernmental Panel on Climate Change, Third Assessment Report, 2001. Working Group 1: The Scientific Basis, Figure 1.1.
Aerosol pollutants complicate climate analyses because they make both positive and negative contributions to climate forcing. As discussed in Unit 11, "Atmospheric Pollution," some aerosols such as sulfates and organic carbon reflect solar energy back from the atmosphere into space, causing negative forcing. Others, like black carbon, absorb energy and warm the atmosphere. Aerosols also impact climate indirectly by changing the properties of clouds—for example, serving as nuclei for condensation of cloud particles or making clouds more reflective.
Researchers had trouble explaining why global temperatures cooled for several decades in the mid-20th century until positive and negative forcings from aerosols were integrated into climate models. These calculations and observation of natural events showed that aerosols do offset some fraction of GHG emissions. For example, the 1991 eruption of Mount Pinatubo in the Philippines, which injected 20 million tons of SO2 into the stratosphere, reduced Earth's average surface temperature by up to 1.3°F annually for the following three years (footnote 10).
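The Pinatubo episode also illustrates why aerosol cooling fades: once injections stop, stratospheric aerosols settle out within a few years. The sketch below simply decays an assumed initial cooling with an assumed e-folding residence time of about one year; both numbers are illustrative placeholders rather than measured values for this eruption.

```python
import math

def residual_cooling(initial_cooling_f, years_elapsed, residence_time_years=1.0):
    """Remaining aerosol-driven cooling, assuming simple exponential decay
    of the aerosol burden with the given e-folding residence time."""
    return initial_cooling_f * math.exp(-years_elapsed / residence_time_years)

for year in range(5):
    print(f"year {year}: ~{residual_cooling(1.3, year):.2f} degrees F of cooling remaining")
```

Under these assumptions the cooling largely disappears within three to four years, consistent with the temporary nature of the observed post-eruption cooling.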
But cooling from aerosols is temporary because they have short atmospheric residence times. Moreover, aerosol concentrations vary widely by region and sulfate emissions are being reduced in most industrialized countries to address air pollution. Although many questions remain to be answered about how various aerosols are formed and contribute to radiative forcing, they cannot be relied on to offset CO2 emissions in the future. | 2026-02-02T05:35:55.483854 |
489,854 | 3.648163 | http://tewt.org/index.php/lessons-activities/commonly-taught-books/74-grapes-of-wrath | Suggested Introduction to The Grapes of Wrath
Begin with a historical introduction to the Great Depression using public domain images available via the Library of Congress American Memory exhibition. Select a few images, such as "Migrant Mother" by Dorothea Lange, to include in a slide show, and add a question or two to each slide that will encourage students to analyze and discuss the emotional toll of the Great Depression.
Direct students to read personal histories of Americans living during the Great Depression, such as those found at The New Deal Network and PBS’s Surviving the Dust Bowl. If your students are teenagers, consider having them read the stories of teenage hobos who were “riding the rails” in the 30s.
Steinbeck: Biography As A Tool In Teaching Reading And Writing Skills This detailed unit from the Yale-New Haven Teacher's Institute offers varied resources and techniques for presenting biography to students in secondary classrooms. Lesson Three: The Grapes of Wrath helps orient students to the geography of the novel and reviews its historical background. The lesson also provides background information on Steinbeck’s winning the Pulitzer Prize and helps explain the Phalanx Theory, which many critics feel is crucial in understanding The Grapes of Wrath. There is also emphasis on relating the novel to contemporary America and teachers might adopt one or several of seventeen Grapes of Wrath writing topics.
Creating Dramatic Monologues from The Grapes of Wrath This high school lesson plan from Discovery School aims to help students understand the universal nature of Steinbeck's characters' struggles and some of the complex forces affecting their lives. It also emphasizes the value of primary source material in presenting an authentic picture of a given period in history. Students are encouraged to explore Web sites about the Dust Bowl and develop a monologue. There are six discussion questions, a monologue evaluation guide, reading suggestions, and a vocabulary list. Can be used with or without the video available from Discovery School.
"A Day in the Life of a Hobo" Interdisciplinary Blogging Activity This activity, created by EdTechTeacher's Tom Daccord, has students write blog posts from the perspective of a Hobo who is "riding the rails." Students use their knowledge of the period and their creativity to create a story (250-500 words) about a day in your life as a Hobo. Students post their blog and read everyone's work. Students then comment on the posting and state what they liked about the story they read -- and what made it seem authentic. The blogs serve to provide a public form to present and share student work without undue stress on the student. There is no "right" answer, students are allowed to express themselves creatively, and each student receives positive feedback about their posting. Resources for this assignment include:
- Riding the Rails Part of PBS's American Experience television series, this site focuses on the plight of more than a quarter million teenagers living on the road in America. The site includes a timeline, maps, "tales from the rails," hobo songs, a teacher's guide, recommended resources, and more.
- New Deal Network The Franklin and Eleanor Roosevelt Institute (FERI), in collaboration with the Franklin D. Roosevelt Presidential Library, Marist College, and IBM, launched the New Deal Network (NDN). The site features 20,000 items: photographs, speeches, letters, documents, and exercises from the New Deal era.
- Listen to actual audio interviews of Americans who lived during the Great Depression. Visit the Library of Congress’ “Voices from the Dust Bowl” collection for mp3 files.
- Listen to a “Fireside Chat” by President Roosevelt and discuss what impact these chats had on the American public. You can find select Fireside Chat audio recording at the American Rhetoric web site.
John Steinbeck - Nobel Prize Speech
Reading the Grapes
This teacher's discussion guide from the California Council for the Humanities provides an introduction to the novel, a biography of Steinbeck with a timeline of his life, suggested vocabulary, expressions, discussion of the music used for the dramatic production, and discussion questions.
Teacher Cyberguide: Grapes of Wrath This supplemental unit to The Grapes of Wrath by John Steinbeck was developed as part of the Schools of California Online Resources for Educators (SCORE). You could assign any of the activities as small-group or individual work for students. After reading The Grapes of Wrath, students chart the conditions and circumstances leading to the displacement of the Dust Bowl refugees and the experiences of those refugees after their relocation. They also study the conditions and experiences of the Kosovo refugees and compare and contrast the conditions and experiences of these refugee groups.
The Grapes of Wrath NPR's report on the story behind the creation of the Grapes of Wrath. Listen to the report as well as Woody Guthrie's 1940 song "Tom Joad" and watch a scene from the 1940 film The Grapes of Wrath
Classic Note on the Grapes of Wrath A detailed student guide from GradeSaver. Offers an introduction to Steinbeck and the book, summaries and analysis of each chapter, a list of characters.
Grapes of Wrath Teacher's Guide The questions, exercises, and assignments in this Penguin teacher's guide are designed to guide students' reading of the literary work and to provide suggestions for exploring the implications of the story through discussion, research, and writing. There are five pre-reading questions, dozens of chapter questions, twelve "digging deeper" questions, nine "writer responses" questions, and ten questions for further exploration.
The Grapes of Wrath, Of Mice and Men, and The Pearl The Great Books Foundation offers an introduction to the novel, fifteen discussion questions, and three questions for "further reflection."
Grapes of Wrath (C-Span) From C-Span's American Writers series, students explore the life and works of John Steinbeck via an electronic scrapbook and learn about the effect his work had on others. Students are then invited to use a printable page from the site to create their own scrapbook. Contains 10 questions for High School students.
Film Study of the Grapes of Wrath This New Deal Network lesson plan has students analyze the effects of the Dust Bowl on tenant farmers by using a visual document, analyze the film "The Grapes of Wrath" as a "cultural document" of its time, and view film critically by using a film guide to explore techniques and visual treatment of the migrant experience.
The Grapes of Wrath Google Lit Trip Use Google Earth to follow the Joad family as it travels to California.
The Grapes of Wrath @Web English Teacher Resources, including vocabulary lists, to help teach the novel.
Weedpatch Camp While writing The Grapes of Wrath, John Steinbeck visited Bakersfield, California, and based his book on the Arvin Federal Government Camp, which he portrayed as "Weedpatch Camp." There are newspaper articles about the camps, personal reminiscences, a bibliography on the Dust Bowl and migrant workers, the story behind Dorothea Lange's famous "Migrant Mother" photograph, and more.
Woody Guthrie and the Grapes of Wrath This lesson plan from the Rock & Roll Hall of Fame is designed to have students recognize thematic parallels between Woody Guthrie's music and Steinbeck's novel, develop an appreciation for The Grapes of Wrath and the music of Woody Guthrie as works of art and historical documents, and explore the idea of the "American spirit." Students discuss the parallels between Guthrie's life and music and the experience of the Joad family, study the lyrics of "This Land Is Your Land," and explore connections between the novel and the song.
Teaching Programme for Grapes of Wrath Offers hyperlinked outlines of teaching strategies
John Steinbeck's Pacific Grove This site provides a visual tour of local sites relating to the life and work of John Steinbeck as well as Steinbeck links.
"GRAPES OF WRATH" BANNED IN KERN COUNTY An NPR article about the August 22, 1939 decision of the Kern County (California) Board of Supervisors to ban the Steinbeck novel in the county's public schools and libraries.
Review of The Grapes of Wrath movie A Filmsite review of John Ford's 1940 black and white adaptation of the novel. | 2026-01-25T21:14:29.986529 |