http://en.wikipedia.org/wiki/Geological_unit
A geological unit is a volume of rock or ice of identifiable origin and relative age range that is defined by the distinctive and dominant, easily mapped and recognizable petrographic, lithologic or paleontologic features (facies) that characterize it.
Units must be mappable and distinct from one another, but the contact need not be particularly distinct. For instance, a unit may be defined by terms such as "when the sandstone component exceeds 75%".
- Supergroup – two or more groups and lone formations
- Group – two or more formations
- Formation – primary unit of lithostratigraphy
- Member – named lithologic subdivision of a formation
- Bed – named distinctive layer in a member or formation
- Flow – smallest distinctive layer in a volcanic sequence
The component units of any higher rank unit in the hierarchy need not be everywhere the same.
- The term "supergroup" may be used for several associated groups or for associated groups and formations with significant lithologic properties in common.
- Group – A succession of two or more contiguous or associated formations with significant and diagnostic lithologic properties in common. Formations need not be aggregated into groups unless doing so provides a useful means of simplifying stratigraphic classification in certain regions or certain intervals. Thickness of a stratigraphic succession is not a valid reason for defining a unit as a group rather than a formation. The component formations of a group need not be everywhere the same.
- Exceptionally, a group may be divided into subgroups.
- Formations are the primary formal unit of lithostratigraphic classification. Formations are the only formal lithostratigraphic units into which the stratigraphic column everywhere should be divided completely on the basis of lithology. The contrast in lithology between formations required to justify their establishment varies with the complexity of the geology of a region and the detail needed for geologic mapping and to work out its geologic history. No formation is considered justifiable and useful that cannot be delineated at the scale of geologic mapping practiced in the region. There is no formal limit to how thick or thin a formation may be.
- Member – The formal lithostratigraphic unit next in rank below a formation.
- It possesses lithologic properties distinguishing it from adjacent parts of the formation.
- No fixed standard is required for the extent and thickness of a member.
- A formation need not be divided into members unless a useful purpose is thus served.
- Formations may have only certain parts designated as members. A member may extend from one formation to another.
- Beds are the smallest formal unit in the hierarchy of sedimentary lithostratigraphic units, e.g. a single stratum lithologically distinguishable from other layers above and below. Customarily only distinctive beds (key beds, marker beds) particularly useful for stratigraphic purposes are given proper names and considered formal lithostratigraphic units.
- Flow – A discrete extrusive volcanic body distinguishable by texture, composition, or other objective criteria. The designation and naming of flows as formal lithostratigraphic units should be limited to those that are distinctive and widespread.
Where lithology alone does not allow mappable rock units to be distinguished, it is possible to map units using geochemistry, identifying stratigraphy with the same or similar geochemical composition.
Chemostratigraphy can also be the basis for defining a member, bed or subdivision of a geologic unit. For instance, a shale unit may be more sulphidic in the base, and less so in the upper portions, allowing a subdivision into a sulphidic member.
The mapped chemostratigraphic units need not follow stratigraphic or lithostratigraphic units, as the chemical stratigraphy of an area may be independent of lithology. Any geochemical criteria could be used to define chemostratigraphic units; gold, nickel, carbonate, silica or aluminium content, or a ratio of one or more elements to another.
For instance, it would be possible to map a regolith feature such as carbonate cement in a sandstone and siltstone area, which is independent of lithology. Similarly, it is possible to identify fertile nickel-bearing volcanic flows in heavily sheared greenstone terranes by utilising a chemostratigraphic approach.
In the mapping of igneous rocks, particularly volcanic rocks and intrusive rocks such as layered intrusions and granites, chemical stratigraphy and chemical differentiation of the phases of these intrusives is warranted and in many cases necessary.
Chemical stratigraphy is useful in areas of sparse outcrop for making correlations between separate, distant sections of stratigraphy, especially in layered intrusions and granite terrains which have poor outcrop. Here, chemical trends in the stratigraphy and between intrusive phases can be used to correlate individual sections within the larger intrusive stratigraphy, or group outcrops into their respective intrusive phases and make rough correlative maps.
Chemical stratigraphy is often used with drilling information to assist in correlating between drill holes on a section, to resolve dips and pick formation boundaries. Downhole geophysical logging can produce a form of chemical stratigraphy via logging of radioactive properties of a rock.
Often when compared to lithostratigraphic units, chemostratigraphic units will not always clearly match. Thus, it is wise to map both lithology and geochemistry and provide separate interpretations and map units.
Biostratigraphic units are defined by the presence of biological markers, usually fossils, which place the rock into a chronostratigraphic sequence.
Biostratigraphic units are defined by assemblages of fossils. Few biostratigraphic intervals are entirely diagnostic of the age of a rock, and often the best chronostratigraphic resolution that biostratigraphy can provide is a range of ages, from a maximum to a minimum, over which that fossil assemblage is known to have coexisted.
Biostratigraphic units can be defined by as little as one fossil, and can be marker beds or members within a formally identified Formation, for instance an ammonoid bearing bed. This can be a valuable tool for orienting oneself within a stratigraphic section or within a thick lithostratigraphic unit.
Biostratigraphic units can overlap lithostratigraphic units, as the habitat of a fossil may extend from areas where sediment was being deposited as sandstone into areas where it was being deposited as siltstone. An example would be a trilobite.
Biostratigraphic units can also be used in archaeology, for instance where introduction of a plant species can be identified by different pollen assemblages through time or the presence of the bones of different vertebrate animals in midden heaps.
http://www.icr.org/article/6765/
Chewed Dinosaur Bones Fit Flood
by Brian Thomas, M.S. *
A new cache of fossils found in Arlington, Texas, contains plenty of clues that are best explained by Noah's Flood.
More specifically, the circumstances surrounding these remains match a hypothesis proposed by creation scientist Michael Oard that describes how swamp plants and land creatures could have mixed with sea creatures several months into the year-long Flood.
According to Scripture, five months passed after the Flood began before its waters had completely covered the earth (Genesis 7:24). By then, all air-breathing, land-dwelling creatures not on board the Ark were dead or dying. According to Oard, the interiors of continents may have been the last land areas to be submerged after being repeatedly washed by successive wave-like surges. Water and land levels fluctuated, and desperate, starving creatures made their last stands on temporary barren mud flats.
Oard first described his hypothesis in 1997: "It is more reasonable that dinosaurs found a linear strip of land (or a series of shoals separated by shallow water) during the Flood while the sea level was oscillating and sediments were being deposited."1 He gave an updated description in his 2011 book Dinosaur Challenges and Mysteries, where he wrote, "Patches of newly laid sediments briefly emerged from the water during the Flood due to a local fall in relative sea level."2
University of Texas at Arlington adjunct instructor Derek Main co-authored a description of the Arlington fossil site in the journal Palaios that provides clues that seem to fit Oard's hypothesis.3 First, the creatures were deposited by sediment-carrying flood waters that travelled long distances. The Arlington remains include bones of dinosaur, turtle, and now-extinct crocodile forms. They also include many fish, including shark fossils. The Palaios authors noted that since all these did not normally live in the same place, they must have been washed in from far away.
Second, at the time when these fossils were formed, "more than half of Texas was under water," according to the UT Arlington Magazine.4 It would all have been under water, and at times only half submerged, during the Flood year.
Also, the rapid fossilization of the assembled creatures fits a catastrophic flood capable of combining and burying land, swamp, and sea animals. The Palaios study authors wrote, "Most bones were likely buried within a few years of deposition as indicated by the minimal amounts of weathering and breakage." Some dinosaur bones exhibited "tooth marks consistent with predation" by a large crocodile-like animal. Supposedly, "estimating the time of formation for a fossil assemblage is difficult." But because "all bones are well-preserved and lacked any pitting or etching that would indicate they had passed through a crocodile's digestive system," it is certain that very little time elapsed between when the crocodiles began eating and when all of the remains commenced fossilization.3
Bones usually rot within months on dry land, even without oxygen. And they decay even faster in watery environments like those of the ancient Arlington creatures. Because "the time of formation" was obviously very short, it is difficult to reconcile with evolution's vast ages.
"The data as a whole indicates a coastal, possibly seasonal, marsh that was periodically influenced by marine incursions [sea flooding onto continents]," according to the study authors.3 They assumed the remains represent scenes from everyday life, but the data do not support that story. For example, the assemblage includes charred remains of large branches. If this represents a normal daily scene, then why were the branches ripped off of trees and the animal bones torn apart? The clues indicate catastrophic, not seasonal, processes.
Floods form fossils fast. And it makes sense that a catastrophic worldwide flood, like the one recorded in Genesis, would have deposited these fossils quickly.
- Oard, M. J. 1997. The extinction of the dinosaurs. Journal of Creation. 11 (2):137-154.
- Oard, M. J. 2011. Dinosaur Challenges and Mysteries. Atlanta, GA: Creation Book Publishers.
- Noto, C. R., D. J. Main and S. K. Drumheller. 2012. Feeding Traces and Paleobiology of a Cretaceous (Cenomanian) Crocodyliform: Example from the Woodbine Formation of Texas. Palaios. 27 (2): 105-115.
- Wiley, J. Winter 2010. What lies beneath. UT Arlington Magazine. Accessed on uta.edu March 30, 2012.
* Mr. Thomas is Science Writer at the Institute for Creation Research.
Article posted on April 18, 2012.
http://www.historyofbridges.com/facts-about-bridges/types-of-bridges/
History of Bridges
Types of Bridges
Bridges by Structure
Arch bridges – These bridges use an arch as the main structural component (the arch is always located below the bridge, never above it). They are made with one or more hinges, depending on what kind of load and stress forces they must endure. Examples of arch bridges are the "Old Bridge" in Mostar, Bosnia and Herzegovina, and the Hell Gate Bridge in New York.
Beam bridges – A very basic type of bridge supported by several beams of various shapes and sizes. They can be inclined or V-shaped. An example of a beam bridge is the Lake Pontchartrain Causeway in southern Louisiana.
Truss bridges – A very popular bridge design that uses a diagonal mesh of posts above the bridge. The two most common designs are the king post (two diagonal posts supported by a single vertical post in the center) and the queen post (two diagonal posts, two vertical posts, and a horizontal post that connects the two vertical posts at the top).
Cantilever bridges – Similar in appearance to arch bridges, but they support their load not through vertical bracing but through diagonal bracing. They often use truss formations both below and above the bridge. An example of a cantilever bridge is the Queensboro Bridge in New York City.
Tied arch bridges – Similar to arch bridges, but they transfer the weight of the bridge and traffic load to the top chord, which is connected to the bottom chords at the bridge foundation. They are often called bowstring arches or bowstring bridges.
Suspension bridges – Bridges that use ropes or cables from vertical suspenders to hold the weight of the bridge deck and traffic. An example of a suspension bridge is the Golden Gate Bridge in San Francisco.
Cable-stayed bridges – Bridges that use deck cables directly connected to one or more vertical columns. Cables are usually connected to the columns in two ways: harp design (each cable is attached to a different point of the column, creating a harp-like design of "strings") and fan design (all cables connect to one point at the top of the column).
Fixed or moveable bridges
Fixed bridges – The majority of bridges are fixed, with no moveable parts to provide higher clearance for river/sea transport flowing below them. They are designed to stay where they are built until they are deemed unusable or are demolished.
Temporary bridges – Bridges made from modular basic components that can be moved by medium or light machinery. They are usually used in military engineering or when fixed bridges are being repaired.
Moveable bridges – They have moveable decks, most often powered by electricity.
Types by use
Road bridges – The most common type of bridge, with two or more lanes designed to carry car and truck traffic of various intensities.
Pedestrian bridges – Usually built in urban environments, or in terrain where car transport is inaccessible (rough mountainous terrain, forests, etc.).
Double-decked bridges – Built to provide the best possible flow of traffic across bodies of water or rough terrain. Most often they have a large number of car lanes, and sometimes a dedicated area for train tracks.
Train bridges – Bridges made specifically to carry one or more lanes of train tracks.
Pipeline bridges – Bridges made to carry pipelines across water or inaccessible terrain. Pipelines can carry water, air, gas, or communication cables.
Aqueducts – Ancient structures created to carry water from water-rich areas to dry cities.
Commercial bridges – Modern bridges that host commercial buildings such as restaurants and shops.
Types by materials
Concrete and Steel
http://www.rejinpaul.com/2011/06/ee2402-protection-switchgear-anna.html
ELECTRICAL AND ELECTRONICS ENGINEERING
TWO MARKS QUESTIONS & ANSWERS
1. What are the functions of protective relays?
To detect the fault and initiate the operation of the circuit breaker to isolate
the defective element from the rest of the system, thereby protecting the system from
damages consequent to the fault.
2. Give the consequences of short circuit.
Whenever a short-circuit occurs, the current flowing through the circuit increases to an enormous value. If protective relays are present, a heavy current also flows through the relay coil, causing it to operate by closing its contacts. The trip circuit is then closed, the circuit breaker opens and the fault is isolated from the rest of the system. Also, a low voltage may be created which may damage systems connected to the supply.
3. Define protected zone.
Protected zones are those parts of a power system which are directly protected by a protective system such as relays, fuses or switchgear. A fault occurring in a zone can be immediately detected and isolated by the protection scheme dedicated to that particular zone.
4. What are unit system and non unit system?
A unit protective system is one in which only faults occurring within its protected zone are isolated. Faults occurring elsewhere in the system have no influence on the operation of a unit system. A non-unit system is a protective system which is activated even when the faults are external to its protected zone.
5. What is primary protection?
Primary protection is the protection in which a fault occurring in a line will be cleared by its own relay and circuit breaker. It serves as the first line of defence.
6. What is back up protection?
Back-up protection is the second line of defence, which operates if the primary protection fails to act within a definite time delay.
7. Name the different kinds of over current relays.
Induction type non-directional over current relay, induction type directional over current relay and current differential relay.
8. Define energizing quantity.
It refers to the current or voltage which is used to activate the relay into operation.
9. Define operating time of a relay.
It is defined as the time period extending from the occurrence of the fault, through the relay detecting the fault, to the operation of the relay.
10. Define resetting time of a relay.
It is defined as the time taken by the relay from the instant of isolating
the fault to the moment when the fault is removed and the relay can be reset.
11. What are over and under current relays?
Overcurrent relays are those that operate when the current in a line exceeds a predetermined value (e.g., induction type non-directional/directional overcurrent relay, differential overcurrent relay), whereas undercurrent relays are those which operate whenever the current in a circuit/line drops below a predetermined value (e.g., differential relay).
12. Mention any two applications of differential relay.
Protection of generator & generator transformer unit; protection of large motors
and busbars .
13. What is biased differential bus zone reduction?
The biased beam relay is designed to respond to the differential current in terms
of its fractional relation to the current flowing through the protected zone. It is essentially
an over-current balanced beam relay type with an additional restraining coil. The
restraining coil produces a bias force in the opposite direction to the operating force.
14. What is the need of relay coordination?
The operation of a relay should be fast and selective, ie, it should isolate the fault
in the shortest possible time causing minimum disturbance to the system. Also, if a relay
fails to operate, there should be sufficiently quick backup protection so that the rest of the
system is protected. By coordinating relays, faults can always be isolated quickly without
serious disturbance to the rest of the system.
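As a rough numerical sketch of such coordination (not part of the original answer; the IEC standard-inverse characteristic t = 0.14·TMS/((I/Is)^0.02 − 1) is assumed, and the pickup currents, time-multiplier settings and 0.3 s grading margin below are illustrative values only):
```python
# Illustrative relay-coordination check (assumed values, IEC standard-inverse curve).
# For a fault seen by both relays, the upstream (backup) relay should operate
# at least one grading margin (~0.3 s) later than the downstream (primary) relay.

def idmt_time(fault_current, pickup, tms):
    """Operating time of an IEC standard-inverse overcurrent relay (seconds)."""
    return tms * 0.14 / ((fault_current / pickup) ** 0.02 - 1.0)

fault_current = 2000.0                                           # A, assumed fault level
t_downstream = idmt_time(fault_current, pickup=400.0, tms=0.10)  # primary relay (assumed settings)
t_upstream = idmt_time(fault_current, pickup=600.0, tms=0.25)    # backup relay (assumed settings)

margin = t_upstream - t_downstream
print(f"primary: {t_downstream:.2f} s, backup: {t_upstream:.2f} s, margin: {margin:.2f} s")
print("coordinated" if margin >= 0.3 else "NOT coordinated - increase backup TMS")
```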
15. Mention the shortcomings of the Merz Price scheme of protection applied to a power transformer.
In a power transformer, the currents in the primary and secondary are to be compared. As these two currents are usually different, the use of identical CTs will give a differential current and operate the relay even under no-load conditions. Also, there is usually a phase difference between the primary and secondary currents of three-phase transformers. Even if CTs of the proper turns-ratio are used, a differential current may flow through the relay under normal conditions.
16. What are the various faults to which a turbo alternator is likely to be subjected?
Failure of steam supply; failure of speed; overcurrent; over voltage; unbalanced
loading; stator winding fault .
17. What is an under frequency relay?
An under frequency relay is one which operates when the frequency of the system
(usually an alternator or transformer) falls below a certain value.
18. Define the term pilot with reference to power line protection.
Pilot wires refers to the wires that connect the CT’s placed at the ends of a power
transmission line as part of its protection scheme. The resistance of the pilot wires is
usually less than 500 ohms.
19. Mention any two disadvantages of the carrier current scheme for transmission line protection.
The propagation time (i.e., the time taken by the carrier to reach the other end); the response time of the band-pass filter; the capacitance phase-shift of the transmission line.
20. What are the features of directional relay?
High speed operation; high sensitivity; ability to operate at low voltages; adequate
short-time thermal ratio; burden must not be excessive.
21. What are the causes of over speed and how alternators are protected from it?
Sudden loss of all or major part of the load causes over-speeding in alternators.
Modern alternators are provided with mechanical centrifugal devices mounted on their
driving shafts to trip the main valve of the prime mover when a dangerous over-speed occurs.
22. What are the main types of stator winding faults?
Fault between phase and ground; fault between phases and inter-turn fault
involving turns of the same phase winding.
23. Give the limitations of Merz Price protection.
Since neutral earthing resistances are often used to protect circuit from earth-fault
currents, it becomes impossible to protect the whole of a star-connected alternator. If an
earth-fault occurs near the neutral point, the voltage may be insufficient to operate the
relay. Also, it is extremely difficult to find two identical CTs. In addition to this, there is always an inherent phase difference between the primary and the secondary quantities and a possibility of current through the relay even when there is no fault.
24. What are the uses of Buchholz’s relay?
A Buchholz relay is used to give an alarm in case of incipient (slow-developing) faults in the transformer and to disconnect the transformer from the supply in the event of severe internal faults. It is usually used in oil-immersed transformers with a rating over 750 kVA.
25. What are the types of grading used in a radial relay feeder?
Definite time relay and inverse-definite time relay.
26. What are the various faults that would affect an alternator?
(a) Stator faults
1. Phase to phase faults
2. Phase to earth faults
3. Inter-turn faults
(b) Rotor faults
1. Earth faults
2. Fault between turns
3. Loss of excitation due to field failure
(c) 1. Over speed 2. Loss of drive
3. Vacuum failure resulting in condenser pressure rise, resulting in shattering of the turbine low pressure casing
(d) 1. Fault on lines
2. Fault on busbars
27. Why neutral resistor is added between neutral and earth of an alternator?
In order to limit the flow of current through neutral and earth a resistor is
introduced between them.
28. What is the backup protection available for an alternator?
Overcurrent and earth fault protection are the backup protections.
29. What are the faults associated with a transformer?
(a) External fault or through fault
(b) Internal fault
1, Short circuit in transformer winding and connection
2, Incipient or slow developing faults
30. What are the main safety devices available with transformer?
Oil level gauge, sudden pressure relay, oil temperature indicator, winding temperature indicator.
31. What are the limitations of Buchholz relay?
(a) Only faults below the oil level are detected.
(b) Mercury switch setting should be very accurate, otherwise even for
vibration, there can be a false operation.
(c) The relay is of slow operating type, which is unsatisfactory.
32. What are the problems arising in differential protection in power transformer and how
are they overcome?
1. Difference in lengths of pilot wires on either sides of the relay. This is
overcome by connecting adjustable resistors to pilot wires to get equipotential
points on the pilot wires.
2. Difference in CT ratio error difference at high values of short circuit currents
that makes the relay to operate even for external or through faults. This is
overcome by introducing bias coil.
3. Tap changing alters the ratio of voltage and currents between HV and LV sides
and the relay will sense this and act. Bias coil will solve this.
4. Magnetizing inrush current appears whenever a transformer is energized on its primary side, producing harmonics. No current will be seen by the secondary CTs as there is no load in the circuit. This difference in current will actuate the differential relay. A harmonic restraining unit is added to the relay which will block it when the transformer is energized.
33. What is REF relay?
It is restricted earth fault relay. When the fault occurs very near to the neutral
point of the transformer, the voltage available to drive the earth circuit is very small,
which may not be sufficient to activate the relay, unless the relay is set for a very low
current. Hence the zone of protection in the winding of the transformer is restricted to
cover only around 85%. Hence the relay is called REF relay.
34. What is over fluxing protection in transformer?
If the voltage-to-frequency (V/f) ratio applied to the transformer exceeds its rated value, the core flux rises and there will be higher core loss, and the capability of the transformer to withstand this is limited to a few minutes only. This phenomenon is called over fluxing.
35. Why busbar protection is needed?
(a) Fault level at busbar is high
(b) The stability of the system is affected by the faults in the bus zone.
(c) A fault in the bus bar causes interruption of supply to a large portion of the system.
36. What are the merits of carrier current protection?
Fast operation, auto re-closing is possible, and easy discrimination of simultaneous faults.
37. What are the errors in CT?
(a) Ratio error
Percentage ratio error = [(Nominal ratio – Actual ratio)/Actual ratio] x 100
The value of transformation ratio is not equal to the turns ratio.
(b) Phase angle error:
Phase angle θ = (180/π) × [(Im cos δ − Il sin δ) / (n Is)] degrees
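A small numeric sketch of how these two CT errors can be evaluated (the symbol names follow the usual textbook convention: Im is the magnetizing and Il the loss component of the exciting current, δ the secondary burden phase angle, n the turns ratio and Is the secondary current; all numbers below are assumed purely for illustration):
```python
import math

def percentage_ratio_error(nominal_ratio, actual_ratio):
    """Ratio error (%) = (nominal ratio - actual ratio) / actual ratio * 100."""
    return (nominal_ratio - actual_ratio) / actual_ratio * 100.0

def phase_angle_error_deg(i_m, i_l, delta_rad, n, i_s):
    """Phase angle error in degrees: (180/pi) * (Im*cos(delta) - Il*sin(delta)) / (n*Is)."""
    return (180.0 / math.pi) * (i_m * math.cos(delta_rad) - i_l * math.sin(delta_rad)) / (n * i_s)

# Assumed example values for a 1000/5 A CT
nominal_ratio = 200.0
actual_ratio = 1000.0 / 5.02          # measured primary current / measured secondary current
print(f"ratio error : {percentage_ratio_error(nominal_ratio, actual_ratio):+.2f} %")
print(f"phase angle : {phase_angle_error_deg(i_m=0.03, i_l=0.01, delta_rad=math.radians(30), n=200, i_s=5.0):+.4f} deg")
```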
38. What is field suppression?
When a fault occurs in an alternator winding, even though the generator circuit breaker is tripped, the fault continues to be fed because an EMF is still induced in the generator itself. Hence the field circuit breaker is opened and the stored energy in the field winding is discharged through a resistor. This method is known as field suppression.
39. What are the causes of bus zone faults?
Failure of support insulator resulting in earth fault
Flashover across support insulator during over voltage
Heavily polluted insulator causing flashover
Earthquake, mechanical damage etc.
40. What are the problems in bus zone differential protection?
Large number of circuits, with different current levels in the different circuits.
Saturation of CT cores due to the dc and ac components of the short-circuit currents; the saturation introduces ratio error.
Sectionalizing of the bus makes the circuit complicated.
Setting of relays needs a change with large load changes.
41. What is static relay?
It is a relay in which measurement or comparison of electrical quantities is made
in a static network which is designed to give an output signal when a threshold condition
is passed which operates a tripping device.
42. What is power swing?
During switching of lines or wrong synchronization surges of real and reactive
power flowing in transmission line causes severe oscillations in the voltage and current
vectors. It is represented by curves originating in load regions and travelling towards the relay characteristic.
43. What is a programmable relay?
A static relay may have one or more programmable units such as microprocessors
or microcomputers in its circuit.
44. What is CPMC?
It is a combined protection, monitoring and control system incorporated in the static relay.
45. What are the advantages of static relay over electromagnetic relay?
o Low power consumption as low as 1mW
o No moving contacts; hence the associated problems of arcing, contact bounce, erosion and replacement of contacts are eliminated
o No gravity effect on operation of static relays. Hence can be used in
vessels ie, ships, aircrafts etc.
o A single relay can perform several functions like over current, under
voltage, single phasing protection by incorporating respective functional
blocks. This is not possible in electromagnetic relays
o Static relay is compact
o Superior operating characteristics and accuracy
o Static relays can "think"; programmable operation is possible with static relays
o Effect of vibration is nil, hence can be used in earthquake-prone areas
o Simplified testing and servicing. Can convert even non-electrical
quantities to electrical in conjunction with transducers.
46. What is resistance switching? It is the method of connecting a resistance in parallel with the contact space (arc). The resistance reduces the restriking voltage frequency and diverts part of the arc current. It assists the circuit breaker in interrupting magnetizing currents and capacitive currents.
47. What do you mean by current chopping?
When interrupting low inductive currents such as magnetizing currents of the
transformer, shunt reactor, the rapid deionization of the contact space and blast effect
may cause the current to be interrupted before the natural current zero. This phenomenon
of interruption of the current before its natural zero is called current chopping.
48. What are the methods of capacitive switching?
• Opening of single capacitor bank
• Closing of one capacitor bank against another
49. What is an arc?
Arc is a phenomenon occurring when the two contacts of a circuit breaker
separate under heavy load or fault or short circuit condition.
50. Give the two methods of arc interruption?
High resistance interruption: the arc resistance is increased by elongating and splitting the arc so that the arc is fully extinguished.
Current zero method: the arc is interrupted at the current zero position, which occurs 100 times a second in a 50 Hz ac power system.
51. What is restriking voltage?
It is the transient voltage appearing across the breaker contacts at the instant of arc extinction.
52. What is meant by recovery voltage?
The power frequency rms voltage appearing across the breaker contacts after the
arc is extinguished and transient oscillations die out is called recovery voltage.
53. What is RRRV?
It is the rate of rise of restriking voltage, expressed in volts per microsecond. It is
closely associated with natural frequency of oscillation.
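As a rough numerical illustration of these definitions (a sketch assuming an undamped single-frequency LC model and arbitrary example values, none of which come from this question bank): the natural frequency is fn = 1/(2π√(LC)), the peak restriking voltage approaches twice the peak phase voltage, and the maximum RRRV is about Vpeak·ωn.
```python
import math

# Assumed illustrative values only:
V_rms_ll = 132e3                 # line-to-line system voltage (V)
L = 10e-3                        # source inductance per phase (H)
C = 0.01e-6                      # stray capacitance across the breaker (F)

V_peak_phase = V_rms_ll * math.sqrt(2) / math.sqrt(3)   # peak phase voltage
omega_n = 1.0 / math.sqrt(L * C)                        # natural angular frequency (rad/s)
f_n = omega_n / (2 * math.pi)                           # natural frequency (Hz)

peak_restriking_voltage = 2 * V_peak_phase              # undamped (1 - cos) response
max_rrrv = V_peak_phase * omega_n                       # max of d/dt [Vp * (1 - cos(wn*t))], in V/s

print(f"natural frequency : {f_n / 1e3:.1f} kHz")
print(f"peak restriking V : {peak_restriking_voltage / 1e3:.1f} kV")
print(f"max RRRV          : {max_rrrv / 1e9:.2f} kV/us")
```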
54. What is circuit breaker?
It is a piece of equipment used to break a circuit automatically under fault conditions. It can also break a circuit either manually or by remote control under normal conditions.
55. Write the classification of circuit breakers based on the medium used for arc extinction.
Air break circuit breaker
Oil circuit breaker
Minimum oil circuit breaker
Air blast circuit breaker
SF6 circuit breaker
Vacuum circuit breaker
56. What is the main problem of the circuit breaker?
When the contacts of the breaker are separated, an arc is struck between them.
This arc delays the current interruption process and also generates enormous heat which
may cause damage to the system or to the breaker itself. This is the main problem.
57. What are demerits of MOCB?
Short contact life
Possibility of explosion
Larger arcing time for small currents
Prone to restrikes
58. What are the advantages of oil as arc quenching medium?
• It absorbs the arc energy to decompose the oil into gases, which have
excellent cooling properties
• It acts as an insulator and permits smaller clearance between line conductors
and earthed components
59. What are the hazards imposed by oil when it is used as an arc quenching medium?
There is a risk of fire since it is inflammable, and it may form an explosive mixture with air. Despite these hazards, oil is still widely used as an arc quenching medium.
60. What are the advantages of MOCB over a bulk oil circuit breaker?
• It requires lesser quantity of oil
• It requires smaller space
• There is a reduced risk of fire
• Maintenance problems are reduced
61. What are the disadvantages of MOCB over a bulk oil circuit breaker?
o The degree of carbonization is increased due to smaller quantity of oil
o There is difficulty of removing the gases from the contact space in time
o The dielectric strength of the oil deteriorates rapidly due to high degree of
carbonization.
62. What are the types of air blast circuit breaker?
Axial blast, cross blast and radial blast air circuit breakers.
63. What are the advantages of air blast circuit breaker over oil circuit breaker?
o The risk of fire is diminished
o The arcing time is very small due to rapid buildup of dielectric strength
o The arcing products are completely removed by the blast whereas oil
deteriorates with successive operations
64. What are the demerits of using air as an arc quenching medium?
• The air has relatively inferior arc quenching properties
• The air blast circuit breakers are very sensitive to variations in the rate of rise
of restriking voltage
• Maintenance is required for the compression plant which supplies the air blast
65. What is meant by electro negativity of SF6 gas?
SF6 has high affinity for electrons. When a free electron comes and collides with
a neutral gas molecule, the electron is absorbed by the neutral gas molecule and negative
ion is formed. This is called as electro negativity of SF6 gas.
66. What are the characteristic of SF6 gas?
It has good dielectric strength and excellent arc quenching property. It is inert,
non-toxic, noninflammable and heavy. At atmospheric pressure, its dielectric strength is
2.5 times that of air. At three times atmospheric pressure, its dielectric strength is equal to
that of the transformer oil.
67. Write the classifications of test conducted on circuit breakers.
68. What are the indirect methods of circuit breaker testing?
o Unit test
o Synthetic test
o Substitution testing
o Compensation testing
o Capacitance testing
69. What are the advantages of synthetic testing methods?
• The breaker can be tested for desired transient recovery voltage and RRRV.
• Both test current and test voltage can be independently varied. This gives
flexibility to the test.
• The method is simple.
• With this method a breaker capacity (MVA) of five times that of the capacity of the test plant can be tested.
70. How does the over voltage surge affect the power system?
The over voltage of the power system leads to insulation breakdown of the
equipment. It causes the line insulation to flash over and may also damage the nearby transformers, generators and other equipment connected to the line.
71. What is pick up value?
It is the minimum current in the relay coil at which the relay starts to operate.
72. Define target.
It is the indicator used for showing the operation of the relay.
73. Define reach.
It is the distance upto which the relay will cover for protection.
74. Define blocking.
It means preventing the relay from tripping due to its own
characteristics or due to additional relays.
75. Define an over current relay.
A relay which operates when the current in a line exceeds a predetermined value.
76. Define an under current relay?
A relay which operates whenever the current in a circuit drops below a predetermined value.
77. Mention any 2 applications of differential relays.
Protection of generator and generator-transformer unit: protection of
large motors and bus bars
78. Mention the various tests carried out in a circuit breaker at HV labs.
Short circuit tests, synthetic tests and direct tests.
78. Mention the advantages of field tests.
The circuit breaker is tested under actual conditions like those that occur in the real power system.
Special occasions like breaking of charging currents of long lines, very short line faults, interruption of small inductive currents etc. can be tested by direct testing.
79. State the disadvantages of field tests.
The circuit breaker can be tested only at a given rated voltage and network configuration.
The necessity to interrupt the normal services and to test only at light-load conditions.
Extra inconvenience and expense in the installation of controlling and measuring equipment in the field.
80. Define composite testing of a circuit breaker. In this method the breaker is first tested for its rated breaking capacity at a
reduced voltage and afterwards for rated voltage at a low current. This method does not
give a proper estimate of the breaker performance.
81. State the various types of earthing.
Solid earthing, resistance earthing , reactance earthing , voltage transformer
earthing and zig-zag transformer earthing.
82. What are arcing grounds?
The presence of inductive and capacitive currents in the isolated neutral system
leads to formation of arcs called as arcing grounds.
83. What is arc suppression coil?
A method of reactance grounding used to suppress the arc due to arcing grounds.
84. State the significance of single line to ground fault.
In single line to ground fault all the sequence networks are connected
in series. All the sequence currents are equal and the fault current magnitude is three times the sequence current.
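A minimal numeric sketch of that statement (the per-unit sequence impedances and fault impedance below are assumed purely for illustration): the series-connected sequence networks give I1 = I2 = I0 = Ea/(Z1 + Z2 + Z0 + 3Zf), and the fault current is 3·I0.
```python
# Single line-to-ground fault: sequence networks in series (illustrative per-unit values).
Ea = 1.0 + 0j            # prefault phase-a voltage, per unit
Z1 = 0.10j               # positive-sequence impedance (assumed)
Z2 = 0.10j               # negative-sequence impedance (assumed)
Z0 = 0.30j               # zero-sequence impedance (assumed)
Zf = 0.0                 # fault impedance (assumed bolted fault)

I_seq = Ea / (Z1 + Z2 + Z0 + 3 * Zf)   # I1 = I2 = I0
I_fault = 3 * I_seq                     # fault current is three times the sequence current

print(f"sequence current : {abs(I_seq):.2f} pu")
print(f"fault current    : {abs(I_fault):.2f} pu")
```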
85. What are symmetrical components?
It is a mathematical tool to resolve an unbalanced set of phasors into balanced components.
86. State the three sequence components.
Positive sequence components, negative sequence components and zero sequence components.
87. Define positive sequence component.
It has 3 vectors equal in magnitude, displaced from each other by an angle of 120 degrees, and having the same phase sequence as the original vectors.
88. Define zero sequence component.
They have 3 vectors of equal magnitude displaced from each other by an angle of zero degrees.
89. State the significance of double line fault.
It has no zero sequence component and the positive and negative sequence
networks are connected in parallel.
90. Define negative sequence component.
It has 3 vectors equal in magnitude, displaced from each other by an angle of 120 degrees, and having a phase sequence opposite to that of the original phasors.
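A short sketch of the transformation behind these definitions, using the operator a = 1∠120°; the unbalanced phasor values below are assumptions chosen only to illustrate the calculation.
```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)   # operator a = 1 angle 120 degrees

def sequence_components(Va, Vb, Vc):
    """Resolve three unbalanced phasors into zero, positive and negative sequence components."""
    V0 = (Va + Vb + Vc) / 3
    V1 = (Va + a * Vb + a**2 * Vc) / 3
    V2 = (Va + a**2 * Vb + a * Vc) / 3
    return V0, V1, V2

# Assumed unbalanced set for illustration (per unit)
Va = cmath.rect(1.0, 0.0)
Vb = cmath.rect(0.8, -2 * cmath.pi / 3)
Vc = cmath.rect(1.1, 2 * cmath.pi / 3)

for name, V in zip(("zero", "positive", "negative"), sequence_components(Va, Vb, Vc)):
    print(f"{name:9s}: {abs(V):.3f} pu at {cmath.phase(V) * 180 / cmath.pi:6.1f} deg")
```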
91. State the different types of faults.
Symmetrical faults and unsymmetrical faults and open conductor faults.
92. State the various types of unsymmetrical faults.
Line to ground ,line to line and double line to ground faults
93. Mention the withstanding current in our human body.
94. State the different types of circuit breakers.
Air, oil and vacuum circuit breakers.
95. Define per unit value.
It is defined as the ratio of actual value to its base value.
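A short illustration of this definition with assumed base quantities (the 100 MVA, 11 kV base and the impedance value are examples only):
```python
# Per-unit value = actual value / base value (assumed numbers for illustration).
S_base = 100e6                     # base power (VA)
V_base = 11e3                      # base voltage (V, line-to-line)
Z_base = V_base**2 / S_base        # base impedance (ohms)

Z_actual = 1.21                    # actual impedance (ohms)
print(f"Z = {Z_actual / Z_base:.2f} pu on a {S_base / 1e6:.0f} MVA, {V_base / 1e3:.0f} kV base")
```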
96. Mention the inductance value of the Petersen coil.
L = 1/(3ω²C), where C is the capacitance to earth per phase.
97. Define single line diagram.
Representation of various power system components in a single line is
defined as single line diagram.
98. Differentiate between a fuse and a circuit breaker.
A fuse is a low current interrupting device; it is a copper or an aluminium wire. A circuit breaker is a high current interrupting device and it acts as a switch under normal operating conditions.
99. How direct tests are conducted in circuit breakers?
Using a short circuit generator as the source.
Using the power utility system or network as the source.
100. What is dielectric test of a circuit breaker?
It consists of overvoltage withstand tests at power frequency and with lightning impulse voltages. Tests are done for both internal and external insulation with the switch in both open and closed conditions.
http://www.csustan.edu/cj/evidence/chap12sa.htm
1. Explain the rationale for excluding evidence under the hearsay rule. What is the hearsay rule? Define the following terms as they are used in relation to the admission of hearsay evidence: statement, declarant, hearsay, statements that are not hearsay. Give some examples of statements that are not hearsay. (§§12.1, 12.2)
2. What is the relationship between the history of the hearsay rule and the history of the trial by jury? (§12.3)
3. What four reasons are advanced as to why hearsay evidence should not be admitted? What is the rationale for allowing some hearsay evidence to be admitted? (§§12.1, 12.4)
4. Explain how the “spontaneous or excited utterance” operates. Why should a spontaneous utterance be believed as truthful? (§12.5)
5. State the four requirements that must be met if a spontaneous utterance is to be admitted as an exception to the rule. Give some examples. What part does time play in determining whether a statement is spontaneous? Does this apply if statements are made to police officers? (§12.5)
6. Why are business and public records usually admitted even though the person who originally made the records is not present? Give some examples of reports that are admissible under the exception. (§12.6)
7. What is the basis for the family history (pedigree) exception to the rule? Give some examples. (§12.7)
8. What is the rationale for admitting “former testimony” as an exception to the hearsay rule? Is it really hearsay by the definition? (§12.8)
9. Under what conditions may evidence relating to testimony given at a former trial be admitted into court? Who has the burden of proof to show that a witness is unavailable? What is the “unavailability” test? (§12.8)
10. What is a dying declaration? Must a declarant state that he or she is aware of imminent death before the statement is admissible? In what type of case is a dying declaration admissible? Are such statements admitted if elicited by questions? (§12.9)
11. Why are declarations against the interests of the declarant admissible as exceptions to the hearsay rule? Give some examples. (§12.10)
12. What is the rationale for allowing some confessions into evidence even though the confessions are hearsay? Are confessions reliable as hearsay exceptions? Does the defendant have a real complaint when the defendant made the confession? (§12.10)
13. What are “residual” exceptions to the hearsay rule? What inquiries are made to determine their admissibility? Explain their application. (§12.11)
14. When the physical or mental state of a person is to be proved, declarations of another that are indicative of the declarant’s physical or mental state are admitted. Are such statements hearsay? For what purpose are such statements admitted? Discuss. Distinguish between out-of-court statements offered to prove the matter asserted and statements that are not hearsay. (§12.12)
15. In Bell v. Florida, the victim of an attempted kidnapping at gunpoint stated that she was walking along the street during the daytime when the defendant twice drove up to her in his van and offered to give her a ride to her destination. The defendant changed his location and accosted her by grabbing her around the neck, holding a gun to her head, and attempted to force her into his vehicle. When she broke free and ran into traffic, she pounded on cars and asked for help in getting away. The defendant, while standing nearby, pointed his gun in her direction and threatened to shoot her. When she managed to call police and talk to them at her residence, she was barely able to speak coherently. When she told her story to police, they remembered it sufficiently to testify about it at his trial. The defendant argued that the victim's statements failed to meet the excited utterance test because there was a time delay of approximately 50 minutes between the time of the incident and the time the victim became calm enough to speak. According to the defendant, this was sufficient time for the victim to contrive or misrepresent. What are the general requirements for the application of the excited utterance exception? Would the victim have had time for reason to return if it took 50 minutes for her to become coherent? Did the court approve of the admission of an excited utterance exception in this case? Do you agree with the court’s decision? Why or why not? (Bell v. Florida, Part II)
16. Federal Rule of Evidence 801(d)(1)(B) provides that a witness’s prior statements are not hearsay if they are consistent with the witness’s testimony and offered to rebut a charge against the witness of “recent fabrication or improper influence or motive.” In Tome v. United States, the defendant was convicted of sexual abuse of his daughter. The government claimed that the assault was committed while the defendant had custody of the child. The defendant countered that the allegations were concocted so the child would not be returned to him. The government presented six witnesses who recounted out-of-court statements made by the child after the charged fabrication. Does the rule permit introduction of a declarant’s consistent out-of-court statements to rebut a charge of recent fabrication if the statements were made after the charged fabrication? Does the federal rule, as interpreted, differ from the common law rule? What was the decision of the court in this case? (Tome v. United States, Part II)
17. Upon responding to a domestic dispute radio call in Cox v. Indiana, officers observed defendant Cox standing in front of an apartment building talking to another police officer. One deputy found Cox’s girlfriend inside an apartment building. He noticed that she was crying and shaking and appeared to be very upset. The officer also noticed that she was talking very quickly and showed signs of a fresh injury, including a cut above her eye that was bleeding; her left eye was swollen; and she was holding an ice pack to her eye. Additionally, she had marks on her neck that appeared to have been caused by someone grabbing her on the neck. Cox contended on appeal that the hearsay testimony of the deputy who told the court some of what the girlfriend told him while she was upset should have been ruled as inadmissible because it failed to fit into any hearsay exception and because his girlfriend did not appear as a witness at the trial. Cox also contended that, if the testimony fell under the excited utterance exception, the prosecution failed to lay a proper foundation for the evidence. Should the appellate court reverse the case because the excited utterance exception did not apply in this context? Why or why not? Did it seem that there was a sufficiently startling event? How long would it take for a person who has just been beaten by her boyfriend to calm down? Did the evidence in the case indicate that she had returned to rational thought and contemplation? What kind of foundation for a spontaneous utterance could be made in this case? (Cox v. Indiana, Part II)
18. In the case of Morgan v. Georgia, the defendant, Morgan, and the deceased were visiting Morgan’s girlfriend at her home. When Morgan and the soon-to-be-deceased began to argue over Morgan’s rough treatment of his girlfriend, Morgan shot the other man. At the hospital, the victim told police that Morgan shot him and detailed the circumstances under which he had received the gunshot wounds. At the hospital, the victim told a police officer that Morgan "just shot me" and "we weren't fighting." The officer who took the statement testified that the victim exhibited great pain and asked the officer "if he was going to die." The officer told him that he was not going to die and that the doctor was working on him. Morgan contended that this testimony shows that the victim was not conscious of imminent death, and for that reason, the trial court erroneously admitted the victim's statement to police as a dying declaration. Was the victim conscious of his impending death? Was the victim unavailable to testify at trial? Does it make any difference that the deceased may not have stated clearly that he knew he was going to die? Did the victim have any motivation for lying to the police at the hospital? Is this a close case for the court to determine? Would you have ruled the same way as the appellate court? Why? (Morgan v. Georgia, Part II).
19. In the case of Michigan v. Washington, the defendant and an accomplice were convicted of armed robbery and assault with intent to do great bodily harm. Police stopped the two men for routine questioning. Later, they heard a radio report of a robbery and shooting. One of the men they stopped blurted out that he was the shooter. He was later identified as the shooter. Because the confessing partner was tried separately, the judge allowed his statement to be admitted in evidence against his partner, Washington, as a declaration against interest. Washington appealed his conviction, contending that the admission of his accomplice’s declaration against interest should not have been used against him. How did the appellate court rule? What rationale did it use in making its decision? Were there sufficient guarantees of trustworthiness to allow the evidence to be admitted as an exception to the hearsay rule? Would you have ruled the same way as the appellate court? Why? (Michigan v. Washington, Part II).
http://serc.carleton.edu/introgeo/jigsaws/why.html
Why Use Jigsaws
Effectiveness of cooperative learning techniques in general
In cooperative learning, students work with their peers to accomplish a shared or common goal, and jigsaw is one type of cooperative learning structure. Research over the past several decades shows overwhelmingly that well-structured cooperative learning is beneficial for students in terms of engagement, achievement (especially with respect to reasoning skills), and enjoyment. The Pedagogy in Action Module on Cooperative Learning has an excellent summary of research results on the value of cooperative learning in general.
Effectiveness of jigsaw in particular
Cooperative learning works well when 1) students are interdependent in a positive way, 2) individuals are accountable, 3) students interact to promote student learning, 4) groups use good teamwork skills, and 5) students have an opportunity for analyzing how well their groups are functioning (Johnson and Johnson, 1999; Johnson et al., 1998; Slavin, 1991, 1996).
The first three components are inherent in the way that a well-constructed jigsaw functions.
- Each student must not only be involved in peer teaching in a mixed group but also must help others in the group learn in order for the group to be able to carry out the group synthesis/analysis task (1 and 3 above).
- Success in the group task requires individuals to be accountable, to interact to promote peer learning, and to depend upon each other in positive ways (2 above).
The fourth and fifth components of successful cooperative learning are not inherent in the jigsaw structure but can be addressed by the instructor in a variety of ways, including starting with lower stakes interactions early in the semester and setting aside time for students to reflect on what is working and what isn't.
A number of studies have documented effective use of jigsaw in a variety of types of classes: undergraduate statistics (Perkins and Saris, 2001), undergraduate biology lab (Colosi and Zales, 1998), undergraduate psychology (Carroll, 1986), prospective elementary school teachers (Wedman, 1996; Artut and Tarim, 2007), undergraduate geology (Tewksbury, 1995), and project-based computational science and engineering at the U.S. Naval Academy (Burkhardt and Turner, 2001).
Two critical ideas emerge from research on jigsaw:
- The jigsaw structure produces long-term learning gains when the group engages in a culminating analytical group task that requires actively using all team members' contributions for a group analysis or problem-solving task (Michaelson et al., 1997).
- If group members only have to interact to learn what individual students know, rewarding the group for the successful performance of individuals in the group seems to be necessary to produce more than marginal increases in student achievement (Slavin, 1996).
Overall benefits of the jigsaw technique
- Students are directly engaged with the material, instead of having material presented to them, which fosters depth of understanding.
- Students gain practice in self-teaching, which is one of the most valuable skills we can help them learn.
- Students gain practice in peer teaching, which requires them to understand the material at a deeper level than students typically do when simply asked to produce on an exam.
- During a jigsaw, students speak the language of the discipline and become more fluent in the use of discipline-based terminology.
- Each student develops an expertise and has something important to contribute to the group.
- Each student also has a chance to contribute meaningfully to a discussion, something that is more difficult to achieve in large-group discussion.
- The group task that follows individual peer teaching promotes discussion, problem-solving, and learning.
- Jigsaw encourages cooperation and active learning and promotes valuing all students' contributions.
- Jigsaw can be an efficient cooperative learning strategy. Although the jigsaw assignment takes time in class, the instructor does not need to spend as much time lecturing about the topic. If planned well, the overall time commitment to using the jigsaw technique during class can be comparable to lecturing about a topic.
http://en.wikipedia.org/wiki/Cavalieri's_principle
- 2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas.
- 3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes.
Today Cavalieri's principle is seen as an early step towards integral calculus, and while it is used in some forms, such as its generalization in Fubini's theorem, results using Cavalieri's principle can often be shown more directly via integration. In the other direction, Cavalieri's principle grew out of the ancient Greek method of exhaustion, which used limits but did not use infinitesimals.
Cavalieri's principle was originally called the method of indivisibles, the name it was known by in Renaissance Europe. Archimedes was able to find the volume of a sphere given the volumes of a cone and cylinder using a method resembling Cavalieri's principle. In the 5th century AD, Zu Chongzhi and his son Zu Gengzhi established a similar method to find a sphere's volume. The transition from Cavalieri's indivisibles to John Wallis's infinitesimals was a major advance in the history of the calculus. The indivisibles were entities of codimension 1, so that a plane figure was thought as made out of an infinity of 1-dimensional lines. Meanwhile, infinitesimals were entities of the same dimension as the figure they make up; thus, a plane figure would be made out of "parallelograms" of infinitesimal width. Applying the formula for the sum of an arithmetic progression, Wallis computed the area of a triangle by partitioning it into infinitesimal parallelograms of width 1/∞.
The volume of a sphere can be derived as follows: Consider a sphere of radius r and a cylinder of radius r and height r. Within the cylinder is the cone whose apex is at the center of the sphere and whose base is the base of the cylinder. By the Pythagorean theorem, the plane located y units above the "equator" intersects the sphere in a circle of area π(r² − y²). The area of the plane's intersection with the part of the cylinder that is outside of the cone is also π(r² − y²), since at that height the cone's cross-section is a circle of radius y. The aforementioned volume of the cone is 1/3 of the volume of the cylinder, thus the volume outside of the cone is 2/3 of the volume of the cylinder. Therefore the volume of the upper half of the sphere is 2/3 of the volume of the cylinder. The volume of the cylinder is base × height = πr² × r = πr³.
("Base" is in units of area; "height" is in units of distance. Area × distance = volume.)
Therefore the volume of the upper half-sphere is (2/3)πr³ and that of the whole sphere is (4/3)πr³.
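A quick numerical sketch of this slice argument (assuming r = 1 and a simple Riemann sum; none of the values come from the article): at every height the hemisphere slice and the cylinder-minus-cone slice have equal area, and summing the slices reproduces (2/3)πr³.
```python
import math

r = 1.0
n = 100_000
dy = r / n
vol_hemisphere = 0.0
vol_cyl_minus_cone = 0.0

for i in range(n):
    y = (i + 0.5) * dy
    sphere_slice = math.pi * (r**2 - y**2)            # disc of the hemisphere at height y
    ring_slice = math.pi * r**2 - math.pi * y**2      # cylinder slice minus cone slice
    assert abs(sphere_slice - ring_slice) < 1e-12     # Cavalieri: equal areas at every height
    vol_hemisphere += sphere_slice * dy
    vol_cyl_minus_cone += ring_slice * dy

print(vol_hemisphere, vol_cyl_minus_cone, 2 / 3 * math.pi * r**3)
```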
Cones and pyramids
The fact that the volume of any pyramid, regardless of the shape of the base, whether circular as in the case of a cone, or square as in the case of the Egyptian pyramids, or any other shape, is (1/3) × base × height, can be established by Cavalieri's principle if one knows only that it is true in one case. One may initially establish it in a single case by partitioning the interior of a triangular prism into three pyramidal components of equal volumes. One may show the equality of those three volumes by means of Cavalieri's principle.
In fact, Cavalieri's principle or similar infinitesimal argument is necessary to compute the volume of cones and even pyramids, which is essentially the content of Hilbert's third problem – polyhedral pyramids and cones cannot be cut and rearranged into a standard shape, and instead must be compared by infinite (infinitesimal) means. The ancient Greeks used various precursor techniques such as Archimedes's mechanical arguments or method of exhaustion to compute these volumes.
The napkin ring problem
In what is called the napkin ring problem, one shows by Cavalieri's principle that when a hole of length h is drilled straight through the center of a sphere, the volume of the remaining material surprisingly does not depend on the size of the sphere. The cross-section of the remaining ring is a plane annulus, whose area is the difference between the areas of two circles. By the Pythagorean theorem, the area of one of the two circles is π(r² − y²), where r is the sphere's radius and y is the distance from the plane of the equator to the cutting plane, and that of the other is π(r² − (h/2)²). When these are subtracted, the r² cancels; hence the lack of dependence of the bottom-line answer upon r.
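This cancellation can be checked numerically with a short sketch (illustrative only; the function name and radii are arbitrary) that sums the annular cross-sections for several sphere sizes:

```python
import math

def napkin_ring_volume(R, h, n=100_000):
    """Material left after drilling a hole of length h through a sphere of radius R.

    Each cross-section at height y is an annulus: the sphere's disc of area
    pi*(R^2 - y^2) minus the hole's disc of area pi*(R^2 - (h/2)^2).
    The R^2 terms cancel, so the total is pi*h^3/6 for every admissible R.
    """
    dy = h / n
    volume = 0.0
    for i in range(n):
        y = -h / 2 + (i + 0.5) * dy
        annulus = math.pi * (R**2 - y**2) - math.pi * (R**2 - (h / 2) ** 2)
        volume += annulus * dy
    return volume

for R in (0.5, 2.0, 10.0):                      # any sphere big enough for a hole of length 1
    print(R, napkin_ring_volume(R, h=1.0))      # all ~pi/6 = 0.5236
```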
N. Reed has shown how to find the area bounded by a cycloid by using Cavalieri's principle. A circle of radius r can roll in a clockwise direction upon a line below it, or in a counterclockwise direction upon a line above it. A point on the circle thereby traces out two cycloids. When the circle has rolled any particular distance, the angle through which it would have turned clockwise and that through which it would have turned counterclockwise are the same. The two points tracing the cycloids are therefore at equal heights. The line through them is therefore horizontal (i.e. parallel to the two lines on which the circle rolls). Consequently each horizontal cross-section of the circle has the same length as the corresponding horizontal cross-section of the region bounded by the two arcs of cycloids. By Cavalieri's principle, the circle therefore has the same area as that region.
It is a short step from there to the conclusion that the area under a single whole cycloidal arch is three times the area of the circle. It follows that the area of a rectangle bounding one half of a single cycloidal arch is twice the area of the circle, the area of a rectangle bounding a single whole cycloidal arch is four times the area of the circle, and the rectangularly bounded area above a single whole cycloidal arch is exactly equal to the area of the circle.
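The three-to-one ratio can be verified with a brief numerical integration of y dx along the standard cycloid parametrization (an illustrative sketch, not part of Reed's argument):

```python
import math

def cycloid_arch_area(r=1.0, n=100_000):
    """Area under one arch of the cycloid x = r*(t - sin t), y = r*(1 - cos t).

    Integrates y dx for t in [0, 2*pi]; the result should be 3*pi*r^2,
    i.e. three times the area of the rolling circle.
    """
    dt = 2 * math.pi / n
    area = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        y = r * (1 - math.cos(t))
        dx_dt = r * (1 - math.cos(t))
        area += y * dx_dt * dt
    return area

r = 1.0
print(cycloid_arch_area(r))       # ~9.42478
print(3 * math.pi * r**2)         # 3*pi = 9.42477...
```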
- Fubini's theorem (Cavalieri's principle is a particular case of Fubini's theorem)
- Howard Eves, "Two Surprising Theorems on Cavalieri Congruence", The College Mathematics Journal, volume 22, number 2, March 1991, pages 118–124
- Zill, Dennis G.; Wright, Scott; Wright, Warren S. (2009). Calculus: Early Transcendentals (3rd ed.). Jones & Bartlett Learning. p. xxvii. ISBN 0-7637-5995-3.
- N. Reed, "Elementary proof of the area under a cycloid", Mathematical Gazette, volume 70, number 454, December, 1986, pages 290–291
- Weisstein, Eric W., "Cavalieri's Principle", MathWorld.
- (German) Prinzip von Cavalieri
- Cavalieri Integration
|
2026-01-29T14:44:09.430745
|
1,061,360
| 3.518094
|
http://www.smartplanet.com/blog/thinking-tech/scientists-eavesdrop-on-the-thoughts-of-humans-and-ferrets/10123
|
Scientists can decode our thoughts and translate them into computer language so that a person can move inanimate objects, like a robotic arm, by just thinking about moving it. Now it looks like they are moving on to perform similar magic in the speech area of the brain. The hope is that neuroscientists may be able to hear the unspoken thoughts of a paralyzed patient, and then translate them through an audio device. Today a team out of the University of California, Berkeley, have come closer to realizing that hope.
They have successfully decoded the patterns of neural firing in the temporal lobe of the brain (the area responsible for hearing) as a subject listens to normal speech. From this pattern they could discern the words the person had heard. Sort of like eavesdropping. Their work is published today in the journal PLoS Biology.
But the patterns for hearing a word may not be the same as the patterns for imagining saying a word. This relationship is necessary for mental conversations to work. If scientists can crack the code, they could either use a synthesizer to give the imagined speech vocal life or some kind of interface that types the imagined words.
The scientists believe this experiment will pave the way for a speech prosthetic for patients because previous studies have shown that when people imagine speaking a word, the same brain regions light up as do when the person actually verbalizes the word.
It even works with ferrets. In previous studies scientists read words out loud to ferrets and recorded the resulting neural patterns. Later they were able to guess which words were being read to the ferret based solely on the ferret’s neural firing patterns. Of course these are regular ferrets, not ones that understand the English language.
For the current paper researchers analyzed the brain activity of 15 epilepsy patients. The subjects were undergoing brain surgery and had 256 electrodes monitoring the electrical patterns in the temporal lobe for about a week. Specifically scientists recorded the brain activity while the patients listened to 10 minutes of conversation. They then used the data to reconstruct the sounds the patients had heard.
Researchers liken this to a pianist who can look at the keys of another pianist through a sound-proof window and still imagine perfectly the sound of the music.
Of course humans can understand words even when they sound quite different—think of accents—so the real challenge for research will be to discern what are the most meaningful sounds of speech, since it might be a syllable or a phoneme or something else altogether.
[photo via emeryc]
|
2026-02-03T15:13:38.114978
|
985,452
| 3.841398
|
http://www.answersingenesis.org/articles/cm/v12/n2/oil
|
Many people today, including scientists, have the idea that oil and natural gas must take a long time to form, even millions of years. Such is the strong mental bias that has been generated by the prevailing evolutionary mindset of the scientific community.
However, laboratory research has shown that petroleum hydrocarbons (oil and gas) can be made from natural materials in short time-spans. Such research is spurred on by the need to find a viable process by which man may be able to replenish his dwindling stocks of liquid hydrocarbons so vital to modern technology.
The 1 March 1989 edition of The Age newspaper (Melbourne, Australia) carried a report from Washington (USA) entitled ‘Researchers convert sewage into oil’. The report states that researchers from Batelle Laboratories in Richland, Washington State, use no fancy biotechnology or electronics, but the process they have developed takes raw, untreated sewage and converts it to usable oil. Their recipe works by concentrating the sludge and digesting it with alkali. As the mixture is heated under pressure, the hot alkali attacks the sewage, converting the complex organic material, particularly cellulose, into the long-chain hydrocarbons of crude oil.
However, the oil produced in their first experiments did not have the qualities needed for commercial fuel oil. So, the report says, in September 1987 Batelle joined forces with American Fuel and Power Corporation, a company specializing in blending and recycling oils. Together they have made the oil more ‘free-flowing’ using an additive adapted from one developed to cut down friction in engines. A fuel has now been produced with almost the same heating value as diesel fuel. The process from sewage to oil takes only a day or two!
The researchers are now building a pilot plant. As the report states, potential economic benefits of this new technology are tremendous, since the process produces more energy than is consumed during normal sewage disposal, and the surplus energy products can be sold at a profit. Bonuses include an 80 percent reduction in waste volumes, and the eradication of poisonous pollutants such as insecticides, herbicides and toxic metals that normally end up in sewage.
Of course, one cannot claim that this is the way oil could have been made naturally in the ground in a short time period. The starting raw material is man-made and hot alkali digesters don't occur naturally in the ground.
Of greater significance are laboratory experiments that have generated petroleum under conditions simulating those occurring naturally in a sedimentary rock basin. Between 1977 and 1983, research experiments were performed at the CSIRO (Commonwealth Scientific and Industrial Research Organisation) laboratories in Sydney (Australia). In their reports1,2 the researchers noted that others had attempted to duplicate under laboratory conditions geochemical reactions that lead to economic deposits of liquid and gaseous hydrocarbons, but such experiments had only lasted for a few or several hundred days, and usually at constant temperatures. Consequently, the differences in timescale and other parameters between geological processes and laboratory experiments being so great meant that scientists generally questioned the relevance of such laboratory results. Thus, in their experiments, the CSIRO scientists had tried to carefully simulate in a laboratory under a longer time period, in this case six years, the conditions in a naturally subsiding sedimentary rock basin.
Two types of source rock were chosen for this study—an oil shale (torbanite) from Glen Davis (New South Wales, Australia), and a brown coal (lignite) from Loy Yang in the Latrobe Valley (Victoria, Australia). Both these samples were important in the Australian context, since both represent natural source rocks in sedimentary basins where oil and natural gas have been naturally generated from such source rocks, and in the case of the Bass Strait oil and natural gas fields, sufficient quantities to sustain commercial production.
These two source rock samples were each split into six sub-samples, and each sub-sample was individually sealed in a separate stainless steel tube. The two sets of six stainless steel tubes were then placed in an oven at 100°C and the temperature increased by 1°C each week. After 50, 100, 150, 200, 250, and 300 weeks (that is, at maximum temperatures of 150°C, 200°C, 250°C, 300°C, 350°C, and 400°C), one stainless steel tube from each series was removed, cooled and opened. Any gas in each tube was sampled and analysed. Residues in each tube were extracted and treated with solvents to remove any oil, which was then analysed. The solid remaining was also weighed, studied, and analysed.
In the brown coal samples, however, during the first 50 weeks of heating, gas (mainly carbon dioxide) was produced, and the production rate increased over the next 100 weeks. Virtually no oil was formed up until this point. Between 250°C and 300°C, when the oil shale generated copious oil, the brown coal produced about 1% short-chain hydrocarbons and 0.2% oil, which resembled a natural light crude oil (similar to that commercially recovered from Bass Strait, the offshore oil fields in the same sedimentary basin as, and geologically above, the Latrobe Valley coal beds from which the samples used in the experiment came).
The products after 250 weeks (350°C) resembled a carbon dioxide rich natural gas. Over the same time period and at those temperatures, the brown coal samples had also been converted to anthracite (the highest grade form of black coal).
The researchers concluded that overall, the four-year (300°C) results provide experimental proof of oil shale acting as an oil source and of brown coal being a source first of carbon dioxide and then of mainly natural gas/condensate. Significantly, these products of these slow ‘molecule-by-molecule’, solid-state decompositions are all typical of naturally occurring hydrocarbons (natural gases and petroleum), with no hydrocarbon compounds called olefins or carbon monoxide gas being formed.
Geologists usually maintain that these processes of oil formation from source rocks (maturation events) commonly involve one thousand to one million years or more at near maximum temperatures.3 However, the researchers believe their series of experiments are the best attempts so far to duplicate natural, subsiding, sedimentary conditions. Extensive conversion of organic matter to hydrocarbons has also been achieved at less than 300°C under non-catalytic conditions with a minimum of water present.
Furthermore, the researchers maintained that their experiments clearly show that altering the time-scale of source rock heating from seconds (the duration of many previous laboratory experiments) to years makes the products produced similar to natural petroleum.
They went on to say:
In many geological situations much longer time intervals are available but evidently the molecular mechanism of the decomposition is little changed by the additional time. Thus, within sedimentary basins, heating times of several years are sufficient for the generation of oil and gas from suitable precursors. The precise point in this range of times from seconds to years, at which the products adequately resemble natural gases and/or oils, remains to be established. Heating times of the order of years during recent times may even improve the petroleum prospects of particular areas. Flooding of a reservoir with migrating hydrocarbons is more likely to produce a reservoir filled to the spill point than slow accumulation over a long geological period with a possibility of loss …’.4
They also noted that it should be remembered their experiments ‘relate to a situation which is possibly unusual in the geological context—one in which hydrocarbons do not migrate away from their source rocks as they are generated.’5
But could these laboratory experiments really have simulated natural petroleum generation from organic matter in source rocks in only six years as stated?
No sooner had the discovery of ongoing natural formation of petroleum been published in the journal Nature,6 than The Australian Financial Review of February 2, 1982 carried an article by Walter Sullivan of The New York Times under the heading ‘Natural oil refinery found under ocean’. The report indicated that
‘The oil is being formed from the unusually rapid breakdown of organic debris by extraordinarily extensive heat flowing through the sediments, offering scientists a singular opportunity to see how petroleum is formed….Ordinarily oil has been thought to form over millions of years whereas in this instance the process is probably occurring in thousands of years…. The activity is not only manufacturing petroleum at relatively high speed but also, by application of volcanic heat, breaking it down into the constituents of gasoline and other petroleum products as in a refinery.’
Figure 1. The Location of the Guaymas Basin in the Gulf of California.
This ‘natural refinery under the ocean’ is found under the waters of the Gulf of California, in an area known as the Guaymas Basin (see Fig. 1). Through this basin is a series of long deep fractures that link volcanoes of the undersea ridge known as the East Pacific Rise with the San Andreas fault system that runs northwards across California. The basin consists of two rift valleys (flat-bottomed valleys bounded by steep cliffs along fault lines), which are filled with 500 metre thick layers of sediments consisting of diatomaceous ooze (made up of the opal-like ‘shells’ of diatoms, single-celled aquatic plants related to algae) and silty mud washed from the nearby land.
Along these fractures through the sediments in the basin flows boiling hot water at temperatures above 200°C, the result of deep-seated volcanic activity below the basin. These hot waters (hydrothermal fluids) discharging through the sediments on the ocean floor have been investigated by deep sea divers in mini-submarines.
The hydrothermal activity on the ocean floor releases discrete oil globules (up to 1–2 centimetres in diameter), which are discharged into the ocean water with the hydrothermal fluids.7 Disturbance of the surface layers of the sediments on the ocean bottom also releases oil globules.
Correct measurement of the oil flow rate at these sites has so far not been feasible, but the in situ collection of oil globules has shown that the gas/oil ratio is approximately 5:1. Large mounds of volcanic sinter (solids coalesced by heating) form via precipitation around the vents and spread out in a blanket across the ocean floor for a distance of 25 metres. These sinter deposits consist of clays mixed with massive amounts of metal sulphide minerals, together with other hydrothermal minerals such as barite (barium sulphate) and talc.
The remains of unusual tubeworms that frequent the seawaters around these mounds are also mixed in with the sinter deposits. Thus the organic matter content of these sinter deposits in the mounds approaches 24%.8
The hydrothermal oil from the Guaymas Basin is similar to reservoir crude oils.9 Selected hydrocarbon ratios of the vapour phase are similar to those of the gasoline fraction of typical crude oils, while the general distribution pattern of light volatile hydrocarbons resembles that of crude oils (see Table of analyses) . The elemental composition is within the normal ranges of typical crude oils, while contents of some of the significant organic components, and their distribution, are well within the range of normal crude oils. Other key analytical techniques on the oil give results that are compatible with a predominantly bacterial/algal origin of the organic matter that is the source of the oil and gas.10
This oil and gas has probably formed by the action of hydrothermal processes on the organic matter within the diatomaceous ooze layers in the basin. Of crucial significance is the radiocarbon (C-14) dating of the oil. Samples have yielded ages between 4,200 and 4,900 years, with uncertainties in the range 50–190 years.11 Thus, the time-temperature conversion of the sedimentary organic matter to hydrothermal petroleum has taken place over a very short geological time-scale (less than 5,000 years) and has occurred under relatively mild temperature conditions.
It is significant also that the temperature conditions in these hydrothermal fluids, of up to and exceeding 315 °C, are similar to the ideal temperatures for oil and gas generation in the previously described Australian laboratory experiments.12 Figure 2a illustrates the oil generation system operating in the Guaymas Basin, while Figure 2b shows how this process could be applied in a closed sedimentary basin to the hydrothermal generation of typical oil and gas deposits.
The generally accepted model of oil generation assumes long-term heating and maturing of the sedimentary organic matter in subsiding sedimentary basins. The organic matter undergoes successive and gradual increases in alteration, leading to a process of continuous oil generation. The oil subsequently migrates to be trapped in suitable host rocks and structures.
This multi-step oil formation process has a low efficiency and converts only a minor fraction of the original organic matter of the sediment to oil.13 There is difficulty in balancing and timing an adequate degree of oil generation occurring at intermediate stages in the sedimentary basin, and ample fluid available for adequate transport (migration) of the oil.
Although considerable progress in the understanding of this multi-step oil formation mechanism has been achieved, there are still problems that need to be solved. Such a slow multistep mechanism differs significantly from hydrothermal petroleum formation. No evidence is so far available on the extent to which this alternative single-step oil generation process has contributed towards the origin of presently exploited oil reserves.
It is very significant that this naturally produced hydrothermal oil is identical to conventionally exploited crude oils, as are the oil and gas products from the Australian laboratory experiments. Nevertheless, hydrothermal oil formation provides an efficient single-step mechanism for petroleum generation, expulsion, and migration which could have a considerable impact on our understanding of petroleum formation mechanisms and eventually assist us in tapping resources in new areas.14
Thus the rapid formation of oil and gas is not only feasible on the basis of carefully controlled laboratory experiments, but has now been shown to occur naturally under geological conditions that have been common in the past.
Significantly, these short timescales are well within those proposed by creation scientists for the generation of petroleum from organic matter in sediments laid down during Noah's Flood. The discovery of the hydrothermally produced petroleum on the ocean floor in the Guaymas Basin of the Gulf of California is even more crucial to the case of the creation scientists and Flood geologists, who argue that the fountains of the deep referred to in the Book of Genesis were probably vast volcanic upheavals that broke open the earth's crust, pulverizing rock which was then scattered as volcanic debris, and expelling lavas, gases, and hot liquids, principally water.
Indeed, the bulk of the volcanic products would have been superheated water, similar to the hydrothermal fluids found in the Guaymas Basin. The rock record contains many layers of volcanic lavas and ash between other sedimentary layers, many containing organic matter. Thus this model for hydrothermal generation of petroleum is more than a feasible process for the generation of today’s oil and gas deposits in the time-scale subsequent to Noah's Flood as suggested by creation scientists.
|
2026-02-02T12:20:26.224429
|
1,107,932
| 4.05804
|
http://www.lessonplanet.com/lesson-plans/dictionary
|
Dictionary Teacher Resources
Find Dictionary educational ideas and activities
Searching for an incredibly thorough Latin app? Look no further! Latin learners will be quite satisfied with the collection of texts, three dictionaries, customizable flashcards, assessment options, and other features that are right at their fingertips.
Dictionary Game Substitute Plans
Every teacher needs some activities for his/her substitute teacher to lead when they're out of the classroom. Here is a nifty lesson, which has the kids use dictionaries in a game format. This is meant to be left in your folder for the substitute when she comes to class. The class is divided up into two teams, and learners are paired off with one dictionary each. The teacher calls out a word and the first pair of young scholars to raise their hands and yell out, "We've got it!" wins that round. They must read the definition of the word as proof that they found the right one.
Iditarod Picture Dictionary
Students are introduced to the Iditarod and create a picture dictionary of key Iditarod terms. They review photographs of mushers and sled dogs and then draw illustrations with one-line written captions to accompany the pictures for their dictionaries.
ABC-Introduction to the Dictionary
Looking for an excellent way to give your class practice using the dictionary and a better understanding of how it works? Try this 4-page printable packet! They read about the dictionary, sort and alphabetize words, then create a dictionary page of their own.
Third graders determine appropriate definitions in context. They read a passage, look up words with multiple meanings in the dictionary, and determine which definition fits. Students review guide words, entry words, parts of speech,plural, pronunciation guide, syllables, accent mark. Students also review different types of dictionaries.
Dictionary skills are important to learn. Model how to find a word in the dictionary and how to choose the correct definition. As a class, look up words from The Life Cycle of a Beetle. Additionally, encourage your class to determine which definition is correct by looking at the context of the paragraph or sentences. Send individuals off with the provided worksheet for some independent practice. The worksheet can be viewed full-size with a free membership at the hosting site.
Dictionary Guide Words: How Do They Guide Us?
Fourth graders, after brainstorming what the word "guide" means, examine how to use guide words in a dictionary to locate words. They define "guide," identify guide words in a variety of dictionaries and locate words using guide words. Each student also completes a number of practice pages of words.
Using Bilingual Dictionaries
Practice researching and translating all parts of speech using a bilingual dictionary. The class uses their research to increase their own vocabulary and learn the dictionary abbreviations for words such as transitive verb, pronoun, masculine, and feminine.
Dictionaries Lesson Plan
Students practice using a dictionary. In this dictionary instructional activity, students participate in an online activity to familiarize themselves with the features and purpose of a dictionary.
Dictionary Skills - What's the Meaning of This!
How do I use this word? Middle schoolers review the parts of a dictionary definition and compare an entry to a thesaurus entry. They then create mini-dictionaries using the Internet and a word processor.
Vocabulary Development through Dictionary Skills
There will always be words you don't know, so how do you find their meaning? You have to use the dictionary! Provide teams of learners with this two-page packet, and have them search the dictionary for several unknown words. Words like turban, ukulele, vestry, and albumen are included, so even your most precocious learners will need to use the dictionary.
Second graders examine the purposes of the dictionary and practice how to use it correctly. Students discuss parts of speech, number of syllables, accent marks, antonyms, plurals, and various word ending before participating in a dictionary scavenger hunt.
Creating a Spanish/ English Picture Dictionary
Students explore why it's important to use a dictionary to find out what a word means. The functions of a dictionary are discussed in depth within this lesson. They create a Spanish/English picture dictionary to illustrate the important use of a dictionary.
The Amazing World of Dictionaries
Use this resource to discuss the various ways dictionaries can be used. What a terrific presentation to display when exploring how dictionaries are used to define words, check spellings, identify parts of speech, and more. The PowerPoint also covers the different types of dictionaries available, such as online sources and illustrated versions.
Prep your class for any upcoming standardized language test with this resource. Use the sample dictionary page included to answer the reference questions provided. Great practice is provided here!
In this recognizing parts of a dictionary worksheet, students read about the guide words, pronunciation key, definition, and part of speech; read entry words and guide words; and answer comprehension questions. Students write 15 answers.
In this recognizing parts of a dictionary page activity, students use the dictionary page provided to answer questions about guide words, entry word, and multiple word meanings. Students write 5 short answers.
Immigration Picture Dictionary
Second graders create an immigration picture dictionary. In this immigration lesson, 2nd graders visit Pier 21 and discuss how it has changed Canada. Students also discuss how the lives of the immigrants have changed. Students create a picture dictionary of new vocabulary words.
Dictionary Research Work
Explore new words and examine an unusual alphabet book with your young scholars. They are introduced to an alphabet book containing many unfamiliar words. They are then guided as they search for definitions in a dictionary. They then create their own alphabet book based on their amazing vocabulary.
Vocabulary Development Through Dictionary Skills
In this vocabulary development worksheet, students predict what 8 words mean and then check their meanings in a dictionary. Students also select 8 of their own personal words, make predictions for them, and check their meanings.
|
2026-02-04T08:44:29.757697
|
419,006
| 3.710492
|
http://www.eoearth.org/article/Lorentz,_Hendrik_Antoon
|
Hendrik Antoon Lorentz (1853–1928) was a Dutch physicist and a pioneer in formulating the relations between electricity, magnetism, and light. He explained the Zeeman effect (a change in spectrum lines in a magnetic field), for which he shared the 1902 Nobel Prize in Physics with Pieter Zeeman. In 1878, he published an essay on the relation between the velocity of light in a medium and the density and composition of that medium. The resulting formula, proposed almost simultaneously by the Danish physicist Ludvig Lorenz, is known as the Lorenz-Lorentz formula. He extended the hypothesis of G. F. Fitzgerald that the length of a body contracts as its speed increases, now known as the Lorentz contraction. He also formulated the Lorentz transformation, in which the space and time coordinates of one moving system can be correlated with the known space and time coordinates of any other system. This work influenced and was confirmed by Einstein's special theory of relativity.
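For reference, the Lorentz transformation mentioned above takes the following textbook form for relative motion with speed v along the x-axis (added here for convenience; it is not part of the original entry):

```latex
x' = \gamma\,(x - vt), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right),
\qquad \text{with } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
```

The Lorentz contraction then follows as L = L0/γ for a body of rest length L0 moving at speed v.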
|
2026-01-24T16:39:45.760884
|
184,894
| 3.80288
|
http://www.wou.edu/~yehnerc/254.2.htm
|
English 254 Class #2 Wed, Jan 9
Chopin, “The Storm” (1611), “The Story of an Hour” (handout)
*The structure of plot: conflict, crisis, resolution
*Plot structure relies on cause and effect: not what happens, but why.
*Two main characters: protagonist and antagonist.
* In general, the protagonist is the character who changes.
The antagonist is the one who causes the change.
*Minor characters often serve as mirrors or foils to main characters.
1. Who are the two main characters? Who is the antagonist and who is the protagonist?
2. What is the main (external) conflict? Is there an internal conflict?
3. What does the title mean? When you trace both the plot and the storm through each of the five parts, does the progress of the storm parallel the progress of the plot, and if so, what might this suggest?
4. In part I the "sombre clouds. . . rolling with sinister intention" introduce the storm. Does this description introduce the story's action as well—is the action sinister?
5. Clearly the storm sets in motion the chain of events that leads to the characters' adultery. Does the storm excuse them in any way from responsibility for their actions?
6. Do you suspect your judgment of the characters and their actions in this story differs from the author's? If so, why isn't the story persuasive?
7. What is the point of this story?
"The Story of an Hour"
1. Does this story have a conflict? If so, what is it? Do any of the characters exhibit an internal conflict?
2. Where is the climactic moment in the plot?
3. What might be the cause or causes of the "physical exhaustion that haunted her body and seemed to reach into her soul" that Mrs. Mallard feels as she sinks into the armchair? Mrs. Mallard's face reveals repression. What has she been repressing? What are the social realities of marriage in the 19th century?
4. What kind of man is Brently Mallard, as Mrs. Mallard remembers him? In what ways is he like Josephine and Richards?
5. What does Mrs. Mallard see and hear from the open window? How do you react emotionally to this imagery? What does the imagery suggest?
6. What is the attitude of the author toward those who would comfort Mrs. Mallard?
7. How does Mrs. Mallard look as she leaves her room? What does Richards' "quick motion" at the end of the story reveal? Who is he screening from whom?
8. Does the ending of this story merely surprise you, or do you believe Chopin is making a thematic point?
|
2026-01-21T02:18:58.855502
|
508,003
| 3.601989
|
http://www.mcgill.ca/ehs/laboratory/radiation/manual/5
|
Monitoring is an essential component of any radiation safety program. It involves the regular and routine measurement and/or assessment of factors relevant to radiation safety and takes the following forms:
- Monitoring of radiation dose or dose rate.
- Area monitoring, i.e. measurement of radiation dose rate at various points in an area where a radiation-emitting machine or equipment is located, or where radioactive sources are stored, handled or used.
- Technique monitoring, i.e. measurement of the dose or dose rate applicable to specific persons or specific locations, during complex procedures involving radiation sources.
- Personnel monitoring, i.e. measurement of the total dose received by individual radiation workers over a specified period of time.
- Monitoring of radioactivity (count rate):
- measurement of radioactive contamination on surfaces(i.e. benches, floors) and equipment;
- measurement of radioactive contamination on exposed areas of skin and on clothing of radiation workers;
- monitoring of ingestion, inhalation and injection of radioactivity by workers handling unsealed radioisotopes.
- Medical surveillance of radiation workers.
Environmental Health & Safety conducts these radiation surveys. The purpose is to ensure that the room and equipment shielding, and the local practices and procedures are satisfactory. Permit Holders, Heads of Department, Departmental Radiation Safety Officers and individual radiation workers should collaborate with the Environmental Safety Office in these surveys and should report any situation or change in procedure, which may warrant special investigation.
Permit Holders have the responsibility of carrying out recommendations arising from the surveys and of making the results known to their staff. The Permit Holder must retain a copy of each Radiation Survey Report pertaining to his department or laboratory, together with a record of any action taken as a result of such report. These records must be made available on request to authorized persons such as the Radiation Safety Officer and CNSC inspectors. Permit Holders are required to keep radiation survey reports and all related documents up to 3 years in their McGill Radiation Log Book or in other files.
Radiation Users and Nuclear Energy Workers whose main source of exposure is from external beta or X-ray and gamma sources may be subject to routine, continuous monitoring by means of a thermoluminescent dosimeter (TLD) which is worn at all times during working hours. Such monitoring is mandatory for NEWs and recommended for Radiation Users. In addition, NEWs who may be exposed to neutrons must carry a separate neutron badge* . The National Dosimetry Service of the Radiation Protection Bureau, a service of Health Canada in Ottawa, provides the TLDs as whole body and extremity dosimeters. To obtain more information or to subscribe to the service, contact the McGill Radiation Safety Officer. The following paragraphs refer specifically to TLD badges for the monitoring of photons and high-energy electrons. Neutron monitors are considered in Section 5.3.5.
* A person who is likely to be exposed to neutrons to a significant extent must be designated as a NEW, irrespective of the level of beta, gamma and/or X-ray radiation to which he/she is also exposed.
The whole body TLDs provided by the National Dosimetry Service comprise two small plaques of lithium fluoride (approx. 3 mm x 3 mm x 1 mm thick) mounted on an aluminum plate, all contained in a plastic holder provided with a pocket clip. The base plate is encoded (by a series of punched holes) so that an individual monitor can be identified.
The National Dosimetry Service provides the whole body TLD badges on a 3-month or 1-month cycle, with distribution being on a department or laboratory basis. The Permit Holder must make the necessary arrangements for personnel monitoring of radiation workers in his/her own department or laboratory and is also responsible for the cost of the service. An outline of the procedure is given below, further details being available from the Environmental Safety Office:
- usually the Permit Holder delegates the organizational aspects of personnel monitoring to another person who is in daily contact with the staff concerned. It is essential for this person to be properly briefed. For convenience, he/she will be referred to in this section as the Monitoring Supervisor;
- the individual radiation worker retains his or her badge or monitor with his/her name attached. The badge is changed every cycle;
- the changeover dates (beginning of a new monitoring period) vary from group to group and the Radiation Protection Bureau (RPB) will notify each monitoring group. Shortly before the changeover date, the Monitoring Supervisor collects the old badges and provides the new badges to everyone in his/her group; and
- the "exposed" plaques are measured automatically by the RPB, and a report of the radiation dose received by each worker is sent to the Radiation Safety Officer (RSO) at Environmental Health & Safety. The RSO will then:
- examine the report and note any unusually high value;
- send a copy to the department or laboratory (minimum once a year or on request);
- retain the original report for record purposes; and
- investigate any dose which is either:
- over 10 mSv per 2 consecutive periods of 3 months in the case of a NEW; or
- is significantly greater than the "normal" value (exceeding 0.5 mSv for 2 consecutive periods of 3 months) for the individual concerned or for the group of workers who are engaged in similar activities, as in the case of a Radiation User or the general public. (See section 3.7 for details)
The individual radiation worker must:
- take good care of their assigned monitor at all times;
- wear the monitor at all times during working hours. The badge may be worn at the wrist as a bracelet, on the finger as a ring, or at the head/neck area or chest height as a whole-body monitor. Where protective clothing such as a lead apron is worn, the badge must be placed under the protective clothing since its function is to record the radiation reaching the body, not the radiation reaching the protective clothing;
- guard the badge as a personal monitor, issued to a named individual. Under no circumstances should the badge be loaned to another person;
- take care that the badge is not dropped or accidentally placed in a position where it could be exposed to a level of radiation higher than the ambient level;
- take care that the badge is not accidentally splashed or otherwise contaminated by a radioactive liquid;
- take care that, outside working hours, the monitor is stored in a safe place which is well away from any radiation source and from any source of intense heat such as a radiator; and
- report any problems with the monitor to the Monitoring Supervisor or to the RSO.
See Appendix G for more details.
Personnel monitoring, of the type described in the preceding paragraphs is a satisfactory general indicator of the whole-body dose arising from external sources of penetrating radiation such as X- and gamma-rays. However, the system has some important limitations:
- the badge reading can be interpreted in terms of a whole-body dose only if the ambient radiation is penetrating, i.e., photons in the MeV energy range, or at least several hundred keV; and isotropic i.e., either the radiation comes from several directions or the radiation worker changes his orientation frequently with respect to the source of radiation. If these conditions are not met, then the badge reading represents only the dose to superficial tissues and/or to part of the body such as the front of the trunk;
- the badge does not record any additional dose received by the extremities and/or face and neck in some procedures;
- the badge does not record doses due to low-energy beta particles such as those from tritium (H-3), carbon-14 (C-14) and sulphur-35 (S-35);
- the 3-monthly cycle may be too long for individuals whose work carries a higher-than-average risk of radiation exposure; and
- the badge does not record internal exposure arising from ingestion, inhalation or injection of radioactive materials.
The first limitation (1) cannot be overcome. Each situation must be assessed to determine what the badge reading represents. In most cases, the badge reading is so low that it is of no importance whether it represents a whole-body or a partial-body dose.
Limitations 2 and 4 may be overcome by the use of monthly monitors or monitors such as TLD "finger badges" or pen-type pocket dosimeters. Arrangements for the issue of these monitors may be made through the Radiation Safety Officer. Monitoring of this type is usually considered as "Technique Monitoring" i.e., it is carried out as a special investigation to determine the relationship between the badge reading and the dose received by other parts of the body.
Limitation 3 can only be resolved by the purchase of special monitors, which are sensitive to low-energies; this comes under the heading of "Area Monitoring". Limitation 5 is overcome to some extent by the use of bioassay procedures as discussed in Section 5.5.
Radiation workers who may handle more than 50 MBq of phosphorus-32 (P-32), strontium-89 (Sr-89), strontium-90 (Sr-90) or yttrium-90 (Y-90) are required to wear an extremity TLD as a ring badge in addition to the whole body TLD. The National Dosimetry Service (NDS) also supplies the extremity TLDs and for additional information on extremity TLD personnel monitoring contact the RSO.
Radiation workers who may be exposed externally to neutrons (i.e. for unshielded neutron sources in excess of 20 GBq) are required to wear a special "neutron" badge in addition to the ordinary TLD badge. The National Dosimetry Service (NDS) also supplies the neutron monitor. However, users should be aware that the present neutron badge is somewhat unsatisfactory and the safety of the staff will depend on a thorough area survey (using a calibrated neutron survey meter) combined with good working practices. Further details on neutron personnel monitoring may be obtained from the RSO.
Portable monitors, suitable for measuring contamination arising from the type of radionuclide stored or used in that laboratory must be available. This should be used regularly to monitor accessible surfaces on benches, walls, floors and equipment, whenever there is a possibility that a liquid radioisotope has been splashed or spilled. The monitor should also be used on exposed skin surfaces and clothes of radiation workers when procedures involving the manipulation of significant activities of radionuclides are completed.
Any laboratory where unsealed radioisotopes are stored and/or used must undergo regular surface contamination checks called "wipe tests". Suspected surfaces are wiped with a moist swab of raw cotton or filter paper in order to remove contamination, and the swab is then presented to a sensitive detector, such as a liquid scintillation counter or gamma well counter. Allowing for statistical uncertainties in low-level counting, any count rate above 0.5 becquerels per square centimetre (Bq/cm2) is regarded as evidence of contamination. Decontamination procedures are required and discussed in Section 5.8. Wipe tests are done at least once a week for frequent users (i.e. daily or weekly use) or after each radioisotope procedure for occasional users (i.e. monthly or bimonthly use). The results must be placed in the McGill Radiation Log Book and kept for a maximum of three years. For more complete details on wipe tests consult Section 5.7.
In addition to regular monitoring carried out by the radiation workers in a laboratory, every laboratory handling unsealed radioisotopes is subject to regular inspection and survey by Environmental Health & Safety. Such inspections are carried out annually, but more frequent surveys may be needed in some cases. The Permit Holder should request a special survey whenever an accident has occurred which might have resulted in contamination, or whenever a complex new procedure has been carried out for the first time. The rules listed in Section 5.2 for "Area and Technique" surveys carried out by Environmental Health & Safety also apply to the Contamination Surveys discussed here.
The CNSC has established effective dose limits for persons during a specified period. For the purpose of calculating the effective dose, one of the parameters used is the annual limit on intake (ALI). The ALI is defined as the activity, in Becquerels, of a radionuclide that will deliver an effective dose of 20 mSv during the 50 year period after the radionuclide is taken into the body of a person 18 years or older or during the period beginning at intake and ending at age 70 after it is taken into the body of a person less than 18 years old. See Appendix C, for a list of ALI according to radioisotope.
Radioactive materials may be ingested, inhaled or injected through the skin. If the absorbed radionuclide is a gamma-emitter, it can be detected and measured by a sensitive external counter. This is the procedure used for measuring the ingestion of radionuclides such as I-125, which concentrates mainly in the thyroid gland. A thyroid bioassay service is operated by Environmental Health & Safety and is available to all workers who handle iodine radioisotopes. Radiation workers in this category should be monitored regularly, at a frequency which depends on the particular isotope handled and, in the case of short-lived isotopes, on the individual workload.
The most commonly used radioisotope of iodine is I-125 (half-life 60 days) and in this case the monitoring frequency is one month, irrespective of the workload. The CNSC Regulatory Document R-58 requires persons handling I-125 or I-131 at above specified activity levels to submit to thyroid monitoring. These tests must be done within 5 days of the iodine manipulation. For details and arrangements for thyroid scans contact Environmental Health & Safety. The minimum activities requiring evaluation are listed below:
| Types of operations | Activity handled in unsealed form |
| Processes carried out in open room | 2 MBq (54 µCi) |
| Processes carried out in fume hood | 200 MBq (5.4 mCi) |
| Processes carried out within closed glove boxes | 20 GBq (540 mCi) |
Bioassay procedures for other radionuclides have been developed and made available to all laboratory personnel. Body burden assessments can be performed using two methods:
- Excreta analysis (urine and faeces); and
- Whole-body counting (approximate for gamma emitting radionuclides)
In accident or emergency situations Environmental Health & Safety is able to carry out, or to make arrangements for any type of bioassay which may be required.
The radiation doses received by the great majority of radiation workers are so low that no correlation can be demonstrated between dose and any known physiological effect, and no deleterious effect can be unequivocally linked to a radiation exposure. Consequently, medical surveillance has no role in assessing the effectiveness of a program of radiation protection. Surveillance is therefore limited to the minority of radiation workers who, because of the nature of their work and/or the type of radiation sources they handle, are classified as Nuclear Energy Workers (NEWs). These workers are more likely than non-NEWs to receive a significant dose in the event of an accident.
Any person whose position is classified as a NEW should undergo a complete pre-employment medical examination. A NEW whose whole-body dose in the preceding 12-month period, as evidenced by personnel exposure records, exceeds 50 mSv (5 rem) must undergo further medical examination. This is also required in cases of accidental over-exposure (real or suspected). Where the over-exposure is severe (200 mSv (20 rem) or more), a cytogenetic examination is required.
The most effective method for measuring radioactive surface contamination is the wipe test technique. This procedure will indicate only the levels of removable contamination. No removable contamination should be tolerated. Begin the wipe test with a sketch of the laboratory that includes marked and numbered locations to be examined. Usually, 10 to 20 locations are adequate for most laboratories. The wipe test method includes the following steps:
- Moisten a filter paper (2 cm) with alcohol or water, and wipe over an area approximately 100 cm2 (10 cm x 10cm). Please note dry wipes can also be used.
- Place the filter paper in a vial and count in a gamma well counter (for gamma and/or X-ray contamination) or in a vial containing scintillation fluid, shake and count in a liquid scintillation counter for alpha and/or beta contamination. If a single radioisotope is being used in the laboratory, then appropriate window settings and a quench curve (only for liquid scintillation counting) are recommended. However, if several radioisotopes are being handled or contamination is unknown, then operate at full spectrum with no quench correction. It is suggested that before counting, the vials set aside for liquid scintillation counting be stored for 24 hours to reduce chemiluminescence.
- Contamination is present if wipes exceed 0.5 becquerels per centimetre squared (Bq/cm2). The contaminated area must be cleaned with water and detergent or with a commercial decontamination solution such as Decon75, Count-Off, Rad-Con or Contrad 70. Begin with the perimeter of the spill area and work towards the centre, being careful not to spread the contamination during cleaning. Repeat the wipe testing until the measurements are at or below 0.5 Bq/cm2.
The formula used to calculate "Bq/cm2" is:
Bq/cm2 = CPM net / (C.E. x 60 x 100 x Weff)

where:
| Bq/cm2 | = | becquerels per centimetre squared |
| CPM net | = | net count rate of the wipe in counts per minute (background subtracted) |
| C.E. | = | counting efficiency of the detector (use the counter C.E. for that radioisotope, or for simplicity use a C.E. of 50% for all radioisotopes) |
| 60 | = | converts counts per minute to counts per second (cps) |
| 100 | = | maximum wiped surface area of 100 cm2 (10 cm x 10 cm) |
| Weff | = | wipe efficiency (use 10% or 0.1 for wet wipes, and 1% or 0.01 for dry wipes) |
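For illustration, the same calculation can be scripted; the helper below is a sketch (the function name and defaults are simply chosen to match the values stated above) that flags any result over 0.5 Bq/cm2:

```python
def wipe_activity_bq_per_cm2(cpm_net, counting_efficiency=0.5,
                             wipe_efficiency=0.1, area_cm2=100):
    """Convert a net wipe count rate (counts per minute) to Bq/cm^2.

    Divides by 60 (minutes -> seconds), the detector counting efficiency,
    the wiped area in cm^2 and the wipe efficiency, following the formula above.
    Defaults: 50% counting efficiency, wet wipe (10%), 100 cm^2 area.
    """
    return cpm_net / (counting_efficiency * 60 * area_cm2 * wipe_efficiency)

activity = wipe_activity_bq_per_cm2(400)          # e.g. 400 net CPM on a wet wipe
print(round(activity, 2))                         # 1.33 Bq/cm^2
print("contaminated" if activity > 0.5 else "clean")
```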
In radiation safety, "decontamination" refers to the removal of loose or fixed surface radioactivity, and is required whenever wipe tests reveal contamination. The Radiation Safety Officer should be consulted for any decontamination effort.
Selection of a cleanser or decontaminant depends on factors such as the nature of the item to be decontaminated and the amount of dirt trapping the contamination. Either a commercial detergent or soap may be used. The key to effective decontamination is to use plenty of cleanser, a good brush or scouring pad, lots of water rinses, and absorbent paper to dry the area. Inadequate rinsing and drying may yield falsely elevated counts in post-cleaning wipe tests due to chemiluminescence, as a result of cleanser residue.
As a rule of thumb, if the contamination is dry (such as powder), keep it dry. Remove the powder by scraping away the contamination and pick up the small particles with adhesive tape. If contamination is wet, use absorbent materials to pick up the moisture.
Porous surfaces (such as wood and unpainted concrete) are difficult to decontaminate and may require disposal. Some isotopes (such as tritium) may become chemically bonded to the surface and are extremely difficult to decontaminate.
Metals may be decontaminated with dilute mineral acids (such as nitric acid), a 10% solution of sodium citrate, or ammonium bifluoride. When all other procedures fail for stainless steel, use hydrochloric acid; this process is effective but unfortunately it will etch the surface. As for bases, commercial polish cleaners work well. Plastics may be cleaned with ammonium citrate, dilute acids or organic solvents.
For decontamination of both wet and dry surfaces, a final wiping with water or alcohol may be necessary. Decontamination should always be followed by wipe tests to confirm that the remaining radiation activity has been reduced to acceptable levels (less than or equal to 0.5 Bq/cm2). Finally, all used decontamination materials must be discarded as radioactive waste.
The Permit Holder shall ensure that all areas (i.e. laboratories, storage and waste facilities) identified on his/her Internal Permit are decommissioned or free from radioactivity upon the expiry or termination of the permit. Decommissioning shall include:
(1) Transfer or removal of all radioactive materials or devices to an approved site.
(2) Appropriate disposal of all radioactive waste.
(3) Removal of all radioactive warning signs and labels.
(4) Monitor all areas, decontaminate and remove surface contamination (i.e. loose and fixed) to meet the McGill and ultimately the CNSC prescribed limits. (See table below)
(5) Prepare a Decommissioning Report Form [.pdf] describing how the decommissioning requirements have been satisfied and forward it to Radiation Safety Officer.
(6) Update all records.
(7) Records must be retained by the Permit Holder for the period ending 3 years after expiry date of the last Internal Permit issued.
(8) Decommissioning Criteria: By Radionuclide Classification1. See table below.
CNSC AND MCGILL DECOMMISSIONING CRITERIA
| Radionuclide classification | Fixed and non-fixed CNSC decommissioning limit (avg over area not to exceed 100 cm2) | Fixed and non-fixed McGill decommissioning limit (avg over area not to exceed 100 cm2) |
| Class A | 0.3 Bq/cm2 | 0.05 Bq/cm2 |
| Class B | 3.0 Bq/cm2 | 0.5 Bq/cm2 |
| Class C | 30 Bq/cm2 | 0.5 Bq/cm2 |
Note: The McGill radioactive surface contamination & decommissioning standard is more restrictive compared to the CNSC contamination and decommissioning standards.
1 For details on the radionuclide classifications, consult Section 6.2.
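For illustration only, the McGill limits in the table could be applied as a simple lookup (a hypothetical helper, not part of the manual):

```python
# Hypothetical helper: apply the McGill decommissioning limits from the table above.
MCGILL_LIMIT_BQ_PER_CM2 = {"A": 0.05, "B": 0.5, "C": 0.5}

def meets_mcgill_decommissioning_limit(activity_bq_per_cm2, radionuclide_class):
    """True if the measured surface activity is at or below the class limit."""
    return activity_bq_per_cm2 <= MCGILL_LIMIT_BQ_PER_CM2[radionuclide_class]

print(meets_mcgill_decommissioning_limit(0.03, "A"))   # True  (limit 0.05 Bq/cm^2)
print(meets_mcgill_decommissioning_limit(0.30, "A"))   # False (exceeds 0.05 Bq/cm^2)
```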
|
2026-01-26T04:50:39.860478
|
678,831
| 3.725109
|
http://www.sciencedaily.com/releases/2010/01/100107083909.htm
|
Jan. 11, 2010. The grooming behaviour displayed by primates is driven by less rational considerations than often thought. According to a computer model developed by scientists at the University of Groningen, one basic rule explains all possible grooming patterns: individuals will groom others if they are afraid they would lose to them in a fight.
Primates are assumed to reconcile their conflicts by grooming each other after a fight. They are also supposed to carry out intricate trading of grooming for the receipt of help in fights. Professor and theoretical biologist Charlotte Hemelrijk shows in a computer simulation that many patterns of reconciliation and exchange surprisingly emerge simply from fear of losing a fight with another individual. 'This shows that reconciliation and exchange behaviour are not necessarily conscious behaviour', states Hemelrijk, a specialist in self-organization in social systems. 'It's simply a consequence of rank and of which primates are in the vicinity of the primate that wants to groom.' The results of the research conducted by the group that worked with Hemelrijk on the computer model appeared in late December in the journal PLoS Computational Biology.
'Primates are intelligent, but their intelligence is overestimated. The social behaviour of primates is explained on the basis of cognitive considerations by primates that are too sophisticated', Hemelrijk continues. 'Primates are assumed to use their intelligence continually and to be very calculating. They're supposed to reconcile fights and to do so preferably with partners that could mean a lot to them.' This would explain why primates prefer grooming partners higher in rank in order to gain more effective support in fights. Moral considerations would bring them to repay the grooming costs by grooming others.
Such behaviour patterns all presuppose a rational thought process, according to Hemelrijk: 'In order to reconcile, the primates must recall exactly which fight they last had and with whom. They must also be able to gauge the importance of each relationship. And for the reciprocity and repayment, they must keep careful track of how often and from whom they have received which grooming or support 'service' in order to be able to repay it sufficiently.'
However, all these suppositions are unnecessary according to Hemelrijk: 'Our computer model GrooFiWorld shows that complex calculating behaviour is completely unnecessary. We can add to the existing DomWorld model the simple rule that an individual will begin grooming another when it expects to lose if it were to attack the other. This in itself leads to many of the complex patterns of friendly behaviour observed in real primates.' In the DomWorld model, individuals group together and compete with their neighbours.
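To make the rule concrete, here is a minimal, hypothetical sketch in the spirit of the description above (it is not the published GrooFiWorld or DomWorld code; the winning-probability formula and the dominance update are simplifying assumptions): each individual grooms a nearby individual when it expects to lose a fight with it, and attacks otherwise.

import random

# Minimal sketch of the grooming rule described above (not the authors' model):
# an individual grooms a nearby individual when it expects to lose a fight
# with it, and attacks otherwise.
class Agent:
    def __init__(self, name, dominance):
        self.name = name
        self.dominance = dominance  # higher value = more likely to win fights

    def expects_to_lose(self, other):
        # Expected chance of winning based on relative dominance (an assumption).
        p_win = self.dominance / (self.dominance + other.dominance)
        return p_win < 0.5

    def interact(self, other):
        if self.expects_to_lose(other):
            return f"{self.name} grooms {other.name}"
        # Otherwise fight; the winner is drawn according to relative dominance.
        p_win = self.dominance / (self.dominance + other.dominance)
        winner, loser = (self, other) if random.random() < p_win else (other, self)
        winner.dominance += 0.1                      # simplified dominance update
        loser.dominance = max(0.1, loser.dominance - 0.1)
        return f"{winner.name} defeats {loser.name}"

a, b = Agent("low-ranking", 1.0), Agent("high-ranking", 2.0)
print(a.interact(b))  # the low-ranking agent grooms the high-ranking one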
With the help of the computer model, Hemelrijk shows that most friendship patterns are due to the proximity of other animals. In turn, the proximity is the result of dominance interactions. The fear of losing a fight also plays an important role. 'Apparent reconciliation behaviour is the result of individuals being nearer their opponent after a fight than otherwise', the professor explains. 'Repaying grooming that has been received is the result of some individuals being nearer to certain others more often. Since they groom nearby primates in particular, any grooming received will automatically be repaid.'
The model and reality
That this is shown by the computer model does not mean that primates are not capable of displaying intelligent social behaviour, according to Hemelrijk. 'The resemblance of patterns of friendly behaviour in our model to those in reality means that more evidence is needed to be able to draw the conclusion that friendly relationships are based on human, calculating considerations. Our model is a 'null model' providing simple explanations which are especially useful for further research into friendly behaviour in primates, in particular into that of macaques.'
Such computer models are useful not only in analyzing primate behaviour, but also in gaining insight into the social behaviour of all sorts of species that live in groups. They could, for instance, provide ideas for further research into the flocking behaviour of starlings. Hemelrijk: 'Simulations are thus also very important for researchers working out in the field. They can research the connection between models and reality.'
- Puga-Gonzalez et al. Emergent Patterns of Social Affiliation in Primates, a Model. PLoS Computational Biology, 2009; 5 (12): e1000630 DOI: 10.1371/journal.pcbi.1000630
|
2026-01-28T17:36:05.444662
|
969,327
| 4.059621
|
http://www.mesoamerica-travel.com/english/ecotourism/ethnics/chortis/
|
The Chorti are assumed to be direct descendants of the Maya, according to the linguistic investigations of Larde and Larin. The Maya built cities such as Copan and Quirigua some 1,500 years ago. By comparing the different Mayan languages, scientists have shown that the language of the Chorti belongs to the Maya-Chol language family. This language group originated in what are today the Mexican regions of Chiapas and Tabasco and is spoken by the Chontal and Chol peoples. Some two thousand or more years ago, the Chol lived in the region of what today are Guatemala and Honduras. The Chol then divided into two main groups: those who migrated to Chiapas, and the Chorti, who have stayed in the region to this day.
The Maya Chorti are formed of different tribes, and one of the leading groups was called the Pipiles. They developed a regional trade system and ended the rule of a few leading families. Recent research brings more and more evidence that the Chorti of the Copan region are direct descendants of the ancient Maya, whose empire ended between 820 and 830 AD. The upper class of Copan left the city, seeking shelter in Tikal and other places. The common people are thought to have stayed in the Copan region to this day, but scientific proof of who lived in Copan between 830 and 1530 is still missing. Historical accounts of the Spanish conquest of the Copan region in 1530 tell of the resistance of the Chorti king of Copan against the conquistadors Hernando de Chavez and Pedro Amalin, leading to the conclusion that the common people never completely left the region.
The Spanish invaders brought hunger and suffering to the local population, whom the Spaniards called Indians. The Indian populations of the whole Central American region, including the Chorti, were systematically destroyed. The Spaniards used a system called the "Encomienda": soldiers received land titles from the Spanish governors in Central America, and the people living on that land were declared the property of the new owners, without any rights. Murdering Indians was common and legal. On top of this, the Europeans brought previously unknown diseases such as measles, causing epidemics among the local population because their immune systems had never before been forced to develop a defense against these illnesses.
At the present time the Chorti no longer practice the traditional and spiritual customs of their ancestors. The Spaniards killed the Indian leaders and priests, and their knowledge and traditions were buried with them forever. The few traditions and customs maintained by the survivors became mixed with elements of the Catholic Church. There are some correspondences between the Catholic and the pre-Columbian Chorti religions: both use elements such as baptism and confession, pilgrimage and incense. The Chorti celebrate devotions to the god of the earth and the Virgin Mary. They are not accustomed to marriage in the Catholic sense, but they do baptize their children in the Catholic manner. Baptism marks the change from a newborn to a human person. In addition to the holy water, they also use salt and oil in this religious ceremony.
Every town has its patron saint, whose picture or statue is venerated and kept safely in the church or in the house of a guardian. The origin of most of the saints is unknown, which adds to their mystery and power. The saints are closely connected with the local agriculture. "Chaac", the rain, and "Panathuro", the wind, are influenced by the archangel San Miguel, who is responsible for climate and rain. The Virgin Mary protects the corn, the basic food, and supports the gods that produce rain.
The god of dreams is male for men and female for women. He steadily accompanies the god of the dead. It is considered harmful to give in to the need for a rest or a nap during the day, because the god of dreams tries to make humans sleep during the day in order to sell them to his friend, the god of the dead. Sleep during the night is not harmful or dangerous. The god of the dead has both sexes and appears as a skeleton wearing a white shawl. His weapon is a long stick with a knife of bone on the top. Invisible to everyone else, he appears to persons close to death and is considered dangerous. The ghosts of dead people are said to appear to the living, or even attack them. To avoid this, the Chorti offer gifts to the ghosts to maintain their friendship, for example pumpkins with honey (called "tzinkin" by the Chorti). This tradition takes place on the second of November, the day of the dead.
A holy place in the town is the cross on the church, where a Chorti stops and prays. The cross can help sick people become healthy. Therefore, the Day of the Cross on the third of May is another important ceremonial day. The crosses of the region are decorated with flowers, fruits and corn. The cemetery of a town is not only a place of peace for the dead, but also a place of evil ghosts; the god of death hides in the cemetery. Human sacrifices are no longer practised, but the Chorti still take gifts such as corn and pumpkins to the cemetery to motivate the saints of their town to protect them from the evil gods.
During a religious ceremony the food is usually chicken or turkey. The blood of these animals is spread over the altar or thrown in the direction of the four cardinal points. Formerly, other animals were sacrificed as well, such as frogs, snakes and vultures, in the hope of rain and fertility.
Scientists think that the Chorti are one of the oldest groups among the Maya, as suggested by their old and primitive language. In Honduras the language has practically disappeared, although some old people in towns such as El Paraiso, Carrizalon and Ostuman retain a certain knowledge of it. Young people are not very interested in learning this dialect. Some Chorti in Guatemala are still able to speak both Chorti and Spanish, while some old people in Guatemala know only the native Chorti language. Especially in the Guatemalan towns of Jocotan and Camotan the inhabitants use "tcor-ti" in daily life with other local residents; where needed, though, they will speak Spanish. Scientists are trying to save the language by looking for interested Chorti willing to learn the old dialect from their brothers in Guatemala.
Food and Agriculture
Corn (maiz) and beans (frijol) are the basic elements of every meal and seem to be more important than any other food. Corn tortillas and beans symbolize food; among the Chorti of Guatemala the words "maiz" and "frijol" mean the same as food. The only other important plant produced by the Chorti is sugar cane. Livestock is not important, although one can find chickens and turkeys. Cattle and pigs are raised by some Chorti to earn money by selling them to the Latinos.
|
2026-02-02T07:02:14.381891
|
631,616
| 3.679793
|
http://www.gcrg.org/bqr/16-4/dams.html
|
If you ever get a chance to camp at Toroweap Overlook, go stand on the edge of the Esplanade and look down at Lava Falls Rapid and all those lava flows and dams that remain frozen to the canyon walls (Figure 1). If you stare hard and long enough, you'll expect to see the lava flow just west of Vulcan's Throne start moving again, flowing down Toroweap Valley, and into the Colorado River some 2000 feet below. You'll begin to imagine what it would have been like to stand at that same spot hundreds of thousands of years ago and watch the hot lava flow into the Colorado River.
For many years, people have wondered how these lava dams were formed and destroyed and on what time scales these events occurred. Through a series of articles, we'll present to you new ideas on how those lava flows and the Colorado River may have interacted.
During the past two million years, significant volumes of basalt were extruded from vents in the Uinkaret volcanic field (Hamblin, 1994). Many of these flows cascaded over the rim, mainly on the north side of the canyon, and into the canyon, particularly in the vicinity of present-day Lava Falls and Whitmore Rapids. There are more than 150 flows present in this volcanic field, and Hamblin (1994) identified the remnants of at least thirteen different lava dams. Hamblin proposed that most lava dams occurred between 10,000 and 1.8 million years ago, and that western Grand Canyon lava dams took several days to several thousand years to form. He hypothesized that the dams were stable, could have lasted up to forty thousand years, and that deep, long-lived lakes backed all the way up to Moab in one case. The lakes then filled with both water and sediment, and the lava dams were gradually eroded through headward erosion, similar to erosion at the base of Niagara Falls, as water flowed on top of the sediments and down the face of the dam. In addition, Hamblin (1994) identified unusually coarse river gravels with huge foresets—preserved riverbed ripples—in a deposit overlying the remnant of a basalt flow at river mile 188 (river left) indicative of a large-scale flood, but he attributed the deposits to failure of a landslide dam upstream.
Lucchitta et al. (2000) proposed that major accumulation of basalt-rich gravels in western Grand Canyon represents extremely vigorous erosion of a lava dam as a result of overtopping, headward erosion and plunge-pool erosion.
New studies of those basalt-rich river gravels (Figure 2) suggest that the gravels were emplaced by the rapid and catastrophic failure of lava dams (Fenton et al., in press; 2002). Whether any of the lava dams lasted long enough to allow the deposition of lake deposits in their upstream reservoirs is uncertain, as deposits from deep-water lakes linked to lava dams have not yet been verified in Grand Canyon (Kaufmann et al., 2002). The chemical composition and different ages of the deposits lead us to believe that at least five of these failures occurred not long after the dams were formed. Among the geologic evidence of these floods are large basalt boulders up to 115 feet in diameter and perched high above the modern Colorado River. Rocks in the flood deposits are mostly basalt; essentially these deposits are the rock that formed the dams.
We propose that some of the dams were inherently unstable, too unstable to create long-lasting reservoirs that would leave lake deposits behind. We hypothesize that basalt poured over the rim of western Grand Canyon and into the gorge cut by the Colorado River. The lava eventually "froze" in place following the initial hydroexplosive interaction with the Colorado River, creating a dam whose base and abutments rested on loose talus slopes and unconsolidated river sediments. While the dam was forming, interaction of the lava and water caused the explosive fragmentation of basalt glass and zones of hydrothermal fracturing. These structurally weaker zones formed both at the base and higher in the dam as the reservoir filled as quickly as the lava piled up. At sufficient hydraulic gradients, water stored in the reservoir flowed, or piped, through the now porous dam. The piping created larger and larger conduits, eventually allowing water to entrain sediment and dam material, ultimately causing the complete collapse of the lava dam and the rapid draining of the lake behind it. Preliminary data indicate that one of these floods was the largest ever to run through Grand Canyon and it ranks among the largest known in the continental United States.
Until recently, the timing of landscape development in western Grand Canyon has been mainly based on Hamblin's (1994) interpretation of lava dams near the Uinkaret volcanic field and age-dating of those lavas. Most of the dating of the Uinkaret volcanic field was undertaken in the 1960s and 1970s, and even at the time problems were known to exist with the application of the technique to these lavas. In future articles, we will discuss age dating of these lavas—both old and new—and detail our studies on catastrophic dam failures and flood discharges. Stay tuned.
Cassie Fenton & Bob Webb
Fenton, C.R., Poreda, R.J., Nash, B.P., Webb, R.H., and Cerling, T.E., Geochemical discrimination of five Pleistocene lava-dam outburst-flood deposits, western Grand Canyon, Arizona, Journal of Geology, in press.
Fenton, C.R., Webb, R.H., Cerling, T.E., Poreda, R.J., and Nash, B.P., 2002, Cosmogenic 3He ages and geochemical discrimination of lava-dam outburst-flood deposits in western Grand Canyon, Arizona, in House, K., et al., eds., Paleoflood Hydrology, American Geophysical Union, p. 191–215.
Hamblin, W.K., Late Cenozoic lava dams in the western Grand Canyon, Geol. Soc. Amer. Memoir 183, 139 pp., 1994.
Kaufmann, D., O'Brien, G., Mead, J.I., Bright, J., and Umhoefer, P., 2002, Late Quaternary spring-fed deposits in the eastern Grand Canyon and their implications for deep lava-dammed lakes, Quat. Res., 58, p. 329–340.
Lucchitta, I., Curtis, G.H., Davis, M.E., Davis, S.W., and Turrin, B., Cyclic aggradation and downcutting, fluvial response to volcanic activity, and calibration of soil-carbonate stages in the western Grand Canyon, Arizona, Quat. Res., 53, p. 23–33, 2000.
|
2026-01-28T00:57:40.623838
|
325,067
| 3.626881
|
http://albertan1956.blogspot.com/2009_06_01_archive.html
|
Tuesday, June 30, 2009
Water flowed on the Red Planet as recently as 2 million years ago, suggest new images from NASA's Mars Reconnaissance Orbiter.
In the Tuesday study in the journal Earth and Planetary Science Letters, orbiter images reveal melting tundra patterns preserved on Mars, leftovers of an ancient thawing.
“These observations demonstrate that ice melted near the Martian equator within the past few million years and then refroze,” says study researcher Matthew Balme of the Planetary Science Institute, based in Tucson, in a statement.
Cusped cliffs marked by trailing water marks reveal that water once flowed over the study region, 2 to 8 million years ago, according to the study. Balme suggests future landers examine the melt water region for signs of past life on Mars.
source of free photo of Mars:http://www.msss.com/mars_images/moc/2003/04/04/may02/Mars_earlyMay02.jpg
Genome Canada, Genome B.C. and Genome Alberta announced nearly $7.8 million in funding Monday for a research project to map the basic building blocks of trees and the pest that has attacked them.
Dr. Joerg Bohlmann, a professor of biotechnology at the University of British Columbia and one of the project leaders, said genomes could eventually help scientists fight the pine beetle infestation.
"The pine beetle epidemic has affected so far somewhere between 10 and 14 million hectares of pine forests in B.C. and it's going across the Rockies now into Alberta, and we really don't understand how much further it will go," he said in a telephone interview.
"One of the reasons why we have such a poor understanding and such a poor ability to predict the spread of this mountain pine beetle epidemic, is that for the longest time we haven't really fully appreciated that the mountain pine beetle epidemic isn't like an earthquake, which is beyond our control."
Bohlmann said the epidemic is caused by biological agents - in this case trees, bark beetles, and a fungus.
To read the remainder of this article go to:
"Species hunted on the high seas are particularly at risk,with more than half in danger of dying out.The main culprit is overfishing.Sharks are prized for their meat, and in Asia especially for their fins, a prestige food thought to convey health benefits."
For decades, significant numbers of sharks-including blue and mako have perished as a "bycatch" in commercial tuna and swordfish operations.
"More recently, the soaring value of shark meat has prompted some of these fisheries to target sharks as a lucrative sideline",said Sonja Forham, Policy Director for the Shark Alliance and co-author of the study.
The Spanish fleet of so-called surface fishing boats ostensibly targets swordfish, but 70% of its catch, by weight, from 2000 to 2004 was pelagic sharks.
"There are currently no restrictions on the number of sharks these fisheries can harvest,"Fordham stated."Despite mounting threats, sharks remain virtually unprotected on the high-seas."
"Sharks are especially vunerable to overfishing because most species take many years to mature and have relatively few young."
"The demand for shark fins, a traditional Chinese delicacy, has soared along with income levels in China over the last decade.Shark carcasses are often tossed back into the sea by fisherman after the fins are cut off.Despite bans in international waters, this practice-known as "finning"-is largely unregulated, experts say.
Some 100 million sharks are caught in commercial and sports fishing each year, and several species have declined by more than 80% in the past decade alone,according to the International Fund for Animal Welfare.
Source:Edmonton Journal June 25th, 2009
Wednesday, June 24, 2009
PASADENA, Calif. -- "For the first time, scientists working on NASA's Cassini mission have detected sodium salts in ice grains of Saturn's outermost ring. Detecting salty ice indicates that Saturn's moon Enceladus, which primarily replenishes the ring with material from discharging jets, could harbor a reservoir of liquid water -- perhaps an ocean -- beneath its surface."
"Cassini discovered the water-ice jets in 2005 on Enceladus. These jets expel tiny ice grains and vapor, some of which escape the moon's gravity and form Saturn's outermost ring. Cassini's cosmic dust analyzer has examined the composition of those grains and found salt within them."
"We believe that the salty minerals deep inside Enceladus washed out from rock at the bottom of a liquid layer," said Frank Postberg, Cassini scientist for the cosmic dust analyzer at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. Postberg is lead author of a study that appears in the June 25 issue of the journal Nature."
"Scientists on Cassini's cosmic dust detector team conclude that liquid water must be present because it is the only way to dissolve the significant amounts of minerals that would account for the levels of salt detected. The process of sublimation, the mechanism by which vapor is released directly from solid ice in the crust, cannot account for the presence of salt."
"Potential plume sources on Enceladus are an active area of research with evidence continuing to converge on a possible salt water ocean," said Linda Spilker, Cassini deputy project scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "Our next opportunity to gather data on Enceladus will come during two flybys in November."
"The makeup of the outermost ring grains, determined when thousands of high-speed particle hits were registered by Cassini, provides indirect information about the composition of the plume material and what is inside Enceladus. The outermost ring particles are almost pure water ice, but nearly every time the dust analyzer has checked for the composition, it has found at least some sodium within the particles."
To read the remainder of this article go to:
"Enceladus is the sixth-largest moon of Saturn.It was discovered in 1789 by William Herschel. Until the two Voyager spacecraft passed near it in the early 1980s, very little was known about this small moon besides the identification of water ice on its surface. The Voyagers showed that Enceladus is only 500 km in diameter, about a tenth the size of Saturn's largest moon, Titan, and reflects almost 100% of the sunlight that strikes it. Voyager 1 found that Enceladus orbited in the densest part of Saturn's diffuse E ring, indicating a possible association between the two, while Voyager 2 revealed that despite the moon's small size, it had a wide range of terrains ranging from old, heavily cratered surfaces to young, tectonically deformed terrain, with some regions with surface ages as young as 100 million years old."
"The Cassini spacecraft of the mid- to late 2000s acquired additional data on Enceladus, answering a number of the mysteries opened by the Voyager spacecraft and starting a few new ones. Cassini performed several close flybys of Enceladus in 2005, revealing the moon's surface and environment in greater detail. In particular, the probe discovered a water-rich plume venting from the moon's south polar region. This discovery, along with the presence of escaping internal heat and very few (if any) impact craters in the south polar region, shows that Enceladus is geologically active today. Moons in the extensive satellite systems of gas giants often become trapped in orbital resonances that lead to forced libration or orbital eccentricity; proximity to the planet can then lead to tidal heating of the satellite's interior, offering a possible explanation for the activity."
"Enceladus is one of only three outer solar system bodies (along with Jupiter's moon Io and Neptune's moon Triton) where active eruptions have been observed. Analysis of the outgassing suggests that it originates from a body of sub-surface liquid water, which along with the unique chemistry found in the plume, has fueled speculations that Enceladus may be important in the study of astrobiology (which is the study of the origin, evolution, distribution, and future of life in the universe. This interdisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry, life on Mars and other bodies in our Solar System, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in outer space).
The discovery of the plume has added further weight to the argument that material released from Enceladus is the source of the E-ring."
"New data from the Chandra X-ray Observatory and other telescopes has helped NASA pinpoint the "coming of age" of galaxies and black holes. This is a crucial stage of the evolution of galaxies and black holes -- known as "feedback" -- that astronomers have long been trying to understand. The discovery also helps resolve the true nature of gigantic blobs of gas observed around very young galaxies."
"About a decade ago, astronomers discovered immense reservoirs of hydrogen gas -- which they named "blobs" – while conducting surveys of young distant galaxies. The blobs are glowing brightly in optical light, but the source of immense energy required to power this glow and the nature of these objects were unclear."
"A long observation from Chandra has identified the source of this energy for the first time. The X-ray data show that a significant source of power within these colossal structures is from growing supermassive black holes partially obscured by dense layers of dust and gas. The fireworks of star formation in galaxies are also seen to play an important role, thanks to Spitzer Space Telescope and ground- based observations."
"For ten years the secrets of the blobs had been buried from view, but now we've uncovered their power source," said James Geach of Durham University in the United Kingdom, who led the study. "Now we can settle some important arguments about what role they played in the original construction of galaxies and black holes."
"Galaxies are believed to form when gas flows inwards under the pull of gravity and cools by emitting radiation. This process should stop when the gas is heated by radiation and outflows from galaxies and their black holes. Blobs could be a sign of this first stage, or of the second."
"Based on the new data and theoretical arguments, Geach and his colleagues show that heating of gas by growing supermassive black holes and bursts of star formation, rather than cooling of gas, most likely powers the blobs. The implication is that blobs represent a stage when the galaxies and black holes are just starting to switch off their rapid growth because of these heating processes. This is a crucial stage of the evolution of galaxies and black holes - known as "feedback" - and one that astronomers have long been trying to understand."
To read the remainder of this article go to: http://www.nasa.gov/mission_pages/chandra/news/09-047.html
Sources of images:Left panel: X-ray (NASA/CXC/Durham Univ./D.Alexander et al.); Optical (NASA/ESA/STScI/IoA/S.Chapman et al.); Lyman-alpha Optical (NAOJ/ Subaru/Tohoku Univ./T.Hayashino et al.); Infrared (NASA/JPL-Caltech/ Durham Univ./J.Geach et al.); Right, Illustration: NASA/CXC/M.Weiss
Written by Jeff Tollefson :
(Information about Jeff can be read at: http://www.nature.com/news/author/Jeff+Tollefson/index.html)
"Modern refrigerants designed to protect the ozone layer are poised to become a major contributor to global warming because of their future explosive growth in the developing world, scientists report this week."
"Hydrofluorocarbon chemicals (HFCs) were developed to phase out ozone-depleting gases, in response to the Montreal Protocol. But they can be hundreds or thousands of times more powerful than carbon dioxide as greenhouse gases in trapping heat. HFCs are deployed in refrigerators and air-conditioning units, and their use is poised to grow in the coming decades."
"In the new study, a team led by Guus Velders at the Netherlands Environmental Assessment Agency in Bilthoven analysed the latest industry trends and then modelled HFC production to 2050. Their results suggest that HFC emissions could be the equivalent of between 5.5 billion and 8.8 billion tonnes of carbon dioxide annually by 2010 — roughly 19% of the projected CO2 emissions if greenhouse gases continue to rise unchecked (G. J. M. Velders et al. Proc. Natl Acad. Sci. USA doi:10.1073.pnas.0902817106; 2009)."
"The new numbers will fuel the efforts of environmentalists and others who have been pushing for aggressive new HFC regulations. Manufacturers could shift towards using HFCs with the lowest climate impact during the transition to a new generation of refrigerants — still under development — that affect neither the ozone layer nor the climate."
"Now is the moment to make a decision to steer this in a direction that you want," Velders says. "The developing world is already in the transition to HFCs."
"Although it makes no policy recommendations, the study could play into an ongoing political debate on regulating the chemicals. HFCs currently fall under the umbrella of the Kyoto Protocol on climate change, but advocates say the fastest and cheapest way to handle them is under the ozone treaty. Montreal delegates plan to discuss the issue when they meet in Geneva next month."
To read the remainder of this article go to:
The politician apologized late Monday afternoon for controversial comments he posted on his blog and insisted that he is not sexist.
"I can't defend or justify this action in any way at all," Elniski said.
On Tuesday, the rookie MLA was called to a meeting with Premier Ed Stelmach about the situation.
"These are all distractions they're totally inappropriate, they don't reflect my values, they don't reflect the values of our government, they don't reflect the values of the caucus nor of the Alberta Progressive Conservative Party," Stelmach said Tuesday.
The apology given by Elniski is enough punishment and the MLA will not be removed from any of his committee assignments, the premier said.
A guide will be distributed to all Conservative MLAs outlining what is appropriate to write on social networking sites, according to Ron Glenn, the premier's chief of staff.
In his blog, Elniski offered advice to junior high school girls. He suggested that a girl wear a smile when entering a room, and that men don't want to hear about that "treated equal" stuff.
Elniski's blog was taken down on Monday afternoon. He will now have his postings reviewed before he re-starts the blog, Elniski said.
An apology is not good enough for me, and I imagine it is not good enough for the people of our province, especially the women of our province and the world! Elniski's views are clearly sexist and he should be forced to resign his seat in the Alberta Government, and be replaced with a person who is tolerant towards women and all groups. If Mr. Elniski is a chauvinist, one wonders what other intolerant viewpoints he has towards individuals or groups of people in this world!
Elniski's words clearly show he has beliefs and an attitude that the female gender is inferior to, less competent, or less valuable than the male gender. He has made Alberta and Albertans look bad in the eyes of the world, made us look intolerant and backward in our thinking, and he should resign his seat in the legislature!
After some investigation, what is even more appalling to me is that I discovered that at one time Mr. Elniski actually was employed in the field of "Adult" education (see:http://www.assembly.ab.ca/net/index.aspx?p=mla_bio&rnumber=26.), and that currently he serves as deputy chair of the Alberta Heritage Savings Trust Fund Committee. Do the people of this province want someone with his attitudes in charge of Alberta's main financial resource, if he is going to show bias towards women and girls? I say no!
What is really interesting about this situation is that the residents of Alberta apparently have been told we cannot contact Mr. Elniski personally to express our views to him about his behavior. I say this because I was unable to send an email to Mr. Elniski containing my views about his behavior. Has Mr. Stelmach shut down Mr. Elniski's email address so that the residents of this province cannot contact him?
I suggest all of you write to Mr Elniski at email@example.com and express your views regarding his comments and you contact the Premier to express your thoughts about this situation also. You can do so by going to http://www.premier.alberta.ca/contact/contact.cfm
Tuesday, June 23, 2009
"One such gas is nitrogen trifluoride (NF3), which is used to make retail items like microchips and flat-screen TVs. Nitrogen trifluoride is a colorless gas with a moldy odor.
NF3 is a greenhouse gas, with a GWP 17,200 times greater than that of CO2 when compared over a 100-year period. (Global warming potential is a measure of how much a given mass of greenhouse gas is estimated to contribute to global warming. It is a relative scale which compares the gas in question to the same mass of carbon dioxide, whose GWP is by definition 1. A GWP is calculated over a specific time interval, and this interval must be stated whenever a GWP is quoted or else the value is meaningless. To see the mathematical formula which is used to calculate the GWP go to: http://en.wikipedia.org/wiki/Global_warming_potential)
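For reference, the standard definition (the one used by the IPCC and on the Wikipedia page linked above) compares the time-integrated radiative forcing of a pulse emission of gas x over a time horizon TH with that of an equal mass of CO2:

\mathrm{GWP}_x(\mathrm{TH}) = \frac{\int_0^{\mathrm{TH}} a_x\,[x(t)]\,dt}{\int_0^{\mathrm{TH}} a_{\mathrm{CO_2}}\,[\mathrm{CO_2}(t)]\,dt}

where a_x is the radiative efficiency of the gas and [x(t)] describes the decay over time of a unit pulse of the gas injected into the atmosphere.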
In a study published in Geophysical Research Letters,
"researchers analyzed air samples and found that atmospheric NF3 seems to be growing by 11 percent each year across the globe. NF3 lingers in the air for 550 years, on average, and is 17,000 times better at trapping heat than CO2 on a molecule-per-molecule basis. Today the effect of NF3 on climate is just 0.04 percent that of carbon dioxide, but its role could grow dramatically if more manufacturers start using it, says study author Ray Weiss, a geochemist at the Scripps Institution of Oceanography. NF3 emissions are not currently regulated by any government."
"A more immediate problem for climate change is methane, which is released by landfills and melting permafrost and through farming practices. Levels of this gas are increasing today after eight years of stasis, according to another study in Geophysical Research Letters. Methane remains in the atmosphere one-tenth as long as CO2—about a decade—but traps 20 times as much heat."
According to an article entitled, "Exclusive: The methane time bomb", written by Steve Conner at http://www.independent.co.uk/environment/climate-change/exclusive-the-methane-time-bomb-938932.html
"The first evidence that millions of tons of a greenhouse gas 20 times more potent than carbon dioxide is being released into the atmosphere from beneath the Arctic seabed has been discovered by scientists.
The Independent has been passed details of preliminary findings suggesting that massive deposits of sub-sea methane are bubbling to the surface as the Arctic region becomes warmer and its ice retreats.
Underground stores of methane are important because scientists believe their sudden release has in the past been responsible for rapid increases in global temperatures, dramatic changes to the climate, and even the mass extinction of species. Scientists aboard a research ship that has sailed the entire length of Russia's northern coast have discovered intense concentrations of methane – sometimes at up to 100 times background levels – over several areas covering thousands of square miles of the Siberian continental shelf.
Methane is about 20 times more powerful as a greenhouse gas than carbon dioxide and many scientists fear that its release could accelerate global warming in a giant positive feedback where more atmospheric methane causes higher temperatures, leading to further permafrost melting and the release of yet more methane.
The amount of methane stored beneath the Arctic is calculated to be greater than the total amount of carbon locked up in global coal reserves so there is intense interest in the stability of these deposits as the region warms at a faster rate than other places on earth."
"The conventional thought has been that the permafrost 'lid' on the sub-sea sediments on the Siberian shelf should cap and hold the massive reservoirs of shallow methane deposits in place. The growing evidence for release of methane in this inaccessible region may suggest that the permafrost lid is starting to get perforated and thus leak methane... The permafrost now has small holes. We have found elevated levels of methane above the water surface and even more in the water just below. It is obvious that the source is the seabed."
Igor Semiletov of the Far-Eastern branch of the Russian Academy of Sciences believes there are several possible reasons why methane is now being released from the Arctic, including the rising volume of relatively warmer water being discharged from Siberia's rivers due to the melting of the permafrost on the land.
The Arctic region as a whole has seen a 4C rise in average temperatures over recent decades and a dramatic decline in the area of the Arctic Ocean covered by summer sea ice. Many scientists fear that the loss of sea ice could accelerate the warming trend because open ocean soaks up more heat from the sun than the reflective surface of an ice-covered sea." (source: http://www.independent.co.uk/environment/climate-change/exclusive-the-methane-time-bomb-938932.html)
No one yet knows the extent to which methane and NF3 will impact global temperatures, but NASA climate scientist Ralph Kahn says one thing is certain: “We know it’s more than just CO2 that matters.” His colleague James Crawford adds, “There’s going to be a lot more looking at this, trying to understand what is going on.”
source of article:"Dioxide May be the Least of Our Warming Worries, written by Melinda Wenner is:
Monday, June 22, 2009
You have to be kidding, right? How can scientists compare sharks, which have innate and learned behaviour that allows them to survive (as in their ability to hunt prey species within the waters in which they swim), to human serial killers, who usually have no connection with the victim and very rarely, if ever, have a rational motive for killing another human or humans? Serial murderers also have different motives for their murders. One of the most obvious is that they turn to murdering for a sense of power. For this sense of power they usually attack society's weakest members and those weaker than themselves (Forwood). This includes the homeless, the impaired, and usually the young of both sexes. When they kill the homeless and impaired they are usually acting on the Missionary Motive: they feel that it is their responsibility to rid society of its unwanted inhabitants. There are also Visionary killers, who are usually instructed to kill by the voices in their head (schizophrenia). The last of these motives is the Hedonist: they kill because it brings them pleasure to do so. Forwood, Bill. "Repeatedly Killing... Why?!" n.pag. Online. Internet. 22 April 1999. http://www.geocities.com/CapeCanaveral/1682/Physio.htm
How can these scientists have the gall to say that these sharks murder their prey in cold blood, when in reality they are doing what they must do to survive as individual life-forms on this planet?
In contrast to humans, sharks were placed on this earth within the food chain as predators. As such they possess some amazing senses, which allow them to be the great predators that they are. The success of sharks is due largely to these physiological advancements -- they are superbly built to find food.
University of Florida shark attack researcher George Burgess, who had no role in the study done by these scientists who call sharks "serial killers of the seas", said the researchers simply used a new tool to show what scientists pretty much knew already: "Sharks are like many other predators that have developed patterns to their attacking that are obviously beneficial as a species."
According to an article written by Katherine Harmon at the website:
"a lone male wolverine arrived in northern Colorado earlier this month, making him the first confirmed wolverine in the state since 1919.
In December, conservation biologists had outfitted the young wolverine, which is part of a reintroduction program farther north, with a tracking collar and watched him make the 500-mile (805-kilometer) journey from the Grand Teton National Park in Wyoming, crossing rugged landscapes and even busy Interstate-80, reports the Denver Post."
"Proponents of reintroduction efforts, however, maintain that these large members of the weasel family are mostly scavengers and don't pose much of a threat to livestock—despite their Latin name, Gulo gulo, which means "glutton."
" A popular target of early fur trappers in the American West, wolverines had pretty much vanished from the lower 48 states 80 years ago. Today, according to Inman's estimates, there are about 250 roaming the country. In more recent times, there have been several unconfirmed sightings in Colorado, according to a 2004 article from The Rocky Mountain News."
It is nice to see that the wolverine has returned to Colorado. I have always had a fascination with this amazing animal, especially since I watched an hour-long show about Wolverines on the Marty Stouffer show "Wild America". In that television show Marty was able to show viewers amazing video of Wolverines never seen before (including a conflict between a mother wolverine and her cub and a badger. The badger was able to survive this conflict because it was able to back into a tree stump, and the two wolverines could not do anything against it because the Badger's skin is so loose that the wolverines could not grab onto it. Apparently Wolverines and Badgers are mortal enemies.) Sadly, I see that Marty Stouffer's methods were subject to severe criticism and censure, and in fact he was forced to pay a large amount of money to the Aspen Center for Environmental Studies, back in 1996, because he illegally gouged a trail through protected land belonging to the Center. *sigh* see: http://outside.away.com/magazine/0696/9606diwi.html I guess we all can come to our own conclusions about Marty Stouffer based upon this news article.
Returning to the news article about the reintroduction of Wolverines into Colorado made me want to see what the present population of Wolverines is in Canada. According to my research online:
"Within its range, the wolverine occupies many different kinds of habitats. Wolverines generally prefer remote areas, far away from humans and their developments. However, the specific characteristics of the wilderness that the wolverine depends upon are not yet known. Labrador and Quebec, for example, have not been recolonized by wolverines, despite the abundance of caribou and undisturbed habitat. This lack of knowledge about wolverine habitat makes it difficult for wildlife managers to manage the species and protect its habitat."
" One specific type of habitat wolverines need is the den used by the female to give birth and raise her kits. Finding such a den is difficult. Most dens that have been found are in tundra regions and consist of a complex of snow tunnels associated with boulders or rocks. The configuration of the rocks results in natural cavities under the snow, which form dens for the wolverines."
" The home range of an adult wolverine extends from less than 100 km2 for females to over 1 000 km2 for males. These home ranges are the largest reported for a carnivore of this size, and in many areas they rival the home ranges of bears, wolves, and cougars. The size of the home range varies depending on the availability of food and how it is distributed across the landscape — the more food there is, the smaller the home range needs to be."
"The density of wolverines ranges from one individual per 40 km2 to one per 800 km2. Those regions that have the most different kinds of habitat and prey, particularly those that include large ungulates, or animals with hooves, contain the most wolverines. The mountainous and forested areas of British Columbia and Yukon have the highest densities, although these numbers are still low compared with the densities of other carnivores. Densities of wolverines in Manitoba and Ontario are lower. The rarity of wolverines becomes readily apparent when their density is compared with the density of other solitary carnivores: one coyote per 0.5 to 10 km2 and one grizzly bear per 1.5 to 260 km2."
"The wolverine is found throughout all northern regions of the globe. Wolverines are not abundant anywhere, even where they do well. The species is known for a large home range and low density, which is a measure of its numbers. The Committee on the Status of Endangered Wildlife in Canada considers wolverines found west of Hudson Bay to be of “special concern” and the eastern population, found in Quebec and Labrador, to be “endangered.”
"Historically, before the appearance of Europeans in North America, wolverines occurred throughout Canada and Alaska, with some small extensions of this range into the western United States and into the Great Lakes area. They occupied a wide variety of habitat types, excepting very dry, hot areas."
"A portion of the wolverine’s historical range has been lost. Wolverines have also disappeared from areas with relatively intact habitats. Eastern Canada and the western United States have been particularly hard hit. Wolverines disappeared most rapidly at the edges of their distribution and in Eastern Canada. We do not know if any wolverines still occur in Eastern Canada, although Labrador and Quebec are still considered part of the current distribution. Similarly, whether wolverines still occur on Vancouver Island is unknown."
"There are two main reasons why wolverine populations disappeared from parts of North America. The first is that wolverines are scavengers—which means they feed on carrion, or dead animals—and are attracted to bait. Because the wolverines damaged traplines, early trappers used any means to kill them, including poison. The extensive wolf poisoning programs that occurred throughout Canada beginning in the late 1700s also killed many wolverines."
"The second, and more important, reason for the decline of wolverine populations is that wolverines have a low resiliency because of their low densities and low reproduction, or the number of young that are successfully produced and raised. This means that wolverine populations have a difficult time rebounding once their numbers have been lowered by either nature or human-influenced factors."
According to the website:http://www.currentresults.com/Wildlife/Endangered-Species/Endangered-Mammals/wolverine-709211.php
"On mainland BC, wolverines are estimated to number 3,520. They comprise part of the western Canada population that spans boreal and arctic regions. Western Canadian wolverines in 2003 probably totalled 15,000 to 19,000 animals. Their populations, however, have lately fallen in Alberta, Ontario and southern BC."
"The same factors that have removed wolverines from much of their range - overharvesting and human encroachment into their habitat - continue to plague them. Wolverines suffer from unsustainable hunting and trapping in 21% of BC's population units. A 2005 study in western Montana found that licensed trapping largely contributed to wolverine population declines of 30% a year in four mountain ranges."
"Research in BC's Columbia Mountains concludes that outdoor recreation and logging activities cause wolverines to avoid habitat. In this popular winter recreation region, wolverines stay away from areas used by heli-skiers or backcountry skiers. In summer, females also shun roads and recent logging."
According to the website: http://www.naturecanada.ca/endangered_know_our_species_wolverine.asp
" Wolverines have a lifespan of 17 years. Their size varies between the male and the female: Adult males weigh approximately 14 kg; females, 9 kg. The Adult male is
approximately 1 m long, the female is shorter in length.
"Wolverines are known for traveling long distances. Their range extends from less than 100-sq. km for females to more than 1000-square-kilometres for males. These are the largest reported home ranges for a carnivore."
"When food is scarce, a high percentage of the population will not have young. Females have a delayed implantation mechanism that allows them to have young when food is most abundant and to adjust the size of the litter to the availability of food."
"Wolverines are non-migratory and do not hibernate during the winter. They’re active day and night and alternate three to four hour periods between activity and sleep."
Based upon my reading online it is wonderful that several provinces in Canada have decided to engage in studies of the wolverine (for example in Ontario see this website:
and in British Columbia:
I hope that the scientists concerned can find out more about wolverines so they can be conserved in the wild. Hopefully, these scientists will remember their ethics and values in conducting their research, and be most interested in the Wolverine and its survival on our planet! Thanks to NPS Photo for allowing me to use their photo of the Wolverine for this blog entry. If you need a photo for your blog or for other reasons, go and visit http://www.weforanimals.com/
source of photo:http://www.weforanimals.com/free-pictures/wild-animals/wolverines/1/wolverine-2.htm
Currently, at the Scientific America website is an interesting article about jellyfish. The article written by Katherine Harmon is called "Jellyfish Jamboree--Are They Set to Seize the Seas?" and includes a slide show of jellyfish. Katherine Harmon begins this article by stating:
"Bloomin' jellyfish! Overfishing, climate change and ocean dead zones may be downers for humans and other critters, but they turn out to be a boon for jellyfish schools, reports the recent "Jellyfish Joyride" paper in Trends in Ecology and Evolution.
A surge in jellyfish populations may eventually lead to what study authors call "a less desirable gelatinous state," which could have "lasting ecological, economic and social consequences."
To read the remainder of this article go to:
According to an article at: http://www.news24.com/News24/Technology/News/0,,2-13-1443_2441803,00.html
"Huge swarms of stinging jellyfish and similar slimy animals are ruining beaches in Hawaii, the Gulf of Mexico, the Mediterranean, Australia and elsewhere, US researchers reported on Friday.
The report says 150 million people are exposed to jellyfish globally every year, with 500 000 people stung in the Chesapeake Bay, off the US Atlantic Coast, alone.
Another 200 000 are stung every year in Florida, and 10 000 are stung in Australia by the deadly Portuguese man-of-war, according to the report, a broad review of jellyfish research.
The report, available on the internet at www.nsf.gov, says the Black Sea's fishing and tourism industries have lost $350m because of a proliferation of comb jelly fish.
The jellyfish eat the eggs of fish and compete with them for food, wiping out the livelihoods of fishermen, according to the report.
And it says a third of the total weight of all life in California's Monterey Bay is made up of jellyfish.
Human activities that could be making things nice for jellyfish include pollution, climate change, introductions of non-native species, overfishing and building artificial structures such as oil and gas rigs.
Creatures called salps cover up to 100,000 sq km of the North Atlantic in a regular phenomenon called the New York Bight, but researchers quoted in the report said this one may be a natural cycle.
"There is clear, clean evidence that certain types of human-caused environmental stresses are triggering jellyfish swarms in some locations," William Hamner of the University of California Los Angeles says in the report.
These include pollution-induced "dead zones", higher water temperatures and the spread of alien jellyfish species by shipping."
According to J. E. Purcell, W. W. Graham, and H. J. Dumont, editors, in the book, "Jellyfish Blooms: Ecological and Societal Importance",
"Jellyfish are to the oceans what pigeons are to cities. Both animals seem to be able to flourish in environments that have been radically altered by human activities.
In many places around the world, jellyfish populations are dramatically increasing. Although the increase may be part of a natural cycle in some areas, the overall upward trend far exceeds anything that would be naturally expected. The suspected cause of the increase is human disruption of coastal ecosystems and other human-induced environmental stress, such as nutrification of the water from sewage or fertilizer runoff, overfishing of competitor fish species, depletion of sea turtle populations, and rising water temperatures from global warming.
The proliferation of jellyfish has caused record numbers of stings—some resulting in fatalities—of beachgoers and aquatic sports enthusiasts in Hawaii, New Zealand, and Australia, among other places. Likewise, the booming jellyfish populations have wreaked severe economic damage in the Gulf of Mexico and some other places where fishing nets are now filled with slimy gelatin instead of succulent shrimp.
So again we have human actions causing a dramatic change in the natural balance of life in the World. Let us hope, for the sake of those who like to visit and swim in the oceans of the world, that steps are taken to educate people about their role in the proliferation of jellyfish in the World's aquatic regions, so that behavioral changes will allow these jellyfish populations to be returned to what they used to be, in balance with the other life forms in the oceans and other water areas where marine life exists on our planet.
To see a United States National Science Foundation video about jellyfish go to: http://www.nsf.gov/news/special_reports/jellyfish/index.jsp
Source of image:http://www.pdphoto.com/
Friday, June 19, 2009
"Pollution in Southeast Asia's Mekong River has pushed
freshwater dolphins in Cambodia and Laos to the brink
of extinction." The World Wide Fund For Nature
(WWF) said only 64 to 76 Irawaddy dolphins remain in
the Mekong after toxic levels of pesticides,mercury and
other pollutants were found in more than 50 calves who
have died since 2003.
The Mekong River (see map below; the river is shown in blue), approximately 4,180 km in length, originates in Tibet and runs through the Yunnan province of China, Myanmar, Thailand, Laos, Cambodia and South Vietnam.
"These pollutants are widely distributed in the environment and so the source of this pollution may involve several countries through which the Mekong River flows," said WWF veterinary surgeon Verne Dove in a press statement.
"The organization said it was investigating how environmental contaminents got into the Mekong, which flows through Cambodia,Laos,Myanmar,Thailand, Vietnam and the southern Chinese province of Yunnan."
"The WWF said it suspects that high levels of mercury found in some dead dolphins came from gold mining activities."
"It added that Irrawaddy dolphins in Cambodia and Laos urgently needed a health program to counter the effects of pollution on their immune system."
"The Mekong River Irawaddy dolphin, which inhabits a 190-kilometer stretch of Cambodia and Laos, has been listed as critically endangered since 2004," said the WWF....there numbers were cut by illegal fishing nets and Cambodia's drawn out civil conflict, in which dolphin blubber was used to lubricate machine parts and fuel lamps."
"The Cambodian Government, however has been promoting dolphin-watching to attract ecotourism and has cracked down on the use of illegal nets which entangle them."
"The river is the World's largest inland fishery, producing some 2.5 million tonnes of fish per year valued at more than $2 billion."
source of map:http://www.absoluteastronomy.com/topics/Prehistoric_Malaysia
"A leading Canadian expert on circumpolar politics is praising the ruling federal Conservative government in Canada for strengthening the country's control over its Arctic waters through environmental legislation that came into force last week."
This legislation extends Canadian authority over Arctic shipping by an additional 100 nautical miles (185 kilometers), beyond the current 100-mile control zone in the waters off Canada's northern coastline, stated Canadian Government Transport Minister John Baird.
"With these amendments, Canada increases its ability to protect its Arctic waters from pollution by expanding the geographic area covered under the Arctic Waters Pollution Prevention Act, which is aimed at preventing ship-source pollution," said Baird. "These measures will help to ensure environmentally responsible shipping in our Arctic waters."
My reaction to this is that it is great that the Canadian Government is taking more responsibility for ensuring our Arctic waters are protected from pollution. I wish, however, that the Government had indicated how it will monitor this area, so that other nations of the world feel obligated to ensure that the ships owned by their people are seaworthy and in good condition, minimizing any chance of polluting these waters. Does the Conservative Government intend to purchase ships and long-range aircraft to monitor the shipping in the Arctic, both to ensure that these waters of ours are not polluted and to enforce Canada's claim to these waters as our own?
GM Michael Adams will be participating in the upcoming Canadian Open Chess Tournament in my home city of Edmonton, Alberta, a tournament which runs from July 11th-19th, 2009. The official tournament website for the Championship is: http://monroi.com/2009-canadian-open-chess-championship-schedule.html
GM Boris Savchenko finished in second place with a score of 5 1/2 points and GM Georg Meier ended the tournament with 5 points which resulted in a third place finish for the German chess player.
Congratulations GM Dominguez Perez!
To me this indicates that Environment Canada must find ways to inform our population when an air-quality advisory has been issued. This includes using the media and the internet to give Canadians (especially those who may suffer severe, and even life-threatening, symptoms due to exposure to poor-quality air) more warning. Companies could be asked to tell their employees about the air quality in their city during the day, and some sort of electronic message system could be used to inform the population that an air-quality advisory is in effect.
I have noticed that certain organizations online will send email to people who subscribe to a list, indicating to the subscriber that an air-quality advisory has been issued in a certain area. For instance, this website does that: http://ysaqmd.enviroflash.org/. It has been established by the United States Environmental Protection Agency.
In Canada, the Government of Ontario has established the website http://www.airqualityontario.com/reports/summary.cfm, where residents of Ontario can check the daily air quality in their area. It is a matter of educating the population about these sources of information so that people can consult them and take the necessary precautions if an air-quality advisory has been issued in their place of residence.
"A University of Alberta study shows the amount of mercury flowing into the Arctic Ocean from the Mackenzie River estuary may be more than originally thought."
"Mercury is important, especially in Canada's Arctic, because it's a neurotoxin in foods,"said Jennifer Graydon,who published her findings in the journal of /Science of the Total Environment/ earlier this year.
"Large amounts of mercury in the body can damage the nervous system." "Previous published studies have shown Western Arctic belugas (whales), have higher concentrations of methyl mercury than those found in the Eastern Artic, "Graydon said. "So we thought let's look at the river,not as a direct source to the whales, but as least as a source of mercury to that region."
"Graydon and three colleagues tested mercury concentrations in samples taken in 2004 from the Mackenzie River upstream from the Mackenzie River Delta, and in six floodplain lakes."
"They discovered the total amount of mercury from the river during that three-month period was EQUAL TO TO AN ENTIRE YEAR'S WORTH OF MERCURY CALCULATED IN PREVIOUS STUDIES."!!!!
"Since then, samples taken over another three years have contained mostly the same concentrations."
"A University of Manitoba researcher and colleague discovered that most of the mercury flowing into the Mackenzie River Basin is picked up by water as it flows down from the Mackenzie Mountains."
"The mineral soils of the mountains contain naturally high levels of mercury,said Jesse Carrie, a graduate student."The other major mercury contributor was a coal bed, he added.
"Carrie suspects the mountains have been adding mercury to the system since the last Ice Age.
But he notes that levels of mercury in animals in the area have increased "Dramatically" since the 1980's. For example mercury concentrations between 1986 and 2008 have doubled in a fish species called burbot near Fort Good Hope. Carrie and his colleagues are now trying to figure out why."
"It is possible climate change has lengthened the growing season for bacteria and plankton which create the form of mercury that is accumulated by animals," he said.
Source: The Edmonton Journal, Wednesday, June 17, 2009. Writer: Hanneke Brooymans, Journal environmental writer
Let us hope these researchers can determine why the mercury levels have increased so that whatever steps are possible can be taken to reduce them. Of course, if these levels are rising due to natural processes, it is going to be difficult for scientists to come up with a plan to reduce levels of mercury in the Mackenzie River and the Mackenzie Delta.
Source of map of the Mackenzie Delta (click on it to enlarge it): http://www.canadiangeographic.ca/magazine/so07/indepth/images/map2.jpg
If you so desire you can click on the map provided above to enlarge it.
The Heritage Valley Town Center is expected to include two high schools, a recreation center, an LRT station and a main street to serve 100,000 people who will eventually live southwest of Anthony Henday Drive and Gateway Boulevard.
To reduce the need for private vehicles, the center will feature an LRT (light-rail transit) station. "That's not to suggest everyone will work there and live there... but the option is there," said senior city planner Tim Brockelsby. "A lot of our suburban development is auto-orientated... There's a shift to making communities more self-sustaining. Part of it relates to having more localized activity, the whole walkable notion."
Source: Edmonton Journal newspaper, June 17, 2009. Writer: Gordon Kent, civic affairs writer
This sounds like a wonderful idea to me. I would much rather live in a part of the city where all the places I need to visit are in close proximity to my home, rather than having to rely on transportation that gets me where I need to go but often with me as the only person in the vehicle. If many or most of the services and businesses I had to visit and use during the week were close by, it would take less fuel and cause less damage to the environment.
(a) Youth award: Eco-Air (Edmonton, Alberta) - Eco-Air is devoted to youth who volunteer their time towards the promotion of vehicle anti-idling initiatives. For more on Eco-Air go to:
http://greenedmonton.ca/eco-air or visit their Facebook page at: http://www.facebook.com/group.php?gid=9847017733
(b) Not-for-profit association award: Alberta Birds of Prey (Coaldale, Alberta) - wildlife rescue center. For more information about the Alberta Birds of Prey Foundation go to: http://www.burrowingowl.com/
(c) Community group award: River Valley Alliance (Edmonton, Alberta) - protection of the North Saskatchewan River Valley. To gain an understanding of this group visit their website at:
(d) Individual commitment award: David Manz, P.Eng., P.Ag. (Calgary, Alberta) - inventor of the biosand water filter. For more information about David Manz and his biosand water filter visit
http://www.abheritage.ca/abinvents/inventors/davidmanz_biography.htm, http://www.manzwaterinfo.ca/faq.htm, and http://en.wikipedia.org/wiki/BioSand_Filter
(e) Individual commitment award: Jill and Basil Seaton (Jasper, Alberta) - lifetime of environmental volunteering. To learn more about the work of the Seatons visit this website:
(f) Education award (school or classroom): Calgary Zoo (Calgary, Alberta) - helping schools maintain natural areas. For more information regarding what the Calgary Zoo did to help schools maintain natural areas go to: http://www.emeraldfoundation.ca/emerald_awards/past_recipients/2009/calgary_zoo
To visit the Calgary Zoo website click on: http://www.calgaryzoo.org/
(g) Education award (non-formal): City of Calgary, Alberta - Mayor's Environmental Expo. To find out information about the Calgary Mayor's Environmental Expo go to this website: link
(h) Business award: Lafarge (Fish Creek, Alberta) - reclamation of a gravel pit into meadows. For information concerning what the Lafarge cement plant did go to: link
(i) Business award: Logical Creations Ltd. (Airdrie, Alberta) - innovative recycling practices in furniture manufacturing. For information about the recycling practices of Logical Creations Ltd. go to: http://www.ec.gc.ca/pp/EN/storyoutput.cfm?storyid=142
(j) Government institutions award: Ecoteam (Calgary, Alberta) - improving ecological literacy at the Calgary Board of Education. For more information about Ecoteam visit: http://www.imaginecalgary.ca/newsletters/imagineCALGARY_july_2007.pdf
Congratulations to all of the winners and thank you for taking such an interest in the environment!!
Wednesday, June 17, 2009
"The (Canadian) Federal Government is enhancing protections for endangered North Atlantic right whales by adding two important feeding grounds to the Species at Risk Act.
The Roseway Basin off Nova Scotia and the Grand Manan Basin in the Bay of Fundy have been listed as critical habitats in the Act.
The measure means Ottawa is obligated to legally protect the areas from activities that might harm them.
The feeding grounds are important for the massive mammals as they migrate from breeding grounds in the southern United States to Canadian waters, where they feed in the summer.
There are only about 400 of the animals left in the world, with ship strikes and entanglements in fishing gear posing the biggest threats.
David Millar of the federal Fisheries Department says Ottawa has 180 days to put the legal protections in place."
To see where these locations are go to this link:http://www.rightwhale.ca/images/distribution_canada_hd.jpg
Source of free image of the Northern right whale:http://animals.nationalgeographic.com/animals/enlarge/right-whale_image.html
Round 5 (June 12, 2009) The player on the left had the white pieces:
Meier, Georg - Dominguez Perez, Leinier result: draw ( ½-½) number of moves: 30 Opening: English Opening: Symmetrical variation ECO:A34
Savchenko, Boris- Khenkin, Igor result: draw (½-½) number of moves: 20 Opening:Sicilian Defense: Maroczy Bind ECO:B39
Timofeev, Artyom- Bruzon Batista, Lazaro result: draw ( ½-½) number of moves: 61
Opening: Sicilian Defense:Najdorf variation ECO:B92
These results meant the following standings existed after 5 rounds of play:
1st:Dominguez Perez 3 1/2 points, 2nd: Meier 3 points, 3rd/4th: Khenkin and Savchenko 2 1/2 points, 5th: Bruzon Batista 2 points, 6th: Timofeev 1 1/2 points.
Round 6 of the tournament was played on June 13th and only the result of the following game was posted at the official tournament website:
Bruzon Batista, Lazaro - Dominguez Perez, Leinier result: ½-½ number of moves: 14 Opening: Sicilian Defense: Najdorf variation ECO code:B92
Round 7 was played on June 14th with these results occurring:
Savchenko, Boris - Bruzon Batista, Lazaro result: draw (½-½) number of moves: 33 Opening: Queen's Gambit Declined: Tartakower defense ECO:D59
Khenkin, Igor - Dominguez Perez, Leinier result: draw (½-½) number of moves: 17 Opening: Gruenfeld Defence ECO:D85
Timofeev, Artyom - Meier, Georg result: draw (½-½) number of moves: 36
Opening: French Defense: Rubinstein variation ECO:C10
Round 8 of the tournament was played on Monday June 15th with these results occurring:
Dominguez Perez, Leinier - Timofeev, Artyom result:draw (½-½) number of moves: 43
Opening: Sicilian Defense:Sveshnikov variation ECO:B33
Meier, Georg - Savchenko, Boris result: draw ( ½-½) 10 moves, Opening: Ruy Lopez Exchange variation ECO:C68
Bruzon Batista, Lazaro - Khenkin, Igor result: draw (½-½) 12 moves
Opening: Ruy Lopez Berlin Defense ECO:C67
Round 9 of the tournament was held on Monday June 16th and these results occurred:
Meier, Georg - Bruzon Batista, Lazaro result: draw (½-½) number of moves: 42
Opening: Catalan ECO:E05
Savchenko, Boris - Dominguez Perez, Leinier result: draw (½-½) number of moves: 48
Opening: English Opening: Botvinnik variation ECO:A36
Timofeev, Artyom - Khenkin, Igor result: 1-0 number of moves: 38 Opening: Caro Kann
Here are the standings after 9 rounds of play:
(2 more rounds are to be played in the tournament):
1st. Dominguez Perez 5 1/2 points, 2nd.Meier 4 1/2 points, 3rd-4th.Savchenko and Bruzon Batista, 4 points, 5th-6th: Khenkin, and Timofeev 3 1/2 points
Friday, June 12, 2009
GM Leinier Dominguez Perez, Cuba, elo 2717 (seen on the left), GM Artyom Timofeev, Russia, elo 2677, GM Boris Savchenko, Russia, elo 2655, GM Georg Meier, Germany, elo 2641, GM Igor Khenkin, Germany, elo 2630, and GM Lazaro Bruzon Batista, Cuba, elo 2617.
Here are the standings after four rounds of play:
1st:Dominguez Perez 3 points, 2nd: Meier 2 1/2 points, 3rd/4th: Khenkin and Savchenko 2 points, 5th: Bruzon Batista 1 1/2 points, 6th: Timofeev 1 point.
Here are the last round (round 8) results:
Gashimov- Efimenko result: ½-½ (draw) number of moves: 44 Opening:Ruy Lopez Berlin Defense ECO code:C67
Rublevsky- Naiditsch result: 1-0 number of moves: 55 Opening:Scotch Game ECO:C45
Bologan- Motylev result: ½-½ (draw) number of moves: 59 Opening: Petroff's Defence ECO code: C42
Onischuk- Inarkiev result: ½-½ (draw), number of moves: 41 Opening: King's Indian Defense: Classical variation ECO:E92
Shirov- Sutovsky result:½-½ (draw), number of moves: 28 Opening: French Defense: Tarrasch variation ECO:C07
Here are the final standings:
1st: Motylev 7 points, 2nd: Gashimov 6 points, 3rd: Sutovsky 5 points (on tiebreak), 4th: Inarkiev 5 points, 5th: Rublevsky 4 1/2 points (on tiebreak), 6th: Bologan 4 1/2 points, 7th: Onischuk 4 1/2 points, 8th: Naiditsch 3 1/2 points, 9th: Efimenko 3 points, 10th: Shirov 2 points
Thursday, June 11, 2009
Motylev--- Gashimov result:1-0 number of moves: 75 Opening: Petroff's Defence ECO code:C42
Sutovsky---- Onischuk result: ½-½ (draw) number of moves: 21 Opening: Ruy Lopez:
Chigorin defense ECO code:C92
Inarkiev--- Rublevsky result: 0-1 number of moves: 118! Opening: Sicilian Defense:Paulsen
variation, ECO code:B46
Naiditsch---- Bologan result: 0-1 number of moves: 109! Opening:Petroff's Defense
ECO code: C43
Efimenko, Zahar--- Shirov, Alexei result: ½-½ (draw) number of moves: 23 Opening: Ruy Lopez: Anderssen variation ECO code:C77.
Based upon these results here are the standings after 8 rounds of play:
10th Karpov Poikovsky RUS 2009
1st Motylev 6.5 points
2nd Gashimov 5.5 points
3rd-4th Sutovsky and Inarkiev 4.5 points
5th-6th Onischuk and Bologan 4 points
7th-8th Rublevsky and Naiditsch 3.5 points
9th Efimenko 2.5 points
10th Shirov 1.5 points
Average elo: 2694 <=> Category: 18
Here are the pairings for the last round of play; the player having the white pieces is on the left:
Wednesday, June 10, 2009
Here are the results from all of the games today:
Rublevsky, -Sutovsky, result: ½-½ (draw) number of moves:56
Opening: Scotch Game ECO code:C45,
Onischuk, -Shirov, result:1-0 number of moves: 37
Opening:Semi-Slav Defense:ECO code:D46
Motylev, -Efimenko, result:½-½ (draw) number of moves:15
Opening: Ruy Lopez: Berlin Defense:ECO code:C67
Gashimov, -Naiditsch, result:1-0 number of moves: 40
Opening:Ruy Lopez: Berlin Defense:ECO code:C67
Bologan, - Inarkiev, result: ½-½ (draw) number of moves: 30
Opening:Petroff's Defence ECO code:C43
Here are the standings after 7 rounds of play:
10th Karpov Poikovsky RUS 2009
1-2 Gashimov (2730) and Motylev (2677): 5.5 points
3 Inarkiev (2676): 4.5 points
4 Sutovsky (2660): 4.0 points
5-6 Naiditsch (2700) and Onischuk (2684): 3.5 points
7 Bologan (2690): 3.0 points
8 Rublevsky (2702): 2.5 points
9 Efimenko (2682): 2.0 points
10 Shirov (2745): 1.0 point
Average elo: 2694 <=> Category: 18
Here are the pairings for round 8:The player with the white pieces is on the left:
There is one more round of play in the tournament after this one.
The study has found that not only are reefs dying faster and on a wider scale than previously thought, but they are quickly crumbling after they die, in a process scientists call “reef flattening.”
The scale of the collapse is massive.
“Probably the most stark finding of our result is that this isn't just a flattening in one patch, one area the size of Vancouver, or even an area the size of British Columbia… the whole Caribbean has been flattened in the past decade, mainly as a result of climate change,” said Nicholas Dulvy of SFU's department of biological sciences. “There are no detectable complex reefs [left].”
The team of international researchers looked at nearly 40 years of data compiled in 500 surveys of 200 reefs, for the first time piecing together the big picture of what has been happening throughout the Caribbean, which is famous for its thousands of beautiful reefs, including one second in size only to Australia's Great Barrier Reef.
Dr. Dulvy said wherever they looked they saw signs of rapid and devastating decline. Reefs are dying and then collapsing on themselves, filling in the nooks and crannies that provide shelter for a myriad of species.
“We've lost 80 per cent of the living coral cover in the Caribbean over the last four decades. So that's a rate of loss that's far greater than the loss of deforestation of the Amazon rain forest. In fact, we're losing coral twice as fast as we're chopping down the Amazon rain forest,” Dr. Dulvy said."
While coral can recover from bleaching events, many weakened Caribbean reefs are now succumbing to a fatal coral disease known as white plague. Last fall (2005) marine biologists in Puerto Rico reported that 42 coral species on some reefs had bleached. source: National Geographic
Local threats to coral reefs, such as sewage pollution, overfishing, and deforestation, must be addressed primarily by countries containing coral reefs, supplemented when appropriate by international financial and technical assistance. Regional problems, such as the transport of water and air pollution across national boundaries, must be addressed through regional legal instruments, again with international assistance to implement regional policies. Finally, the threats posed to coral reefs by global warming, ozone depletion, and international trade in coral reef organisms and natural products can only be reduced through international accords.
Indeed, truly comprehensive environmental protection must include long term environmental monitoring, integrated coastal management, effective marine sanctuaries, appropriate technologies for pollution prevention, and environmentally sensitive ways to utilize reef resources. These can only be implemented with substantial international economic and technical assistance. Thus, measures to ensure international assistance and cooperation in ameliorating the many threats to coral reefs must be included in all pertinent international agreements. The Framework Convention on Climate Change, the Convention on Biological Diversity, and Agenda 21, which were negotiated during the United Nations Conference on Environment and Development and signed in Rio de Janeiro, address this to some degree. However, firm commitments to alleviate environmental threats and implement environmentally sensitive development with new enforceable policies and funding are lacking.

Because of their high rates of calcification, coral reefs play a major role in the global calcium cycle despite their limited areal extent, fixing about half of all the calcium entering the sea into calcium carbonate. The health of coral reefs is affected by the quality of surface waters, groundwater, and air for miles around. Pollutants migrate in coastal currents, air flow patterns, rivers, and underground aquifers. Activities throughout the airshed and watershed, including those which destroy or degrade mangrove forests and seagrass meadows, further threaten the integrity of reefs.
The report said better management of forests, more careful agricultural practices and the restoration of peatlands could soak up significant amounts of carbon dioxide, the most common of the gases blamed for global warming.
"We need to move toward a comprehensive policy framework for addressing ecosystems," said co-author Barney Dickson, releasing the report at the U.N. climate negotiations in Bonn, Germany. The event was webcast worldwide.
Millions of dollars are being invested in research on capturing and burying carbon emissions from power stations, but investing in ecosystems could achieve cheaper results, the report said.
It also would have the added effects of preserving biodiversity, improving water supplies and boosting livelihoods.
Halving deforestation by mid-century and maintaining that lower rate for another 50 years would save the equivalent of five years of carbon emissions at the current level, said Dickson, the agency's head of climate change and biodiversity.
The loss of peatlands, mainly drained for palm oil and pulp wood plantations in Southeast Asia, contributes 8% of global carbon emissions. China could capture about 5% of its carbon emissions from burning fossil fuels by returning straw to croplands, it said.
Agriculture has the largest potential for storing carbon if farmers use better techniques, such as avoiding turning over the soil and using natural compost and manure rather than chemical fertilizers, it said."
BONN, Germany — Global warming is uprooting people from their homes and, left unchecked, could lead to the greatest human migration in history, said a report released Wednesday.
Estimates vary on how many people are on the move because of climate change, but the report cites predictions from the International Organization for Migration that 200 million people will be displaced by environmental pressures by 2050. Some estimates go as high as 700 million, said the report, released at U.N negotiations for a new climate treaty.
Researchers questioned more than 2,000 migrants in 23 countries about why they moved, said Koko Warner of the U.N. University, which conducted the study with CARE International.
The results were "a clear signal" that environmental stress already is causing population shifts, she said, and it could be "a mega-trend of the future."
The potential for masses of humanity fleeing disaster zones or gradually being driven out by increasingly harsh conditions is likely to be part of a global warming agreement under negotiation among 192 countries.
A draft text calls on nations to prepare plans to adapt to climate change by accounting for possible migrations.
At U.S. insistence, however, the term "climate refugees" will be stricken from the draft text because refugees have rights under international law, and climate migrants do not fill the description of "persecuted" people, said Warner.
The report, "In Search of Shelter: Mapping the Effects of Climate Change on Human Migration and Displacement," studies people in some of the world's great river deltas who could be subject to glacial melt, desert dwellers who are vulnerable to increasing drought, and islanders whose entire nations could be submerged by rising sea levels.
It did not try to assess conflicts caused by climate change. The war in Sudan's desert Darfur region has partly been blamed on contested water supplies and grazing lands, and concern over future water wars has mounted in other areas of the world.
The report said 40 island states could disappear, in whole or in part, if seas rise by six feet. The Maldives, a chain of 1,200 atolls in the Indian Ocean, has a plan to abandon some islands and build defenses on others, and has raised the possibility of moving the entire population of 300,000 to another country.
Melting glaciers in the Himalayas threaten repeated flooding in the Ganges, Mekong, Yangtze and Yellow river basins, which support 1.4 billion people, or nearly one-fourth of humanity, in India, southeast Asia and China. After the floods will come drought when seasonal glacier runoff no longer feeds the rivers, it said.
In Mexico and Central America drought and hurricanes have led to migrations since the 1980s and they will get worse, it said.
Homes are not always abandoned forever, the researchers said. "Disasters contribute to short-term migration," especially in countries that failed to take precautions or lack adequate responses, said Charles Ehrhart of CARE.
Most migration will be internal, from the country to the city, it said.
Written by Byron Acohida, USA TODAY
"Scareware has become the scourge of the Internet.
Those deceptive promotions crafted to panic you into spending $30 to $80 for worthless antivirus protection can hit you just about anywhere you turn on the Web. They arrive as booby-trapped Web links in e-mail and social network messages. They lurk hidden, and set to activate, when you click to popular, legitimate websites.
And now scareware purveyors are embedding triggers in places you wouldn't expect: on advertisements displayed at mainstream media websites; amid search results from Google, Yahoo Search and Windows Live search; alongside comments posted on YouTube videos; and, most recently, in "tweets" circulating on Twitter.
"Scareware is becoming a dominating force," says Joe Stewart, director of SecureWorks Counter Threat Unit. "There are hundreds of criminals using every tactic they can think of to push these programs."
Click on a trigger and you'll get caught in an unnerving loop impossible to abort. A scanner window will appear with red-letter warnings listing viruses purportedly infesting your hard drive. A series of dialogue boxes will follow giving you choices that all lead to the same screen: a sales pitch.
Make the purchase, and you get a bogus inoculation. Try to cancel it, and you'll get repeated offers. "It's like stepping into quicksand," says Paul Royal, senior researcher at security firm Purewire. "The more you try to get out of it, the deeper you sink."
Scareware has been a prominent part of the Internet since 2004, when a cybergang based in St. Petersburg, Russia, launched the iframecash.biz website and began offering commissions to anyone who helped them spread the SpySheriff fake antivirus program. Hackers began to taint legitimate websites so that pop-up ads for SpySheriff would launch on the PC of anyone who visited a corrupted Web page.
That simple arrangement has evolved into a steadily growing industry that marked a banner year in 2008. By late last year, more than 9,200 different types of scareware programs were circulating on the Internet, up from 2,800 at midyear, according to The Anti-Phishing Working Group. Microsoft recently reported that scareware infections rose 48% in the second half of 2008 vs. the first half. Microsoft analyzed data collected by use of its Malicious Software Removal Tool and found one specific fake security program on 4.4 million PCs.
"These guys are very innovative," says Roel Schouwenberg, senior virus researcher at Kaspersky Lab. "They're constantly looking for newer and easier ways to make money."
Cutting-edge scareware marketing campaigns are being delivered via:
•YouTube and Twitter. The bad guys sign up for a handful of new YouTube or Twitter accounts. In the case of YouTube, crooks recently used about a dozen new accounts to begin posting comments on 30,000 videos, says Luis Corrons, technical director of PandaLabs. The comments enticed users to click on a link that triggered a scareware promotion.
In a variation of this ploy, crooks in late May created new Twitter accounts and began broadcasting tweets declaring "Best video" with a Web link of http://juste.ru, says Schouwenberg. Clicking on the link launched a sequence that replicated the message to everyone on the victim's friends list, then launched a scareware promo.
•Search results. The bad guys create malicious Web pages and fill them with words and phrases that are likely to be popular search queries, such as "American Idol winner" or "NCAA tournament bracket," says Yuval Ben-Itzhak, CTO of security firm Finjan. Next they insert tiny copies of their bad links on popular, legit websites that don't do a thorough job of preventing such hacks.
"Search engine optimization" then takes over. SEO is the technology that determines the relevance of Web links to search queries. By embedding a malicious link on a popular website, the hackers imbue their Web page with high relevance. So when the legit site turns up as the No. 1 or No. 2 result for a popular search query, their bad link turns up as the No. 4 or No. 6 result. Anyone who clicks on the bad link gets a scareware pitch.
•Online ads. The bad guys purchase blocks of ad space on popular websites through a legit ad agency, says Roger Thompson, senior researcher at AVG. Next they instruct the ad agency to begin posting innocuous ads. To avoid detection, they only sporadically feed a corrupted ad into the mix. The bad ad looks safe, but carries instructions to route anyone who clicks to a scareware pitch. "It's the most common attack we see every day," Thompson says."
To read the rest of this article go to:
Tuesday, June 9, 2009
Experts believe that wetland vegetation could have helped to resist the coastal erosion caused by Hurricane Katrina in 2005 and other recent disasters, such as the Indian Ocean tsunami of 2004 and Cyclone Nargis, which hit Burma last year.
Rusty Feagin of Texas A&M University in College Station and his team devised some experiments that he expected would demonstrate one way in which wetland plants could do this — by resisting the erosion caused by waves beating at the land's edge. He was surprised to find no such effect. It turned out that soil type is much more important, and that the presence or absence of vegetation doesn't make much difference."
Help find Canada's missing children. Please visit these websites:
Thoughts worth thinking about
Laws alone can not secure freedom of expression; in order that every woman and man present their views without penalty, there must be a spirit of tolerance in the entire population. - Albert Einstein
Too often we underestimate the power of a touch, a smile, a kind word, a listening ear, an honest compliment, or the smallest act of caring, all of which have the potential to turn a life around. - Leo Buscaglia
A person's true wealth is the good he or she does in the world. - Mohammed
Our task must be to free ourselves... by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty. -Albert Einstein
The best way to find yourself is to lose yourself in the service of others. - Gandhi
The unselfish effort to bring cheer to others will be the beginning of a happier life for ourselves. - Helen Keller
Aim for success, not perfection. Never give up your right to be wrong, because then you will lose the ability to learn new things and move forward with your life. Remember that fear always lurks behind perfectionism. Confronting your fears and allowing yourself the right to be human can, paradoxically, make yourself a happier and more productive person. - Dr. David M. Burns
Life is as dear to a mute creature as it is to man. Just as one wants happiness and fears pain, just as one wants to live and not die, so do other creatures. -His Holiness The Dalai Lama
Mankind's true moral test, its fundamental test (which lies deeply buried from view), consists of its attitude towards those who are at its mercy: animals. And in this respect mankind has suffered a fundamental debacle, a debacle so fundamental that all others stem from it. -
Milan Kundera, The Unbearable Lightness of Being
The worst sin towards our fellow creatures is not to hate them, but to be indifferent to them. That's the essence of inhumanity. -George Bernard Shaw
Ego's trick is to make us lose sight of our interdependence. That kind of ego-thought gives us a perfect justification to look out only for ourselves. But that is far from the truth. In reality we all depend on each other and we have to help each other. The husband has to help his wife, the wife has to help the husband, the mother has to help her children, and the children are supposed to help the parents too, whether they want to or not.-Gehlek Rinpoche Source: "The Best Buddhist Writing 2005 pg. 165
The hostile attitude of conquering nature ignores the basic interdependence of all things and events---that the world beyond the skin is actually an extension of our own bodies---and will end in destroying the very environment from which we emerge and upon which our whole life depends.
http://www.newscientist.com/article/mg16021633.600-innocent-bystanders.html
LOW-LEVEL radiation causes more widespread changes in human cells than previously thought, new research suggests. Scientists in Boston have found that cells that suffer a direct hit from radiation alter the level of gene activity in neighbouring cells. The phenomenon, known as the bystander effect, might either be an adaptive mechanism to minimise cell damage, or a harmful effect of radiation.
Jack Little and his colleagues at the Harvard School of Public Health in Boston say that irradiated cells and their non-irradiated neighbours show similar increases and decreases in the activity of five cell-regulating genes. This, claims Little, is the first firm proof of the bystander effect.
The researchers exposed cultures of human tissue to plutonium, and ensured that only a small proportion of the cell nuclei were hit by alpha radiation. They observed increased expression of the tumour suppressor genes...
http://www.lhup.edu/~dsimanek/scenario/labman3/polarize.htm
L-9A POLARIZATION (Elementary)
To investigate some phenomena of polarized light.
Classical electromagnetic theory provides a model of light which is adequate to describe many phenomena of polarization. This model pictures light as an electromagnetic wave in which the electric and magnetic fields have oscillatory variation. At any point and time the electric and magnetic field vectors are mutually perpendicular, and at any point the magnitudes of the electric and magnetic vectors maintain the same proportion as time goes on. Because of this strict proportionality between the electric and magnetic vectors, only one vector is needed to describe the phenomena. When discussing light we generally concentrate on the electric vector only, especially since it is the electric field that acts on light detectors, including the human eye. So we use the electric vector to "represent" the light wave.
Though much of this discussion is essentially "classical" physics, one non-classical, quantum aspect of light must be kept in mind: Light is not emitted in a continuous "stream" but in random "bursts," called photons.
In light sources such as incandescent lamps (which produce "black body" radiation) the photons are emitted with random, uncorrelated polarization. Each individual photon may be thought of as polarized, but over a large sample of photons very many randomly oriented directions of polarization are represented. Such a light source is said to be unpolarized.
Laser light sources emit light which has a number of special properties, including spatial and temporal coherence. Of particular interest here is the fact that lasers can emit photons which all have the same direction of polarization. Such a laser is said to be linearly polarized. Linear (or plane) polarization means that the electric vector of the light has only one fixed direction in space; that direction is called the direction of polarization of the light.
Polarized light may also be produced from ordinary light: (1) by reflection, and (2) by transmission through certain optically non-isotropic materials.
Light incident on the surface of a transparent material is partly reflected and partly refracted. The reflected and refracted beams are partially polarized at most angles of incidence, but at one special angle, called Brewster's angle (1), the reflected beam is completely polarized with its electric vector parallel to the reflecting surface.
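As a quick numerical check (not part of the lab procedure), the short Python sketch below evaluates Brewster's angle from the relation tan θB = n'/n given in footnote 1. The indices n1 = 1.0 and n2 = 1.5 are assumed values for air and ordinary glass, and the function name is my own.

    import math

    def brewster_angle_deg(n1=1.0, n2=1.5):
        # Brewster's angle from tan(theta_B) = n'/n (footnote 1), returned in degrees
        return math.degrees(math.atan(n2 / n1))

    print(f"air-to-glass (n = 1.5): {brewster_angle_deg():.1f} degrees")  # about 56.3

The result is consistent with the 56° quoted for glass in the footnote.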
Some natural crystals, such as tourmaline and sulfate of iodo-quinine, and some man-made plastic materials, have the property of "converting" ordinary light into fully or partially polarized light. They do this by a process of absorption and emission of photons, the emitted photons being forced into certain polarization states by the geometric arrangement of atoms or molecules of the material. In the process, more than 50% of the light energy is absorbed, and the rest is emitted. Polaroid ™, a brand of plastic polarizing material, has this property, and is an inexpensive device for producing polarized light from an unpolarized source. It is also useful for detecting polarization and for determining the direction of polarization.
Circularly polarized light may be thought of as having its electric vector rotating with constant amplitude at a constant angular velocity around its propagation axis. If this model is viewed head on the electric vector appears to be rotating at constant angular velocity, its head tracing out a circle. (2)
Elliptically polarized light has the same character, except that the amplitude also varies, and the head of the electric vector would trace out an ellipse when seen head on.
Some materials have two indices of refraction, and are therefore called birefringent or doubly refracting. Calcite, quartz and mica are examples. The light velocity in such materials depends on the direction of propagation and on the polarization direction of the light, both measured with respect to the crystal structure. A light beam entering such a material is effectively resolved into two components, polarized at right angles to each other. These two components will, in general, travel at different speeds, and also may take different paths through the material. One of these components will obey Snell's law of refraction (3), and it is called the ordinary ray, O. The other one does not obey Snell's law, and is called the extraordinary ray, E. The extraordinary ray does, indeed, seem to do some extraordinary things, for example, it can be refracted even when entering a crystal face at normal incidence, as illustrated in Fig. 4.
At some angles of incidence (4), the ordinary and extraordinary ray take the same path through a doubly refracting material, but their speeds may still differ. After passing through a thickness of the material they are no longer in phase. Suppose we let linearly polarized light enter such a crystal in this way, with its electric vector oriented so that the ordinary and extraordinary rays in the material have about equal intensity. The progressive shift in their phase is equivalent to a progressive change from linear to elliptical polarization (and even back to linear, if the material is thick enough). After passage through a certain thickness of the material, the light emerges either linearly or elliptically polarized.
A vector model of this effect is shown in Fig. 5. On entering the crystal, the polarized incident ray is effectively resolved into components parallel and perpendicular to the crystal axis, x and y. These two components have different speeds, and have a phase difference upon emergence from the crystal. For a given thickness of crystal there are some wavelengths for which the phase difference is exactly one half wavelength. This causes a 180° shift in one component (relative to the other). Now the light is circularly polarized. Passage through the second polarizer will resolve the vectors into components along the polarizer's axis, and these will destructively interfere with each other. Figure 5 shows this case.
If elliptically polarized light passes through a linear polarizer, the polarizer "extracts" a component of the electric vector, and linearly polarized light emerges from it.
In the above discussion we implicitly assumed monochromatic light. Now consider what happens if linearly polarized white light passes through such a crystal. The white light has a continuous distribution of wavelengths. After passing through a given thickness of the material, some wavelengths emerge with their O and E components exactly out of phase. This corresponds to elliptically polarized light. Some other wavelengths may come out with O and E components exactly in phase. This corresponds to linearly polarized light. This light may be allowed to pass through a second linear polarizer. If the second polarizer has its polarization axis oriented at right angles to those wavelengths which emerged from the first polarizer linearly polarized, those wavelengths will be blocked, but the others will get through. The light which emerges will now no longer be white, because certain colors (wavelengths) have been removed. This is how colors are produced when birefringent materials are viewed between two linear polarizers. (See Fig. 5 for the theory and Fig. 6 for the experimental arrangement.)
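The colour production described above can be estimated numerically. The sketch below is only an illustration under assumed values, not part of the experiment: for a birefringent plate oriented at 45° between crossed polarizers, the transmitted fraction at each wavelength is T = sin²(π Δn d / λ). The birefringence Δn = 0.009 (roughly that of quartz) and the 50 µm thickness are assumptions chosen simply to show how the transmission, and hence the colour, varies across the visible spectrum.

    import math

    def crossed_transmission(delta_n, thickness_nm, wavelength_nm):
        # Plate at 45 degrees between crossed polarizers: T = sin^2(pi * delta_n * d / lambda)
        return math.sin(math.pi * delta_n * thickness_nm / wavelength_nm) ** 2

    # Assumed plate: delta_n = 0.009 (roughly quartz), 50-micrometre (50,000 nm) thickness
    for wavelength in range(400, 701, 50):
        print(f"{wavelength} nm -> T = {crossed_transmission(0.009, 50_000, wavelength):.2f}")

Because T differs from one wavelength to the next, white light emerges with some colours suppressed, which is why the specimens appear coloured; rotating one polarizer by 90° exchanges sin² for cos² and gives the complementary colours.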
A piece of polarizing material may be used as a polarizer, or as an analyzer. A Polarizer is used to produce a polarized beam from an unpolarized one. An analyzer is used to detect whether light is polarized, and if so, determine the character of the polarization (linear or circular) and its direction (if linear) or sense of rotation (if circular).
5. QUALITATIVE OBSERVATIONS OF POLARIZATION
Even if you have seen some of these demonstrations in class, you are advised not to skip them, but make sure you can reproduce the phenomena. As you do this you may also make some new discoveries.
(1) Use one polarizing sheet as a polarizer and the other as an analyzer. Observe the change in transmitted light intensity as either of the sheets is rotated. This combination acts as a "light valve." [See section 8, (3) for quantitative study of this.]
(2) Light reflected from smooth surfaces is polarized. Look through a single polarizing sheet at the light reflected from a shiny floor, smooth table top, or sheet of glass. Use a table lamp as a source of light and try different angles of incidence. From electromagnetic theory we know that the light reflected from a smooth surface has a predominant polarization orientation which lies parallel to the surface. Draw a picture of this, and use this fact to determine the polarization axis of the polarizing sheets you are using.
Notice that the polarization is greatest (most complete) at a particular angle of incidence, called "Brewster's angle." This angle depends on the material of the reflecting surface. Estimate it for the reflecting surfaces available. You can use a spectrometer to look at light reflected from a glass surface to accurately determine Brewster's angle. [See section 8, (1) for quantitative study of this.]
(3) Sky light is polarized, but to observe this you must be careful.
Do not look at the sun directly, and especially do not look at the sun through an optical instrument such as a telescope or binoculars.
Do not look at the sun through crossed polarizing sheets either. Crossed polarizers (5) do not effectively absorb the ultraviolet light or infrared light, both of which can permanently damage your vision.
Face a patch of blue sky located about 90° from the sun. Look at this part of the sky through a single polarizer, and rotate the polarizer to check the direction of the sky's polarization. Which parts of the sky are polarized most? Where are they, in relation to the sun? What is the direction of polarization relative to the location of the sun? [Consider an imaginary plane which includes your line of sight and the sun. This is the plane in which the light travels before and after it is scattered. Is the scattered light's direction of polarization in this plane, or perpendicular to it?]
6. OBSERVATIONS OF DOUBLY REFRACTING MATERIALS
All of these phenomena may be projected onto a white screen by placing a specimen of material between polarizers on an optical bench, with "projection lantern" illumination (see experiment L-3). This arrangement is called a "projection polariscope." A commercial polariscope may also be used for larger specimens, or one may be constructed from a light box with frosted glass. For very small specimens a polarizing microscope may be made by putting a polarizer in the light from the substage illuminator, and another just above the ocular. The latter may be easily rotated.
(1) Observe the double refraction of calcite. Place the calcite crystal over a small dot on a piece of paper and observe the double image of the dot seen through the crystal. If you look straight down at the paper, one dot image seems to be in its "normal" place, the other seems displaced sideways. [Rotate the crystal to confirm this.] These two images result from the "ordinary" and "extraordinary" rays (components) of the light passing through the crystal.
The calcite can be mounted over a small hole in an opaque card, and placed in the projection polariscope. You will see two images of the hole on the screen. Check the polarization of each image. Rotate the crystal, and identify which is the extraordinary ray.
(2) Some crystal specimens are available "sandwiched" between glass plates for protection. When these are placed between polarizers they will appear brilliantly colored. Try this with the two polarizers' axes parallel to each other, and also with them perpendicular to each other. Notice that when one polaroid is rotated 90° the colors change to the complementary colors. What happens when the axes of the polaroids are at 45° to each other? Where did the colors go? Specific materials to try: Mica sheets. Quartz (yellow light will be rotated 90°).
(3) Put a clear plastic material between crossed polarizers and observe the colors. [A plastic comb or ruler works well.] These colors come from "strains" "frozen into" the plastic during the molding and curing process. [Not all plastic materials show this.]
(4) If glass is annealed too quickly it can have similar "frozen" strains. Some specimens of "Prince Rupert's drops" and "Bologna bottles" are available which exhibit this. These are manufactured by rapidly cooling (quenching) the hot glass. [The hot glass is plunged into a cool liquid.] If done "properly" this gives added strength to glass by forming a "skin" layer which is under tensile stress. The Bologna bottle can be used to drive small nails, but if merely scratched inside, it shatters. Prince Rupert's drops are made by dropping hot glass in cold liquid. The large part of the drop resists hammer blows, but if the slender tail is broken off, the entire drop shatters. Do not attempt the shattering experiments, your instructor will demonstrate them.
Some "safety" glass for car windshields and glass doors is made this way. When it breaks it shatters into many small pieces rather than large shards with sharp knife-like edges. [Do not confuse this with another type of "safety" glass which is a sandwich of a plastic layer between two sheets of glass.]
(5) Look at the rear windows of automobiles through a polarizer. Also look at plate glass doors if there are any nearby. These are probably the laminated type of safety glass, the glass and/or plastic sheets being strained by the pressure of the laminating process.
(6) Look at fingernail clippings between polarizers. Do this with a polarizing microscope. [Put two polarizers in the light path of the microscope, with the specimen between them.]
(7) Obtain three polarizing sheets. Arrange two with axes at right angles for extinction of the light. Now predict what will be observed if a third polarizer is slipped between them, with its axis at 45° to theirs. Try it, and see if your prediction was correct. Try other angles. Try to develop a working model of what is happening.
Note: Many books "explain" polarization by a "picket fence" analogy. Fig. 7 is from such a book. Waves set up in a rope passing between the slats of a picket fence can vibrate only in a plane parallel to the slats. If the rope passes through two picket sections, with slats at right angles, now waves in the rope can pass through both. As with most analogies, this one is misleading and inadequate. If you imagined a third fence with slats at 45° you would have predicted the wrong result. This simple experiment has shown how this much used (actually abused) analogy is imperfect, and really doesn't "explain" polarization at all. Prospective teachers, take note!
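To see what the vector model predicts for part (7), here is a small Python sketch (my own illustration, not part of the manual) that applies the cos² projection rule of the Law of Malus, part (9) below, once for each polarizer the light passes after the first. The starting intensity of 0.5 assumes an ideal polarizer passes half of unpolarized light, and the function name is mine.

    import math

    def malus(intensity, theta_deg):
        # Intensity passed by a polarizer whose axis is theta degrees from the light's polarization
        return intensity * math.cos(math.radians(theta_deg)) ** 2

    i_after_first = 0.5  # an ideal polarizer passes half of unpolarized light
    print("crossed pair only:        ", malus(i_after_first, 90))             # 0.0
    print("45-degree sheet inserted: ", malus(malus(i_after_first, 45), 45))  # 0.125

The crossed pair alone transmits nothing, but inserting the 45° sheet between them lets one eighth of the light through, which is exactly the result the picket-fence analogy fails to predict.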
(8) Look at sheets of cellophane between polarizers. At certain angles you'll see colors. [Cellophane tape may be used, stuck onto a plain glass plate.]
(9) Law of Malus. (6) The intensity of light passing through two polarizers depends on the angle θ between the polarizers' axes: I = I0 cos²θ, where I0 is the intensity transmitted when the axes are parallel (θ = 0).
Apparatus is available which allows measurement of the angular position of one polarizer. A photocell or other light detector may be used to measure the light intensity emerging from the analyzer, to experimentally verify this law. This is best done on an optical bench in a fully darkened room. Take care to avoid errors due to scattered light from nearby walls.
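Before taking data, it may help to tabulate what the law predicts. The short Python sketch below is only a convenience for comparing photocell readings against I = I0 cos²θ; the intensities are relative to the reading at θ = 0.

    import math

    # Expected relative photocell reading I/I0 = cos^2(theta) at a few analyzer angles
    for theta_deg in range(0, 91, 15):
        ratio = math.cos(math.radians(theta_deg)) ** 2
        print(f"{theta_deg:3d} deg -> I/I0 = {ratio:.3f}")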
(10) Optical rotation. Some materials have the effect of rotating the plane of polarization of light, the amount of rotation being proportional to the path length through the material. Sugar solutions (or dextrose), and turpentine, are common examples. The amount of rotation will also depend on the concentration of the solution. The sense of rotation, as seen along the beam, also depends on the material. It may be to the right (clockwise) or to the left (counterclockwise).
Sugar solutions are interesting because some produce clockwise rotation, while others, with the same chemical formula, produce counterclockwise rotation. The difference is in the fact that the molecules of the two kinds are mirror images of each other. It is interesting also to note that some bacteria are so specialized that they will only eat one kind of the sugar, and will die of starvation in a solution of the other kind.
You may observe a small rotation in a deep beaker or tube of solution.
The specific rotation (ratio of rotation angle to length of path) provides a means for measuring solution concentrations. Those students with an interest in chemistry may wish to check this with a solution of known concentration.
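Students checking a solution of known concentration may find a numerical sketch useful. The one below simply inverts the relation α = [α]·l·c (observed rotation = specific rotation × path length in decimetres × concentration in g/mL). The value 66.5°·mL/(g·dm) is the specific rotation commonly quoted for sucrose at the sodium D line, and the 13.3° reading through a 2 dm tube is a made-up example; the function name is mine.

    def concentration_from_rotation(rotation_deg, path_dm, specific_rotation=66.5):
        # Invert alpha = [alpha] * l * c for the concentration c in g/mL
        return rotation_deg / (specific_rotation * path_dm)

    # Hypothetical reading: 13.3 degrees of rotation through a 2 dm tube of sugar solution
    print(f"c = {concentration_from_rotation(13.3, 2.0):.2f} g/mL")  # about 0.10 g/mL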
(1) Explain the observations of section 6, part 7, in particular, show how the "picket fence" analogy is wrong and misleading, and how the vector model does give correct results. Use diagrams.
(2) Some books suggest that polarization of light represents strong evidence for the wave theory of light (as opposed to a strictly particle theory). To challenge this glib assertion, devise a particle theory (no waves allowed!) which would be adequate to explain the experimental phenomena studied in this experiment. Be creative, but be sure that your model meets the criteria of good physics:
a) The model must correctly describe known physical facts.
b) The model must be quantitative.
c) The model must be testable and potentially refutable by experimental test.
d) All features of the model must have a clear and precise relation to observables (it must not include any features unrelated to observables).
Do not be prejudiced or limited by any models of light you have already learned.
Even if you are not successful, carry this far enough to show why it would be very difficult to devise such a model.
Text and drawings, © 1995, 2004 by Donald E. Simanek
1. Brewster, Sir David (1781-1868). It is given by tan θB = n'/n. Brewster's angle for glass is 56°.
2. Of course we are speaking of "seeing" the vector model of light. The electric vectors are a mathematical construct, not something "real" one can see with the eye.
3. Snell, Willibrord (1591-1626).
4. The optical axis of the crystal is that unique direction in which the ordinary and extraordinary ray paths are coincident.
5. Polarizing sheets are often called "polarizers" whether they are being used as polarizers or as analyzers.
6. Malus, Etienne Louis (1775-1812).
http://www.washington.edu/news/2013/07/14/some-volcanoes-scream-at-ever-higher-pitches-until-they-blow-their-tops/
It is not unusual for swarms of small earthquakes to precede a volcanic eruption. They can reach a point of such rapid succession that they create a signal called harmonic tremor that resembles sound made by various types of musical instruments, though at frequencies much lower than humans can hear.
A new analysis of an eruption sequence at Alaska’s Redoubt Volcano in March 2009 shows that the harmonic tremor glided to substantially higher frequencies and then stopped abruptly just before six of the eruptions, five of them coming in succession.
“The frequency of this tremor is unusually high for a volcano, and it’s not easily explained by many of the accepted theories,” said Alicia Hotovec-Ellis, a University of Washington doctoral student in Earth and space sciences.
Documenting the activity gives clues to a volcano’s pressurization right before an explosion. That could help refine models and allow scientists to better understand what happens during eruptive cycles in volcanoes like Redoubt, she said.
The source of the earthquakes and harmonic tremor isn’t known precisely. Some volcanoes emit sound when magma – a mixture of molten rock, suspended solids and gas bubbles – resonates as it pushes up through thin cracks in the Earth’s crust.
But Hotovec-Ellis believes in this case the earthquakes and harmonic tremor happen as magma is forced through a narrow conduit under great pressure into the heart of the mountain. The thick magma sticks to the rock surface inside the conduit until the pressure is enough to move it higher, where it sticks until the pressure moves it again.
Each of these sudden movements results in a small earthquake, ranging in magnitude from about 0.5 to 1.5, she said. As the pressure builds, the quakes get smaller and happen in such rapid succession that they blend into a continuous harmonic tremor.
“Because there’s less time between each earthquake, there’s not enough time to build up enough pressure for a bigger one,” Hotovec-Ellis said. “After the frequency glides up to a ridiculously high frequency, it pauses and then it explodes.”
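A rough way to picture the "glide" is that the tremor's fundamental frequency tracks the earthquake repeat rate. The Python sketch below is my own illustration, not the researchers' model: it assumes the repeat rate climbs linearly from 1 to 30 events per second over ten minutes, roughly the range described for Redoubt, and prints the corresponding tremor pitch at a few moments.

    # Assumed linear glide of the earthquake repeat rate; the tremor fundamental
    # is taken to equal the repeat rate (events per second -> hertz).
    DURATION_S = 600.0  # ten minutes, an assumed time span
    for step in range(7):
        t = step * DURATION_S / 6
        rate_hz = 1.0 + (30.0 - 1.0) * t / DURATION_S
        print(f"t = {t:5.0f} s -> ~{rate_hz:4.1f} small quakes per second -> tremor near {rate_hz:4.1f} Hz")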
She is the lead author of a forthcoming paper in the Journal of Volcanology and Geothermal Research that describes the research. Co-authors are John Vidale of the UW and Stephanie Prejean and Joan Gomberg of the U.S. Geological Survey.
Hotovec-Ellis is a co-author of a second paper, published online July 14 in Nature Geoscience, that introduces a new “frictional faulting” model as a tool to evaluate the tremor mechanism observed at Redoubt in 2009. The lead author of that paper is Ksenia Dmitrieva of Stanford University, and other co-authors are Prejean and Eric Dunham of Stanford.
The pause in the harmonic tremor frequency increase just before the volcanic explosion is the main focus of the Nature Geoscience paper. “We think the pause is when even the earthquakes can’t keep up anymore and the two sides of the fault slide smoothly against each other,” Hotovec-Ellis said.
She documented the rising tremor frequency, starting at about 1 hertz (or cycle per second) and gliding upward to about 30 hertz. In humans, the audible frequency range starts at about 20 hertz, but a person lying on the ground directly above the magma conduit might be able to hear the harmonic tremor when it reaches its highest point (it is not an activity she would advise, since the tremor is closely followed by an explosion).
Scientists at the USGS Alaska Volcano Observatory have dubbed the highest-frequency harmonic tremor at Redoubt Volcano “the screams” because they reach such high pitch compared with a 1-to-5 hertz starting point. Hotovec-Ellis created two recordings of the seismic activity. A 10-second recording covers about 10 minutes of seismic sound and harmonic tremor, sped up 60 times. A one-minute recording condenses about an hour of activity that includes more than 1,600 small earthquakes that preceded the first explosion with harmonic tremor.
Upward-gliding tremor immediately before a volcanic explosion also has been documented at the Arenal Volcano in Costa Rica and Soufrière Hills volcano on the Caribbean island of Montserrat.
“Redoubt is unique in that it is much clearer that that is what’s going on,” Hotovec-Ellis said. “I think the next step is understanding why the stresses are so high.”
The work was funded in part by the USGS and the National Science Foundation.
For more information, contact Hotovec-Ellis at firstname.lastname@example.org.
|
2026-01-27T03:31:32.971824
|
1,156,111
| 4.030536
|
http://brown.edu/Research/Breuer-Lab/research/batflight.html
|
Kenny Breuer, School of Engineering
Sharon Swartz, Ecology and Evolutionary Biology
Although the natural world has countless examples of creatures with remarkable flight capabilities, bats have evolved truly extraordinary aerodynamic abilities that enable them to fly in dense swarms, to avoid obstacles, and to fly with such agility that they can catch prey on the wing, maneuver through thick rainforests, and make high-speed 180-degree turns. Bats possess specialized features that may contribute to their flight performance, including highly articulated and flexible skeletons; flexible, compliant membrane wings; thousands of tiny hair sensors distributed over the wing surface; and a series of muscles embedded in the wing membrane whose function appears to be active control of camber during flight. Our multidisciplinary research team consists primarily of researchers from Biology and Engineering and includes significant collaborations with researchers in Computer Science and Applied Mathematics, all working to characterize these unique flight capabilities, to understand the roles that the bats' bones, skin morphology, and wing motion play in enabling this behavior, to model these mechanisms, and ultimately to emulate them in engineered systems.
Unlike insects and birds, both of which have relatively rigid wings that can move with only a few degrees of freedom, the bat's wing comprises a thin, highly compliant skin membrane supported on a very flexible, jointed skeleton with numerous degrees of freedom. The aerodynamics of flexible, articulated wings is extremely complex and poorly understood, and our team is studying their characteristics using high-speed measurements of the bat's wing and body motion. These kinematic measurements are synchronized with Particle Image Velocimetry (PIV) measurements of the fluid velocity in the wake behind the animal; together, the kinematic and fluid measurements will shed light on the lift and thrust mechanisms that bats use during straight flight as well as maneuvers. In support of these biological flight experiments, we are performing wind tunnel tests on physical models that mimic features observed in nature, material tests on bat bones and wing membranes, numerical simulations, theoretical modelling, and advanced scientific visualizations.
Our research is supported by AFOSR and NSF
Bat wake measured using PIV, from Hubel et al 2009
Some videos from our research
We have a YouTube Channel with a bunch of videos and news stories from our research
Here are some of the other videos:
Cynopterus brachyotis (lesser dog-faced fruit bat), flying in the wind tunnel
Tadarida brasiliensis (Mexican free-tail bat), flying in the wind tunnel
|
2026-02-05T04:13:30.920223
|
46,855
| 3.830454
|
http://stardate.org/print/5163
|
An obscure cat pads through the northern sky at this time of year. It's known as Lynx, and it stands high in the northeast in early to mid evening. It's about halfway between the outer stars in the bowl of the Big Dipper and the bright "twins" of Gemini.
German astronomer Johannes Hevelius named the constellation about 300 years ago. At the time, there was no well-established star pattern in that region of the sky, mainly because it doesn't offer any bright stars. Hevelius named the faint stars he saw there for a lynx -- not because they formed the shape of a cat, but because he said observers must have the eyes of a lynx to see it.
The brightest star in Lynx is at its southeastern tip. It has no proper name -- it's simply designated by the Greek letter "alpha" followed by the name of the constellation.
Alpha Lyncis is about 220 light-years from Earth. The distance is a bit uncertain. But if the estimate is right, then the light we see from the star tonight began its journey through space around the year 1790 -- just as George Washington was settling in as the first president of the United States.
Alpha Lyncis is a red giant, which means that it's old and bloated. Fairly soon, its outer layers will puff away into space, leaving only a hot cosmic cinder called a white dwarf. When that happens, Lynx will lose its brightest star -- so you really will need the eyes of a lynx to see what's left.
Script by Damond Benningfield, Copyright 1997, 2002, 2005, 2009
|
2026-01-19T01:18:26.920647
|
1,024,339
| 3.743182
|
http://oregonstate.edu/instruct/bb451/winter12/lectures/highlightsmembtrans.html
|
Highlights of Membrane Transport
1. Proteins that move more than one chemical in the same direction across a membrane are called symports (synports). Those that move them in opposite directions are called antiports. If a net charge difference arises as a result of the movement, the system is referred to as electrogenic. If no charge difference arises, it is called electroneutral.
2. Nerve cells use the gradient of Na+ and K+ built up by the Na+/K+ pump to transmit signals. In nerve transmission, special "gates" open and close to allow Na+ to diffuse into nerve cells and K+ to diffuse out of nerve cells. (A numerical sketch of the equilibrium potentials created by these gradients appears at the end of these notes.)
3. The first step in nerve transmission involves opening of Na+ gates. These allow Na+ to diffuse into the cell, since Na+ concentration is higher outside of cells than inside. Movement of the positively charged sodium ion causes a change in the electrical potential of the cell near the Na+ gate. To compensate for the voltage change, the K+ gates open and Na+ gates close, allowing K+ to flow out of the cell. This results in an overcompensation of the voltage. The K+ gates close and the region where the original movement of ions occurred recovers. During this time, no nerve signal can be transmitted at that point.
4. The nerve signal is transmitted as a consequence of the initial movement of Na+ into the cell. Before it can be pumped out, some of the sodium diffuses down to the next Na+ gate and the change in the voltage environment causes it to open and trigger the same events as occurred in the last step. Thus, the signal moves from one junction to another to another, ultimately arriving at the end of the axon.
5. Tetrodotoxin is a neurotoxin because it inhibits the action of nerve cells. It is found in the puffer fish and it blocks the Na+gates.
6. Channels (gates) are made by protein molecules in the membranes of cells. Channels are generally very specific for what they will allow to pass through them. Glucose channels, for example, are fairly specific for glucose. Sodium and potassium channels are very specific for their respective ions.
7. Ion specificity is accomplished by two mechanisms. The first is physical. If an ion is too big to fit in a channel, it is excluded. This is the case of the sodium channel, which excludes potassium ions because they are too big.
8. The second mechanism of specificity is energy. An example is the potassium channel, which excludes sodium ions. In this case, the channel allows larger ions (potassium) to pass through, but blocks smaller ions, like sodium ions.
9. The mechanism of exclusion by the potassium channel relates to the solvation energies of each ion. For potassium ions, the energy cost of desolvation as the ion enters the channel is offset by the energy regained as the ion is resolvated within the channel. Thus, entry of potassium ions is energetically favored, because the geometry of the potassium channel closely matches the dimensions of the potassium ion.
10. When sodium ions try to enter the channel, their energy of desolvation is greater than the energy regained by resolvation in the channel, so they do not enter. Their resolvation energy is less favorable because the sodium ion's size does not match the dimensions of the potassium channel.
11. After the nerve has "fired" the gradient must be restored and this is the job again of the Na/K ATPase.
12. After a nerve "signal" has moved along the entire length of a nerve cell, it must move to the adjacent cell. This requires a neurotransmitter. Neurotransmitters are small molecules encapsulated in vesicles (synaptic vesicles) that are released from the nerve cell carrying the signal (presynaptic membrane) toward the adjacent nerve cell (postsynaptic membrane). The example I showed was acetylcholine: it is released from presynaptic vesicles, and when it binds receptors on the postsynaptic membrane, channels open to allow sodium and potassium ions to move as before, causing the next nerve cell to start an action potential.
13. Mitochondria are the locations in cells where oxidation/reduction and ATP synthesis occur, along with much of metabolism. Mitochondria have distinctive structural features, including an inner membrane that is impermeable to protons, infoldings of the inner membrane called cristae, an outer membrane that is relatively permeable, an intermembrane space between the inner and outer membranes, and the matrix. This last "structure" is simply the fluid enclosed by the inner membrane, and it is here that the enzymes of the citric acid cycle and fatty acid oxidation are found.
14. Remember that for every oxidation there is an equal and opposite reduction: loss of electrons by one molecule means gain of electrons by another. Oxidation is a process that involves the loss of electrons. Reduction is a process that involves the gain of electrons.
15. Electrons are carried to the electron transport system in the mitochondria by NADH and FADH2.
16. Mitochondria are the site of electron transport and oxidative phosphorylation.
17. The compound 2,4-DNP (dinitrophenol) was marketed as a miracle diet drug about a century ago. It kills because it destroys the proton gradient of mitochondria without allowing synthesis of ATP. I'll have more to say about it later.
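As promised in point 2, here is a rough numerical illustration of the Na+/K+ gradients discussed in points 2-4 and 11. The equilibrium (Nernst) potential for each ion can be computed from intracellular and extracellular concentrations; the concentration values below are textbook approximations for a mammalian neuron, not figures from these notes, and the sketch is only an illustration.

# Minimal sketch: Nernst equilibrium potentials for Na+ and K+
# (assumed typical mammalian neuron concentrations, in mM; not from these notes)
import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst_potential_mV(conc_out, conc_in, charge=1):
    """Equilibrium potential for an ion, in millivolts."""
    return 1000.0 * (R * T) / (charge * F) * math.log(conc_out / conc_in)

na_out, na_in = 145.0, 12.0    # assumed Na+ concentrations (mM)
k_out,  k_in  = 4.0, 140.0     # assumed K+ concentrations (mM)

print("E_Na =", round(nernst_potential_mV(na_out, na_in)), "mV")   # about +67 mV
print("E_K  =", round(nernst_potential_mV(k_out, k_in)), "mV")     # about -95 mV

The strongly positive E_Na and strongly negative E_K are why opening Na+ gates depolarizes the cell and opening K+ gates repolarizes it, as described in points 3 and 4.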
|
2026-02-03T01:49:17.644305
|
768,329
| 3.516195
|
http://ecmweb.com/basics/understanding-basics-wye-transformer-calculations
|
Last month's Code Calculations article covered transformer calculation definitions and some specifics of delta transformer calculations. This month we turn our attention to the differences between delta and wye transformers and to wye transformer calculations. We'll close by looking at why it's so important to know how to perform these calculations, but you'll likely see the reasons as we go.
In a wye configuration, three single-phase transformers are connected to a common point (neutral) via a lead from their secondaries. The other lead from each of the single-phase transformers is connected to the line conductors. This configuration is called a “wye,” because in an electrical drawing it looks like the letter Y. Unlike the delta transformer, it doesn't have a high leg.
Differences in wye and delta transformers. The ratio of a transformer is the relationship between the number of primary winding turns to the number of secondary winding turns — and thus a comparison between the primary phase voltage and the secondary phase voltage. For typical delta/delta systems, the ratio is 2:1 — but for typical delta/wye systems, the ratio is 4:1 (Fig. 1 above).
If the primary phase voltage in a typical delta/delta system is 480V, the secondary phase voltage is 240V. If the primary phase voltage in a typical delta/wye system is 480V, the secondary phase voltage is 120V.
Delta and wye transformers also differ with regard to their phase voltage versus line voltage and phase current versus line current. In a delta transformer,
E_Phase = E_Line and I_Line = I_Phase × √3.
In a wye transformer,
I_Phase = I_Line and E_Line = E_Phase × √3.
These differences affect more than just which formulas you use for transformer calculations. By combining delta/delta and delta/wye transformers, you can abate harmonic distortion in an electrical system. We'll look at that strategy in more detail after addressing wye transformer calculations.
Wye current and voltage calculations. In a wye transformer, the 3-phase and single-phase 120V line current equals the phase current (I_Phase = I_Line) (Fig. 2 on page C20).
Let's apply this to an actual problem. What's the secondary phase current for a 150kVA, 480V to 208Y/120V, 3-phase transformer (Fig. 3 on page C20)? I_Line = 150,000VA ÷ (208V × 1.732) = 416A, or I_Phase = 50,000VA ÷ 120V = 416A
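The same arithmetic is easy to script. The sketch below is our own illustration, not part of the original article; the function names are assumptions, and the 150kVA/208Y/120V figures come from the example above.

# Minimal sketch of the wye secondary current calculations above
import math

def wye_line_current(kva_total, line_voltage):
    """Three-phase line current on a wye secondary: I = VA / (E_line * sqrt(3))."""
    return kva_total * 1000.0 / (line_voltage * math.sqrt(3))

def wye_phase_current(kva_per_phase, phase_voltage):
    """Phase (winding) current: I = VA per winding / E_phase."""
    return kva_per_phase * 1000.0 / phase_voltage

# 150 kVA, 208Y/120V transformer from the example above
print(round(wye_line_current(150, 208)))   # ~416 A
print(round(wye_phase_current(50, 120)))   # ~417 A; the article rounds both to 416 A

Because I_Phase = I_Line in a wye transformer, the two calculations give the same current to within rounding, which is exactly the point of the example.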
To find wye 3-phase line and phase voltages, use the following formulas: E_Line = E_Phase × √3 and E_Phase = E_Line ÷ √3.
Since each line conductor from a wye transformer is connected to a different transformer winding (phase), the effects of 3-phase loading on the line are the same as on the phase (Fig. 4 on page C21). A 36kVA, 208V, 3-phase load has the following effect:
Phase power=12kVA (any winding)
Wye transformer balancing and sizing. Before you can properly size a delta/wye transformer, you must make sure that the secondary transformer phases (windings) or the line conductors are balanced. Note that balancing the panel (line conductors) is identical to balancing the transformer for wye transformers. Once you balance the wye transformer, you can size it according to the load on each phase. The following steps will help you balance the transformer:
Step 1: Determine the loads' VA ratings.
Step 2: Put one-third of the 3-phase load on Phase A, one-third on Phase B, and one-third on Phase C.
Step 3: Put one-half of the single-phase, 208V load on Phase A and Phase B, or Phase B and Phase C, or Phase A and Phase C.
Step 4: Place 120V loads (largest to smallest): 100% on any phase.
Now consider the following wye transformer sizing example: What size transformer (480V to 208Y/120V, 3-phase) would you need for the following loads: 208V, 36kVA, 3-phase heat strip; two 208V, 10kVA, single-phase loads; and three 120V, 3kVA single-phase loads?
a) three single-phase, 25kVA transformers
b) one 3-phase, 75kVA transformer
c) a or b
d) none of these
The Table sums up the kVA for each phase of each load. Note that the phase totals (23kVA, 22kVA, and 20kVA) should add up to the line total (65kVA). Always use a “checksum” like this to ensure you have accounted for all items and the math is right.
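The balancing steps above can also be checked programmatically. The following sketch is our own illustration: the placement of the single-phase loads across phases is one valid choice that reproduces the phase totals quoted above, not the only possible arrangement.

# Sketch of Steps 1-4 for the sizing example (assumed load placement)
phases = {"A": 0.0, "B": 0.0, "C": 0.0}

# Step 2: 36 kVA three-phase heat strip, one third per phase
for p in phases:
    phases[p] += 36 / 3

# Step 3: two 208V, 10 kVA single-phase loads, half on each of two phases
for pair in (("A", "B"), ("B", "C")):
    for p in pair:
        phases[p] += 10 / 2

# Step 4: three 120V, 3 kVA loads, each placed wholly on one phase
for p in ("A", "C", "A"):
    phases[p] += 3

print(phases)                 # {'A': 23.0, 'B': 22.0, 'C': 20.0}
print(sum(phases.values()))   # 65.0 kVA -- the "checksum" line total

With a worst-case phase loading of 23 kVA, either three 25 kVA single-phase units or one 75 kVA three-phase unit would carry the load, which is consistent with answer choice (c).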
If you're dealing with high-harmonic loads, the maximum unbalanced load can be higher than the nameplate kVA would indicate. Matching the transformer to the anticipated load then requires a high degree of accuracy if you want to get a reasonable level of either efficiency or power quality.
One approach to such a situation is to supply high-harmonic loads from their own delta/delta transformer. Another is to supply them from their own delta/wye and double the neutral. The approach you choose will depend on the characteristics of your loads and how well you lay out your power distribution system.
For example, you might put your computer loads (which have switching power supplies) on a delta/delta transformer, which you would feed from a delta/wye transformer. This would greatly reduce the presence of harmonics in the primary system, partly due to the absence of a neutral connection. But the behavior of the delta/delta transformer itself, combined with the interaction of delta/delta and delta/wye, will also cause a reduction in harmonics. Notice the word “might” in the question of whether to implement this kind of design. Grounding considerations can make it an undesirable approach, depending on the various loads and the design of the overall electrical system. Keep in mind that this is one of the many ways to mix and match transformers to solve power quality problems.
Due to uptime or power quality concerns with complex loads, you may need to mix and match transformer configurations as in the previous example. And that's something you can't do unless you understand both delta and wye calculations.
Another issue is proper transformer loading. As a rule of thumb, 80% loading is a good target. If you overload the transformer, though, it goes into core saturation and output consists of distorted waveforms. The clipped peaks typical of saturated transformers cause excess heating in the loads. This issue of transformer loading means you're going to have to perform the transformer calculations just to get basic power quality and reasonable efficiency.
So it's important not to oversimplify your approach to transformer selection. It's usually best to do all the calculations using the nameplate kVA. Then, design the distribution system as though all loads are linear. When that's done, identify which loads are high harmonic, such as electronic ballasts, computer power supplies, and motors with varying loads. At this point, you can efficiently work with a transformer supplier to develop a good solution.
Now that you understand delta and wye transformer calculations, you can see how important they are to being able to do a quality installation any time you're specifying transformers or considering adding loads to existing transformers. This ability is also important if you're trying to solve a power quality problem or a problem with “unexplained” system trips. You may wish to sharpen this ability by purchasing an electrical calculations workbook or taking on this kind of work in your electrical projects.
|
2026-01-30T03:27:34.216718
|
362,151
| 3.618764
|
http://www.studymode.com/essays/Music-Math-191540.html
|
At first these two subjects seem completely different from each other, standing on opposite sides of human activity: the purely logical and exact science of mathematics, "the queen of all sciences," and music, which seems far removed from logical laws. Many would say that these two fields of study are impossible to combine. Surprisingly, math and music have a lot in common and organically complement one another. It turns out that math and music are often inseparable and can benefit each other greatly.
Music theorists often try to use mathematics in order to recognize musical organization and support original ways of examination of music. This has led to musical relevance to the set theory, abstract algebra, and the number theory. “Music researchers have also used mathematics to comprehend harmonious scales, and some composers have incorporated the Golden ratio and Fibonacci numbers into their work” (“Danil Fokker”).
The history of the correspondence between math and music is actually very old: the first individual known to make a connection between them was Pythagoras of Samos, a famous mathematician and philosopher who lived in the 6th century BC and settled in southern Italy. He is also among the first credited with the idea that mathematics is present everywhere. Pythagoras searched for rational order in every activity, and music gave him a broad field to study. One piece of evidence for fundamental ratios was present in Greek music. At that time, music was not as complicated as it is in the modern world. The Greek octave described here had only five notes, and Pythagoras noted that each note corresponded to a fraction of a string's length. So, if the first note was "A", then "B" was 4/5 of that string, "C" was 3/5, "D" was 2/5, and "E" was 1/5 of the whole string.
Most of the Greek instruments of the time had 6 strings corresponding to these notes. Though the length of the strings varied a lot, the main idea... [continues]
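Taking the essay's string-length fractions at face value, a short sketch shows the corresponding pitch ratios, since the frequency of a vibrating string varies inversely with its sounding length. The 440 Hz reference pitch is our assumption, not the essay's, and the sketch is an illustration only.

# Illustration only: frequency is inversely proportional to string length
base_freq = 440.0   # assumed reference pitch for the note "A", in Hz

# String-length fractions as listed in the essay (relative to the "A" string)
length_fractions = {"A": 1.0, "B": 4/5, "C": 3/5, "D": 2/5, "E": 1/5}

for note, fraction in length_fractions.items():
    print(note, round(base_freq / fraction, 1))   # shorter string -> higher pitch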
|
2026-01-23T18:39:08.024160
|
929,214
| 3.957109
|
http://simple.wikipedia.org/wiki/Star
|
A star is a huge ball of very hot, glowing gas (plasma) held together by its own gravity. It radiates heat and light, and every other part of the electromagnetic spectrum, such as radio waves, microwaves, X-rays, gamma rays and ultraviolet radiation. The proportions vary according to the mass and age of the star.
The energy of stars comes from nuclear fusion. This is a process that turns a light chemical element into another heavier element. Stars are mostly made of hydrogen and helium. They turn the hydrogen into helium by fusion. When a star is near the end of its life, it begins to change the helium into other heavier chemical elements, like carbon and oxygen. Fusion produces a lot of energy. The energy makes the star very hot. The energy produced by stars radiates away from them. The energy leaves as electromagnetic radiation.
Birth of a star
A star begins as a collapsing cloud of material made mostly of hydrogen, with helium and tiny amounts of heavier elements. Once the stellar core is dense enough, some of the hydrogen is changed into helium through nuclear fusion. The energy moves away from the core by a combination of radiation and convection. The star's radiation stops it from collapsing further under its own gravity. Once the hydrogen fuel at the core has been used up, those stars with at least 0.4 times the mass of the Sun expand to become a red giant. In some cases they fuse heavier elements. When the star is very old it might expand until its outer layers are pushed away. If the star is heavier, it might explode and spread most of its mass into space. The matter it spreads into space may make a new generation of stars. If the star is even larger it may collapse and form a black hole.
Earth and Sun
The star nearest to Earth is the Sun. It is not accurate to say "most of the energy on Earth comes from it". It is more accurate to say the energy of sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. The Earth has its own source of heat in its interior: see Age of the Earth. The internal heat comes from the original gravitational formation of the Earth, and from the radioactive materials inside it.
We can see other stars in the night sky when the Sun goes down. They are made mostly of hydrogen and a little bit of helium plus other elements. The sun is such a star.
Planets
Most stars look like shiny dots from Earth, because they are far away. Our Sun is the closest star to us. The earth moves around (orbits) the Sun in an oval shape. The Sun and all things that orbit it are called the Solar System. Many other stars sometimes have planets orbiting them: they are called exoplanets.
Numbers, distances
The nearest star to our Solar System, and the second nearest star to Earth after the Sun, is Proxima Centauri. It is 39.9 trillion kilometres away. This is 4.2 light years away, meaning that light from Proxima Centauri takes 4.2 years to reach Earth.
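As a quick check of those figures, one light-year is roughly 9.46 trillion kilometres. The short sketch below is just this arithmetic, not part of the article.

# Converting the distance to Proxima Centauri from light-years to kilometres
km_per_light_year = 9.46e12     # kilometres light travels in one year (approximate)
distance_ly = 4.2               # Proxima Centauri, from the text

print(distance_ly * km_per_light_year)   # ~3.97e13 km, i.e. roughly 40 trillion km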
Astronomers think there are a very large number of stars in the Universe. They estimate (guess) that there are at least 70 sextillion stars. That is 70,000,000,000,000,000,000,000; which is about 230 billion times the number of stars in the Milky Way (our galaxy).
Most stars are very old. They are usually thought to be between 1 and 10 billion years old. The oldest stars are thought to be around 13.7 billion years old. Scientists think that is close to the age of the Universe.
Stars vary greatly in size. The smallest neutron stars (which are actually dead stars) are no bigger than a city, yet they are incredibly dense. A layer of neutron-star material only a micrometre thick would make very tough armour for a tank, but it would be so heavy that the tank would sink into the center of the Earth.
Hypergiant stars are the largest stars in the Universe. They have a diameter over 1,500 times bigger than the Sun. If one of these huge stars were put where the Sun is, its outer surface would reach beyond the orbit of Jupiter, and the Earth would be well inside the star. The star Betelgeuse, in the Orion constellation, is a red supergiant star.
When seen in the night sky without a telescope, some stars appear brighter than other stars. This difference is measured in terms of apparent magnitude. There are two reasons for stars to differ in apparent magnitude. If one star is much closer than another otherwise similar star, it will appear much brighter, in just the same way that a candle that is near looks brighter than a big fire that is far away. If one star is much larger or much hotter than another star at about the same distance, it will appear much brighter, in just the same way that if two fires are the same distance away, the bigger or hotter one will look brighter. A star's true luminosity is its absolute magnitude.
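The two effects described above, distance and true luminosity, can be put into one simple relation: apparent brightness falls off with the square of distance. The sketch below is our own illustration with made-up numbers.

# Inverse-square law illustration for apparent brightness
def apparent_brightness(luminosity, distance):
    """Relative apparent brightness ~ luminosity / distance**2."""
    return luminosity / distance ** 2

# Two hypothetical stars: equally luminous, one ten times farther away
print(apparent_brightness(1.0, 1.0))     # 1.0
print(apparent_brightness(1.0, 10.0))    # 0.01 -> 100 times fainter

# A star 100 times more luminous at ten times the distance looks equally bright
print(apparent_brightness(100.0, 10.0))  # 1.0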
Stars are a source of a gravity field. This is what keeps planets close to them. It is also not unusual for two stars to orbit each other. This happens when they are close together. This is also because of gravity, in the same way as the Earth orbits the Sun. These binary stars (binary meaning "two") are thought to be very common. There are even groups of three or more stars orbiting each other. Proxima Centauri is the smallest star in a group of three.
Stars are not spread evenly across all of space. They are grouped into galaxies. A typical galaxy contains hundreds of billions of stars.
History of seeing stars
Stars have been important to people all over the world for all of history. Stars have been part of religious practices. Long ago, people believed that stars could never die. Astronomers organized stars into groups called constellations. They used the constellations to help them see the motion of the planets and to guess the position of the Sun. The motion of the Sun and the stars was used to make calendars. The calendars were used by farmers to decide when to plant crops and when to harvest them.
The life of stars
Stars are made in nebulas. These are big areas that have more gas than normal space. The gas in a nebula is attracted to all the other gas by gravity. This makes the gas in the nebula very thick. Stars form in these thick areas. The Orion Nebula is an example of a place where gas is coming together to form stars.
Stars spend most of their lives burning (fusing) hydrogen to make energy. When hydrogen is fused it makes helium. To fuse hydrogen it must be very, very hot and the pressure must be very, very high. Fusion happens at the center of stars. The center part of the star is called "the core".
The smallest stars (called red dwarfs) fuse their hydrogen very slowly and live for 100 billion years. Red dwarfs live longer than any other type of star. At the end of their lives, they become dimmer and dimmer. Red dwarfs do not explode.
When very heavy stars die, they explode. This is called a supernova. When a supernova happens in a nebula, the explosion pushes the gas in the nebula together. This makes the gas in the nebula very thick. Gravity and exploding stars both help to make new stars in nebulas.
Most stars use up the hydrogen at their core. When they do, their core shrinks and becomes very, very hot. It becomes so hot it pushes away the outer layer of the star. As the outer layer expands, it makes a red giant star. Astrophysicists think that in about 5 billion years the Sun will be a red giant, so large that it will swallow the Earth. After our Sun stops using hydrogen to make energy, it will use helium in its very hot core. It will be hotter than when it was fusing hydrogen. Heavy stars will also make elements heavier than helium. As a star makes heavier and heavier elements, it makes less and less energy. Elements as heavy as iron are made in heavy stars.
Average stars (like our sun) will push away their outer gases. The gas it pushes away makes a cloud called a planetary nebula. The core part of the star will remain. It will be a ball as big as the Earth and called a white dwarf. It will fade into a black dwarf over a very, very long time.
In heavier stars, heavier and heavier elements are made by fusion. Finally the star makes a supernova explosion. Most things in the universe happen so slowly that we do not notice, but a supernova explosion happens in only about 100 seconds. When a supernova explodes, its flash is as bright as 100 billion stars. The dying star is so bright it can be seen during the day. Supernova means "new star", because people used to think it was the beginning of a new star. Today we know that a supernova is the death of an old star. The gas of the star is pushed away by the explosion and forms a giant cloud called a supernova remnant. The Crab Nebula is a good example. All that remains is a neutron star. If the star was very heavy, it will instead collapse into a black hole. Gravity in a black hole is very, very strong. It is so strong that not even light can escape from a black hole.
The heaviest elements are made in the explosion of a supernova. After billions of years of floating in space, the gas and dust come together to make new stars and new planets. The gas and dust in space comes mostly from supernovae. Our sun, the Earth, and all living things are made from star dust.
|
2026-02-01T17:58:50.677032
|
150,686
| 3.947995
|
http://newpapyrusmagazine.blogspot.com/
|
During the 82nd annual meeting of the American Association of Physical Anthropologists on April 2, 2013, an international group of scientists presented strong evidence that the 7 million year old North African fossil hominoid (humans and apes), Sahelanthropus tchadensis, possessed numerous cranial features in its brainstem, occipital lobes, and prefrontal cortex that are characteristic of hominins (humans and the bipedal fossil relatives of humans). The reconstructed endocast of Sahelanthropus was compared with other fossil apes, modern apes, australopithecines, and modern humans. The results of the volumetric, linear and angular measurements strongly suggest that Sahelanthropus was indeed the earliest African human ancestor.
Since its discovery, some researchers have been bothered by the early date of Sahelanthropus. The oldest fossil sahelanthropines are dated somewhere between 6.8 million and 7.2 million years before the present. Such an early hominin is particularly troublesome to those who are strict advocates of the molecular clock hypothesis, which typically dates the human-chimpanzee divergence at less than 6 million years ago. However, there is strong evidence that the molecular clock runs faster in small animals and, correspondingly, slower in larger ones. Evidence of slower-running molecular clocks has also been found in apes relative to monkeys. So no molecular clock dates for human and ape divergences can really be taken seriously unless the slower rate of mutation in humans and apes is accounted for.
The argument by some researchers that Sahelanthropus is more ape-like than hominid-like can also be quickly dispelled. When the skull of Sahelanthropus is compared with modern humans, modern apes, and the fossil australopithecines: the bipedal australopithecines have 14 significant cranial-dental similarities with Sahelanthropus, humans, 10, chimps and gorillas, only 5, the orangutan, only 4, and the gibbon only 3 cranial-dental similarities to Sahelanthropus.
Curiously, a slightly earlier swamp ape, Oreopithecus bambolii of Italy, had 15 significant cranial-dental similarities with Sahelanthropus. Oreopithecus was also the earliest bipedal hominoid in the fossil record. This fossil Italian ape lived in geographic isolation on the Mediterranean island of Tuscany-Sardinia for at least 2 million years. Sahelanthropus suddenly appears in the fossil record in North Africa just around the time when Tuscany became part of the Italian peninsula, during a period when Italy was geographically connected to North Africa after global sea levels began to fall between 7.2 and 7.1 million years ago. Global sea levels continued to fall, eventually isolating the Mediterranean basin from the Atlantic Ocean around 6.1 million years ago.
|
2026-01-20T13:44:28.840917
|
247,965
| 3.848625
|
http://www.angelfire.com/pa2/passover/faq/origin-meaning-of-the-name-passover.html
|
What is the origin of the name Passover as the name for the festival ?
In 1530, the English bible translator William Tyndale translated the first five books of the Bible (known as either the Torah or the Pentateuch) into English. Up until that time, neither the Hebrew Bible in its original Hebrew nor its Greek version, the Septuagint (the Old Testament to Christians), had been translated into English. The Hebrew Bible used the word Pesach to describe the lamb that was sacrificed on the 14th day of the first Hebrew month, while the Septuagint used the Greek word "Pascha". Tyndale, however, did not want to use the Septuagint's Greek word "Pascha" as the name for that lamb. The word Pascha was the equivalent of the word Pesach, and Pesach was linguistically related to another Hebrew word mentioned in the Hebrew Bible, Pasach; both Pesach and Pasach are spelled the same way in Hebrew but pronounced differently by Jewish tradition, and Pasach meant either to "skip over (or on)" or to "pass over (or on)". Tyndale therefore connected the meaning of Pasach to the meaning of Pesach and Pascha, and instead of the word Pascha he chose the English word Passover to describe the lamb sacrificed on the 14th day of the first Hebrew month. However, he used the name Passover only for the 22 times this lamb is mentioned in the Torah or Pentateuch, because the name Passover was correct for the context and time frame of the events in which those 22 references occur. When he translated the New Testament from Greek into English in 1525-1526, Tyndale did not use the name Passover to describe the lamb that was sacrificed on the 14th day of the first Hebrew month. Rather, he used the word Easter, for the first time in an English version of the New Testament, because people in England in the 16th century associated the spring season with the Easter festival and because the word Easter was correct for the context and time frame of events in the New Testament.
Subsequent English translations of the New Testament, however, gradually replaced the word Easter with the word Passover until just one mention of the word Easter remains from William Tyndale's original English translation of the New Testament in 1525-1526: in the New Testament in Acts 12:4 . Even so, there are some versions of the New Testament that replace the word Easter in Acts 12:4 with the word Passover, such as the New King James version.
|
2026-01-22T00:30:58.970469
|
521,955
| 3.643424
|
http://en.wikibooks.org/wiki/Statistics/Distributions/NegativeBinomial
|
Negative Binomial Distribution
Just as the Bernoulli and the Binomial distribution are related in counting the number of successes in 1 or more trials, the Geometric and the Negative Binomial distribution are related in the number of trials needed to get 1 or more successes.
The Negative Binomial distribution refers to the probability of the number of times needed to do something until achieving a fixed number of desired results. For example:
- How many times will I throw a coin until it lands on heads for the 10th time?
- How many children will I have when I get my third daughter?
- How many cards will I have to draw from a pack until I get the second Joker?
Just like the Binomial Distribution, the Negative Binomial distribution has two controlling parameters: the probability of success p in any independent test and the desired number of successes m. If a random variable X has a Negative Binomial distribution with parameters p and m, its probability mass function is: P(X = x) = C(x−1, m−1) · p^m · (1−p)^(x−m), for x = m, m+1, m+2, …
A travelling salesman goes home if he has sold 3 encyclopedias that day. Some days he sells them quickly. Other days he's out till late in the evening. If on the average he sells an encyclopedia at one out of ten houses he approaches, what is the probability of returning home after having visited only 10 houses?
The number of trials X is Negative Binomial distributed with parameters p=0.1 and m=3, hence: P(X = 10) = C(9, 2) · (0.1)^3 · (0.9)^7 ≈ 0.0172.
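A short numerical check of this example. The sketch below is our own; it uses the "number of trials until the m-th success" form of the pmf given above.

# Minimal sketch of the negative binomial pmf and the salesman example
from math import comb

def neg_binomial_pmf(x, m, p):
    """P(X = x): probability that the m-th success occurs on trial x."""
    return comb(x - 1, m - 1) * p**m * (1 - p)**(x - m)

# Salesman example: success probability 0.1 per house, go home after 3 sales
print(round(neg_binomial_pmf(10, 3, 0.1), 4))   # ~0.0172

# For comparison: probability of being home after at most 10 houses
print(round(sum(neg_binomial_pmf(x, 3, 0.1) for x in range(3, 11)), 4))   # ~0.07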
The mean can be derived as follows.
Now let s = r+1 and w=x-1 inside the summation.
We see that the summation is the sum over the complete pmf of a negative binomial random variable distributed NB(s, p), which is 1 (and can be verified by applying Newton's generalized binomial theorem).
We derive the variance using the following formula: Var[X] = E[X²] − (E[X])².
We have already calculated E[X] above (E[X] = m/p), so now we will calculate E[X²] and then return to this variance formula:
Again, let s = r+1 and w = x−1.
The first summation is the mean of a negative binomial random variable distributed NB(s,p) and the second summation is the complete sum of that variable's pmf.
We now insert values into the original variance formula, which gives Var[X] = m(1−p)/p².
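A quick Monte Carlo check of these results (an illustration of ours, using the "trials until the m-th success" convention): the sample mean and variance should come out near m/p and m(1−p)/p².

# Simulation check of the negative binomial mean and variance
import random

def sample_trials_until_m_successes(m, p):
    """Simulate one draw: count trials until the m-th success."""
    trials, successes = 0, 0
    while successes < m:
        trials += 1
        if random.random() < p:
            successes += 1
    return trials

m, p, n = 3, 0.1, 200_000
draws = [sample_trials_until_m_successes(m, p) for _ in range(n)]

mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / n

print(mean, m / p)                 # both near 30
print(var, m * (1 - p) / p**2)     # both near 270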
|
2026-01-26T10:34:43.179552
|
1,046,592
| 3.558874
|
http://www.reference.com/browse/Spotbill
|
This duck is resident in the southern part of its range from Pakistan and India to southern Japan, but the northern subspecies, the Chinese Spotbill (A. p. zonorhyncha), is migratory, wintering in southeast Asia. It is quite gregarious outside the breeding season and forms small flocks. The northernmost populations have been expanding their range northwards by more than 500 km since the early 20th century, possibly in reaction to global warming (Kulikova et al. 2004). These are Mallard-sized mainly grey ducks with a paler head and neck and a black bill tipped bright yellow. The wings are whitish with black flight feathers below, and from above show a white-bordered green speculum and white tertials. The male has a red spot on the base of the bill, which is absent or inconspicuous in the smaller but otherwise similar female. Juveniles are browner and duller than adults.
The Chinese Spotbill is darker and browner; its body plumage is more similar to the Pacific Black Duck. It lacks the red bill spot, and has a blue speculum.
It is a bird of freshwater lakes and marshes in fairly open country and feeds by dabbling for plant food mainly in the evening or at night. It nests on the ground in vegetation near water, and lays 8-14 eggs.
Both the male and female have calls similar to the Mallard.
The phylogenetic placement of this species is enigmatic. The Chinese Spotbill is considered to be near the point where it might be considered a distinct species (e.g. Johnson & Sorenson 1999). And while molecular analyses and biogeography indicate that most species of the mallard group in the genus Anas form two distinct clades, hybridization between all of these species is a regularly-occurring phenomenon and the hybrids are usually fully fertile. The present species is known to produce fertile hybrids with the Pacific Black Duck and the Philippine Duck in captivity (Carboneras 1996), and naturally hybridizes with the Mallard as their ranges now overlap in the Primorsky Krai due to the Spotbill's northward expansion (Kulikova et al. 2004).
The reason for this is that the mallard group evolved quite rapidly into lineages that differ in appearance and behavior, but are still compatible genetically. Thus, stray individuals of any one mallard group species tend to mate successfully with resident populations; this renders mtDNA data of spurious value to determine relationships, especially as molecular studies usually have a very low sample size.
The problem with the present species lies in the fact that its position in the mallard group is ambiguous. The mallard lineages cannot be reliably separated by behavior, but only by biogeography, and it is only the Pacific radiation in which there are species with a distinct male nuptial plumage. However, although this species, judging from its distribution, seems to belong to the Asian group, it occurs close enough to the Bering Straits not to discount an originally North American origin.
An initial study of mtDNA cytochrome b and NADH dehydrogenase subunit 2 sequences, using one individual each of the Indian and the Chinese Spotbills, suggested that these were well distinct and that the former was a more recent divergence from the Mallard's ancestors, and both being solidly nested within the Pacific clade (Johnson & Sorenson 1999).
But another study (Kulikova et al. 2004), utilizing a good sample of Chinese Spotbill and Mallard specimens from the area of contact, and analyzing mtDNA control region and ornithine decarboxylase intron 6 sequence data, found A. (p.) zonorhyncha to be more closely related to the American clade, which contains such species as the Mottled and American Black Ducks. It further revealed that, contrary to what was initially believed, female Spot-billed Ducks do not seem to prefer the brightly-colored Mallard drakes to their own species' males, with hybrids being more often than not between Spotbill drakes and Mallard hens, but this might simply be due to the more strongly vagrant drakes being over-represented in the northwards-expanding population.
In conclusion, it seems clear that Johnson & Sorenson's 1999 study cannot be relied upon: the perceived relationships as presented there are far more likely than not due to the small sample size. But the apparent similarities to the American species are also misleading: thorough analysis of mtDNA control region haplotypes (Kulikova et al. 2004, 2005) concluded that the similarities between the Spotbill and the American "mallardines" were due to convergent evolution on the molecular level. Rather than being derived from the North American clade, the spotbill seems to hold a phylogenetic position close to the point where the Pacific and American lineages separated, evolving independently from there except for occasional hybridization events with the Mallard, although the relationships of zonorhyncha to the Pacific Black Duck deserve further study.
|
2026-02-03T09:49:52.706990
|
816,386
| 3.776867
|
http://insects.about.com/od/antsbeeswasps/tp/10-native-pollen-bees.htm
|
Though honeybees get all the credit, native pollen bees do the bulk of the pollination chores in many gardens, parks, and forests. Pollen bees are also called solitary bees; unlike the highly social honeybees, nearly all pollen bees live solitary lives.
Native pollen bees work more efficiently than honeybees at pollinating flowers. They don't travel far, and so focus their pollination efforts on fewer plants. Native bees fly quickly, visiting more plants in a shorter amount of time. Both males and females pollinate flowers, and native bees begin earlier in spring than honeybees.
Bumblebees (Bombus spp.) are probably the most widely recognized of our native pollen bees. They're also among the hardest working pollinators in the garden. As generalist bees, bumblebees will forage on a wide variety of plants, pollinating everything from peppers to potatoes.
Bumblebees fall within the 5% of pollen bees that are eusocial; a female queen and her daughter workers live together, communicating with and caring for one another. Their colonies survive only from spring until fall, when all but a mated queen will die.
Bumblebees nest underground, usually in abandoned rodent nests. They love to forage on clover, which many homeowners consider a weed. Give the bumblebees a chance – leave the clover in your lawn.
Though often considered pests by homeowners, carpenter bees (Xylocopa spp.) do more than burrow into decks and porches. They're quite good at pollinating many of the crops in your garden. They rarely do serious structural damage to the wood in which they nest.
Carpenter bees are quite large, usually with a metallic luster. They require warm air temperatures (70°F or higher) before they start foraging in spring. Males are stingless; females can, but rarely do, sting.
Carpenter bees have a tendency to cheat. They sometimes tear a hole into the base of the flower to access the nectary, and so don't come into contact with any pollen. Still, these native pollen bees are worth encouraging in your garden.
3. Sweat Bees
Sweat bees (family Halictidae) also make their living off pollen and nectar. These small native bees are easy to miss, but if you take the time to look for them, you'll find they're quite common. Sweat bees are generalist feeders, foraging on a range of host plants.
Most sweat bees are dark brown or black, but the blue-green sweat bees bear pretty, metallic colors. These usually solitary bees burrow in the soil.
Sweat bees like to lick salt from sweaty skin, and will sometimes land on you. They're not aggressive, so don't worry about getting stung.
4. Mason Bees
Like tiny mason workers, mason bees (Osmia spp.) build their nests using pebbles and mud. These native bees look for existing holes in wood rather than excavate their own. Mason bees will readily nest in artificial nest sites made by bundling straws or drilling holes in a block of wood.
Just a few hundred mason bees can do the same work as tens of thousands of honeybees. Mason bees are known for pollinating fruit crops, with almonds, blueberries, and apples among their favorites.
Mason bees are slightly smaller than honeybees. They're fairly fuzzy little bees with blue or green metallic coloring. Mason bees do well in urban areas.
5. Polyester Bees
Though solitary, polyester bees (family Colletidae) sometimes nest in large aggregations of many individuals. Polyester or plasterer bees forage on a wide range of flowers. They're fairly large bees that burrow in the soil.
Polyester bees are so called because females can produce a natural polymer from glands in their abdomens. The female polyester bee will construct a polymer bag for each egg, filling it with sweet food stores for the larva when it hatches. Her young are well-protected in their plastic bubbles as they develop in the soil.
6. Squash Bees
If you've got squash, pumpkins, or gourds in your garden, look for squash bees (Peponapis pruinosa) to pollinate your plants and help them set fruit. These pollen bees begin foraging just after sunrise, since cucurbit flowers close in the afternoon sun. Squash bees are specialized foragers, relying only on cucurbit plants for pollen and nectar.
Solitary squash bees nest underground, and require well-drained areas in which to burrow. Adults live just a few months, from mid to late summer when the squash plants are in flower.
7. Dwarf Carpenter Bees
At just 8 mm in length, dwarf carpenter bees (Ceratina spp.) are easy to overlook. Don't be fooled by their small size, though, because these native bees know how to work the flowers of raspberry, goldenrod, and other plants.
Females chew an overwintering burrow into the stem of a pithy plant or old vine. In spring, they expand their burrows to make room for their brood. These solitary bees forage from spring to fall, but won't fly very far to find food.
8. Leafcutter Bees
Like mason bees, leafcutter bees (Megachile spp.) nest in tube-shaped cavities and will use artificial nests. They line their nests with carefully sheared pieces of leaves, sometimes from specific host plants – thus the name, leafcutter bees.
The leafcutter bees forage mostly on legumes. They're highly efficient pollinators, working flowers in mid-summer. Leafcutter bees are about the same size as honeybees. They rarely sting, and when they do, it's quite mild.
9. Alkali Bees
The alkali bee earned its reputation as a pollinating powerhouse when alfalfa growers started using it commercially. These small bees belong to the same family (Halictidae) as sweat bees. They're quite pretty, with yellow, green, and blue bands encircling black abdomens.
Alkali bees nest in moist, alkaline soils (thus their name). They live in arid regions west of the Rocky Mountains. Though they prefer alfalfa when it's available, alkali bees will fly up to 5 miles for pollen and nectar from onions, clover, mint, and a few other wild plants.
10. Digger Bees
Digger bees (family Andrenidae), aka mining bees, are widespread and numerous, with over 1,200 species found in North America. These medium-sized bees begin foraging at the first signs of spring. While some species are generalists, others form close foraging associations with certain types of plants.
Digger bees, as you might suspect by their names, dig burrows in the ground. They often camouflage the entrance to their nest with leaf litter or grass. The female secretes a waterproof substance, which she uses to line and protect her brood cells.
|
2026-01-30T22:21:19.017765
|
306,751
| 3.546586
|
http://www.oregonlive.com/environment/index.ssf/2012/02/wildlife_officials_weigh_an_et.html
|
The draft document lists a range of alternatives that differ in terms of the number of barred owls to be removed and the locations, duration and estimated cost of the experiment.
The U.S. Fish & Wildlife Service will accept comments on the plan for 90 days. Officials acknowledge the ethical dilemma of killing one species to benefit another. However, they point to an ongoing experiment on private land in northern California that has shown spotted owls returned to historic territories in every instance where barred owls were removed.
Also today, the wildlife service will release its designation of western land that is considered critical habitat for spotted owls. The designation requires federal agencies such as the Forest Service to consult with the wildlife agency when approving logging, road building or other activity in federal forests. Timber industry groups worry the designations will add another layer of review and possible legal challenges to federal timber sales, and question the impact on private land.
Spotted owls have been listed as threatened under the federal Endangered Species Act since 1990. Scientists say the owl population continues to decline in most of its range, and estimate 7,000 to 10,000 owls remain. Habitat loss due to logging and fire, coupled with competition from barred owls, are the primary threats to spotted owls.
|
2026-01-22T21:53:00.786191
|
941,728
| 3.5962
|
http://www.iub.edu/~iuam/online_modules/islamic_book_arts/exhibit/modern_revivals/index.html
|
Traditional Arts in Turkey
Traditional arts in Turkey suffered greatly in 1928, when the new Turkish Republic changed the alphabet: the official script was changed from the Arabic script to the Latin alphabet. Consequently, the Ottoman past and its arts were somewhat left behind as Turkish culture began to align itself with Western arts and technology. However, beginning in the 1980s, traditional Islamic arts such as calligraphy, marbling, and ceramics experienced a revival in practice and popularity. This phenomenon was probably a result of the relative political stability of Turkey alongside a renewed interest in various cultural traditions. This revivalist movement was catalyzed by the demands of collectors, museums, and tourists, as well as the efforts of talented new artists.
Marbled papers were traditionally considered an ancillary to book arts and calligraphy. Today, they also abide by modern abstract artistic trends, which put precedence on the formal values of color and form.
Marblers sprinkle paint onto a solution made from gum tragacanth that keeps the pigments suspended on its surface. Once finished applying paint, the marbler can manipulate the pigments on the surface of the solution into various swirls and designs with a comb or awl. Afterwards, a sheet of paper is carefully placed upon the surface. All of the pigments on the surface of the solution attach to the paper, leaving an impression that can look like waves, marble, or even flowers.
|
2026-02-01T22:01:13.299658
|
143,695
| 3.727473
|
http://www.cpc.ncep.noaa.gov/products/assessments/assess_97/hurricane.html
|
The North Atlantic hurricane season runs from June through November, with peak activity between August to October primarily linked to systems developing from African easterly wave disturbances. Overall, 9-10 tropical storms are observed over the North Atlantic in an average season, with 5-6 becoming hurricanes and 2-3 reaching intense hurricane status [measured by a category 3, 4, or 5 on the Saffir-Simpson scale (Simpson 1974)]. The suppressed 1997 hurricane season featured 7 named storms (Fig. 38), with 3 of these systems becoming hurricanes and 1 reaching intense hurricane status. This latter system, Erika, developed in September and remained intense for only 2 days. Interestingly, Hurricane Erika was the only tropical cyclone to form over the North Atlantic basin during August-September, a record low number for the period since the beginning of the aircraft reconnaissance era in 1944.
In contrast, the 1995 and 1996 Atlantic hurricane seasons were very active (Fig. 38), with a high percentage of tropical storms becoming hurricanes in each year (Halpert and Bell 1997, Halpert et al. 1996). During 1995, 11 of 19 tropical systems became hurricanes, with 5 reaching intense hurricane status. During 1996, 9 of 13 tropical storms became hurricanes, with 6 reaching intense hurricane status. A significant majority of these tropical cyclones and hurricanes in both years (16 of 19 systems in 1995 and 9 of 13 systems in 1996) developed from African easterly wave disturbances during August-October.
Over the eastern North Pacific, the 1997 hurricane season featured 17 named storms (normal is 16), 9 of which became hurricanes (normal is 9) with 7 becoming major hurricanes (normal is 5). The season also featured an expanded area of tropical cyclone activity compared to normal, with four systems moving well west of 135°W and two major hurricanes affecting southwestern North America. In contrast, the 1995 and 1996 seasons featured well below-normal tropical storm and hurricane activity across the eastern North Pacific.
(ii) Vertical wind shear
Tropical storm and hurricane activity over the North Atlantic and eastern North Pacific ocean basins is strongly affected by the vertical wind shear between the upper (200-hPa) and lower (850-hPa) levels of the atmosphere. Strong vertical shear inhibits tropical cyclogenesis while weak vertical shear (less than approximately 8 m s-1) favors tropical cyclone development. Climatologically, strong vertical wind shear during the hurricane season is observed throughout the Caribbean, large portions of the subtropical North Atlantic and the northern Gulf of Mexico. In contrast, weak vertical shear is normally observed over the tropical eastern North Atlantic and over a large area of the eastern North Pacific between 10°-17.5°N (Fig. 39a). Thus, tropical cyclone formation is normally favored over the eastern tropical Atlantic and North Pacific basins, while comparatively less activity is favored in the Caribbean region.
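As a rough illustration of the quantity being discussed, the sketch below (not CPC code; the function name and sample wind components are assumptions made up for illustration) computes the magnitude of the vector wind difference between the 200-hPa and 850-hPa levels and compares it with the approximately 8 m s-1 threshold mentioned above.

```python
import math

# Illustrative sketch only: vertical wind shear as the magnitude of the vector
# difference between 200-hPa and 850-hPa winds. The sample values below are
# assumptions, not CPC data.
def vertical_shear(u200, v200, u850, v850):
    """Return the 200-850 hPa vector wind shear magnitude in m/s."""
    return math.hypot(u200 - u850, v200 - v850)

# Example: strong upper-level westerlies over weak low-level easterlies,
# the kind of profile associated with enhanced shear over the Caribbean.
shear = vertical_shear(u200=12.0, v200=2.0, u850=-4.0, v850=0.0)
threshold = 8.0  # m/s, approximate value cited in the text
print(f"shear = {shear:.1f} m/s;",
      "inhibits" if shear > threshold else "favors",
      "tropical cyclogenesis")
```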
The El Niño/Southern Oscillation (ENSO) can substantially influence the year-to-year variability of vertical wind shear over both the North Atlantic and eastern North Pacific ocean basins, and thus the interannual variability of hurricane activity in these regions. Gray (1984) has shown that Pacific warm episodes (El Niño) often favor suppressed tropical storm activity and a reduction in intense hurricane activity over the North Atlantic by helping to maintain or enhance the normally high vertical wind shear. In contrast, he notes that Pacific cold episodes (La Niña) often favor enhanced tropical storm activity and increased intense hurricane activity by helping to reduce the vertical wind shear across most of the tropical North Atlantic. Extreme phases of ENSO often have an opposite impact on tropical storm and hurricane activity over the eastern North Pacific, with El Niño favoring an expanded area of tropical cyclone activity by reducing the vertical wind shear in that region, and La Niña favoring suppressed tropical cyclone activity by enhancing the vertical wind shear.
This ENSO influence on tropical storm and hurricane activity has been particularly prominent during the 1990s. The prolonged warm-episode (El Niño-like) conditions during 1991-February 1995 were accompanied by extremely low Atlantic tropical storm and hurricane activity. In contrast, the cold-episode years of 1995 and 1996 featured an increase in tropical storm activity over the North Atlantic and substantially reduced activity across the eastern North Pacific. Subsequently, the transition to very strong warm episode conditions during 1997 brought a return to below-normal activity over the North Atlantic and an increase in tropical storm activity over the eastern North Pacific.
During August-October 1997 large vertical wind shear covered most of the western and central North Atlantic, the Caribbean Sea and the Gulf of Mexico, with favorable shear conditions (under 8 m s-1) confined to the eastern tropics generally south of 10° latitude (Fig. 39a). Enhanced vertical wind shear was observed primarily over the Caribbean Sea and off the west coast of Africa between 10°-15°N (Fig. 39b), with near-average shear observed over the central subtropical North Atlantic. However, these near-average shear values remained too large to support tropical cyclone development.
A vertical profile of the atmospheric winds over the Caribbean Sea region (Fig. 40), where no tropical storms developed during the 1997 hurricane season, indicates enhanced vertical shear resulting primarily from an ENSO-related increase in the upper-level westerlies. In contrast, the near absence of vertical wind shear over the region during August-September 1995 (Fig. 40) resulted from weak easterly winds throughout the depth of the troposphere in association with La Niña conditions.
The entire eastern North Pacific featured low vertical wind shear during August-October 1997 (Fig. 39a), with generally reduced wind shear throughout the primary region of tropical storm formation between 10°-17°N and 105°-125°W (Fig. 39b). This reduced shear resulted primarily from an ENSO-related collapse of the normal easterly winds in the upper atmosphere (Fig. 41). These conditions contrast with the enhanced easterly winds and stronger-than-normal vertical shear observed throughout the region during the suppressed 1995 season.
(iii) The African easterly jet and African wave disturbances
Over the North Atlantic, another notable distinction between the inactive 1997 hurricane season and the active 1995 season was a marked difference in the percentage of tropical storms and intense hurricanes that developed from African easterly waves. These disturbances typically move across western Africa between 10°-15°N, and then propagate westward across the subtropical North Atlantic. During the peak of the hurricane season in August-September these easterly waves are in many cases the very systems which eventually intensify into tropical storms. However, the potential for this intensification is heavily influenced by two factors: the vertical wind shear (discussed in the previous section) and the structure/location of the low-level African easterly jet within which the disturbances move and evolve (Reed et al. 1977).
The easterly jet normally extends westward from western Africa to the central subtropical North Atlantic (Figs. 42a, b ) and reaches peak strength between the 600-700-hPa levels. This jet provides the "steering flow" for the easterly waves and is an important initial energy source for these disturbances as they propagate through the cyclonic shear zone (denoted by the region of red shading) along the southern flank of the jet (Reed et al. 1977). This cyclonic-shear zone is normally well-defined over the eastern tropical North Atlantic and western Africa between 8°-15°N, and overlaps the area of low vertical wind shear (Figs. 43a, b). The overlap is normally most extensive in September (Fig. 43b) during the climatological peak in the Atlantic tropical cyclone activity.
During August and September 1997, the African easterly jet was centered 2°-3° south of normal near 11°N. The jet was also broader than normal, with an abnormally weak meridional gradient in wind speed evident along its cyclonic-shear side (Figs. 42c, d). As a result, the primary region of cyclonic vorticity was displaced to south of 10°N in both months, a region generally considered too far south to favor efficient tropical cyclogenesis. Also during August 1997, the jet was weaker than normal and quite diffuse over the eastern tropical North Atlantic, with a relatively small region of cyclonic vorticity present. This area of weak cyclonic vorticity was displaced well south of the region of low vertical wind shear (Fig. 43c), with almost no overlap of the two features present. In September the easterly jet and accompanying cyclonic vorticity structure were better defined and extended farther west than normal (Fig. 42d). However, the overlap region of cyclonic relative vorticity and low vertical wind shear generally remained south of 10°N (Fig. 43d). Also during September, the vertical wind shear was much larger than normal across the central and western North Atlantic, further precluding any significant tropical development.
In contrast, during the active August and September 1995 period the African easterly jet was well-defined and centered north of normal (approximately 1°-3° latitude) to between 15°-18°N (Figs. 42e, f). Also, there was a strong meridional gradient in zonal wind in the region immediately south of the jet core in both months, resulting in large areas of cyclonic relative vorticity covering the entire eastern North Atlantic between 10°-15°N. These conditions contrast with the near-absence of cyclonic vorticity at these latitudes during 1997. Also, August and September 1995 featured an extensive overlap of the regions of large cyclonic relative vorticity and low vertical wind shear between 10°-15°N across the central and eastern North Atlantic (Figs. 43e, f), compared with no overlap of these two features in this latitude band during 1997. This favorable location and horizontal structure of the African easterly jet during August-September 1995, combined with its proximity to the extended region of low vertical wind shear, contributed to recurring tropical cyclogenesis and intense hurricane development from easterly waves throughout the period.
Back to Table of Contents
|
2026-01-20T11:08:55.043147
|
454,749
| 3.602351
|
http://nichcy.org/disability/specific/tbi
|
In This Publication:
- Susan’s story
- What is traumatic brain injury?
- How is TBI defined?
- How common is TBI?
- What are the signs?
- Is there help available?
- What about school?
- Tips for parents
- Tips for teachers
- Resources for more information
Susan was 7 years old when she was hit by a car while riding her bike. She broke her arm and leg. She also hit her head very hard. The doctors say she sustained a traumatic brain injury. When she came home from the hospital, she needed lots of help, but now she looks fine.
In fact, that’s part of the problem, especially at school. Her friends and teachers think her brain has healed because her broken bones have. But there are changes in Susan that are hard to understand. It takes Susan longer to do things. She has trouble remembering things. She can’t always find the words she wants to use. Reading is hard for her now. It’s going to take time before people really understand the changes they see in her.
What is Traumatic Brain Injury?
A traumatic brain injury (TBI) is an injury to the brain caused by the head being hit by something or shaken violently. (The exact definition of TBI, according to special education law, is given below.) This injury can change how the person acts, moves, and thinks. A traumatic brain injury can also change how a student learns and acts in school. The term TBI is used for head injuries that can cause changes in one or more areas, such as:
- thinking and reasoning,
- understanding words,
- remembering things,
- paying attention,
- solving problems,
- thinking abstractly,
- walking and other physical activities,
- seeing and/or hearing, and
- speech.
The term TBI is not used for a person who is born with a brain injury. It also is not used for brain injuries that happen during birth.
How is TBI Defined?
The definition of TBI below comes from the Individuals with Disabilities Education Act (IDEA). The IDEA is the federal law that guides how schools provide special education and related services to children and youth with disabilities.
IDEA’s Definition of “Traumatic Brain Injury”
Our nation’s special education law, the Individuals with Disabilities Education Act (IDEA) defines traumatic brain injury as…
“…an acquired injury to the brain caused by an external physical force, resulting in total or partial functional disability or psychosocial impairment, or both, that adversely affects a child’s educational performance. The term applies to open or closed head injuries resulting in impairments in one or more areas, such as cognition; language; memory; attention; reasoning; abstract thinking; judgment; problem-solving; sensory, perceptual, and motor abilities; psycho-social behavior; physical functions; information processing; and speech. The term does not apply to brain injuries that are congenital or degenerative, or to brain injuries induced by birth trauma.” [34 Code of Federal Regulations §300.8(c)(12)]
How Common is Traumatic Brain Injury?
Approximately 1.7 million people sustain traumatic brain injuries every year. (1) Among children 0-19 years old, TBI results in 631,146 trips to the emergency room annually, 35,994 hospitalizations, and 6,169 deaths. (2)
What Are the Signs of Traumatic Brain Injury?
The signs of brain injury can be very different depending on where the brain is injured and how severely. Children with TBI may have one or more difficulties, including:
Physical disabilities: Individuals with TBI may have problems speaking, seeing, hearing, and using their other senses. They may have headaches and feel tired a lot. They may also have trouble with skills such as writing or drawing. Their muscles may suddenly contract or tighten (this is called spasticity). They may also have seizures. Their balance and walking may also be affected. They may be partly or completely paralyzed on one side of the body, or both sides.
Difficulties with thinking: Because the brain has been injured, it is common that the person’s ability to use the brain changes. For example, children with TBI may have trouble with short-term memory (being able to remember something from one minute to the next, like what the teacher just said). They may also have trouble with their long-term memory (being able to remember information from a while ago, like facts learned last month). People with TBI may have trouble concentrating and only be able to focus their attention for a short time. They may think slowly. They may have trouble talking and listening to others. They may also have difficulty with reading and writing, planning, understanding the order in which events happen (called sequencing), and judgment.
Social, behavioral, or emotional problems: These difficulties may include sudden changes in mood, anxiety, and depression. Children with TBI may have trouble relating to others. They may be restless and may laugh or cry a lot. They may not have much motivation or much control over their emotions.
A child with TBI may not have all of the above difficulties. Brain injuries can range from mild to severe, and so can the changes that result from the injury. This means that it’s hard to predict how an individual will recover from the injury. Early and ongoing help can make a big difference in how the child recovers. This help can include physical or occupational therapy, counseling, and special education.
It’s also important to know that, as the child grows and develops, parents and teachers may notice new problems. This is because, as students grow, they are expected to use their brain in new and different ways. The damage to the brain from the earlier injury can make it hard for the student to learn new skills that come with getting older. Sometimes parents and educators may not even realize that the student’s difficulty comes from the earlier injury.
Is There Help Available?
Yes, there’s a lot of help available, beginning with the free evaluation of the child. The nation’s special education law, IDEA, requires that all children suspected of having a disability be evaluated without cost to their parents to determine if they do have a disability and, because of the disability, need special services under IDEA. Those special services are:
Early intervention | A system of services to support infants and toddlers with disabilities (before their 3rd birthday) and their families.
Special education and related services | Services available through the public school system for school-aged children, including preschoolers (ages 3-21).
To access early intervention: To identify the EI program in your neighborhood, consult NICHCY’s State Organizations page (online at: http://nichcy.org/state-organization-search-by-state). Early intervention is listed under the first section, State Agencies. The agency that’s identified will be able to put you in contact with the early intervention program in your community. There, you can have your child evaluated free of charge and, if found eligible, your child can begin receiving early intervention services.
To access special education and related services: We recommend that you get in touch with your local public school system. Calling the elementary school in your neighborhood is an excellent place to start. The school should be able to tell you the next steps to having your child evaluated free of charge. If found eligible, he or she can begin receiving services specially designed to address your child’s needs.
In the fall of 2011, nearly 26,000 school-aged children (ages 3-21) received special education and related services in our public schools under the category of “traumatic brain injury.” (3)
What About School?
Although TBI is very common, many medical and education professionals may not realize that some difficulties can be caused by a childhood brain injury. Often, students with TBI are thought to have a learning disability, emotional disturbance, or an intellectual disability. As a result, they don’t receive the type of educational help and support they really need.
When children with TBI return to school, their educational and emotional needs are often very different than before the injury. Their disability has happened suddenly and traumatically. They can often remember how they were before the brain injury. This can bring on many emotional and social changes. The child’s family, friends, and teachers also recall what the child was like before the injury. These other people in the child’s life may have trouble changing or adjusting their expectations of the child.
Therefore, it is extremely important to plan carefully for the child’s return to school. Parents will want to find out ahead of time about special education services at the school. This information is usually available from the school’s principal or special education teacher. The school will need to evaluate the child thoroughly. This evaluation will let the school and parents know what the student’s educational needs are. The school and parents will then develop an Individualized Education Program (IEP) that addresses those educational needs.
It’s important to remember that the IEP is a flexible plan. It can be changed as the parents, the school, and the student learn more about what the student needs at school.
Tips for Parents
Learn about TBI. The more you know, the more you can help yourself and your child. The resources and organizations listed below will connect you with a great deal of information about TBI.
Work with the medical team to understand your child’s injury and treatment plan. Don’t be shy about asking questions. Tell them what you know or think. Make suggestions.
Keep track of your child’s treatment. A 3-ring binder or a box can help you store this history. As your child recovers, you may meet with many doctors, nurses, and others. Write down what they say. Put any paperwork they give you in the notebook or throw it in the box. You can’t remember all this! Also, if you need to share any of this paperwork with someone else, make a copy. Don’t give away your original!
Talk to other parents whose children have TBI. There are parent groups all over the U.S. Parents can share practical advice and emotional support. Call NICHCY (1-800-695-0285) or find resources in your state, online at (www.nichcy.org/state-organization-search-by-state) to locate parent groups near you.
If your child was in school before the injury, plan for his or her return to school. Get in touch with the school. Ask the principal about special education services. Have the medical team share information with the school.
When your child returns to school, ask the school to test your child as soon as possible to identify his or her special education needs. Meet with the school and help develop a plan for your child called an Individualized Education Program (IEP).
Keep in touch with your child’s teacher. Tell the teacher about how your child is doing at home. Ask how your child is doing in school.
Tips for Teachers
Find out as much as you can about the child’s injury and his or her present needs. Find out more about TBI through the resources and organizations listed below. These can help you identify specific techniques and strategies to support the student educationally.
Give the student more time to finish schoolwork and tests.
Give directions one step at a time. For tasks with many steps, it helps to give the student written directions.
Show the student how to perform new tasks. Give examples to go with new ideas and concepts.
Have consistent routines. This helps the student know what to expect. If the routine is going to change, let the student know ahead of time.
Check to make sure that the student has actually learned the new skill. Give the student lots of opportunities to practice the new skill.
Show the student how to use an assignment book and a daily schedule. This helps the student get organized.
Realize that the student may get tired quickly. Let the student rest as needed.
Keep in touch with the student’s parents. Share information about how the student is doing at home and at school.
Be flexible about expectations. Be patient. Maximize the student’s chances for success.
Resources for More Information
American Academy of Family Physicians. (2010). Traumatic brain injury. Online at: http://familydoctor.org/familydoctor/en/diseases-conditions/traumatic-brain-injury.html
CDC | Centers for Disease Control and Prevention. (2010). Traumatic brain injury. Online at: www.cdc.gov/TraumaticBrainInjury/
National Institute of Neurological Disorders and Stroke. (2011, January). NINDS traumatic brain injury information page. Online at: http://www.ninds.nih.gov/disorders/tbi/tbi.htm
Brain Injury Association of America
Main website: http://www.biausa.org/
Find your state BIA affiliate: http://www.biausa.org/state-affiliates.htm
National Brain Injury Information Center: 1.800.444.6443 (brain injury information only)
National Resource Center for Traumatic Brain Injury (NRCTBI)
Family Caregiver Alliance
Information in English, Spanish, and Chinese.
TBI Recovery Center
1 | National Center for Injury Prevention and Control. (2012). Traumatic brain injury. Available online at the Centers for Disease Prevention and Control (CDC) website: http://www.cdc.gov/TraumaticBrainInjury/index.html
2 | CDC. (2010). Traumatic brain injury in the United States: Emergency department visits, hospitalizations and deaths, 2002–2006. Available online at: http://www.cdc.gov/traumaticbraininjury/pdf/blue_book.pdf
3 | Data Accountability Center. (2012). Data tables for OSEP state reported data. Available online at:
|
2026-01-25T06:42:56.063442
|
582,831
| 3.600231
|
http://www.teachwithme.com/blogs/getting-to-the-core?start=84
|
1-2-3 Come Study Johnny Appleseed With Me!
I've had several requests for some activities about Johnny Appleseed, so I designed this 10-page packet, which will help your students develop their writing skills.
Click on the link to view/download the Johnny Appleseed packet.
To help students learn some basic facts and include some singing in your day, there's a Johnny Appleseed song on YouTube that's under 2 minutes.
A while back, Disney came out with a Johnny Appleseed movie. It's only 17 minutes long and can be viewed on YouTube. This would make a nice culminating activity to your Johnny Appleseed studies.
I use coloring pages to make worksheets with letters, numbers, shapes etc. I also turn them into math sheets and connect the dots via skip counting. When my students are done with the task at hand, they can color the picture. I'm always on the lookout for coloring pages that fit my theme. A Johnny Appleseed coloring page can be found at this link. Martin also has a Johnny Appleseed coloring page, as well as education world.
Thanks for visiting today. Feel free to PIN anything from my site. If you'd like to ensure that "pinners" return to THIS blog article, click on the green title at the top; it will turn black; now click on the "Pin it" button, located on the menu. If you'd like to take a peek at all of the awesome educational items that I spend a portion of my day pinning away, click on the heart button to your right.
"I keep six honest serving men; they taught me all I know; their names are What and Why and When and How and Where and Who." -Rudyard Kipling
1-2-3 Come Spy Some Apple Fractions With Me
Whenever I do a theme, I try to incorporate a variety of standards that encompass all of my subjects. Because fractions are sometimes difficult for younger kiddos to understand, it's very important to SHOW these math concepts and then to reinforce them by having students follow up with several hands-on activities. If you teach first grade, these fraction lessons will help with the Common Core State Standard: 1.G.3
There's nothing like food to grab a child's attention, so I suggest showing children a variety of apples, explaining that they are not only red, which many of them think, but yellow and green as well.
Display an uncut apple and explain that it is a WHOLE apple, then cut the apple down the middle and explain that now the apple is cut in half, and that 2 halves make a whole. Show this by putting the two pieces back together.
Ask children if anyone knows how many pieces you'll have if you cut the apple into quarters, then show them by cutting the apple in half and then in half again. Count the 4 pieces; review that one of the 4 pieces of an apple is called a quarter or 1 fourth. Rubber-band the 4 pieces together to show that 4 pieces equal a whole apple. Ask your students to choose a partner and explain what they have just learned to each other.
While they are doing that, cut up the apples so that everyone can have a little bite of each kind. Tell them to remember which colored apple was their favorite, so you can graph the results. If you'd like a copy of this apple graph as well as all sorts of other apple graphing templates, (22 different apple graphs) click on the link.
Later, to reinforce and practice fractions, students put together an apple flip-up booklet. To make one, run off the printable on red, yellow and green construction paper.
Children choose a color and fold it in half horizontally. This is another opportunity to review the word half with them, as well as what horizontal means. Students cut the top "doors" so that they will "flip up." Remind students to open their paper, so they are less likely to cut the bottom one at the same time they are slitting the top.
Children write their name on the front of their apple flip up booklet and glue apple pictures under the "doors" to match the fraction words on the top. When everyone has completed their "flip up" review as a whole group.
Also included in this packet is a trace and write apple fraction booklet, so that the math vocabulary is reinforced in yet another way. This is a great activity for your Daily 5 Word Work. There are matching apple fraction pocket chart or word wall word cards as well. Click on the link to view/download the Apple Fraction Packet.
If you feel students need more practice, or you'd like a quick review, follow up the next day by having them do the apple pie flip up or the apple pie trace and write booklet. Click on the link to view/download the Apple Pie Fraction Packet.
At the end of the day, I review things that we've learned, using anchor charts. After we go over the concepts, I let children help decide where we should hang the latest posters. Click on the link to view/down load the Fraction Anchor Chart Posters.
Because my Y5's especially enjoyed "craftivities" (great for fine motor skill practice) I often set up a more "artsy" center, for students who completed their table top lesson.
These independent centers were highly motivating for students to get down to business and complete their work, so they could make "something special." To avoid hurt feelings, children who ran out of time, got to collect the "pieces" and materials for the project to take home.
The Fraction Apple Flip craftivity is perfect for these independent centers. Click on the link to view/download it.
To make one, simply run off the templates on red, lime green and yellow construction paper. Students cut and collate their apple so that the 1/4 is on the top, followed by the half and then the whole apple. Staple the corner and review. I've included a stem and leaf template to make the fraction sections look like an apple. Pre-cut these for students to glue to the top-back of their apple.
Finally, games are a terrific way to practice life skills, as well as reinforce standards, in an interesting and fun way. This "Spin to Win" game, is called Apple Fraction Action.
Students can play independently, or in a group of 2 or 3. Whatever apple they land on, they mark an X under the matching fraction apple on their graph. When the timer rings, students total up their columns and circle which apple they have spun the most.
I've included a whole class graph as well, so you can review, by charting everyone's answers. Click on the link to view/download the Apple Fraction Action game.
Thanks for visiting today! As always, feel free to PIN away. To ensure that "pinners" return to THIS blog article, click on the green title at the top; it will turn black, now click on the "Pin it" button located on the menu. If you'd like to see all of the really creative and educational things I spend way too much time pinning, click on the heart to your right.
I blog and design every day; hope you can pop back tomorrow for the newest freebie(s).
"Treat a [student] as he is, and he will remain as he is. Treat him as he can and should be, and he will become, as he can and should be." -Johann Wolfgang von Goethe
1-2-3 Come Write About Apples With Me!
Increase your students' writing skills with this quick and easy apple "craftivity." Beforehand, brainstorm a list of adjectives that describe apples. For a source of correct spelling, as well as ideas, write the words on the board to be used as a word bank for your students to refer to as they write their "Apple Sense." Encourage them to use at least one adjective for each section.
Review what the 5 senses are and discuss them as they apply to apples. So students know what to do, and can independently get to work, make an example of your own to share.
To add that finishing touch, have students glue their school picture to the leaf. These make an "apple-icious" bulletin board. Your caption could be: A Crop/Bushel of Great Work or Mr(s). _______________ 's Students Get To The Core Of Writing. You could also punch a hole in the stem, and suspend the apples back-to-back from the ceiling.
Click on the link to view/download the Apple Sense Writing Activity.
Thanks for visiting today. As always, feel free to PIN away. To ensure that "pinners" are able to return to THIS blog article, click on the green title at the top; it will turn black; now click on the "Pin it" button on the menu. If you'd like to see the terrific educational things I pin, click on the heart button to the right.
"I find that a great part of the information I have acquired, was by looking up something and finding something else along the way." -Franklin P. Adams
(This is so true for me, especially when I'm researching something on the Internet or Pinterest! One thing definitely leads to another as the day flies by!)
1-2-3 Come Study Antonyms and Synonyms With Me!
Since vocabulary building is such a huge part of learning to read and write, I try to think of interesting ways to do that. Puzzles and games always grab students' attention, so I thought I'd design some with an apple theme for September, and because of the many requests for antonym and synonym activities, I decided to incorporate those.
Run off on red, yellow and green construction paper; laminate and trim the 66 antonym apples to make puzzles. Use them for games too, such as Memory Match or toss them in a basket and have students choose several to play "I Have; Who Has?" The apples provide 132 words to help build student vocabularies. A blank apple template is also included.
Be sure and check out my list of 290 antonyms + a cover so students can make their own antonym word booklets.
I've also included 80 synonym leaves with 2 blank leaf templates. Run off on green construction paper, laminate and trim. Encourage students to write in synonyms of their own.
These activities are wonderful for Daily 5 Word Work. Click on the link to view/download The Antonym Apples packet
I also whipped together a little activity to help build apple-themed vocabulary specifically. Students cut off the apple word list bookmark on the left of the page, and then write the apple words in alphabetical order on the right. Click on the link to view/download the Apple Word activities.
Thanks for visiting. Feel free to PIN anything from my site. I truly appreciate your sharing. To ensure that "pinners" return to THIS blog article, click on the green title at the top of the page; it will turn black, now click on the "Pin it" button on the menu bar. If you'd like to take a look at all of the terrific educational items I spend way too much time pinning, click on the big heart to the right.
"America's future, walks through the doors of our schools each day." -Mary Jean Le Tendre
1-2-3 Come Make An Apple Puzzle With Me!
A quick, easy and fun way to get your kiddos sequencing numbers is via a number puzzle, which is also great for fine motor and higher-level thinking practice. One of my Y5 report card standards was to be able to put a puzzle together, so this was especially beneficial.
Here's How You Make A Puzzle: Choose either apple puzzles with number strips from 1-10, for younger students, or skip counting apple puzzles, with number strips that count by 10's to 100. Print off the apple puzzles on white construction paper or card stock, laminate and cut out the individual numbered strips.
Keep each puzzle in its own Ziplock Baggie. Pass the Baggies out to your students and set a timer. Challenge them to complete their puzzle before the timer rings. You can also partner students up, who have the same puzzle, so they can play "Speed" against each other, to see who can put their puzzle together the quickest.
When students are done with one, they may exchange theirs with another child who has a different puzzle. You can use these each year, or skip the lamination and give each child a puzzle to take home. They can cut their own strips, mess them up and put them together.
Another thing you can do with the puzzles, is make a puzzle flip book. I used 4 apple puzzles for my booklet. Print the puzzles and cut out the strips. Each puzzle should have a pile of strips 1-10. Lay the number strips for each puzzle on top of each other, so that the number one strip is at the top. Now make piles of all of the number ONE pieces, then a pile of the number TWO pieces etc.
Arrange the pieces so that when you make your flip book, the pages will show a mixed up puzzle. (See photo.) Glue just the number portion of each strip, to the top of the 1-10 puzzle template. Children flip the pages, to find the matching pieces, to complete each puzzle.
Click on the link to view/download the Apple Number Puzzle Packet.
Thanks for visiting today. Feel free to PIN anything from my site. To ensure that "pinners" return to THIS blog article, click on the GREEN title at the top; it will turn BLACK; now click on the "Pin it" button located on the menu bar at the top. To see all of the awesome educational items that I pin, simply click on the big heart to your right. I have a separate board for Apple Activities.
"There are no rules of architecture for a castle in the clouds." -G.K. Chesterton
|
2026-01-27T08:38:04.112952
|
31,421
| 3.707193
|
http://scripture4all.org/ISA2_help/DatabaseInfo/HOT_intro.html
|
HEBREW OLD TESTAMENT
The ISA program uses as its Hebrew text the WLC (Westminster Leningrad Codex), which reproduces the Leningrad Codex B 19a (L), considered the oldest dated manuscript of the complete Hebrew Bible.
Much of the Old Testament is in Hebrew, but a few passages (Ezra 4:8 - 6:18, 7:12-26, Jeremiah 10:11, and Daniel 2:4 - 7:28) are in Chaldee, a close cognate language. In the ISA program, words in Aramaic (Chaldee) passages are tagged with an (A).
Hebrew is a member of the Semitic family of languages. Biblical Hebrew is the name used for the Hebrew of the Old Testament. The reader should bear in mind that the pronunciation of Hebrew in the time of the writing of the Old Testament books differs from that in the time of the Masoretic punctuation. The vocalization of the Hebrew Bible represents the pronunciation as of about the 7th cent. A.D. Much of the discussion about the pronunciation of the Hebrew text is often hypothetical.
Strictly speaking, the sounds of a language can only be studied in the spoken form. But in dealing with ancient languages, we have access only to written forms. When the orthography of a language is not readily recognized by an unfamiliar reader, a system of transliteration is generally employed. Such a system is neither phonemic nor phonetic, but rather deals with identification.
The Hebrew letters have been preserved for us in acrostics, so that we know that there are twenty-two characters in the order given in these literary forms (cf. Ps. 34, 111, 112, 119, and 145; Prov. 31:10-31; Lam. 1, 2, 3 and 4). The ISA program uses in its transcription a Latin letter for each Hebrew character. The following approach has been taken:
The Hebrew text used in the ISA program includes not only the alphabet but also the vowel points, other pointing, the accents, and the sentence indication.
The original Hebrew alphabet was borrowed from the Phoenicians, who either invented it or borrowed it from an unknown source. The writing of this alphabet was from right to left. The 'square letters' of our Hebrew Bible, according to Jewish and Christian tradition, were introduced in the time of Ezra (5th cent. B.C.). At some point after adopting the Aramaic writing, different forms were developed for five of the consonants (k, m, n, ph, tz), depending on whether they stood in final position (word-end) or nonfinal.
As Hebrew became more and more a dead language, the reading of the text became increasingly difficult. The traditional pronunciation was preserved by the Masoretes (c. 500-1000 A.D.), who added marks to the 'consonantal text' to indicate the vowels.
Kethib - Qere
In certain places the Masoretes preserved a tradition that differed from the 'consonantal text'. Since the 'consonantal text' was considered sacred and inviolable, the Masoretes added the traditional reading in the margin, and placed the vowels of the traditional reading, together with a mark calling attention to the note, on the 'consonantal text'.
The 'consonantal text' is called 'kethib', 'written'. The Masoretic addition of vowel points and marginal letters is called 'qri' or 'qere', 'read'.
Some common words are always read according to the 'qere', which is not placed in the margin. This phenomenon is referred to as a 'perpetual qere'. The most common examples of perpetual qere are:
יהוה [ieue] ('Yahweh', the proper name of God) is pointed either with the vowels of אדני [adni] ('Lord'), or with the vowels of אלהים [aleim] ('Elohim'), and is to be pronounced as the word whose vowels it borrows.
הוא [eua] occurs throughout the Pentateuch in place of היא [eia], the normal spelling of the third person, feminine, singular pronoun ('she'). There is no clear explanation for this.
With the exception of the 'perpetual qere', all Kethib-Qere variants are noted in the ISA program and have been accordingly translated.
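For illustration only, here is a small sketch of the kind of letter-for-letter Latin transcription described above. It is not the actual ISA table; the mappings are inferred solely from the example words quoted in the text (ieue, adni, aleim, eua, eia) and cover only the letters that appear in them.

```python
# Illustrative sketch, not the ISA program's own table. Mappings are inferred
# only from the example transcriptions in the text (ieue, adni, aleim, eua, eia).
TRANSCRIPTION = {
    "\u05D0": "a",  # aleph
    "\u05D3": "d",  # dalet
    "\u05D4": "e",  # he
    "\u05D5": "u",  # vav
    "\u05D9": "i",  # yod
    "\u05DC": "l",  # lamed
    "\u05DE": "m",  # mem
    "\u05DD": "m",  # final mem
    "\u05E0": "n",  # nun
}

def transcribe(word):
    """Transcribe an unpointed (consonantal) Hebrew word letter by letter."""
    return "".join(TRANSCRIPTION.get(ch, "?") for ch in word)

print(transcribe("\u05D9\u05D4\u05D5\u05D4"))        # yod-he-vav-he -> "ieue"
print(transcribe("\u05D0\u05DC\u05D4\u05D9\u05DD"))  # aleph-lamed-he-yod-final mem -> "aleim"
```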
|
2026-01-18T19:58:19.495642
|
532,571
| 3.619654
|
http://www.ucsfbenioffchildrens.org/conditions/stroke/index.html
|
During a stroke, a blood vessel in the brain becomes blocked or bursts. Stroke is rare in children and management of the disease requires the expertise of many specialists, as well as the most advanced diagnostic and treatment approaches.
Pediatric strokes are divided into two categories: neonatal strokes, or those occurring at birth or shortly after birth, and childhood strokes.
Neonatal strokes are not well understood. It is thought that they happen around the time of birth, possibly related to many factors that cause a blood clot to travel to the brain, resulting in a stroke. Babies who have had a neonatal stroke are unlikely to have recurrent strokes.
One in 4,000 babies has a neonatal stroke and they are largely unpreventable. There is nothing that a mother can do during pregnancy to either increase or decrease a baby's risk of having a stroke, except avoid using certain illicit drugs. Drugs like cocaine and amphetamines used during pregnancy can increase the risk of stroke in a baby.
About four children per 100,000 experience a childhood stroke each year and most occur in children under 2 years of age. The cause of stroke in children differs from neonatal strokes and from that of adults. Heart problems, genetic disorders, certain infections, trauma to the head and blood disorders, such as sickle cell anemia, have been shown to increase a child's risk for stroke. In some cases, the cause is unknown.
Children have a higher risk of recurrent strokes than adults or babies born with a stroke, although children may recover better than adults. Stroke symptoms vary, affecting many aspects of a child's development including movement, speech, behavior and learning, though improvements in these areas may be made for several months after a stroke. About a third of children will have a permanent disability after a stroke.
Children with stroke are cared for at the UCSF Pediatric Stroke and Cerebrovascular Disease Center, the only center in the country offering comprehensive care by the world's leading experts in childhood stroke and cerebrovascular disease.
To make a donation online to the Pediatric Stroke and Cerebrovascular Disease Center, visit https://makeagift.ucsf.edu/children, click on "Choose a designation," and select "Pediatric Stroke Research."
There are two main types of strokes: ischemic strokes, in which a blood vessel supplying the brain becomes blocked, and hemorrhagic strokes, in which a blood vessel bursts and bleeds into or around the brain.
Each child may experience symptoms of stroke differently, depending on the area of their brain that has been affected.
Early diagnosis of stroke is important for many reasons, including to prevent a second stroke — which is a higher risk for children — and to start a treatment program for recovery. If you think your child has suffered from a stroke, contact your child's doctor immediately for a diagnosis.
In most cases, your child's pediatrician will refer you to a neurologist, a board-certified doctor who specializes in nervous system disorders. The neurologist will perform a thorough physical examination of your child to determine if he or she has had a stroke. If so, the doctor will locate the affected blood vessel and determine the cause of the stroke.
Your child's examination may include:
Your child's doctors will determine the cause of your child's stroke and then a treatment plan to meet your child's individual needs. Recovery differs for each child, depending on the area of the brain affected, the cause of the stroke and any underlying medical conditions your child may have.
Many therapies are available to help prevent a stroke from recurring. Medications — such as aspirin, heparin or warfarin — may be used to thin the blood and make it less likely to clot.
Other therapies, such as occupational therapy, physical therapy, and speech and language therapy, play an important part in your child's recovery. These therapies typically begin as soon as possible after the stroke.
Treatment is usually most intense in the early stages after a stroke. A therapist may offer ideas for hands-on therapy for your family and child's school to help your child participate fully in play and other activities. Equipment, such as ankle or hand splints, may help your child move more easily and reduce the risk of permanent joint stiffness.
Reviewed by health care specialists at UCSF Benioff Children's Hospital.
Neuro-Intensive Care Nursery
505 Parnassus Ave., 15th floor
San Francisco, CA 94143
Phone: (415) 353-1565
Fax: (415) 353-1202
|
2026-01-26T14:57:18.272591
|
837,201
| 3.92249
|
http://www.wildlifeadaptationstrategy.gov/adaptation-planning.php
|
Climate change is already here. It is clear from current trends and future projections that we are now committed to a certain amount of changes and impacts, making climate adaptation planning a critical part of responding to this complex challenge. Coordinated adaptation planning can help limit the damage caused by climate change to our natural resources and communities, and will require new approaches, additional resources, and a pragmatic perspective.
Most simply, climate adaptation means helping people and natural systems prepare for and cope with the effects of a changing climate. More specifically, the IPCC defines climate adaptation as:
"Adjustment in natural or human system in response to actual or expected climatic stimuli or their effects, which moderates harm or exploits beneficial opportunities."
Climate adaptation is an essential complement to climate change mitigation, which refers to efforts to decrease the rate and extent of climate change through reducing greenhouse gas emissions or enhancing carbon uptake and storage.
This Strategy is a key component of the growing effort by federal, state and tribal governments and non-governmental entities to reduce risks and impacts of climate change. The Strategy drew from existing adaptation efforts by States, Federal agencies and others and is designed to complement and support such efforts.
At the federal level, several climate change planning efforts are underway that relate to the National Fish, Wildlife and Plants Climate Adaptation Strategy, and involve multiple agencies, departments, and jurisdictions. The Council on Environmental Quality (CEQ) is working to help align these efforts and to ensure a coordinated and comprehensive response to the impacts of climate change on public health, communities, coasts, wildlife, and water resources. Efforts include:
- NEW: President Obama recently laid out a new comprehensive plan to reduce greenhouse gas emissions, prepare our country for the impacts of climate change, and lead global efforts to fight it. The Plan is a recognition that climate change is unequivocal, its primary cause is greenhouse gas pollution from burning fossil fuels, and it is threatening the health of our communities, natural systems, and the economy. The Plan specifically references the development of the national strategy to address impacts of climate change on fish, wildlife and plants, and supports important adaptation as well as mitigation efforts. Learn more.
- U.S. Global Change Research Program: The U.S. Global Change Research Program (USGCRP) coordinates and integrates federal research on changes in the global environment and their implications for society. The USGCRP began as a presidential initiative in 1989 and was mandated by Congress in the Global Change Research Act of 1990. The USGCRP oversees the National Climate Assessments including the 2009 Global Climate Change Impacts in the United States report.
- National Climate Assessment: The U.S. is conducting a comprehensive National Assessment of climate impacts and response options every four years as required by law. The National Assessment provides a mechanism for engaging communities at the regional, tribal, state, and local levels to build a shared vision of our nation's most pressing challenges related to climate change.
- Interagency Climate Change Adaptation Task Force: The Interagency Climate Change Adaptation Task Force (Task Force) was established in 2009 to provide recommendations on how the policies and practices of Federal agencies can be made compatible with and reinforce a national climate change adaptation strategy. CEQ is co-chairing the Climate Change Adaptation Task Force with the Office of Science and Technology Policy (OSTP) and NOAA. The Task Force cuts across sectors (water, health, coasts, insurance, etc.) and is comprised of over 200 federal agency staff, broken into various workgroups.
- The Task Force has issued two Progress Reports to the President in 2010 and 2011, which included the recommendation to develop the National Fish, Wildlife, and Plants Climate Adaptation Strategy and identified a set of guiding principles that public and private decision-makers should consider in designing and implementing adaptation strategies.
- National Action Plan for Freshwater Resources: In October of 2011, CEQ released the National Action Plan: Priorities for Managing Freshwater Resources in a Changing Climate to provide an overview of the challenges that a changing climate presents for the management of the Nation's freshwater resources, and describe actions that Federal agencies propose to take in response to these challenges. The Interagency Task Force’s Water Resources Working Group led the development of this national plan with input from key stakeholders.
- National Ocean Policy: In July of 2010 the President called for the development of a Strategic Action Plan to Strengthen the Resilience of Coastal, Ocean, and Great Lakes Ecosystems through Executive Order 13547. The Order established a National Ocean Policy and tasked the interagency National Ocean Council with developing strategic action plans to achieve nine national priority objectives that address some of the most pressing challenges facing our ocean, coasts, and Great Lakes.
- In January of 2012, the National Ocean Council released a draft National Ocean Policy Implementation Plan to address some of the most pressing challenges facing the ocean, our coasts, and the Great Lakes. It includes a series of actions to address the Resiliency and Adaptation to Climate Change and Ocean Acidification priority objective, one of nine priority objectives identified by the National Ocean Policy (NOP).
- Landscape Conservation Cooperatives: Landscape conservation cooperatives, or LCCs, are self-directed, applied conservation science partnerships that will drive successful conservation at landscape scales. Collectively they create a national network of interdependent partnerships between the U.S. Fish and Wildlife Service, the U.S. Geological Survey, other federal agencies, states, tribes, NGOs, universities and other entities which will inform resource management decisions to address national-scale stressors, including climate change.
- Agency Adaptation Planning: In June of 2012, federal agencies submitted Climate Change Adaptation Plans evaluating climate and extreme weather related risks to their missions, policies, and services, as part of their annual Sustainability Plan.
A wide variety of state-level climate change planning is either in place or in progress across the country. The majority of states have produced or are developing climate action plans laying out goals for reducing greenhouse gas emissions, and many have developed impact assessments to examine how climate change will continue to affect local resources, communities, infrastructure, and landscapes. Many states have also worked to integrate management recommendations for habitats or species impacted by climate change into existing State Wildlife Action Plans.
In addition, a growing number of states are putting forward true climate adaptation plans or strategies, detailing strategies for addressing and reducing climate impacts and planning for coming changes. These adaptation efforts have been instigated through both executive orders from the governor as well as legislative mandates, and often involve cross-sector working groups or advisory councils with representatives from various state agencies as well as academics, industry, and the public.
Many Tribes across the country are working to plan and prepare for coming climate changes on their lands and natural resources. For example, the Swinomish Tribe has recently put together an adaptation plan to address potential impacts to the Swinomish Reservation from climate change. The Tribe is also facilitating the development of the Skagit Climate Science Consortium, which works to assess ongoing research, identify gaps in the science, develop Skagit River Basin-specific climate models, and explain those models to local decision makers. A variety of other Tribes and Native American groups have been working on planning at the regional and local level: look for more examples to be posted here shortly.
Visit the Media & Materials section for more examples of federal, state, and tribal climate adaptation plans and efforts.
|
2026-01-31T06:55:20.888210
|
843,120
| 3.720453
|
http://www.wired.com/wiredscience/2011/08/how-fast-is-falling-rain/
|
How Fast Is Falling Rain?
Twitter person David Cox (@dcox21) asks:
Read a random fact yesterday that said the “average rain drop falls at 17mph.” Is that reasonable?
Let the physics begin. You might think: hey, won't the speed depend on how high the water started? Well, it would if air resistance on the water drop were not important. However, I suspect that the rain will fall at terminal velocity. Terminal velocity is the case when the air resistance on the object is equal to the gravitational force on the object. When this happens, the net force is zero (the zero vector) and the object falls at a constant speed.
Here is a diagram of a water drop at terminal speed.
Since the air resistance force depends on the speed of the object (but the gravitational force does not), there is one speed for which these two forces add up to the zero vector. Near the surface of the Earth, the magnitude of the gravitational force can be modeled as:
F_grav = m g
Where g is the local gravitational field (not the acceleration due to gravity – that is a non-good name for it). And what about the air resistance? It can probably be modeled as:
F_air = (1/2) ρ A C v²
Where:
- ρ is the density of air (about 1.2 kg/m3).
- A is the cross-sectional area of the object. If the object was a sphere, this area would be the area of a circle.
- C is the drag coefficient. This depends on the shape of the object. A cone and a flat circle will have the same A, but different drag coefficients.
- v is the magnitude of the velocity of the object with respect to the air.
- It won’t matter for this case too much, but the direction of the air resistance force is in the opposite direction to the velocity.
At terminal velocity, the magnitudes of these two forces will be equal. I can write that as:
m g = (1/2) ρ A C v_t²
Now, what about the mass (m)? Let me assume that it is made of water (like most rain) and is spherical (even though that is not likely – it would probably be “rain drop shaped”). If I call the density of water ρw and the radius of the drop r, then the mass would be:
m = ρw (4/3) π r³
Putting this into the “weight = air resistance” expression above, along with A = π r² for the cross-sectional area, I get:
v_t = sqrt( (8 ρw g r) / (3 ρ C) )
The cool thing here is that the terminal speed of the water drop depends on the size (radius). Larger drops will have a larger terminal velocity. So, could you just make a water melon sized water drop? No. Why not? Because at some point, the force from the air on the drop is going to break the water drop apart. The surface tension holding the drop together just won’t be enough to maintain its drop status.
Then how big can it get? I have no idea. Oh, and then there is the problem of real drop instead of spherical drops. Let me look at that first. Wikipedia lists the coefficient of drag for a smooth sphere as 0.1. A rain drop should be less than this – but how much less? Well, a rain drop would take some of the water to form some sort of tail. This would decrease the cross sectional area as well as decrease the drag coefficient. I am not sure how to calculate the volume of a non-spherical rain drop, so for now I will just use a spherical drop with a drag coefficient of 0.08. I know that is wrong, but it will give me an idea about the terminal speed.
Now, how big should it be? How about I don’t decide. Instead I will plot the terminal speed for a range of rain drop sizes. Let me look at drops from 0.5 mm to 5 mm. Here is that plot.
Well, the original question asked about the speeds in units of miles per hour. Here is the same plot but with different units.
Based on my estimations, 17 mph would be on the low end – but possible. It could be that I grossly overestimated the size of a raindrop.
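For anyone who wants to play with the numbers, here is a minimal Python sketch of the terminal-speed expression derived above. It is not the original plotting code; the density values, the guessed drag coefficient of 0.08, and the radius range simply follow the assumptions stated in the post.

```python
import math

# Sketch of v_t = sqrt(8 * rho_w * g * r / (3 * rho_air * C)) from the
# derivation above. Values follow the post's assumptions (spherical drops,
# rho_air ~ 1.2 kg/m^3, guessed drag coefficient C ~ 0.08).
RHO_AIR = 1.2        # kg/m^3, density of air
RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.8              # N/kg, local gravitational field
C_DRAG = 0.08        # guessed drag coefficient for a rain drop

def terminal_speed(radius_m, c=C_DRAG):
    """Terminal speed (m/s) of a water drop of the given radius."""
    return math.sqrt(8.0 * RHO_WATER * G * radius_m / (3.0 * RHO_AIR * c))

MS_TO_MPH = 2.23694
for r_mm in (0.5, 1.0, 2.0, 5.0):
    v = terminal_speed(r_mm / 1000.0)
    print(f"r = {r_mm} mm: {v:5.1f} m/s = {v * MS_TO_MPH:5.1f} mph")
```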
Homework: Yes, there is homework. If the rain drop has a radius of 0.5 mm, from how high would it have to drop to get pretty close to the terminal velocity?
As usual, I rush into things without exploring them in more depth. My assumption of a raindrop-shaped raindrop appears to be bogus. Who would have guessed that? Anyway, here are some very useful links from commenters (Jens and Charles) and a large thanks to them.
- A German kid’s video showing the shape of a raindrop (I think).
- A nice summary of findings for falling rain drops.
- Terminal Velocity of Rain Drops Aloft – paper from the Journal of Applied Meteorology (pdf)
- Here is another link from @swansontea: Bad Rain: Raindrops are not tear drop shaped.
|
2026-01-31T09:23:24.984114
|
645,759
| 3.776731
|
http://www.globalissues.org/article/761/democracy
|
Author and Page information
Democracy (“rule by the people” when translated from its Greek meaning) is seen as one of the ultimate ideals that modern civilizations strive to create, or preserve. Democracy as a system of governance is supposed to allow extensive representation and inclusiveness of as many people and views as possible to feed into the functioning of a fair and just society. Democratic principles run in line with the ideals of universal freedoms such as the right to free speech.
Importantly, democracy supposedly serves to check unaccountable power and manipulation by the few at the expense of the many, because fundamentally democracy is seen as a form of governance by the people, for the people. This is often implemented through elected representatives, which therefore requires free, transparent, and fair elections, in order to achieve legitimacy.
The ideals of democracy are so appealing to citizens around the world, that many have sacrificed their livelihoods, even their lives, to fight for it. Indeed, our era of “civilization” is characterized as much by war and conflict as it is by peace and democracy. The twentieth century alone has often been called “the century of war.”
In a way, the amount of propaganda and repression some non-democratic states set up against their own people is a testament to the people’s desire for more open and democratic forms of government. That is, the more people are perceived to want it, the more extreme a non-democratic state apparatus has to be to hold on to power.
However, even in established democracies, there are pressures that threaten various democratic foundations. A democratic system’s openness also allows it to attract those with vested interests who use the democratic process as a means to attain power and influence, even if they do not hold democratic principles dear. This may also signal a weakness in the way some democracies are set up. In principle, there may be various ways to address this, but in reality, once power is attained by those who do not genuinely support democracy, rarely is it easily given up.
This web page has the following sub-sections:
- Pillars of a functioning democracy
- Challenges of democracy
- Paradoxes of Democracy
- Voting in non-democratic forces
- Minorities losing out to majorities
- The fear of the public and disdain of democracy from elites (while publicly claiming to support it)
- Democracy requires more propaganda to convince masses
- Limited time in power means going for short term policies
- Anti-democratic forces undermine democracy using democratic means
- Those with money are more likely to be candidates
- Confusing political ideology with economic ideology
- Democracies may create a more effective military
- Democracy, extremism and War on Terror; people losing rights
- Democratic choice: parties or issues?
- Election challenges
- Democratic governments and the military
- Powerful countries: democratic at home; using power, influence and manipulation abroad
- Democracy of Nation States in the age of Globalization
- The dangers of apathy in a democracy
- How can democracy be safe-guarded?
The word “democracy” literally means “rule by the people”, taken from the Greek terms, demos (meaning “people”), and kratos (meaning “rule”). It is a political concept and form of government, where all people are supposed to have equal voices in shaping policy (typically expressed through a vote for representatives).
Democracy past and present
The Ancient Greek philosopher, Aristotle, the student of Plato and teacher to Alexander the Great, is considered one of the most important founders of what is now described as Western philosophy. In his work, Politics, he offered some comparisons with other forms of government and rule, but also included some warnings,
It is often supposed that there is only one kind of democracy and one of oligarchy. But this is a mistake.
We should ... say that democracy is the form of government in which the free are rulers, and oligarchy in which the rich; it is only an accident that the free are the many and the rich are the few.... And yet oligarchy and democracy are not sufficiently distinguished merely by these two characteristics of wealth and freedom. Both of them contain many other elements ... the government is not a democracy in which the freemen, being few in number, rule over the many who are not free ... Neither is it a democracy when the rich have the government because they exceed in number.... But the form of government is a democracy when the free, who are also poor and the majority, govern, and an oligarchy when the rich and the noble govern, they being at the same time few in number.
— Aristotle, Politics, Part 4, 350 B.C.E
The following timeline offers only the briefest overview of democracy throughout the years. Of course, the earlier forms of democracy were not close to what we consider as democracy today, but were often important precursors or “proto-democracies” that laid down important foundations and principles. The examples shown here are also not complete: not every instance is mentioned or detailed; rather, this is a sampling of the more common or interesting ones to give an idea:
Ancient:

- 600–5 B.C., Ancient Greece: Various forms of rule, ultimately resulting in Athenian Democracy, a form of “direct democracy,” as opposed to representative democracy. It was an exclusive club, however, as only adult male Athenian citizens who had completed military training could vote. Women, slaves, and foreigners could not.
- 500 B.C. – 27 B.C., Ancient Roman Republic: Planted the seeds of “representative democracy.” Like other systems of the same period, it was exclusive, and not like the democracies we consider today. After this time, Rome had an emperor characterized by dictatorial rule, and eventual decline.
- 600 B.C. – 400 A.D., Ancient India: Early forms of democracy, republics and popular assemblies, especially where Buddhism and Jainism were more prevalent. (Today, Hinduism is the main religion in India, but in ancient times Brahmanism, as it has also been referred to, co-existed with Buddhism and Jainism. While Brahmanism was also the main religion then, Buddhism and Jainism were far more widespread.) The caste system, though not as rigid then as it would later become, nonetheless meant it was not a type of democracy we think of today, just as Athenian democracy and the Roman republic would not be. (See Democracy in Ancient India by Steve Muhlberger, Associate Professor of History, Nipissing University, for more details.)

Middle Ages:

- 5th to 16th Century, throughout Europe: Small examples of elections and assemblies.
- 1265 onward, England: Parliamentary system. The Magna Carta restricted the rights of kings. The vote was limited to a small minority. The monarchy’s influence over Parliament would eventually wane.
- 1688, England: The Revolution of 1688 saw the overthrow of King James II, paving the way for a stronger parliamentary democracy, strengthened by the 1689 English Bill of Rights.

18th Century to present:

- 1788, United States of America: Adoption of the Constitution provided for an elected government and protected civil rights and liberties. Considered the first liberal democracy, but it started off with limitations: voting by adult white males only (before 1788, propertied white males only). Women and slaves (predominantly African) would have to wait a long time still.
- 1789, France: The French Revolution and the Declaration of the Rights of Man and of the Citizen, a precursor to international human rights conventions, for it was universal in nature (but still only applied to men, not women or slaves). This and the American Constitution are considered influential for many liberal democracies to come after.
- 1917, Russia: The Bolshevik Revolution saw the autocratic Tsar replaced. Led by a Marxist-Leninist ideology, a form of democracy known as Soviet Democracy was initially supported, where workers elected representative councils (soviets). This was a form of “direct democracy.” However, the Russian Civil War and various other factors led this to be replaced by more bureaucratic and top-down rule, ultimately resulting in Stalin’s authoritarian rule, with any remaining democracy appearing only on paper, not in practice. In other words, democratic rule combined with Communist economic ideology quickly gave way to paranoia and authoritarian rule combined with Communist economic ideology.
- World War II, Europe: Democracies give way to fascists in an attempt to retain or increase power. Allied forces also become more militarized to counter Hitler. With the help of the US, all eventually become democracies after the War.
- Post World War II, colonized “Third” World: Colonial breaks for freedom as Europe weakened itself during World War II. Many breaks for freedom saw fledgling democracies overthrown by Western democracies which favored dictatorships to retain key geostrategic control. Some new democracies were claimed to be under Soviet influence. In some cases this may have been true; in many others, it was just an excuse. (See this site’s Control of Resources section for more detail.)
- Post World War II, Africa: Initially characterized by corrupt dictatorships, Africa now has over 40 countries that have moved towards participatory elections and democratic tendencies, though many challenges still remain. Some are democracies on paper, while others flaunt it as and when it suits (a recent example seems to be Zimbabwe and Robert Mugabe).
- 1947, India: Gains independence from British rule, splitting into India, Pakistan, and East Pakistan (later Bangladesh). India becomes the world’s largest democracy, while the other two struggle with both dictatorships and democracy.
- Post World War II, Latin America: Initially characterized by numerous dictatorships, often supported into power by the US. Almost all are now democracies, struggling more with economic ideology issues.
- Post World War II, Asia: Some countries remain dictatorships. Many transition eventually into democracies.
Is Democracy a Western or Universal Value?
Democracy is often described as one of the greatest gifts the West has given to the world. It certainly is one of the greatest gifts to humanity. But is it a “Western” principle, or a more universal one? The previous timeline suggests there is some universality.
A common Euro-centric view of world history describes ancient Greek democracy as Western democracy, with ancient Greece as part of that Western/European identity.
Yet, as John Hobson writes in his anti-Eurocentric book, The Eastern Origins of Western Civilisation (Cambridge University Press, 2004), ancient Greece and Rome were not considered part of the “West” until much later; that is, Greece and Rome were part of a wider Middle Eastern center of civilization, in some ways on the edge of it, as more was happening further eastward.
Western Europe adopted or appropriated ancient Greek achievements in democracy as its own much later when it needed to form a cohesive ideology and identity to battle the then rising Islam and to counter its defeats during the Crusades.
And, as also noted much further below, it was the Middle East in the 9th – 12th centuries that preserved a lot of Ancient Greek and Roman achievements after Rome collapsed (which Europe then thankfully also preserved when the Middle East faced its own invasion and collapse at the hands of the Mongols).
The point here is that democracy is perhaps more universal than acknowledged, and that there is a lot of propaganda in how history is told, sometimes highlighting differences amongst people more than the similarities and cross-fertilization of ideas that also feature prominently in history. After all, great battles throughout the ages are often celebrated far more than cross-cultural fertilization of ideas, which requires more study and thought and doesn’t make for epic tales!
As discussed further below, there are elements within both Western and non-Western societies that are hostile to democracy for various reasons.
State of democracy around the world today
Wikipedia’s Democracy article collates interesting images from organizations that research democracy issues. Some of these images show what countries claim to be democracies, and to what degree they really are (or are not) democratic.
As George Orwell noted, the word democracy can often be overloaded:
In the case of a word like DEMOCRACY, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using the word if it were tied down to any one meaning. Words of this kind are often used in a consciously dishonest way. That is, the person who uses them has his own private definition, but allows his hearer to think he means something quite different.
— George Orwell, Politics and the English Language
While most countries claim themselves to be democratic, the degree to which they are varies, according to Freedom House, which surveys political and human rights developments, along with ratings of political rights and civil liberties.
Perhaps it is no wonder Churchill once said,
Democracy is the worst form of government, except all the others that have been tried.
— Sir Winston Churchill
On the one hand, then, there has never been as much democracy as at present. And yet, many countries suffer from poor representation, election anomalies, corruption, “pseudo democracy”, etc. While these issues will be explored further below, first a look at some of the fundamentals of a democratic system.
Pillars of a functioning democracy
In a democratic government, key principles include free and open elections, the rule of law, and a separation of powers, typically into the following:
- Legislature (law-making)
- Executive (actually governing within those laws)
- Judiciary (system of courts to administer justice)
It is felt that separating these powers will prevent tyrannical rule (authoritarianism, etc). Critics of this may argue that this leads to extra bureaucracy and thus inefficient execution of policy.
Not all countries have or need such a complete separation and many have some level of overlap. Some governments such as the US have a clear separation of powers while in other countries, such as the United Kingdom, a parliamentary system somewhat merges the legislature and executive.
An edition of a Wikipedia article looking at the separation of powers noted that “Sometimes systems with strong separation of powers are pointed out as difficult to understand for the average person, when the political process is often somewhat fuzzy. Then a parliamentarian system often provides a clearer view and it is easier to understand how ‘politics are made’. This is sometimes important when it comes to engaging the people in the political debate and increase the citizen [participation].”
This suggests that education about politics is also important. The US, for example, attempts to teach children about their system of governance. In the UK (also writing from personal experience), this is not typically done to the same extent, if at all. This may also be a factor in why the further separation of powers in the US has been reasonably successful.
Some people talk of the difference between a minimalist government and direct democracy, whereby a smaller government run by experts in their field may be better than involving all people in all issues at all times. In a sense this may be true, but the risk with this approach is that if it is seen to exclude people, such governments may lose legitimacy in the eyes of the electorate. Direct democracy, on the other hand, may encourage activism and participation, but the concern is whether this can be sustained for a long period of time. (There are many other variations, which all have similar or related problems: how to handle efficiency, participation, informed decision making and accountability, etc. Different people use different terms such as deliberative democracy, radical democracy, etc.)
The historical context for some countries may also be a factor. Many examples of successful democracies include nations that have had time to form a national identity, such as various European or North American countries.
Other nations, often made up of many diverse ethnic groups, may find themselves forced to live together. A major example would be most African countries, whose artificial borders resulted from the 1885 Berlin Conference, where European colonial and imperial powers (not Africans) carved up Africa (for the colonial rulers’ own benefit, not for Africans).
Such nations may find themselves in a dilemma: an intertwined set of branches of government may allow democratic institutions to be strengthened, but it may also lead to corruption and favoritism of some groups over others. Furthermore, many such countries emerged from the ravages of colonialism only to be followed by dictatorships and, in some cases, social and ethnic tensions freed from the restraints of authoritarian rule. As such, many poor nations in this situation do not have the experience, manpower or resources to put an effective democracy in place immediately.
It is therefore unclear if what is determined as best practice for an established democracy is necessarily, or automatically, the recipe for a newly emerged democracy. For example, a country coming out of dictatorship may require a strong leadership to guide a country towards further democracy if there are still elements in the society that want the old ways to come back. This might mean more integration of powers, to prevent instability or the old rulers attempting to manipulate different branches of government, for example. However, in this scenario, there is of course a greater threat that that strong leadership would become susceptible to being consumed by that power, and it may become harder to give it up later.
Getting this one aspect of governance right, let alone all the other issues, is therefore incredibly challenging in a short time. As such, an effective democracy may not be easy to achieve for some countries, even if there is overwhelming desire for it.
In addition to those formal aspects of a functioning democracy, there are other key pillars, for example,
- Civilian control of the military
Civilian control over the military is paramount. Not only must the military be held to account by the government (and, by extension, the people), but the military leadership must fully believe in a democratic system if instability through military coups and dictatorships is to be avoided. (This is discussed further below.) Indeed, some nations do not have full-time professional armies so that coups and military takeovers are less likely. Others, notably the more established powers, typically do have them, because they have had a recent history of war and their place on the world stage may make it seem a necessary requirement.
To achieve the openness that transparency and accountability give, there is an important need for a free press, independent of government. Such media often represent the principle of the universal right to free speech. This combination is supposed to allow people to make informed choices and decisions, thereby contributing productively to political debate.
Transparency and accountability also requires more bureaucracy as decisions and processes need to be recorded and made available for the general public to access, debate and discuss, if necessary. This seems easy to forget and so it is common to hear concerns raised about the inefficiency of some governmental department.
Efficiency, however, should not necessarily be measured in terms of how quickly a specific action is completed or even how much it costs (though these can be important too). The long-term impact is often important and the need to be open/transparent may require these extra steps.
A simple comparison on procuring a service may help highlight this:
- A responsible government may request a tender for contract. An open process to document these bids and how/why a final choice was made is important so that there is openness, understanding, and accountability to the people. For example, the media and citizenry can use this to determine whether or not decisions have been made with the best interests in mind. Some of the higher profile issues may require sustained public discourse and expensive media coverage, too.
- With a private company, the same process could be followed, but all workers (especially in a large company) and shareholders are not equal, and the company’s board is usually entrusted to make many decisions quickly. They do not have to record every single detail or even request an open tender for contract if they don’t want to. The “market” and the shareholders will presumably hold the company to account.
Even when companies are subject to these same requirements of openness (to shareholders, to whom public companies are accountable), governments may have requirements that companies do not have, such as providing universal access to a service such as health care. Companies, however, can choose what market segments they wish to go for.
A government may therefore incur costs and expenditures that are not needed by a private company. This raises legitimate concerns about excessive drives for privatization being led by misguided principles, or the wrong type of efficiency. Conversely, one could hide behind the excuse of democratic accountability if accused of not acting quickly and decisively enough. Openness, transparency, independent media, etc. are therefore key to assuring such processes are not abused in either direction.
[Side note: To avoid claims of inefficient government being just based on ideology, perhaps the cost of being open and transparent in all decision making could be more thoroughly factored into these economic calculations. This is something not typically required in private companies and organizations, for example, which can then appear more efficient. There is also the counter point that some things cannot be efficiently done or developed by committee, but instead by specialized groups that get to focus on the task at hand.
There are, of course, many legitimate concerns and examples of unnecessary/wasteful bureaucratic processes in government, as well as in the private sector which do require addressing. A look at works by William Easterly’s White Man’s Burden, or J.W. Smith’s World’s Wasted Wealth II would give many detailed examples of this.]
Challenges of democracy
Low voter turnouts
There have been numerous cases where democracies have seen leaders elected on low voter turnouts. In the US for example, in recent elections, the President has been elected with roughly 25% (one quarter) of the possible votes because a full 50% did not vote, and the “close” election race saw the remaining 50% of the votes split almost equally between the final Democrat and Republican candidates. Other countries, such as the UK, have also seen such phenomena.
Does an “elected” official represent the people if turnout is too low?
What does it mean for the health of a democracy if 75% of the electorate, for whatever reason, did not actually vote for the “winner”?
Such a low voter turnout, however, represents a concern for a genuine democracy, as a significant percentage of the electorate has either chosen not to vote, or not been able to vote (or had their votes rejected).
Some countries, such as Belgium, mandate voting by law. Others require a clear percentage of votes for a winner to be declared, which may result in the formation of (oftentimes fragile) coalitions to get enough votes in total.
As far as I can find, there are no countries that entertain the thought of negative votes, or of voting for a list of candidates in order of preference, which might help provide some further indication as to which parties are really the popular ones.
For example, many blamed Ralph Nader for Al Gore’s loss to George Bush in the infamous 2000 US elections (ignoring for the moment accusations that Bush never won in the first place). If there had been the ability to list preferred candidates in order of preference, would many of Nader’s supporters have put Gore as their second option? Many right-wing alternatives may have put George Bush as their second choice too, but perhaps this would have encouraged those who do not normally vote (such as those who believed that a vote for a third candidate would be pointless) to vote.
Why a low voter turnout?
There are numerous reasons for low voter turnout, including
- Voter apathy
- Parties not representing people
- Voter intimidation
The common criticism leveled at those who do not vote seems to be to blame them for being apathetic and irresponsible, noting that “with rights come responsibilities.” There is often some truth to this, but not only are those other reasons for not voting lost in this blanket assumption of apathy, but voting itself isn’t the only important task for an electorate.
Being able to make informed decisions is also important. In many nations, including prominent countries, there is often a view that the leading parties are not that different from each other and do not offer much to the voter. Is choosing not to vote then apathy, or is it an informed decision? In other cases, the media may not help much, or may be partisan, making choices harder to make.
In some countries voter intimidation can take on violent forms and discourage people from voting for anyone other than a militia’s favored group. (A recent example is that of Zimbabwe, where the leading opposition felt they had to withdraw from the election process as voter intimidation by militias supporting Robert Mugabe was getting too violent. Mugabe’s government decided to carry on with the elections anyway, which seemed pointless to most but not to him, as he obviously would (and did) win.)
These concerns will be explored further later on.
Paradoxes of Democracy
Democracy, with all its problems, also has its paradoxes. For example,
- People may vote in non-democratic forces
- Democracies may discriminate against the minority in favor of the majority
- Those with non-democratic political ambitions may use the ideals of democracy to attain power and influence
- More propaganda may be needed in democracies than some totalitarian regimes, in order to gain/maintain support for some aggressive actions and policies (such as waging war, rolling back hard-won rights, etc.)
- Regular elections lead to short government lifetimes. This seems to result in more emphasis on short-term goals and safer, populist issues. It also diverts precious time toward re-election campaigns
- Anti-democratic forces may use the democratic process to get voted in or get policies enacted in their favor. (For example, some policies may be voted for or palatable because of immense lobbying and media savvy campaigning by those who have money (individuals and companies) even if some policies in reality may undermine some aspects of democracy; a simple example is how the free speech of extremist/racist groups may be used as an excuse to undermine a democratic regime)
- Those with money are more able to advertise and campaign for elections thus favoring elitism and oligarchy instead of real democracy
- Deliberate confusion of concepts such as economic preferences and political preferences (e.g. Free Markets vs. Communism economic preferences, and liberal vs authoritarian political preferences) may allow for non-democratic policies under the guise of democracy
- Democracies may, ironically perhaps, create a more effective military as people choose to willingly support their democratic ideals and are not forced to fight.
Some of these are discussed further, here:
Voting in non-democratic forces
Two examples of this paradox are the following:
Hitler and his party were voted in. He then got rid of democracy and started his gross human rights violations and genocidal campaigns as a dictator.
Hamas was also recently voted in by Palestinians. The “international community” (really the Western countries) withheld funds and aid because Hamas is regarded as a terrorist organization (though most Palestinians would seem to disagree). The lack of aid, upon which the Palestinians have been quite dependent, contributed to friction between Palestinians who support Hamas and those who do not, and this has been amplified by the worsening economic situation there. The Israel/Lebanon conflict also affected the Gaza Strip, contributing to the in-fighting between various Palestinian factions.
The Hitler example highlights the importance media and propaganda play and the need for continued open self-criticism to guard against these tendencies.
The Hamas example is complicated by the general Middle East situation and the view on the one hand that American/Israeli power and influence in Palestine is undermining peace between Israel and Palestine, while on the other hand, the terrorist activities of Hamas and other organizations push American and Israel to even more authoritarian reactions.
That the majority of Palestinian people would vote in Hamas suggests that they have not seen the fruit of any recent attempts at a peace process (which has long been regarded by the “international community” – minus the US and Israel – as one-sided) and this has driven people to vote for a more hard line view.
Minorities losing out to majorities
Another criticism of democracy is that sometimes what the majority votes for or prefers, may not necessarily be good for everyone. A common example plaguing many countries which have diversity in race and religion is that a dominant group may prefer policies that undermine others.
Some quick examples include Nigeria, which has large Christian and Muslim populations; some Muslims there, and in other countries, want Sharia Law, which not all Muslims necessarily want, let alone people of other faiths. If only a very slight majority can override a very large minority on such an important issue as how one should live, then there is a real chance for tension and conflict.
Another example is India, often held up as an example of pluralism throughout the ages, despite all manner of challenges. Yet, unfortunately, an Indian government report finds that its claims to religious integration and harmony are on far shakier ground than previously believed. Muslims in India, for example, a large minority, are also under-represented and seem to be seen as India’s new ‘underclass.’
Wealthier countries also have similar problems, ranging from France with its challenge to integrate/assimilate a large foreign population, to Spain which struggles with a large Basque population wanting independence, to the US where large immigrant populations are struggling to integrate.
To address such potential issues requires more tolerance, understanding, and openness of society, such that people are not insecure due to the presence of others (and so that they do not, as a result, turn to more extreme/fundamental aspects of their own beliefs). This can come through various outlets, including a diverse mainstream media, institutions such as religious and legal ones, schooling, family upbringing, etc.
Equally important are the underlying economic conditions and situations of a country. Generally, it seems, where people are doing well economically and the inequality gap is not excessive, people have less of a reason to opt for more defensive, reactionary or aggressive policies that undermine others.
At the same time, concerns of undesirable social engineering would also need to be addressed, and it is likely that in different countries there will be different “formulas” for this to be successful, for the historical context within which people live, the specific circumstances of the day and various other factors will differ amongst and within nations.
The fear of the public and disdain of democracy from elites (while publicly claiming to support it)
People often see democracy as an equalizing factor that should not allow the elite or wealthy in a society to rule in an autocratic, despotic, unaccountable manner. Instead they have to respond to the will of the people, and ultimately be accountable to them. Furthermore and ideally, it should not only be the wealthy or elite that hold the power. There should be some form of equality when representing the nation.
However, this has also meant at least two accompanying phenomena:
- Democracy is seen as a threat to those in power, who worry about the masses, referring to them as a “mob”, or some other derogatory phrase (“tyranny of the majority” is another), and
- To get votes, parties may appeal to populist issues which are often sensational or aim for short-term goals of elections.
Interestingly, leading up to the 2006 US mid-term elections, amidst all sorts of allegations of corruption coming to light, writer James Moore, in an interview on Democracy Now!, provided a classic example of political utility: Karl Rove, the influential but controversial advisor and strategist for President George W. Bush, despite actively campaigning to get the “Religious Right” to support Bush, was not religious at all (and possibly despised the evangelical Christian extremists whose votes he actively worked to get), and Bush himself apparently called them “wackos” years earlier:
James Moore: What people do not realize about [Karl Rove] is that everything about him is political utility. When he looked at what was going on with the megachurches ... Karl decided he was going to take these gigantic churches on the Christian right and to turn them into a gigantic vote delivery system. And that’s precisely what he has done. This is not a man who has deeply held religious faith. It’s a man who believes that faith can be used to drive voters to the polls. In fact, his own president, in an interview with—or an offhand unguarded moment aboard the press plane with my co-author, Wayne Slater, had referred to the Christian right and the fundamentalists north of Austin as “whackos.” They hold these people in more disdain than these individuals are aware of.
— Karl’s Rove Secret, Democracy Now, November 2, 2006
This is just one example, where parties have simply targeted people to get votes for power. And yet, many in the religious right believe that Bush represents them and some even see him as an instrument of God, showing just how effective political utility and manipulation has been.
Noting that different people refer to, and think of democracy in different ways, (even some despots have called themselves democratic!), Bernard Crick concedes that,
We must not leap to the conclusion that there is a “true democracy” which is a natural amalgam of good government as representative government, political justice, equality, liberty, and human rights. For such volatile ingredients can at times be unstable unless in carefully measured and monitored combinations. Is “good government” or “social justice” unequivocally democratic, even in the nicest liberal senses? Probably not. Tocqueville wrote in the 1830s of the inevitability of democracy, but warned against “the dangers of a tyranny of the majority.” Well, perhaps he cared less for democracy than he did for liberty. But even Thomas Jefferson remarked in the old age that “an elective despotism was not what we fought for”; ... John Stuart Mill whose Essay on Liberty and Considerations on Representative Government are two of the great books of the modern world, came to believe that every adult (yes, women too) should have the vote, but only after compulsory secondary education had been instituted and had time to take effect.
— Bernard Crick, Democracy: A Very Short Introduction, (Oxford University Press, 2002), pp.10-11
Democracy requires more propaganda to convince masses
In a democracy, people are generally accustomed to questioning their government, and should be empowered—and encouraged—to do so.
In some countries, healthy cynicism has given way to outright contempt or excessive cynicism at anything a government official promises!
What this does mean, however, is that those with ambitions of power and ulterior agendas have to therefore resort to even more propaganda and media savvy manipulation, as Crick notes:
“Totalitarian” ... was a concept unknown and unimaginable in a pre-industrial age and one that would have been impossible but for the invention and spread of democracy as majority power. For both autocrats and despots depend in the main on a passive population; they had no need to mobilize en masse.... Napoleon was to say: “the politics of the future will be the art of mobilizing the masses.” Only industrialization and modern nationalism created such imperatives and possibilities.
— Bernard Crick, Democracy: A Very Short Introduction, (Oxford University Press, 2002), p.15
Media co-opting is one strategy that may be employed as a result, as Australian journalist, John Pilger notes:
Long before the Soviet Union broke up, a group of Russian writers touring the United States were astonished to find, after reading the newspapers and watching television, that almost all the opinions on all the vital issues were the same. “In our country,” said one of them, “to get that result we have a dictatorship. We imprison people. We tear out their fingernails. Here you have none of that. How do you do it? What’s the secret?”
— John Pilger, In the freest press on earth, humanity is reported in terms of its usefulness to US power, 20 February, 2001
(This site’s sections on the mainstream media and propaganda look at these issues in more depth. The buildup to the Iraq invasion is also an example of the lengths that the governments of two democracies, the US and UK, would go to in order to gain support for their cause.)
Limited time in power means going for short term policies
Many democracies have rules that elections must be held regularly, say every 4 or 5 years. The short life span of governments is there for an important reason: it prevents a party becoming entrenched, dictatorial, stagnant or less caring of the population over time. Competition in elections encourages everyone to stay on their toes: governments know they must deliver, and potential candidates and parties know they have a realistic chance to participate.
Yet, at the same time, the short-termism that results has its problems too. As Crick also notes, in two of the world’s most prominent countries, democracy has almost become a mockery of what it is meant to be:
Today, the politics of the United States and Great Britain become more and more populist: appeals to public opinion rather than to reasoned concepts of coherent policy. Political leaders can cry ‘education, education, education’, but with their manipulation of the media, sound-bites, and emotive slogans rather than reasoned public debate, [John Stuart] Mill might have had difficulty recognizing them as products of an educated democracy. And our media now muddle or mendaciously confuse what the public happens to be interested in with older concepts of “the public interest.”
— Bernard Crick, Democracy: A Very Short Introduction, (Oxford University Press, 2002), p.11
[Side note: Noam Chomsky also details many times how the “national interests” have been used as a euphemism for the interests of only certain groups, such as some industry group, the government, a military industrial complex, or some other elitist/influential/powerful group.]
Anti-democratic forces undermine democracy using democratic means
In a number of countries, governments may find themselves facing hostile opposition (verbal and/or physical/military). Some governments find this opposition has foreign support, or, because of their own failures has created a vacuum (either a power vacuum, participation vacuum or some other failure that has allowed people to consider alternatives seriously). When a legitimate government is then deliberating, or taking, stronger actions, that government can easily be criticized for rolling back democracy, acting dictatorially or in some way undermining the rights of their people. This can then strengthen the non-democratic opposition further.
There are unfortunately too many examples of such foreign and domestic interference with potential and actual democracies to list here. It is common, for example, to hear of, say, the former Soviet Union doing this. Unfortunately, while it is less common to hear about it in the mainstream, western governments have also been complicit in overthrowing and undermining democracies in other parts of the world in favor of puppet regimes, be they dictatorships or pseudo democracies. Two useful resources to read more about these include J.W. Smith’s Institute for Economic Democracy and the Noam Chomsky archives.
One recent example worth highlighting here is Venezuela, where Hugo Chavez managed to reverse a coup against him. This coup was aggressively supported by many in the Venezuelan elite media and also by the US. After the coup, news channels that actively supported the 2002 coup to oust Chavez were still allowed to remain in operation (something many democracies would not usually tolerate).
The main media outlet, RCTV, aggressively anti-Chavez, was denied a license renewal in 2007, not because it was critical of Chavez’s policies, but because a pre-Chavez government law did not look too kindly on broadcasters encouraging coups (after all, what government would!). RCTV and their supporters tried to insist otherwise: that this was an issue of free speech. The US mainstream media has generally been hostile to Chavez (as has the Bush administration itself), and this was therefore added to the other mis-characterizations often presented, lending credence to the view that Chavez is a dictator. In essence, a law enacted during the previous dictatorial regime (backed by the US and others) is now being turned around and used against Chavez as another example of power-grabbing.
If and when nations such as the US want to further undermine the democratic processes in Venezuela, such incidents will be brought back into the mainstream, without these caveats, and a more favorable/puppet regime may likely be the aim.
Chavez is not helping his own cause with his often vocal and inflammatory antics, but it should not be forgotten how much foreign influence may be contributing to the undermining of democratic tendencies. Venezuela has been through a succession of dictatorships, and many supporters of the previous regimes are in the anti-Chavez groups. Regardless of whether one is pro- or anti-Chavez, it certainly seems that democratic participation has increased during his tenure, given all the increased political activity, both pro- and anti-Chavez.
In another example, for a number of years now, in the US, a number of Christian groups in various Southern states have been campaigning hard to get schools to either reject teaching subjects such as the theory of evolution in science classes, or to “balance” them off with things like Creationism stories from the Bible or Intelligent Design ideas, in the name of free speech and academic freedom. In mid 2008, Louisiana became one of the first states to pass a religiously motivated anti-evolution “academic freedom” law that was described by Ars Technica as being “remarkably selective in its suggestion of topics that need critical thinking, as it cites scientific subjects ‘including, but not limited to, evolution, the origins of life, global warming, and human cloning.’”
(On this particular issue, the point is not to ban stories on Creationism; they are better taught in religious classes, not science classes. Instead, religious views of the world have been pushed forward arguing that scientific theories are just that, ideas without proof, and so religious-based ones should compete on a level field allowing people to make more “informed” decisions. Yet, often missed from that is that scientific theories are usually based on a well-substantiated explanation that gets tested whenever possible, whereas religious ideas usually are required to be accepted on faith. More generally in the United States, there is however, a growing concern at the rise in an extreme religious right that wants to replace the democratic system with a Christian State.)
Although we are accustomed to hearing about Muslim extremists pushing for religious-based states in various Middle East countries, this example is one in a democracy where, despite the principle of a separation of Church and State, Christian religious extremists push forward with their agenda anyway.
Those with money are more likely to be candidates
It is a common concern in many democratic countries that those with sufficient funds or fund-raising capability are the ones who will become the final candidates that voters choose from. Some criticize candidates for “selling out” to mega donors, who then expect favors in return.
Others, who may be more democratic but are poor, lack the finances of the leading contenders, or are unlikely to support policies that influential mega donors favor, will often lose out.
In the US, for example, “campaign finance reform” has long been a concern. It has been common to hear leading candidates wanting only themselves to appear in televised election “debates” because of concerns about technicalities such as the time needed to accommodate other candidates with no realistic chance of winning. Yet one would think that in a democracy, time should be afforded to make all popular voices heard, not just the leading four from the two main parties, as that just results in the leading four becoming unfairly popular at the expense of the rest, and makes the concern they raise a self-serving argument.
Understandably, finding time for all candidates might not be practical if there are many, but always limiting it to the four from the two leading parties results in the same choices people have to choose from each time, limiting diversity (especially when many feel the two leading parties are quite similar on many issues).
Attempts to suggest caps on finances of any sort to address this undue influence are met with support from those who have little, but ferocious resistance from those who stand to lose out.
Newspapers and other media outlets are often less than impartial in election campaigns. The highly concentrated ownership of major media outlets does not always bode well for democracies, as it puts a lot of influence into the hands of a few owners. For example, Rupert Murdoch’s ownership of the Sun tabloid in the UK, and the paper’s switch from long-time Conservative supporter to Labour supporter, was described by many as a key reason that Tony Blair first came into power in 1997.
In the US, it can be argued that the differences between some Democrats and Republicans are quite small in the larger context, and the media owners come from the same elite pool, thus reinforcing the impression of vast differences and debate on major issues. The result is that many get put off, and those who do still want to vote have access to just a few voices from which to form anything like an informed decision.
Confusing political ideology with economic ideology
As discussed in this site’s neoliberalism section, and explored in more depth at the Political Compass web site, the mainstream often mixes political concepts, such as democracy and authoritarian/totalitarian regimes, with economic ideologies such as free markets and communism. The terms “left” wing and “right” wing are a gross oversimplification.
See the neoliberalism section for various other graphs that show how most major political parties and leaders of major countries are more neoliberal/right wing, even if they may be considered left (e.g. the Labour Party in UK).
In summary, democracy does not automatically require free markets, and free markets do not automatically require democracy. Many western governments supported dictatorships during the Cold War that practiced free market economics in a dictatorial/fascist manner, for example.
Leading up to World War II, a number of European nations saw their power determined by fascists, often via a democratic process. Today, many European democracies attempt a social model of economic development ranging from socialist to somewhat managed markets.
To the alarm of the US, which considers the region to be within its sphere of political influence, Latin America has been flirting with various socialist/left wing economic policies and direct/radical democracy.
In the Indian state of Kerala, for example, a party was voted in that has put communist practices in place with some reasonable success. Of course, many communist regimes in reality have also been accompanied by dictatorships and despots in an attempt to enforce that economic ideology.
And during the beginnings of free markets, the major European powers promoting it were themselves hardly democratic. Instead they were dominated by imperialist, racist, colonialist and aristocratic views and systems.
The point here is that without making this distinction, policies can be presented as democratic when, depending on how they are carried out, they may even undermine democracy, as many African countries have experienced. For example, as South Africa came out of apartheid, it was praised for its move to democracy, its truth and reconciliation approach, and other political moves. Less discussed, however, were the economic policies and conditions that followed.
A report describing a conference celebrating 10 years of South African independence from Apartheid noted how difficult a democratic system is to establish when combined with factors like regional and international economics (i.e. globalization) which were identified as being “responsible for some of the problems” in the region:
In the conditions of a unipolar world and the development of multinationals, which are highly technologically advanced, it is hard for Africa to find an entry point into this ‘globalised’ context.... The conference examined the implications of the globalisation context for the prosperity of the region’s economic structure and the implications for the consolidation of democracy. The question of how the international world relates to and indeed is responsible for some of the problems was also deliberated at the conference. While the consensus was on Africans ... taking responsibility for their own welfare and problems, the conference acknowledged the interconnectivity among local, regional, continental and international economies. Indeed, some of the economic problems of the countries in the region can be traced back to their relationships with former colonial masters. More recently, the structural adjustment programmes of the 1980s continue to affect the economic stability of SADC countries.... provisional relief of debt has been linked to certain conditions, including political conditionality, which is basically a commitment to a narrow form of democracy, and economic policies, which have created deeper disempowerment. Some African scholars have dubbed this phenomenon ‘choiceless democracies’.
The link between globalisation and democratisation was further debated in the economic session of the conference. Suffice to say, democracy is threatened when a state cannot determine its own budget. The conditionality cripples the development of a socially transformative democracy. A number of the debt rescheduling agreements have fostered cutbacks on social spending, and have created conditions of further economic marginalisation and social exclusion of the poor. In the long term, the consolidation of democracy is threatened because the conditions have the effect of fostering social unrest.
— Nomboniso Gasa, Southern Africa, Ten Years after Apartheid; The Quest for Democratic Governance, Idasa, 2004, p.11
One irony noted by John Bunzl of the Simultaneous Policy Organization (Simpol) is that the world’s leading democracies have, through the lobbying by corporate-friendly think-tanks, governments and companies, unleashed a corporate-friendly form of globalization that even they can’t fully control. As a result, even these countries are finding pressures on their democratic systems, resulting in unpopular austerity measures and cutbacks in cherished services and rights, such as health and education (though nowhere near to the level that has happened in the developing world, under the benign phrase “Structural Adjustment”).
How this has happened is detailed by many people. One detailed source to go to might be the Institute for Economic Democracy and the work of J.W. Smith.
Democracies may create a more effective military
It may seem ironic to many, considering that one principle underlying democracy is the desire for freedom, but democracies may create a more effective military.
Unlike a totalitarian regime or, in the past, systems that used slaves, democracies that do not have forced military service might create a more effective military because people willingly choose to participate in military institutions, and may take sufficient pride in protecting their democracy.
Of course, in reality it is more complex than that and democracy may be one ingredient of many, but potentially an important one that is hard to fully measure quantitatively. For example, sufficient funding, technology, skills and so on, are all required too, to transform an eager and enthusiastic military to an effective one.
Crick, quoted above, noted Plato’s observation that often a democratic system of rule would need to allow the few to govern on behalf of the many. This is what modern democracies typically are. But, as Crick notes, this has historically meant “rule by the few always needed to placate the many, especially for the defense of the state and the conduct of war.” (Democracy, p.17) In other words, propaganda is needed. This occurs today, too, as discussed earlier.
In some countries, the military will offer lots of incentives to join (good salary, subsidized education, etc.) which may appeal to poorer segments of society, so “defending” one’s democracy may not be the prime reason for joining the military; it may be an important way for someone in poverty to overcome their immediate predicament.
People may also be free to choose not to participate in a military, and/or reduce the money spent on it. Hence, a lot of fear politics and propaganda may be employed to gain support for excessive military spending, or to wage war, as the build-up to war against Iraq by some of the world’s most prominent democracies exemplified.
Many political commentators have noted, for example, that since the end of the Cold War, the US has struggled to fully demilitarize and transform its enormous military capacity into private, industrial capacity, and still spends close to Cold War levels. (This has been observed way before the so-called War on Terror.) Many regard the US as a more militarized state than most other industrialized countries.
Democracy, extremism and War on Terror; people losing rights
Fear, scare stories and political opportunism
The use of fear in a democratic society is a well known tactic that undermines democracy.
For example, the US has been widely criticized for using the War on Terror to cut back on various freedoms, often undermining democracy and related principles. By raising fears of another terrorist attack, it has been easy to pass harsher policies, ranging from more stringent border controls to snooping on citizens in various ways.
Another example is the US Military Commissions Act of 2006, which increased already formidable presidential powers further, rolling back key principles of justice such as habeas corpus (the traditional right of detainees to challenge their detention), allowing the President to detain anyone indefinitely while giving US officials immunity from prosecution for torturing detainees captured before the end of 2005 by the US military and CIA. (It is also an example of how a seemingly non-democratic bill can be passed through a democratic system. The previous link goes into this in a lot more detail.)
Fear, scare stories and political opportunism have also been useful propaganda tools during election time. For example, a November 6 Democracy Now! interview noted that the US government had long ago predetermined when the sentencing of Saddam Hussein would take place: conveniently just before the 2006 mid-term elections, so as to try to get extra votes through the appearance of a successful action coming to a close.
Another example comes from the Iranian hostage crisis where Iranian students held some American hostages for over a year: A documentary that aired on a British cable channel (cannot recall details unfortunately) explained how Reagan, challenging Carter in the US presidential race, used a propaganda stunt that also helped him achieve popular support: Reagan and George H. W. Bush had struck a deal with the Iranian mullahs to provide weapons if they released the hostages the day after he was sworn in as President, rather than before, during Carter’s term.
This would allow Reagan to be sworn in with a very positive and triumphant view, and provide an image of him that could be used again and again in the future to help bolster him and his party, even though, as Robert Parry commented, “The American people must never be allowed to think that the Reagan-Bush era began with collusion between Republican operatives and Islamic terrorists, an act that many might view as treason.” [Robert Parry, The Bushes & the Truth About Iran, Consortium News, September 21, 2006]
Cynics will note (rightly) that such tactics are not new and that they happen all the time. The problem is that many people (often cynics themselves) still believe the propaganda, or at least believe it at the time it matters. That these things have happened throughout history does not mean they must keep happening in the future.
Supposedly, society becomes more sophisticated and improves its knowledge of how these tactics work; we are supposed to be able to learn from past experiences. That such things are known to happen, and yet continue to happen all the time, signals a weakness in democratic institutions if such actions are not held to account, for they deceive the public into making misinformed decisions.
This is a complex situation, as it goes to the heart of society: is a society suffering this problem truly democratic if, systemically, the mainstream media fails to hold those in power to account, whether through fear of being criticized as unpatriotic or through being part of the same elite establishment that reinforces each other’s views and perspectives? The point is, regardless of whether this is easy to address or not, there may be a fundamental problem: not enough democracy, openness, transparency and accountability, thus letting these things happen, repeatedly.
Weak democracies and hostile oppositions
It seems that where democracies are weak (e.g. through government corruption, favoritism or incompetence, or simply because a nation is newly emerging or only recently moving out of dictatorship towards democracy) there is a greater risk of hostile opposition arising.
Sundeep Waslekar is president of the Strategic Foresight Group, a respected think tank based in India. He captures these concerns, describing how such weakness can pave the way for extremism:
Bangladesh has terrorist groups belonging to Islamist as well as leftist ideologies. They gathered strength in the late 1990s in a political vacuum created by constant infighting between the principal leaders of the democratic politics. The situation in Bangladesh is similar to that in Nepal, which had autocratic rule in one form or another until 1991. With the induction of democracy in 1991, it was hoped that the voiceless would now have a space to press for their priorities. However, those in power, in partnership with their capitalist cronies, concentrated on the development of the capital region. They also engaged in such a bitter fight with one another that democracy was discredited as a reliable institution, creating a void that was quickly filled by extremists. In the case of Nepal, the Maoists stepped in. In the case of Bangladesh, it was the extremists of the left and the religious right. Having tested popular support, they have developed a vested interest in their own perpetuation. The result is that the Nepali political parties have had to accept an arrangement with the Maoists while the Bangladeshi political parties are courting Islamic extremists.
— Sundeep Waslekar, An Inclusive World in which the West, Islam and the Rest have a Stake, Strategic Foresight Group, February 2007, p.6
As Waslekar also argues, the forces of extremism can be more dangerous than the forces of terrorism:
Terrorism involves committing acts of [criminal] violence.... As they tend to be illegal, it is conceivable for the state machinery to deal with them. Extremism may not involve any illegal acts. In fact, extremism may surface using democratic means.
— Sundeep Waslekar, An Inclusive World in which the West, Islam and the Rest have a Stake, Strategic Foresight Group, February 2007, p.14
Waslekar notes that extremism often takes a religious face, and is not just in parts of the Middle East and other Islamic countries (Islamic extremism), but growing in countries and regions such as the United States (Christian extremism), Europe (racism and xenophobia of a small minority of White Europeans, and Islamic extremism by a small minority of Muslim immigrants), India (Hindu extremism), Israel (Jewish extremism), Sri Lanka (Buddhist extremism), Nepal (Maoists), Uganda (Christian extremism) and elsewhere.
Furthermore Waslekar finds that “a closer look at the patterns of terrorism and extremism around the world reveals that there are some common drivers—grievances and greed leading to supply and demand.” There is “clear evidence that young people are drawn to the terrorist or extremist mindset because they feel excluded by the society around them or by the policy framework of the state.”
And it is not necessarily absolute poverty that has the potential to breed new recruits for terrorist organizations, but more likely inequality and relative poverty. People suffering absolute poverty are generally struggling for their daily lives, and less likely to have the leisure to think about their grievances and injustices.
Another issue that Waslekar summarizes well is how terrorism is understood and reported:
Whether it is the mainstream media or the blogs, the analysis of the global security environment revolves around the mutual love-hate relationship between Western and Islamic countries. The fact that there are more serious patterns of terrorism elsewhere in the world is ignored by both sides. The fact that there are issues bigger than the growing mutual hatred between Western and Islamic countries is forgotten. In the eyes of the Western elite and its media, the death of 5000 odd people in terrorist attacks launched by Al Qaeda and its affiliates in the last five years is the ultimate threat to global security. In the eyes of Arab public opinion, the death of 50,000 to 500,000 innocent people in Iraq, Afghanistan, Lebanon and the Palestine is the real tragedy. Both sides forget that their woes are serious but that some 50 million children lost their lives in the last five years since 9/11 due to policy neglect by a world that is overly obsessed with one issue.
— Sundeep Waslekar, An Inclusive World in which the West, Islam and the Rest have a Stake, Strategic Foresight Group, February 2007, p.20
What do these issues have to do with democracy? A functioning, democratic society is ideally one that is able to take inputs from different segments of society and attempt to address them. Issues such as inequality and social/political differences may have a better chance of being resolved without resort to violence in a process that actually is (and is also seen to be) open, accountable and inclusive.
Lack of inclusiveness undermines democracy, strengthens extremism
Democracy by itself is no panacea, as the various issues here have shown, but it is a crucial part of the overall process. A functioning mainstream media has a democratic duty to inform citizens, but around the world the media repeatedly fails to do so, often reflecting regional biases or the perspectives of an established elite few. If concerns and grievances are not addressed, or if they are addressed through violence, Waslekar warns of “an age of competitive fundamentalism” and is worth quoting again, this time at length:
The project of collaborative development of human knowledge and culture that began under the sponsorship of Arab and Islamic rulers a thousand years ago eventually became subject to the West. The Palestinian issue has been a symbol of the continuation of the Western monopoly on power ... Iraq has been added as another symbol not only of this Western power and arrogance, but also of Western callousness. The rhetoric about Syria and Iran pose the risk of more such symbols arising.
As the Arab elite have failed to provide an effective response to the Western stratagem, Islamic preachers have come up with an alternative vision ... not in harmony with Islam’s core message of peace, learning, and coexistence. On the contrary, it presents an absolutist idea of the society. On the other hand, the Christian Evangelical preachers and European xenophobic politicians present visions of a closed society to their followers. It seems that the world is entering an age of competitive fundamentalism.
While the West is obsessed with the Middle East, forces of extremism and nationalism in Asia and Latin America pose the real challenge to its monopoly and arrogance. Western discourse on terrorism and extremism is focused on the Arab region at its own peril.... The conditions for relative deprivation prevail all over the world, from Muslim migrants in Western Europe, the poor in the American mid-west to farmers in Colombia and the Philippines. The intellectual project to define terrorism only in relation to the groups in the Middle East turns a blind eye to the growth of terrorism and extremism not only outside the Middle East, in Asia and Latin America, but also in the American and European homelands.
In the age of competitive fundamentalisms, human rights and liberties are compromised. The states ... may indulge in human rights violations. And at times they may use terrorism as an excuse to punish legitimate opposition. Several people are more afraid of anti terrorist measures than acts of terror. Thus, terrorism abets authoritarianism and undermines freedom. Since many of the states today engaged in counter terrorism campaigns claim to be champions of freedom, terrorist groups defeat them philosophically by forcing them to undermine the freedom of innocent civilians. Terrorism wins when powerful security agencies forbid mothers from freely carrying milk and medicine for their infants on aeroplanes. Terrorism wins when democratically elected representatives cannot allow their constituents to move about freely around them. Terrorism wins when states use it as an excuse to kill their enemies, giving birth to a thousand suicide bombers.
Competitive fundamentalism threatens trust between individuals and societies.
— Sundeep Waslekar, An Inclusive World in which the West, Islam and the Rest have a Stake, Strategic Foresight Group, February 2007, pp.24-25
Democratic choice: parties or issues?
Democracies seem easy to manipulate in some circumstances. It may be during election campaigns, when issues are oversimplified into simple slogans (e.g. “education, education, education”) or into emotive issues that may be hyped and exaggerated, such as immigration. It may be during fund raising for political parties, often from influential contributors with their own agendas. Or it may be when running government, where corruption, lack of transparency and unaccountability affect even the wealthiest of nations that are proud to be democratic.
The free press should act as a natural check against these problems in a functioning democracy, yet intertwined interests and agendas often result in media outlets being mouthpieces of parties, or press-release machines that unwittingly follow an agenda set by others, with limited analysis outside those boundaries.
Perhaps the way parties are voted into power is an issue?
Representative and Direct Democracy
Most democracies are representative democracies, whereby votes are usually for parties who propose candidates for various government positions. By their nature, representative democracies these days require lots of funding to get heard, which opens itself up to corruption. There are usually constitutions to check the power of representatives, but even this can be open to abuse.
One alternative is known as direct democracy, where instead of voting for intermediaries, votes are cast on the issues themselves. Direct democracy may help prevent the perversion of democracy by those with power interests through the financing of parties and their various machines to garner votes. On the other hand, a possible risk with direct democracy is that, with so much emphasis on voting for issues, minority groups may not be represented fairly, depending on the issues.
There is also the challenge of scale. Direct democracy may be ideal for small organizations and communities, even those with thousands of participants. But what about tens of millions? Referendums in various countries on all sorts of issues have shown that direct democracy is possible, but how could it be applied on a more “daily” basis to more routine and complex issues? Is it even possible, and how would issues come to the fore? The risk of demagogy is therefore a concern.
In either case, informed opinion would be paramount, which places importance on news media outlets being truly impartial and broad in the diversity of issues they cover. With globalization today, and the accompanying concentration of media ownership in many countries, often in the hands of large global companies, the diversity and variety of views are suffering.
An interesting aside is an Internet-based project called the global vote, which allows direct voting on global issues that go beyond national boundaries, or lets people vote on aspects of policies in other countries.
This is interesting in a few ways. For example, voting beyond the nation state is something new, ironically perhaps afforded by globalization which some see as undermining democracy. It is also enabled by modern technology (the Internet in this case).
On the issue of technology, attempts to introduce other types of technology into voting, such as e-voting machines, have been plagued with problems: insecurity, difficult usability for some people, lack of open access to the underlying source code, and even incorrect recording of votes or possible manipulation. This is discussed further below.
What makes voting meaningful?
Voting in a democracy is based on the assumption of a free and informed decision.
Without these you end up with an autocratic system pretending to be a democratic one, while people believe they have made a free and informed choice. Over time, as a population becomes accustomed to living in such a system, a self-perpetuating belief takes hold: the population believes the system is democratic, even as informed opinion, political diversity and choice are reduced. Such a system is then able to sustain itself, having grown from the initial illusion of free choice.
The crucial challenge therefore is how to ensure the decision is free (and not influenced unduly by propaganda or some other form of manipulation) and informed (how does one get a full range of information? Is it even possible?).
Ensuring free decisions and informed decisions are, of course, clearly interlinked, and political scientist Stephen Garvey thoroughly argues that voting as it is typically done is so flawed that a more evaluative approach to democracy would be a better way to judge progress, determine leaders, and ultimately achieve a better (and real) democracy. This, he argues, is because an evaluative democracy:
- Minimizes the role of political influence and manipulation by making the focus of political determinations on citizen evaluations which are based on the collective interest.
- Minimizes political campaigning.
- Minimizes or eliminates the role of political organizations.
- Minimizes the role of money.
- Establishes accountability of political and governmental decision-making through the standard of collective interest.
In essence, democracy (and the various issues raised for debate) would then be driven by the people, not by leading political parties who decide the agendas based on their own interests (which also results in a very narrow set of issues being discussed, and often contributes to low voter turnout). This has the potential, then, to be a much more people-driven (i.e. democratic) approach.
For more information see Garvey’s book, Anti-Election: Pro-Determination (Inexpressible Publications, 2007) and the web site, Evaluative Democracy.
In countries that have representative democracies a problem with election campaigning is that it requires a lot of money, and raising it often means appealing to those who have sufficient money to donate.
In the US, this has led to the criticism that both Democrats and Republicans have had to court big business and do not necessarily represent the majority of the people, as a result.
Such enormous campaign financing has meant that other potentially popular candidates have not been able to get further because they have not been able to spend as much on advertising and marketing.
This means that not only do political parties court big financiers but that these large entities/businesses and wealthy individuals can use the media to push their own agendas and interests which may not necessarily represent majority views.
Numerous calls for limits are welcomed by those without money, but resisted by those with it, for clearly one set of people would gain, while another would lose out.
This sounds much more like a system of oligarchy than of democracy, something Aristotle long warned of, as quoted near the beginning of this article.
Electronic voting: efficiency or easier for corruption?
In a Democracy Now! broadcast on November 6, 2006 (just before the US mid-term elections), the issues discussed included the various ways people were being prevented from voting. New York Times editor Adam Cohen, who had been following this, also talked about the problems with electronic voting machines, summarizing that the manufacturers “are really not very good at making these machines”: the machines had all sorts of problems, even registering the wrong vote (e.g. some people selected a Democratic candidate and the summary page asking for confirmation showed a Republican candidate instead).
An HBO documentary, Hacking Democracy, exposed numerous problems with electronic voting software and hardware in the US from a leading company, Diebold.
The documentary described how easy it was to tamper with the software and hardware. The initial question it asked was how do you know if the vote count is correct and accurate? How does America count its vote?
What they found was “secrecy, votes in the trash, and how to change the course of history” through things like extremely easy manipulation of electronic voting.
An example they noted came from the Al Gore/Bush campaign: a computer counted Al Gore’s votes backwards in Volusia County, Florida; he had negative votes. An investigation established that it could not have been a computer glitch. Instead, it was thought it might have been tampering, but no one will know for sure; it is against the law to look at the software used in electronic voting systems.
Furthermore, the documentary noted that you cannot necessarily rely on the vote recorded by the voting machines; as Democracy Now! had discussed above, the documentary had footage showing that when a vote was cast for a certain candidate, another candidate was repeatedly selected.
A concerned citizen-turned-activist discovered the source code for “GEMS”, the software behind some 40% of Diebold’s electronic voting systems in use. Passwords, specifications, etc. were all available. That was when the “wall of secrecy” around how these systems work began to fall.
Computer science PhDs at Johns Hopkins were shown the software code, and found:
- You could hack into the system without having to know how it works
- Security holes allowed serious manipulations
- It was not a problem limited to just Diebold
- $55 million was supposedly spent on security, accuracy and other critical features, a Diebold representative told Channel 4 News in the UK. Yet the computer scientists broke into the system in 10 seconds.
There were countless more examples showing just how problematic electronic voting software has been even though the use of technology usually gives the impression of sophistication and accuracy.
The problem is not limited to the US. The Open Rights Group, a technology organization in the UK that works on civil liberties issues in relation to digital technology reported on e-counting of votes cast in the 2008 London Elections. They found that independent election observers were unable to “state reliably whether the results declared in the May 2008 elections for the Mayor of London and the London Assembly are an accurate representation of voters’ intentions”.
When the independent observers tried to actually observe the votes being counted, they could not; they were hampered by the technology put in place. Furthermore, they found that an audit of the software used to count the votes could not be published because of commercial confidentiality. As they noted, these are very serious concerns for a public election, because transparency in the election process is crucial. And yet, the election software company was to be paid some £4.5 million (approximately $9 million) for delivering this “solution”.
Media manipulation and ownership
As discussed earlier, a free and impartial media is important for a functioning democracy. However, as also detailed in other parts of this web site, a lot of mainstream media suffers from concentrated ownership by a handful of companies that usually results in less diversity of views being aired, as those owning companies have their own interests to protect and promote.
In the US and UK, for example, there have been various cases of media outlets’ parent companies contributing to election campaigns or candidates/parties. Famously, Tony Blair got support from Rupert Murdoch and the Sun tabloid, usually a right-leaning paper, which helped him come to power in 1997.
In Italy, when Silvio Berlusconi became Prime Minister (on more than one occasion), he was a powerful media mogul and was able to use that to good effect to promote his agenda and sometimes controversial views. As one of Italy’s richest men he was also embroiled in various allegations of corruption, including allegations raised by the influential Economist magazine. Berlusconi has been able to use his influence in business, media and politics to avert much criticism and various charges.
In Venezuela there has been both an intensely anti-Chavez mainstream media and a state-run channel on which Chavez has had his own TV program. (As an interesting aside, Chavez’s recent election win, an overwhelming one, has been described by some foreign media as an example of amassing more power. The irony here is that he may have won a popular democratic vote, but because he is not looked upon favorably by nations such as the US, and because many mainstream media outlets in those nations often follow their government/establishment position on such matters, their reporting reflected that position. Had Bush or any other US presidential candidate won the US elections with such a majority, it is unlikely this would be described as amassing more power; rather, it would be cast as an example of democracy and of the overall success and popularity of that candidate.)
Danny Schechter, a media expert, wonders aloud why we repeatedly see good-quality analysis after each election of why the election reporting was problematic, and yet the same problems occur again the next time:
After every election, there are post-mortems and then, after that, come the studies to confirm the presence of many institutional and deep seated flaws in our ritualized electoral-democracy.
Annually, journalists acknowledge their own limits and mistakes. The honest ones admit there was a uniformity of outlook in which the horse race is over-covered and the issues under covered.
They concede that there was a focus on polls without explaining their limits adequately or how polls in turn are affected by the volume and slant of media coverage. There are criticisms of how negative ads and entertainment values infiltrated election coverage, what Time magazine calls "electotainment." They bemoan the fact that there was more spin and opinionizing than reporting along with less investigative reporting.
And then they do it all over again.
— Danny Schechter, The 2006 Election: Another Nail in our Democratic Coffin?, ZNet, December 11, 2006
While Schechter is specifically commenting on US elections, these similar concerns often apply to many other countries, rich and poor.
Campaigning on personalities and sound-bites
Schechter commented above on negative ads in the US. These involve a lot of excessive and pointless attacking and degrading of opponents, rather than focusing on issues. They often involve a form of spin and slanting just to make the other candidate look bad, and both Democrats and Republicans get involved in this.
In the UK, recent elections have been accompanied by hype on populist issues such as immigration which, while real issues, have been exaggerated and blown out of proportion.
The “image” of the candidate is often paramount, in that they must “appear” to support or not support a particular issue. Some media reports will try to make the most of some minor matter, such as a candidate’s appearance on a particular day, to see if they can read any signals from it. The personalities of the candidates themselves also become major issues.
Such tactics are arguably a waste of resources, and divert attention away from real issues which then get less time to be debated. Unfortunately these tactics will always be pursued because some of these do affect people’s views and opinions. It is well known that appearance, for example, does affect people’s opinions, regardless of whether it should or should not.
Threats of violence and intimidation
For developing countries in particular, the road to democracy is often fraught with dangers. In some cases militias threaten violence if their supported leaders are not voted for, or if some people choose to vote at all.
In East Timor, militias supporting (and, some accuse, supported by) the Indonesian ruling regime at the time resorted to enormous levels of violence, killing and intimidation to prevent people from voting. Nonetheless, in this case the majority did vote and achieved independence. Democracy has not automatically solved all the problems since, but it is a start.
In Burma/Myanmar, the military junta simply imprisoned/house-arrested the democratically elected Aung San Suu Kyi.
In the Democratic Republic of Congo, as well as the nation inheriting elements of a brutal colonial past, its rich resources have been a curse. Numerous neighboring countries and corporate interests (e.g. diamond and mineral companies) have become entangled in what has become a series of wars. Attempts at meaningful peace have proven largely unsuccessful.
Sierra Leone and many other countries going through conflict have militias that intimidate people to vote a certain way.
Zimbabwe has had similar problems of militias intimidating, even killing opposition supporters leading to the June 2008 elections, as noted much earlier.
It is extremely difficult in countries whose borders have been artificially imposed in recent decades. Countries such as the United Kingdom have had centuries to integrate peacefully (Northern Ireland perhaps being an exception, as it is also a more recent struggle). Poorer countries, most of which have existed as independent states only since a decade or two after the Second World War, not only have had a shorter time span in which to do this, but also face another major factor affecting them: foreign influence and interference in democratic decision-making and election processes.
Disenfranchisement of voters
In a Democracy Now! broadcast on November 6, 2006 (just before the US mid-term elections), the issues discussed included the various ways people were being prevented from voting. The broadcast interviewed New York Times editor Adam Cohen, who had been following this concern in detail and gave various examples of attempts to use rules that appear fair but are actually designed to prevent a certain group of people from voting so that a certain party will win. If parties can do it, they will try, he implies.
Furthermore, “if you look back at the history of voting in the United States, there has always been an attempt to use rules of various kinds to stop certain people from voting. It’s always been a partisan thing. One party realizes if it stops a particular ethnic group or racial group from voting, it may win, and they adopt rules that appear to be neutral, but actually aren’t neutral at all.”
As shocking and concerning as some of these tactics are, these issues of course are not limited to the US, and in some countries, attempts to prevent groups of people from voting are far worse, including use of violence as noted above. The US was chosen as an example here because of the high regard people have for its democratic process. If democratic principles are easy to violate in the US, then many other countries will have even worse problems.
Democratic governments and the military
In a truly functioning democracy, the military has to be subservient to the people. The US and most other industrialized democracies are a good example of this. The military pledges to serve the purpose of protecting democracy. (Ignore for the moment the issue of democratic governments waging war on other countries, sometimes against the wishes of their population.)
There are times when we witness military coups in which the generals coming to power claim it is in order to root out corruption that has made a mockery of their democratic system, or give some other such reason. The rule, they say, is temporary and necessary, but only until conditions allow democracy to be restored.
Yet many times this has either been an excuse or, even where intentions may have been genuine, the dictatorship lingers on. One example is Pakistan. Enormous corruption in the democratic government was a reason cited by General Musharraf when he led a military coup. He promised a restoration of democracy as soon as possible. Many years later, the world was still waiting. Finally, rather than his keeping to that promise, it was intense pressure (and a miscalculation by his group, or those who favor him, in the assassination of opposition leader and former Prime Minister Benazir Bhutto) that led Musharraf to give in, allowing elections in February 2008.
During this time, numerous other democracies looked the other way, as Musharraf was useful in the “war on terror”, and some Western media eventually started to refer to him as President Musharraf, even though originally he was referred to as General Musharraf (which is what the media will often use when reporting on such rulers in hostile countries. Ironically, in Venezuela, former military officer Hugo Chavez has occasionally been referred to as “General Chavez”, to give the impression that the country is a fake democracy run by a military man. That would be the equivalent of, say, someone like General Wesley Clark becoming President of the US and the media referring to him as General Clark).
Thailand has also seen a situation similar to that of Pakistan. And, again, time will have to tell whether the military dictatorship is genuine in its desire to establish democracy or not.
Another major driver of military coups and dictatorships that have overthrown fledgling governments has been external interference, such as when the US, during the Cold War, helped overthrow many fledgling democracies in favor of puppet dictatorships.
Powerful countries: democratic at home; using power, influence and manipulation abroad
Foreign policy issues hardly feature in the election campaigns of countries such as the UK and US, and yet their influence around the world is immense. Recall the 2000 election between Bush and Al Gore, where the two virtually agreed with each other in a televised debate on foreign policy matters. (Admittedly, many parties feel their target audiences are not that interested in foreign policy. Perhaps that will change in the near future as issues such as the war on terror, the rise of Asia, climate change, and others become more prominent.)
Elections are typically local and national events. Foreign involvement in a national election, however, does happen, and depending on the circumstances and perspective, it can be seen as anything from providing assistance and support, to political interference and undermining of the democratic process (if it is seen at all).
There are countless examples in recent decades, too many to list here, but some recent ones include the external funding of “democratic” parties often by some Western countries in parts of the developing world.
For instance, in Iran one of the opposition groups to the ruling regime is led by a descendant of the former monarchy and is not necessarily democratic as such, but it gets Western backing nonetheless.
In Nicaragua, leading up to the 2006 elections, the US actually warned Nicaraguans not to vote for Ortega. (In the mid-1980s, the US had actively supported the Contra guerrillas in a war against the Nicaraguan government, of which Ortega was then the leader.) The US went as far as threatening economic sanctions and the withdrawal of aid if Ortega won. Even Oliver North and Donald Rumsfeld went there to tell people not to vote for Ortega (though Rumsfeld denied he went for political reasons). North was one of the main people involved in the Iran-Contra scandal, the US deal to sell weapons to Iran and use the proceeds to fund the bloody Contras against Ortega and the Sandinista movement, despite a congressional ban on doing so (i.e. in violation of both US and international law).
A scandal around 2000 involved feared Chinese influence in US elections. At that time the media and politicians were (rightly) outraged at foreign interference, but ignored the immense number of incidents of foreign interference (sometimes far worse) that their own country had taken part in, in other countries, before and since.
Recently, Russia has also been accused of interference with some of its former satellites (sometimes unsuccessfully, such as with Ukraine’s “Orange Revolution”).
By its very nature, it is hard to detect this kind of interference. Sometimes it is visible but accompanied by so much subtle propaganda that it seems benign, while other times it is only years later that the information comes out, by which time the damage has been done (and many people’s lives have been affected).
Yet nations and organizations doing these things will often feel they have to, for their own agendas and “national interests.” Of course, the more powerful and influential countries will be able to pull this off far better than poorer ones, and it is yet another tool in the arsenal of more powerful countries to try to maintain their position of advantage in the international arena. While it is easy to say and hard to do, transparency in all parts of a democratic process is key to helping minimize or avert this kind of perversion of democracy.
(For far more detailed examples, including in particular the history of Western companies and front organizations funding groups to overthrow governments in the name of democracy, but really to achieve various foreign policy interests, see the works of J.W. Smith from the Institute for Economic Democracy and the various writings of Noam Chomsky. Whenever some of these things come to light, the mainstream media and politicians of the interfering nations often claim these were mistakes that should not have happened, but Smith, Chomsky and many others show how systematic this has often been, implying it is part of a foreign policy agenda to shape the world, where possible, with governments that are friendly to their interests, democratic or not.)
Can democracy be forced upon a country through military means?
One part of the US neo-conservative movement’s ideology was highlighted in the build-up to the war on Iraq: the use of military force, if necessary, to extend or maintain the US’s superpower status in the world. The Middle East clearly suffers from ineffective democracy, or none at all. The American neo-conservative movement felt the US should use its military might selectively to enforce democracy where it wants.
Yet, as experience in Iraq has shown, and as many scholars and activists had long predicted, democracy cannot be enforced from the outside; it has to be home-grown. Not only must it be home-grown, it must also be genuine and be seen to be genuine.
(As noted earlier, the US has also funded many supposedly democratic movements in various other countries, often for ulterior motives. In many of those cases the groups have turned out to be puppet regimes or pseudo-democracies. In many other cases, the US has actually supported the overthrow of a democratic government. As more and more people around the world have become aware of this, the legitimacy of such overt foreign influence has often been met with suspicion, and domestic elections and democratic processes then suffer through the perception of being tainted and not genuine. In the worst cases, the consequences can include political instability and conflict, so it is a dangerous game to play. It is often done unaccountably too, as interference can be justified with that overloaded term “in the national interest.”)
Democracy of Nation States in the age of Globalization
As noted further above, the international arena has an effect on most countries today. Both democratic and non-democratic forces may be voted in and then institute policies that are in some way shaped by globalization (for example, supporting measures described as overly corporate-friendly at the expense of local people while benefiting a few wealthy elites, or reacting negatively to some of the effects of globalization, such as whipping up hysteria against economic migrants).
In the case of Africa, noted earlier, many countries have found themselves subject to harsh conditions for debt relief which, on paper, sound fair, but in reality lead to an undermining of democracy.
When globalization challenges national borders and is international in scope, how meaningful are some national elections? Even when a party is voted in based on some sort of criticism of the way globalization is affecting its nation, there are numerous examples of those very parties being unable to do much other than go along with the pressures globalization brings (e.g. poor countries opening up to foreign investment, mostly by large Western companies, thus undermining local sectors that cannot compete against such established actors) or breaking promises made to their electorates.
Some time before the November 2000 US presidential election, I recall hearing on the radio (cannot recall details, unfortunately) how a farmer in an African nation lamented that he could not vote in the US elections, for what happens there has far more effect on his country than whatever vote he could cast in his own.
The challenge will remain: richer nations, supported by the wealthy and powerful companies that come from their territories, push for others to open up, as this will benefit their companies and possibly their own nations more generally (or at least the wealthier segments of their own society). Poor nations are open to the idea of globalization and of international institutions to discuss these processes, but repeatedly find that international meetings at the IMF, World Bank and World Trade Organization are far from democratic.
Rich nations have long felt the pressure from the business sector and elsewhere to reduce spending on various social programs, and in most democratic elections the sound bites are about parties promising to uphold those social programs as best they can. If rich countries are struggling with this question, the challenge for emerging/developing nations is greater.
International institutions: democratic or representing those with the most power?
International institutions such as the WTO, IMF, World Bank and UN (or, more specifically, the UN Security Council) are themselves far from democratic, even though most of them give the impression of being international forums where nations are fairly treated. Many of them prescribe policies for poor countries in ways that undermine those countries, even democratic ones. Under such conditions, corruption is not uncommon.
The IMF and World Bank have come under criticism lately for their long history of non-democratic leadership, and are now beginning to redress that balance. This has not come about because of the democratic tendencies of the leading contributors (all Western democracies), but because a handful of developing nations, such as China, India and Brazil, have become politically strong enough to gather sufficient backing to demand that these Western-backed and Western-influenced institutions open up, let them in, and share power more fairly if they are to be the truly international organizations they claim to be.
It is still early days; will the emerging nations just become another privileged group, leaving the poorer ones under-represented, or will they fight for better global representation?
The World Trade Organization is another problematic organization in this regard, while the UN, though generally universal, suffers from the problem of the non-democratic UN Security Council with its handful of veto-wielding nuclear powers.
For example, during various rounds of WTO talks, developing countries frequently complained that rich nations kept circumventing established procedures, prevented developing countries from taking part, or produced documents and drafts so late in the process (e.g. the night before they were discussed) that there was no time to analyze them sufficiently. Any of these things undermine negotiation processes for countries already limited in resources. The “Green Room” antics, whereby rich countries selectively invite a few poor countries to closed-door meetings to tell them how things will proceed, smack of divide and conquer. Meanwhile, industrialized-country officials will celebrate these talks as open and transparent, blaming developing countries, for some unexplained reason, for being unreasonable when things go wrong.
(And when mainstream media outlets of wealthy countries rarely report these meetings, let alone the concerns and perspectives of poor countries, their officials, who sound like they genuinely want to help the poor but cannot understand why they won’t accept their offer, get away without being held to account as to why their offers to poor countries were actually so harsh and unfair in various ways.)
Although these talks sound boring to most of the public, they are some of the most important in the world, for they affect the lives of all citizens. Promises by wealthy countries of openness, transparency and other democratic-like behavior are just that: promises. In reality this is politics, with dirty negotiation tactics and doing whatever one can get away with in order to push one’s own interests in the international arena (which, unfortunately, is somewhat understandable from the perspective of the individual nations doing it; they are trying to get the best for their own interests, even if they often present it as being best for everyone). [For additional details, see also this site’s page: WTO Meeting in Hong Kong, 2005]
Of these international institutions, the UN is perceived to be far more democratic and inclusive in comparison. However, it too is tainted, this time by the 5 permanent members of the UN Security Council who have veto powers over many decisions, thus giving them more power, regardless of any overwhelming international opinion or even votes by the UN General Assembly.
These 5 are permanent members with excess powers because they have nuclear weapons, helped form the UN (in the case of the US, UK and France), or were invited in for geopolitical balance (in the case of the former Soviet Union, now Russia, and China).
Military power, it appears, is the final arbiter of justice. This is ironic when key democratic principles include an independent judiciary and a military subservient to the people.
But what recourse do poor countries have? To whom can they complain when wealthy countries violate the very principles they make grandiose claims of following?
Reality of foreign policy
Of course, as history also shows, any desire for democratic-like behavior between nations may be wishful thinking, perhaps naïve; powerful nations will always do what they can to preserve or extend their power. Democracy at an international level would reduce their advantage, so it would not be in their interest to extend power and privilege to too many others (though a few allies are needed).
Perhaps the desire for democracy at the international level should be dismissed as a waste of time. Poorer nations would surely understand this better than people from richer nations, who have typically not had to face the full brunt of someone else’s power and influence for a long time.
Yet, they still take part in the international arena. Some of that might be because they hold on to democratic ideals, but there may also be an understanding that as some powerful nations emerge such as Brazil, China, India, and some others, such nations may (for now) be useful allies in international political negotiations.
This may be one reason why the developing world as a whole was able to derail parts of the WTO “Development Round” talks in 2003 when the wealthy countries tried to unfairly impose extra issues and actions onto poorer countries without agreeing to almost anything themselves.
However, the diverse interests of poor nations also meant that at the follow up 2005 WTO round, rich countries were able to manipulate poor nations by appealing to some of the more powerful ones such as Brazil and India and get them to agree to weak drafts on behalf of the rest for a few small concessions of their own, while doing away with any pretense of a democratic, open and transparent process in the way the talks were held.
The other reason they may still be involved is that they have little choice; like it or not, their nations are more vulnerable to the forces of globalization. They almost have to try and get involved, even if it is an unequal system, just in case they can get some concessions or have their voice heard.
Why is democracy at the international level so important?
There may clearly be cases where, at all levels, a committee/consensus-type approach is inefficient (e.g. responding to a natural disaster, where a command-and-control approach may help immediately), but even there, a democratic process can be useful to feed back into the decision-making so that the command-and-control structure does not become closed-minded.
Clearly, though, a more democratic set of international institutions is one way to try to address inequalities caused by projections of power. Furthermore, understanding our commonalities, not just our differences, may help solidify humanity, which currently seems on a trajectory of distrust and violence. Sundeep Waslekar, mentioned earlier, is worth quoting again to show one seemingly small but perhaps significant example:
It is generally believed that much of modern Western thought has its origins in Greek philosophy. In the post-Roman Empire period, many important Greek works were destroyed. It was largely to the credit of the Islamic rulers of the 9th to 12th century that some of these works were recovered, translated and analyzed. The Arab, Persian and Jewish scholars of the time built upon the knowledge they had gathered. Trade with China and India provided access to the knowledge developed in the Eastern societies for centuries. The scholars in the Middle East further created their own ideas and innovations.... In a historical twist, their works were destroyed by Mongol invaders and others but Western universities secured and preserved some of them. Critical and independent inquiry is needed to ascertain to what extent the evolution of knowledge is a result of cross-fertilization of ideas between people from different parts of the world.
— Sundeep Waslekar, An Inclusive World in which the West, Islam and the Rest have a Stake, Strategic Foresight Group, February 2007, p.29
Dwelling a bit further on this notion of humanity with more similarities than differences, a common Euro-centric view of world history describes ancient Greek democracy as Western democracy, with ancient Greece as part of that Western/European identity.
Yet, as John Hobson writes in his excellent critique of Euro-centric history, The Eastern Origins of Western Civilisation (Cambridge University Press, 2004), ancient Greece and Rome were not considered part of the “West” until much later; that is, Greece and Rome were part of a wider Middle Eastern center of civilization, in some ways on the edge of it, as more was happening further eastward.
Western Europe adopted or appropriated ancient Greek achievements in democracy as its own much later when it needed to form a cohesive ideology and identity to battle the then rising Islam and to counter its defeats during the Crusades.
The point here is that democracy is perhaps more universal than acknowledged, and that there is a lot of propaganda in how history is told, sometimes highlighting differences among people more than the similarities and cross-fertilization of ideas that Waslekar alludes to. A better understanding of this, though it would take a long time to permeate mainstream society, would contribute to creating a more tolerant, and hence eventually more democratic, society.
The dangers of apathy in a democracy
Though it is ancient wisdom, Aristotle’s warning against concentrated power and wealth—in which democracy can be perverted into oligarchy—is applicable today. The more excessive this power, the more this oligarchy will tend towards monarchy and rule by individuals not laws:
If the men of property in the state are fewer than in the former case, and own more property, there arises a second form of oligarchy. For the stronger they are, the more power they claim, and having this object in view, they themselves select those of the other classes who are to be admitted to the government; but, not being as yet strong enough to rule without the law, they make the law represent their wishes. When this power is intensified by a further diminution of their numbers and increase of their property, there arises a third and further stage of oligarchy, in which the governing class keep the offices in their own hands, and the law ordains that the son shall succeed the father. When, again, the rulers have great wealth and numerous friends, this sort of family despotism approaches a monarchy; individuals rule and not the law. This is the fourth sort of oligarchy, and is analogous to the last sort of democracy.
— Aristotle, Politics, Part 4, 350 B.C.E
All citizens of democracies should watch out for this. Even in the richest countries in the world, if citizens do not continue to hold on to their democratic tendencies, those with unchecked power can use the platform of democracy to concentrate wealth, power, decision-making and, ultimately, the future of the citizenry.
How can democracy be safe-guarded?
Some feel that, occasionally, a government may need to suspend democracy in order to save it. For example, a rollback of fundamental rights and decision-making processes may expedite decisions at times of threat and danger.
Governments may hand over power to the military, or more commonly, some in the military may take it on themselves (sometimes with pressure/support from outside) that their country needs saving from their government, and will step in accordingly (a coup).
It is hard to know whether such coups were ever carried out with the best intentions in mind, because it seems most coups have resulted in long-term military dictatorship. The “stability” sought in such cases appears not to have been stability that ensures democracy, but stability for those with money, power and ulterior agendas.
In other situations, the US War on Terror being perhaps the most obvious in recent times, the government has decided to roll back the power of the people itself, and assume a stronger and more disconnected ruling position.
Perhaps when a nation faces a direct threat of invasion, or some other pending disaster, a more efficient system of decision-making is needed, but in all these other circumstances to “save” democracy, is a temporary roll back of democracy warranted?
What about strengthening democracy, by increasing it? If a democracy is struggling due to corruption, a faltering economy or various social, political or other economic woes, or a threat of terrorism, is less democracy a cure? Could more democracy be better, by increasing accountability, participation and transparency?
As mentioned earlier, the idea of voting as it is practiced today might be flawed because of the potential for so much misuse, abuse, and people’s lack of access to full information, free from manipulation. Alternatives such as the Evaluative Democracy approach described earlier, and others, need far more mainstream discussion (which is hard to get when so much of the mainstream media and political establishments benefit from the current arrangements).
Just as Aristotle warned of apathy, another bit of ancient wisdom might be appropriate here, summarized by Professor Steve Muhlberger. He recounts how a king of Magadha in ancient India, who wished to destroy the Vajjian confederacy, sent a minister to the Buddha to ask for his advice on whether the attack would succeed. In his response, the Buddha said the people of Vajji could avoid decline if they continued their open and inclusive tradition.
The Buddha saw the virtues necessary for a righteous and prosperous community, whether secular or monastic, as being much the same. Foremost among those virtues was the holding of “full and frequent assemblies.” In this, the Buddha spoke not only for himself, and not only out of his personal view of justice and virtue. He based himself on what may be called the democratic tradition in ancient Indian politics—democratic in that it argued for a wide rather than narrow distribution of political rights, and government by discussion rather than by command and submission.
— Steve Muhlberger, Democracy in Ancient India, February 8, 1998
http://bmisurgery.org/_live/index.php?ID=getthefacts
Get the Facts — Obesity & Your
You Are Not Alone
If you are overweight, you are not alone. The facts are startling and disturbing:
- Today, more than 65% of adults are overweight or obese.1
- 32% of children are overweight.1
- 4.8% of adults are morbidly obese (about 19 million).1
- Total medical cost for obesity in 2003 was $75 billion.2
- 325,000 obesity-related deaths occur annually.3
Why is it called "Morbid"?
Morbid obesity is typically defined as being 100 lbs. or more over ideal body weight, or having a Body Mass Index of 40 or higher. "Clinically severe obesity" is a description of the same condition and can be used interchangeably.
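As a rough worked example (an illustration added here, not taken from the original page): Body Mass Index (BMI) is weight in kilograms divided by the square of height in meters. Someone who is 5'10" (about 1.78 m) and weighs 280 lbs. (about 127 kg) has a BMI of roughly 127 / (1.78 × 1.78) ≈ 40, the morbid obesity threshold mentioned above.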
Morbid obesity is a serious disease and must be treated as such, according to the National Institutes of Health Consensus Report. It is also a chronic disease, meaning that it builds slowly over an extended period of time.
According to the National Institutes of Health (NIH), an increase of 20% or more above your ideal body weight is the point at which excess weight becomes a health risk.
Obesity becomes "morbid" when it reaches the point of significantly increasing the risk of one or more obesity-related health conditions or serious diseases (also known as co-morbidities), which can result in significant physical disability or even death.
A number of factors help determine whether or not you will become obese, such as:
- eating disorders
- drugs (like steroids)
- medical conditions (like hypothyroidism).
Genetic Factors
Scientific studies have established that our genes play an important role in our tendency to gain excess weight. A number of genes are probably directly related to weight.
Some genes determine eye color or height; other genes affect our appetite, our ability to feel full or satisfied, our metabolism, our fat-storing ability, and even our natural activity levels.
The body weight of adopted children shows no correlation with the body weight of their adoptive parents, who feed them and teach them how to eat. Their weight does have an 80 percent correlation with their genetic parents, whom they have never met.
Identical twins, with the same genes, show a much higher similarity of body weights than do fraternal twins, who have different genes. Certain groups of people, such as the Pima Indian tribe in Arizona, have a very high incidence of severe obesity. They also have significantly higher rates of diabetes and heart disease than other ethnic groups.
Environmental Factors
Environmental and genetic factors are obviously closely intertwined. If you have a
genetic predisposition toward obesity, then the modern American lifestyle and
environment may make controlling weight more difficult.
Fast food, long days sitting at a desk, and
suburban neighborhoods that require cars all magnify hereditary factors such
as metabolism and efficient fat storage.
For those suffering from morbid obesity, anything
less than a total change in environment usually results in failure to reach and
maintain a healthy body weight.
It was once common to think of weight gain or loss as only a function of calories ingested and then burned. Take in more calories than you burn, gain weight; burn more calories than you ingest, lose weight. But we now know the equation isn't that simple.
Obesity researchers now talk about a
theory called the "set point," a sort of thermostat in the brain that
makes people resistant to either weight gain or loss. If you try to override
the set point by drastically cutting your calorie intake, your brain responds
by lowering metabolism and slowing activity. You then gain back any weight you lose.
Medical Conditions
Weight loss surgery is not a cure for medical conditions, such as hypothyroidism, or eating disorders that can also cause weight gain.
That's why it's important that you
work with your doctor to make sure you do not have a condition that should be
treated with medication and counseling.
Morbid obesity significantly increases the risk of developing serious diseases (also known as co-morbidities). These can result in significant physical disability or even death.
Causes of Obesity
The reasons for obesity are multiple and complex. Despite conventional wisdom,
it is not simply a result of overeating.
Research has shown that in many cases a significant, underlying cause of morbid
obesity is genetic.
Studies have demonstrated that once the problem is established, efforts such
as dieting and exercise programs have a limited ability to provide effective long-term relief.
Obesity & Life Expectancy
Research has shown that your BMI clearly affects your life expectancy, as can
be seen in the chart below.
- Younger and middle-aged men and women have an increasing risk of dying prematurely as their BMI increases from ideal (19-25) to overweight (25-30) to moderately obese (30-40) and beyond.21
- Teens entering adulthood with a BMI over 40 die 8-13 years earlier than the general population.
Research shows that traditional treatment options, such as diet, exercise, and
behavior modification, are relatively ineffective in helping patients with
morbid obesity achieve and maintain weight loss over the long term.
Weight loss surgery is typically more effective, providing the longest period
of sustained weight loss in patients for whom all other options failed.
|
2026-01-22T22:47:34.308675
|
606,753
| 4.055791
|
http://en.wikipedia.org/wiki/Vomited
|
|This article needs additional citations for verification. (March 2013)|
A 1681 painting depicting a person vomiting
Vomiting (known medically as emesis and informally as throwing up and numerous other terms) is the forceful expulsion of the contents of one's stomach through the mouth and sometimes the nose. Vomiting can be caused by a wide variety of conditions; it may present as a specific response to ailments like gastritis or poisoning, or as a non-specific sequela of disorders ranging from brain tumors and elevated intracranial pressure to overexposure to ionizing radiation. The feeling that one is about to vomit is called nausea, which often precedes, but does not always lead to, vomiting. Antiemetics are sometimes necessary to suppress nausea and vomiting. In severe cases, where dehydration develops, intravenous fluid may be required.
Vomiting is different from regurgitation, although the two terms are often used interchangeably. Regurgitation is the return of undigested food back up the esophagus to the mouth, without the force and displeasure associated with vomiting. The causes of vomiting and regurgitation are generally different.
Aspiration of vomit
Vomiting can be dangerous if the gastric content enters the respiratory tract. Under normal circumstances the gag reflex and coughing prevent this from occurring; however, these protective reflexes are compromised in persons under the influence of certain substances such as alcohol or anesthesia. The individual may choke and asphyxiate or suffer an aspiration pneumonia.
Dehydration and electrolyte imbalance
Prolonged and excessive vomiting depletes the body of water (dehydration), and may alter the electrolyte status. Gastric vomiting leads to the loss of acid (protons) and chloride directly. Combined with the resulting alkaline tide, this leads to hypochloremic metabolic alkalosis (low chloride levels together with high HCO3− and CO2 and increased blood pH) and often hypokalemia (potassium depletion). The hypokalemia is an indirect result of the kidney compensating for the loss of acid. With the loss of intake of food the individual may eventually become cachectic. A less frequent occurrence results from a vomiting of intestinal contents, including bile acids and HCO3-, which can cause metabolic acidosis.
Repeated or profuse vomiting may cause erosions to the esophagus or small tears in the esophageal mucosa (Mallory-Weiss tear). This may become apparent if fresh red blood is mixed with vomit after several episodes.
Recurrent vomiting, such as observed in bulimia nervosa, may lead to destruction of the tooth enamel due to the acidity of the vomit. Digestive enzymes can also have a negative effect on oral health, by degrading the tissue of the gums.
Receptors on the floor of the fourth ventricle of the brain represent a chemoreceptor trigger zone, known as the area postrema, stimulation of which can lead to vomiting. The area postrema is a circumventricular organ and as such lies outside the blood–brain barrier; it can therefore be stimulated by blood-borne drugs that can stimulate vomiting or inhibit it.
There are various sources of input to the vomiting center:
- The chemoreceptor trigger zone at the base of the fourth ventricle has numerous dopamine D2 receptors, serotonin 5-HT3 receptors, opioid receptors, acetylcholine receptors, and receptors for substance P. Stimulation of different receptors is involved in different pathways leading to emesis; in the final common pathway, substance P appears to be involved.
- The vestibular system, which sends information to the brain via cranial nerve VIII (vestibulocochlear nerve), plays a major role in motion sickness, and is rich in muscarinic receptors and histamine H1 receptors.
- Cranial nerve X (the vagus nerve) is activated when the pharynx is irritated, leading to a gag reflex.
- Vagal and enteric nervous system inputs transmit information regarding the state of the gastrointestinal system. Irritation of the GI mucosa by chemotherapy, radiation, distention, or acute infectious gastroenteritis activates the 5-HT3 receptors of these inputs.
- The CNS mediates vomiting that arises from psychiatric disorders and stress from higher brain centers.
The vomiting act encompasses three types of outputs initiated by the chemoreceptor trigger zone: Motor, parasympathetic nervous system (PNS), and sympathetic nervous system (SNS). They are as follows:
- Increased salivation to protect tooth enamel from stomach acids. (Excessive vomiting leads to dental erosion). This is part of the PNS output.
- The body takes a deep breath to avoid aspirating vomit.
- Retroperistalsis starts from the middle of the small intestine and sweeps up digestive tract contents into the stomach, through the relaxed pyloric sphincter.
- Intrathoracic pressure lowers (by inspiration against a closed glottis), coupled with an increase in abdominal pressure as the abdominal muscles contract, propels stomach contents into the esophagus as the lower esophageal sphincter relaxes. The stomach itself does not contract in the process of vomiting except for at the angular notch, nor is there any retroperistalsis in the esophagus.
- Vomiting is ordinarily preceded by retching.
- Vomiting also initiates an SNS response causing both sweating and increased heart rate.
The neurotransmitters that regulate vomiting are poorly understood, but inhibitors of dopamine, histamine, and serotonin are all used to suppress vomiting, suggesting that these play a role in the initiation or maintenance of a vomiting cycle. Vasopressin and neurokinin may also participate.
The vomiting act has two phases. In the retching phase, the abdominal muscles undergo a few rounds of coordinated contractions together with the diaphragm and the muscles used in respiratory inspiration. For this reason, an individual may confuse this phase with an episode of violent hiccups. In this retching phase nothing has yet been expelled. In the next phase, also termed the expulsive phase, intense pressure is formed in the stomach brought about by enormous shifts in both the diaphragm and the abdomen. These shifts are, in essence, vigorous contractions of these muscles that last for extended periods of time - much longer than a normal period of muscular contraction. The pressure is then suddenly released when the upper esophageal sphincter relaxes, resulting in the expulsion of gastric contents. Individuals who do not regularly exercise their abdominal muscles may experience pain in those muscles for a few days. The relief of pressure and the release of endorphins into the bloodstream after the expulsion cause the vomiter to feel better.
The content of the vomitus (vomit) may be of medical interest. Fresh blood in the vomit is termed hematemesis ("blood vomiting"). Altered blood bears resemblance to coffee grounds (as the iron in the blood is oxidized) and, when this matter is identified, the term "coffee ground vomiting" is used. Bile can enter the vomit during subsequent heaves due to duodenal contraction if the vomiting is severe. Fecal vomiting is often a consequence of intestinal obstruction or a gastrocolic fistula and is treated as a warning sign of this potentially serious problem ("signum mali ominis").
If the vomiting reflex continues for an extended period with no appreciable vomitus, the condition is known as non-productive emesis or dry heaves, which can be painful and debilitating.
- Color of vomit
- Bright red blood in the vomit suggests bleeding from the esophagus
- Dark red vomit with liver-like clots suggests profuse bleeding in the stomach, such as from a perforated ulcer
- Coffee ground-like vomit suggests less severe bleeding in the stomach, because the gastric acid has had time to change the composition of the blood
- Yellow vomit suggests bile. This indicates that the pyloric valve is open and bile is flowing into the stomach from the duodenum. (This is more common in older people.)
Vomiting may be due to a large number of causes, and protracted vomiting has a long differential diagnosis.
Causes in the digestive tract
- Gastritis (inflammation of the gastric wall)
- Gastroesophageal reflux disease
- Pyloric stenosis (in babies, this typically causes a very forceful "projectile vomiting" and is an indication for urgent surgery)
- Bowel obstruction
- Acute abdomen and/or peritonitis
- Food allergies (often in conjunction with hives or swelling)
- Cholecystitis, pancreatitis, appendicitis, hepatitis
- Food poisoning
- In children, it can be caused by an allergic reaction to cow's milk proteins (Milk allergy or lactose intolerance)
Sensory system and brain
Causes in the sensory system
- Movement: motion sickness (which is caused by overstimulation of the labyrinthine canals of the ear)
- Ménière's disease
Causes in the brain
- Cerebral hemorrhage
- Brain tumors, which can cause the chemoreceptors to malfunction
- Benign intracranial hypertension and hydrocephalus
Metabolic disturbances (these may irritate both the stomach and the parts of the brain that coordinate vomiting)
- Hypercalcemia (high calcium levels)
- Uremia (urea accumulation, usually due to renal failure)
- Adrenal insufficiency
Drug reaction (vomiting may occur as an acute somatic response to)
- alcohol (being sick while being drunk or being sick the next morning, suffering from the after-effects, i.e., the hangover).
- selective serotonin reuptake inhibitors
- many chemotherapy drugs
- some entheogens (such as peyote or ayahuasca)
Illness (sometimes colloquially known as "stomach flu"—a broad name that refers to gastric inflammation caused by a range of viruses and bacteria.)
An emetic, such as syrup of ipecac, is a substance that induces vomiting when administered orally or by injection. An emetic is used medically where a substance has been ingested and must be expelled from the body immediately (for this reason, many toxic and easily digestible products such as rat poison contain an emetic). Inducing vomiting can remove the substance before it is absorbed into the body. Ipecac abuse can cause detrimental health effects.
It is quite common that, when one person vomits, others nearby become nauseated, particularly when smelling the vomit of others, often to the point of vomiting themselves. It is believed that this is an evolved trait among primates. Many primates in the wild tend to browse for food in small groups. Should one member of the party react adversely to some ingested food, it may be advantageous (in a survival sense) for other members of the party to also vomit. This tendency in human populations has been observed at drinking parties, where excessive consumption of alcoholic beverages may cause a number of party members to vomit nearly simultaneously, this being triggered by the initial vomiting of a single member of the party. This phenomenon has been touched on in popular culture: Notorious instances appear in the films Monty Python's The Meaning of Life (1983) and Stand By Me (1986).
Intense vomiting in ayahuasca ceremonies is a common phenomenon. However, people who experience "la purga" after drinking ayahuasca, in general, regard the practice as both a physical and spiritual cleanse and often come to welcome it. It has been suggested that the consistent emetic effects of ayahuasca — in addition to its many other therapeutic properties — was of medicinal benefit to indigenous peoples of the Amazon, in helping to clear parasites from the gastrointestinal system.
There have also been documented cases of a single ill and vomiting individual inadvertently causing others to vomit, when they are especially fearful of also becoming ill, through a form of mass hysteria.
Most people try to contain their vomit by vomiting into a sink, toilet, or trash can, as vomit is difficult and unpleasant to clean. On airplanes and boats, special bags are supplied for sick passengers to vomit into. A special disposable bag (leakproof, puncture-resistant, odorless) containing absorbent material that solidifies the vomit quickly is also available, making it convenient and safe to store until there is an opportunity to dispose of it conveniently.
An online study of people's responses to "horrible sounds" found vomiting "the most disgusting." Professor Cox of the University of Salford's Acoustic Research Centre said that "We are pre-programmed to be repulsed by horrible things such as vomiting, as it is fundamental to staying alive to avoid nasty stuff." It is thought that disgust is triggered by the sound of vomiting to protect those nearby from, possibly diseased, food.
- Eating disorders (anorexia nervosa or bulimia nervosa)
- To eliminate an ingested poison (some poisons should not be vomited as they may be more toxic when inhaled or aspirated; it is better to ask for help before inducing vomiting)
- Some people who engage in binge drinking induce vomiting to make room in their stomachs for more alcohol consumption.
- People suffering from nausea may induce vomiting in hopes of feeling better.
- After surgery (postoperative nausea and vomiting)
- Disagreeable sights or disgust, smells or thoughts (such as decayed matter, others' vomit, thinking of vomiting), etc.
- Extreme pain, such as intense headache or myocardial infarction (heart attack)
- Violent emotions
- Cyclic vomiting syndrome (a poorly understood condition with attacks of vomiting)
- High doses of ionizing radiation sometimes trigger a vomit reflex.
- Violent fits of coughing, hiccups, or asthma
- Overexertion (doing too much strenuous exercise can lead to vomiting shortly afterwards).
- Rumination syndrome, an underdiagnosed and poorly understood disorder that causes sufferers to regurgitate food shortly after ingestion.
Fecal vomiting (also known as stercoraceous vomiting or copremesis) is a kind of vomiting, or emesis, in which partially or fully digested matter is expelled from the intestines into the stomach, by a combination of liquid and gas pressure and spasmodic contractions of the gastric muscles, and then forcefully expelled from the stomach up into the esophagus and out through the mouth and sometimes the nasal passages. Though it is not usually fecal matter that is expelled, it smells noxious. Copremesis, like all emesis, may lead to aspiration; if contents of the large intestine are aspirated, severe or even fatal aspiration pneumonia results, secondary to the massive number of bacteria normally present distal to the ileocecal valve. Projectile vomiting refers to vomiting that ejects the gastric contents with great force. It is a classic symptom of infantile hypertrophic pyloric stenosis, in which it typically follows feeding and can be so forceful that some material exits through the nose.
Antiemetics act by inhibiting the receptor sites associated with emesis. Hence, anticholinergics, antihistamines, dopamine antagonists, serotonin antagonists, and cannabinoids are used as anti-emetics.
|
2026-01-27T16:35:05.954757
|
212,822
| 3.5254
|
http://www.gothamgazette.com/index.php/topics/development/2427-residential-segregation
|
Thanks to the fiftieth anniversary of the Supreme Court ruling in Brown v. Board of Education, talk this month is about segregation in the schools.
Our education administrators could easily brush off the fact that city schools remain separate and unequal by blaming residential segregation - and with some justification. After all, most students go to their neighborhood schools, particularly students whose parents have limited resources and can't afford long school commutes. Still, our educators should acknowledge that educational apartheid is both a product of residential apartheid and a stimulus for it. Whites protect their exclusive residential enclaves to protect their privileged schools.
In "Separate and Unequal: Racial and Ethnic Neighborhoods in the Twenty first Century," the Lewis Mumford Center found that the income disparity between black and white neighborhoods is gaping - black incomes are 55 percent of white incomes. The disparity between Hispanics and whites is even greater.
Today the ethnic division of the city is complex. Over one third of the city's population is foreign born. The city's newest immigrant groups are much more diverse than in the past. They come from many more countries and their incomes vary more widely. They don't easily fit into the historic black/white racial categories that have dominated the U.S. since the early days of slavery. Yet they are segregated.
There are of course many reasons for residential segregation, there is no agreed upon strategy for eliminating it, and it's not obvious what integration really means. But it is incredible that in this city where those formerly dubbed "minorities" are now the majority there is no public discussion about how the city's policies may be responsible for segregation and the inequalities that go along with it. The political establishment seems to think everyone is color blind.
One set of city policies that appears to be immune from scrutiny relates to land use and zoning.
Zoning and the City of Enclaves
The main government mechanism for exercising land use policy is zoning. In principle, zoning separates industrial, commercial and residential functions, and controls the form and density of new development. In practice, zoning has been used to separate people by income and race. Zoning by itself isn't the cause of segregation, but it codifies and reinforces segregation patterns created by discrimination in real estate, and makes it difficult to change them.
There are at least two main ways that zoning codifies segregation and inequality: by encouraging gentrification and by maintaining environmental racism.
When the city's planners propose zoning changes in diverse neighborhoods that would spur massive new housing development, as they have done in Greenpoint and Williamsburg (Brooklyn), for example, they encourage the displacement of low-income working people and people of color, stimulating the conversion of the neighborhood to an all-white upscale enclave. As neighborhoods improve and property values go up, property owners put pressures on the city to change zoning rules and allow for new development. The city's planners normally applaud such trends as evidence of healthy development, and they then bestow on property owners windfall gains by "upzoning" sections of the neighborhood. But this kind of zoning change has a ripple effect, jacks up property values in the surrounding areas, and makes the neighborhood unaffordable to the people who now live there. The end result is income segregation, and since blacks and Latinos disproportionately fall on the lower end of the income scale, the outcome is also racial segregation. Ironically, many of the working class tenants of European descent now living in Greenpoint will be forced to move from a relatively integrated neighborhood to a lily-white suburb.
There are tried and true zoning techniques that aim to prevent this gentrification from further segregating neighborhoods. One of these, inclusionary zoning, mandates that a portion of all new residential development be affordable to people with low incomes. Alas, the City Planning Department recently opposed a proposal for inclusionary zoning in Park Slope, Brooklyn and is resisting demands for inclusionary zoning in other parts of the city.
At the same time that relatively diverse central neighborhoods are turning into yuppie enclaves, the city's planners gleefully provide zoning protections for the less diverse lower density neighborhoods in the outer boroughs. Recently recommended zoning changes in Staten Island, for example, will provide further protections to white homeowner enclaves.
Another way that zoning reinforces segregation is by allowing most of the noxious, polluting infrastructure to be located only in the city's shrinking number of manufacturing districts. The people who live and work in and near these districts are disproportionately people with modest incomes and people of color. A fascinating but little-known study by Lehman College professor Juliana Maantay (Solid Waste and the Bronx: Who Pays the Price) showed how white neighborhoods have managed to get their industrial districts rezoned for new residential development much faster than low-income communities of color. Since 1961 the city's wealthiest borough, Manhattan, converted industrial zones to residential and commercial zones the fastest, while in the Bronx, the city's poorest borough, there have been the least conversions. Prof. Maantay looked more closely at waste-related facilities in the Bronx and found that 87% of people living within one-half mile of these noxious uses are people of color. These are neighborhoods with high rates of asthma and other diseases related to noxious uses.
Since the city has no long-range plan for retaining industry, it does very little to help clean up existing facilities so they will be compatible with residential neighborhoods and continue to provide industrial jobs in those areas. Instead, the city is basically telling neighborhoods the only way they can get rid of bad industrial neighbors is to become upscale, lily-white bedroom enclaves.
What About the Suburbs?
To be fair, New York City must share the responsibility for discriminatory zoning with the suburbs. Many towns in upstate New York, Long Island, New Jersey and Connecticut require minimum lot sizes for homes of half-acre, one acre, and in some cases up to 20 acres. As a result, only the most expensive homes can be built. Since whites are disproportionately among the wealthiest homeowners, this effectively excludes blacks and other people of color. It's also no coincidence that NYC has the majority of the region's low-income public housing units.
While whites find opportunities for affordable housing in the suburbs, middle class blacks usually have a much harder time finding any housing. As reported in the Mumford Center's Separate and Unequal study: "Findings indicate that even minorities who have achieved higher incomes and moved to the suburbs live in more disadvantaged conditions than their white peers."
Tom Angotti is Professor of Urban Affairs and Planning at Hunter College, City University of NY, editor of Progressive Planning Magazine, and a member of the Task Force on Community-based Planning.
|
2026-01-21T12:21:42.375147
|
91,084
| 3.535414
|
http://scienceofdoom.com/2011/01/18/the-cool-skin-of-the-ocean/
|
In the ensuing discussion on Does Back Radiation “Heat” the Ocean? – Part Four, the subject of the cool skin of the ocean surface came up a number of times.
It’s not a simple subject, but it’s an interesting one so I’m going to plough on with it anyway.
The ocean surface is typically something like 0.1°C – 0.6°C cooler than the temperature just below the surface. And this "skin", or ultra-thin region, is less than 1 mm thick.
Here’s a diagram I posted in the comments of Does Back Radiation “Heat” the Ocean? – Part Three:
There is a lot of interest in this subject because of the question: “When we say ‘sea surface temperature’ what do we actually mean?“.
As many climate scientists note in their papers, the relevant sea surface temperature for heat transfer between ocean and atmosphere is the very surface, the skin temperature.
In figure 1 you can see that during the day the temperature increases up to the surface and then, in the skin layer, reduces again. Note that the vertical axis is a logarithmic scale.
Then at night the temperature below the skin layer is mostly all at the same temperature (isothermal). This is because the surface cools rapidly at night, becomes cooler (and denser) than the water below, and so sinks. This diurnal mixing can also be seen in some graphs I posted in the comments of Does Back Radiation "Heat" the Ocean? – Part Four.
Before we look at the causes, here are a series of detailed measurements from Near-surface ocean temperature by Ward (2006):
Note: The red text and arrow are mine, to draw attention to the lower skin temperature. The measurements on the right were taken just before midday "local solar time", i.e., just before the sun was highest in the sky.
And in the measurements below I’ve made it a bit easier to pick out the skin temperature difference with blue text “Skin temp“. The blue value in each graph is what is identified as ΔTc in the schematic above. The time is shown as local solar time.
The measurements of the skin surface temperature were made by M-AERI, a passive infrared radiometric interferometer. The accuracy of the derived SSTs from M-AERI is better than 0.05 K.
Below the skin, the high-resolution temperature measurements were measured by SkinDeEP, an autonomous vertical profiler. This includes the “sub-skin” measurement, from which the sea surface temperature was subtracted to calculate ΔTc (see figure 1).
The existence of the temperature gradient is explained by the way heat is transferred: within the bulk waters, heat transfer occurs due to turbulence, but as the surface is approached, viscous forces dominate and molecular processes prevail. Because heat transfer by molecular conduction is less efficient than by turbulence, a strong temperature gradient is established across the boundary layer.
Ward & Minnett (2001)
Away from the interface the temperature gradient is quickly destroyed by turbulent mixing. Thus the cool-skin temperature change is confined to a region of small thickness, which is referred to as the molecular sublayer.
Fairall et al (1996)
What do they mean?
Here’s an insight into what happens at fluid boundaries from an online textbook (thanks to Dan Hughes for letting me know about it) – this textbook is freely available online:
The idea behind turbulent mixing in fluids is that larger eddies “spawn” smaller eddies, which in turn spawn yet smaller eddies until you are up against an interface for that fluid (or until energy is dissipated by other effects).
In the atmosphere, for example, large scale turbulence moves energy across many hundreds of kilometers. A few tens of meters above the ground you might measure eddies of a few hundred meters in size, and in the last meter above the ground, eddies might be measured in centimeters or meters, if they exist at all. And by the time we measure the fluid flow 1mm from the ground there is almost no turbulence.
For some basic background over related terms, check out Heat Transfer Basics – Convection – Part One, with some examples of fluid flowing over flat plates, boundary layers, laminar flow and turbulent flow.
Therefore, very close to a boundary the turbulent effects effectively disappear, and heat transfer is carried out via conduction. Generally, conduction is much less effective at transferring heat than turbulent movement of fluids.
A Note on Very Basic Theory
The less effectively heat can move through a body, the higher the temperature differential needed to “drive” that heat through.
This is described by the equation for conductive heat transfer, which in (relatively) plain English says:
The heat flow in W/m² is proportional to the temperature difference across the body and the “conductivity” of the body, and is inversely proportional to the distance across the body
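Written symbolically, that statement is just the standard steady-state conduction relation (restated here for convenience, with nothing specific to the ocean added):

q = k \frac{\Delta T}{d}

where q is the heat flux in W/m², k the conductivity in W/(m·K), ΔT the temperature difference across the layer, and d the layer thickness. Rearranged as ΔT = q·d/k, it shows why a layer that conducts poorly (small k) must support a larger temperature difference to carry the same flux, which is exactly the situation in the near-surface molecular sublayer.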
Now during the day a significant amount of heat moves up through the ocean to the surface. This is the solar radiation absorbed below the surface. Near the surface where turbulent mixing reduces in effectiveness we should expect to see a larger temperature gradient.
Taking the example of 1m down, if for some reason heat was not able to move effectively from 1m to the surface, then the absorbed solar radiation would keep heating the 1m depth and its temperature would keep rising. Eventually this temperature gradient would cause greater heat flow.
An example of a flawed model where heat was not able to move effectively was given in Does Back-Radiation “Heat” the Ocean? – Part Two:
Note how the 1m & 3m depth keep increasing in temperature. See that article for more explanation.
The Skin Layer in Detail
If the temperature increases closer to the surface, why does it “change direction” in the last millimeter?
In brief, the temperature generally rises in the last few meters as you get closer to the surface because hotter fluids rise. They rise because they are less dense.
So why doesn’t that continue to the very last micron?
The surface is where (almost) all of absorbed ocean energy is transferred to the atmosphere.
- Radiation from the surface takes place from the top few microns.
- Latent heat – evaporation of water into water vapor – is taken from the very top layer of the ocean.
- Sensible heat is moved by conduction from the very surface into the atmosphere
And in general the ocean is moving heat into the atmosphere, rather than the reverse. The atmosphere is usually a few degrees cooler than the ocean surface.
Because turbulent motion is reduced the closer we get to the boundary with the atmosphere, conduction is needed to transfer heat, and conduction needs a temperature differential.
I could write it another way – because “needing a temperature differential” isn’t the same as “getting a temperature differential”.
If the heat flow up from below cannot get through to the surface, the energy will keep “piling up” and, therefore, keep increasing the temperature. Eventually the temperature will be high enough to “drive the heat” out to the surface.
The Simple 1-d Model
We saw a simple 1-d model in Does Back Radiation “Heat” the Ocean? – Part Four.
Just for the purposes of checking the theory relating to skin layers here is what I did to improve on it:
1. Increased the granularity of the model – with depths for each layer of: 100μm, 300μm, 1mm, 5mm, 20mm, 50mm, 200mm, 1m, 10m, 100m (note values are the lower edge of each layer).
2. Reduced the "turbulent conductivity" values as the surface was reached – instead of one "turbulent conductivity" value (used when the layer below was warmer than the layer above), these values were reduced closer to the surface, e.g. for the 100μm layer, kt=10; for the 300μm layer, kt=10; for the 1mm layer, kt=100; for the 5mm layer, kt=1000; for the 20mm layer, kt=100,000. Then the rest were 200,000 = 2×10⁵ – the standard value used in the earlier models.
3. Reduced the time step to 5ms. This is necessary to make the model work and of course limits the length of run that is practical. (A rough sketch of this kind of layered scheme is given below.)
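Purely to illustrate what such a scheme can look like in code, here is a minimal Python sketch of a layered 1-d column with a depth-dependent "turbulent conductivity" applied only where the layer below is warmer than the layer above. The layer edges and kt values follow the list above, but the material constants, the solar absorption profile, the surface loss term, the run length and the (smaller) time step are placeholder assumptions for this sketch, not the values or exact numerics used in the model described in this post.

import numpy as np

# Layer lower edges (m): 100um, 300um, 1mm, 5mm, 20mm, 50mm, 200mm, 1m, 10m, 100m
edges = np.array([100e-6, 300e-6, 1e-3, 5e-3, 20e-3, 50e-3, 0.2, 1.0, 10.0, 100.0])
dz = np.diff(np.concatenate(([0.0], edges)))     # layer thicknesses (m)
n = len(dz)

rho, cp = 1025.0, 4000.0    # seawater density (kg/m3) and specific heat (J/kg/K) - assumed
k_mol = 0.6                 # molecular conductivity of water (W/m/K)
# "Turbulent conductivity", used only when the layer below is warmer (unstable):
kt = np.array([10.0, 10.0, 100.0, 1000.0, 1e5, 2e5, 2e5, 2e5, 2e5, 2e5])

solar = 200.0               # net solar flux absorbed over the column (W/m2) - assumed
absorb = dz / dz.sum()      # crude absorption profile; real absorption decays roughly exponentially
surf_loss = 190.0           # latent + sensible + net longwave loss from the top layer (W/m2) - assumed

T = np.full(n, 20.0)        # start isothermal at 20 degC
dt = 0.001                  # 1 ms step keeps this particular explicit scheme stable

for step in range(720_000): # about 12 minutes of model time
    d = 0.5 * (dz[:-1] + dz[1:])                      # spacing between layer mid-points
    k_eff = np.where(T[1:] > T[:-1], kt[1:], k_mol)   # turbulence only where the lower layer is warmer
    flux_up = k_eff * (T[1:] - T[:-1]) / d            # upward flux across each interior interface (W/m2)

    heating = solar * absorb                          # W/m2 absorbed in each layer
    heating[0] -= surf_loss                           # all surface losses leave from the top layer
    heating[:-1] += flux_up                           # heat gained from the layer below
    heating[1:] -= flux_up                            # heat lost to the layer above

    T += heating * dt / (rho * cp * dz)               # convert W/m2 into a temperature change

print("Skin minus sub-skin temperature:", round(T[0] - T[2], 4), "degC")

The point of the sketch is the structure rather than the numbers: with mixing suppressed in the sub-millimetre layers and almost all of the surface losses taken from the top layer, a small cool skin develops across the gradient that the weakly conducting near-surface layers have to support.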
The results for a 30 day run showed the beginnings of a cooler skin. And the starting temperatures for the top layer down to the 20mm layer were the same. The values of kt were not “tuned” to make the model work, I just threw some values in to see what happened.
As a side note for those following the discussion from Part Four, the ocean temperature also increased for DLR increases with these changes.
Now I can run it for longer but the real issue is that the model is not anywhere near complex enough.
Further Reading on Complexity
There are some papers for people who want to follow this subject further. This is not a “literature review”, just some papers I found on the journey. The subject is not simple.
Saunders, Peter M. (1967), The Temperature at the Ocean-Air Interface, J. Atmos. Sci.
Tu and Tsuang (2005), Cool-skin simulation by a one-column ocean model, Geophys. Res. Letters
McAlister, E. D., and W. McLeish (1969), Heat Transfer in the Top Millimeter of the Ocean, J. Geophys. Res.
Fairall et al, reference below
GA Wick, WJ Emery, LH Kantha & P Schlussel (1996), The behavior of the bulk-skin sea surface temperature difference under varying wind speed and heat flux, Journal of Physical Oceanography
Hartmut Grassl, (1976), The dependence of the measured cool skin of the ocean on wind stress and total heat flux, Boundary Layer Meteorology
The temperature profile of the top mm of the ocean is a challenging subject. Tu & Tsuang say:
Generally speaking, the structure of the viscous layer is known to be related to the molecular viscosity, surface winds, and air-sea flux exchanges. Both Saunders’ formulation [Saunders, 1967; Grassl, 1976; Fairall et al.,1996] and the renewal theory [Liu et al., 1979; Wick et al.,1996; Castro et al., 2003; Horrocks et al., 2003] have been developed and applied to study the cool-skin effect.
But the exact factors and processes determining the structure is still not well known.
However, despite the complexity, an understanding of the basics helps to give some insight into why the temperature profile is like it is.
I welcome commenters who can make the subject easier to understand. And also commenters who can explain the more complex elements of this subject.
A Heat Transfer Textbook, by Prof Lienhard & Prof Lienhard, Phlogiston Press, 3rd edition (2008)
Cool-skin and warm-layer effects on sea surface temperature, Fairall, Bradley, Godfrey, Wick, Edson & Young, Journal of Geophysical Research (1996)
Near-surface ocean temperature, Ward, Journal of Geophysical Research (2006)
An Autonomous Profiler for Near Surface Temperature Measurements, Ward & Minnett, Accepted for the Proceedings Gas Transfer at Water Surfaces 4th International Symposium (2000)
|
2026-01-19T16:29:09.225256
|
64,585
| 3.575935
|
http://sunrisehospital.com/your-health/condition_detail.dot?id=29008&lang=English&db=hlt&ebscoType=healthlibrary&widgetTitle=EBSCO%20-%20Condition%20Detail%20v2%20(good)
|
(Atrial Septal Defect; Atrioventricular Canal Defect; Atrioventricular Septal Defect; Endocardial Cushion Defect; Ventricular Septal Defect)
- Atrial septal defect (ASD)—a hole in the wall between the two upper chambers (atria) of the heart
- Ventricular septal defect (VSD)—a hole in the wall between the two lower chambers (ventricles) of the heart
- Atrioventricular septal defect (AVSD)—a combination of ASD, VSD, and problems with the valves (the openings between chambers)
|Ventricular Septal Defect|
- Family history of congenital heart defects
- Exposure to a viral infection, drugs, or alcohol during pregnancy
- Shortness of breath
- Getting tired easily
- Poor growth
- High blood pressure in the lungs and possibly damage to the blood vessels in the lungs (in VSD and AVSD)
ASD treatment options include:
- About 40% of all ASDs will close on their own during the first year of life. This is more likely to occur with small defects.
- An ASD that still exists at age 2 is unlikely to ever close on its own. If it is not closed in childhood, it may cause problems in adulthood.
- Surgery may be recommended in children with ASDs past age 2 years.
- Some ASDs can be closed without surgery. A device is placed in the hole with cardiac catheterization. This is a process that sends the device to the heart through a large blood vessel.
VSD treatment options include:
- Many VSDs will close on their own during the first year of life. This is more likely to occur with small defects.
- Small VSDs that do not close rarely cause problems.
- Medium and large VSDs may cause problems. They may need supportive treatment in the first few months of life.
- Surgery may be needed in children with defects that cause symptoms or do not close after 1 year.
AVSD treatment options include:
- Most infants with AVSD will have symptoms. Therefore they will need treatment.
- Medications may be needed. They can help the heart beat strongly, keep the heart rate regular, or decrease the amount of fluid in the blood flow.
- Physical activity may need to be limited.
- Surgery to close the defect.
- Congestive heart failure—infants with signs of congestive heart failure may need to take medicine.
Living With Septal Defects
American Heart Association http://www.americanheart.org/
March of Dimes http://www.marchofdimes.com/
Canadian Cardiovascular Society http://www.ccs.ca/home/index%5Fe.aspx
Canadian Family Physician http://www.cfpc.ca/
Antibiotic prophylaxis. American Dental Association website. Available at: http://www.mouthhealthy.org/en/az-topics/p/Premedication-or-Antibiotics.aspx . Accessed August 10, 2012.
Congenital heart defects. March of Dimes website. Available at: http://www.marchofdimes.com/baby/birthdefects%5Fcongenitalheart.html . Accessed August 10, 2012.
Congenital heart defects. American Heart Association website. Available at: http://www.heart.org/HEARTORG/Conditions/CongenitalHeartDefects/Congenital-Heart-Defects%5FUCM%5F001090%5FSubHomePage.jsp . Accessed August 10, 2012.
Patent foramen ovale and other atrial septal defects (ASD). EBSCO DynaMed website. Available at: http://www.ebscohost.com/dynamed/what.php . Updated June 2012. Accessed August 10, 2012.
Ventricular septal defect. EBSCO DynaMed website. Available at: http://www.ebscohost.com/dynamed/what.php . Updated June 2012. Accessed August 10, 2012.
6/18/2010 DynaMed's Systematic Literature Surveillance http://www.ebscohost.com/dynamed/what.php : Jentink J, Loane M, Dolk H, et al. Valproic acid monotherapy in pregnancy and major congenital malformations. N Engl J Med. 2010;362(23):2185.
- Reviewer: Michael Woods
- Review Date: 09/2012 -
- Update Date: 09/26/2012 -
|
2026-01-19T07:28:40.295697
|
203,505
| 3.517672
|
http://www4.uwm.edu/soe/centers/cmser/publications/videos.cfm
|
The CMSER has published numerous videotapes designed to enhance teachers' understanding of different strategies for teaching mathematics.
These videos are available to purchase through the CMSER. Please send check for the cost of video, plus $5.00 for shipping and handling, with your order. Make the check payable to the Center for Mathematics and Science Education Research. Businesses and institutions with purchase orders may choose to be billed. Allow three to four weeks for delivery.
Send orders to: Center for Mathematics and Science Education Research
Enderis Hall, Room 265
University of Wisconsin - Milwaukee
PO Box 413
Milwaukee, WI 53201-0413
For more information, call 414-229-6646 or email CMSER@uwm.edu.
Kids Handle the Numbers: Computational Fluency for Addition and Subtraction. Produced in 2001; 18 minutes; $10.00
Kids Handle the Numbers is a video about computational fluency, which means that, through problem solving, kids develop an understanding of numbers. Computational fluency is achieved when students can flexibly use a number of strategies to correctly solve a problem. The video shows clips of students in actual classrooms demonstrating efficient computational fluency with several different methods to produce accurate answers to addition and subtraction problems.
Many students learn to consolidate and choose simpler numbers to work with, or "friendly numbers". The students also learn by examining and comparing each other's numbers. The idea behind computational fluency is that the ideas in math are connected, so students may solve problems a number of ways and come up with the correct answer.
Telling the Data Story: Teaching Data Analysis in the Elementary School. Produced in 2001; 17 minutes; $10.00
This video highlights students showing proficiency in working with data in a variety of ways, such as formulating questions, collecting data, and organizing and representing it. The footage is from actual classrooms in action, with children understanding data by asking questions that can be answered with data.
The video shows examples of why it is important for children to be a part of the entire process of understanding data, and demonstrates students planning how to organize data after collecting it. It also offers real examples of students describing the data that they have categorized to show an understanding of what the data means, and finding a link to the real world.
Students are able to tell about where they see data gaps and data groups, or distribution of values. Through this they are able to compare findings. This video gives helpful strategies to guide students to be able to understand data when they can describe the data and summarize their findings. The video is another production from the CMSER featuring urban classrooms, diverse teachers and students, and students learning mathematics in alignment with national mathematics reform efforts.
Altering the Equation: A Video Documentary on the Milwaukee Public Schools Mathematics Proficiency Exam. 30 minutes; $10.00
Many school districts are debating how best to measure how well they are educating students; they may soon be turning to Milwaukee for answers. As part of a wide-ranging reform effort, Milwaukee Public Schools now administers a mathematics proficiency examination to all students as a prerequisite to high school graduation. The test assesses how ready students are to apply mathematics and critical thinking skills to "real-world" problems.
The mathematics proficiency exam was given, for the first time, to juniors at the end of the 1994-95 school year. In early summer the district announced that 21 percent of students taking the test passed. That means nearly four out of five students failed the exam, and that became the focal point of national discussion and debate. While some praised the district's effort at relevant assessment, critics said the results were proof that the school system was failing. In a sense, the district was damned by some because of its steps toward a praiseworthy goal.
The Center for Mathematics and Science Education Research at the University of Wisconsin - Milwaukee has developed a fresh look at the Milwaukee Public Schools math test. The half-hour video documentary entitled Altering the Equation is the result of this effort. This program explores the nature of the exam, the goals of the school district, and how the district's efforts - particularly with this test - fit the broader education community's attempts at reform and revitalization.
Altering the Equation was produced by Jerry Grayson in association with Wisconsin Public Television and the University of Wisconsin - Milwaukee Center for Mathematics and Science Education Research and the School of Education. The development of the video documentary was sponsored by the Helen Bader Foundation, Incorporated.
Just Talk About It, Write About It, and Count On It: A Video Documentary on the NCTM Communications Standards, Grades 5-12. 30 minutes; $10.00
The NCTM Standards are designed so that the role of students in the learning process shifts to prepare them for their entrance into the workforce. This goal makes it essential to create classrooms where students can become active in creating, constructing and communicating their mathematics understanding. Our job as educators, therefore, is to foster an atmosphere which allows students to feel free to make mistakes and take intellectual risks while exploring interesting problems and using important mathematical ideas and concepts. Such a climate improves confidence while building understanding and empowering students.
Just Talk About It, Write About It, and Count on It discusses the NCTM Mathematics Standard of Communication. It includes detailed explanations of the various aspects of the communication standard along with teacher and student examples of that standard in classrooms. This video is a useful tool in facilitating a better implementation of this standard for both the educator and the student.
Just Talk About It, Write About It, and Count on It is the result of a partnership among Milwaukee Public Schools, Nicolet High School, and the Center for Mathematics and Science Education Research at the University of Wisconsin - Milwaukee. The focus of the tape is one of the National Council of Teachers of Mathematics Curriculum and Evaluation Standards: the Communications Standard for grades 5-8 and 9-12. This tape was made possible by a grant from the Dwight D. Eisenhower Education Act.
Mathematics in Milwaukee: Parents as Partners in Children's Learning. Produced in 2000; 10 minutes; $10.00
Milwaukee Public Schools (MPS) adopted a new mathematics curriculum entitled Investigations in Number, Data, and Space. This video gives an overview of the curriculum and the approach it takes towards teaching and learning mathematics. Using examples of MPS students and teachers in the classroom, the video discusses how Investigations promotes reasoning, collaboration, several forms of communication and various ways of working with numbers to teach and learn mathematics.
The video provides testimonials of parents who have worked successfully with their children and the Investigations curriculum. Speaking about the benefits of interacting with their children in a mathematical context, parents give examples of how they have supported and reinforced their children as they find and apply knowledge in mathematics. Also included in the video are clips of parents and their children working together on math homework and games.
Milwaukee's Mathematics for the New Millennium: Milwaukee Public Schools Elementary Mathematics Curriculum. 10 minutes; $10.00
Investigations in Number, Data and Space is the mathematics curriculum used in the Milwaukee Public Schools (MPS). This video explains the curriculum and the approach it takes towards teaching and learning mathematics. Using examples of MPS students and teachers in the classroom, the video demonstrates how Investigations uses more than one way to solve problems, independent work and cooperative groups to develop flexibility, confidence, fluency, and communication skills in mathematics.
The instructional component of Investigations is complemented by the assessment portion of the curriculum. This video addresses various manners in which teachers can assess student achievement and adjust instruction. The video also explains the homework component which includes practice, preparation and family involvement.
|
2026-01-21T09:09:41.635242
|
172,727
| 3.592221
|
http://www.bowsite.com/bowsite/features/armchair_biologist/moon/index.cfm
|
For years, hunters have tried to accurately predict the best days
to hunt and which ones would better be spent in bed. Could there be
a scientific method to sort the good days from the bad ones? Well, the
answer is yes and it’s called the Deer Activity Index (DAI). What exactly
is a DAI? A DAI is a tool that uses various moon characteristics to
assist hunters and biologists in determining daytime deer activity levels.
When I was young, my grandfather answered all my deer hunting questions.
Although he would never reveal his actual method of analysis, his rationale
for predicting deer movement went something like this: full moons were
good and new moons (dark) were bad. Years went by and I started to
test some of Pap’s wisdom, but to no avail. All the scrupulous notes
I had so religiously taken simply did not make any sense.
Then in 1991, while working on his dissertation, Grant Woods tried
to determine if the moon had any effect on daytime deer activities.
He worked with various astronomers in an effort to determine what moon
phase or moon position had the best relationship to deer movements.
Through some extensive statistical analysis, Woods dismissed the moon
phase and position components. Why? Because he found that the earth’s
distance from the moon, plus the moon’s declination (the angle of the moon relative to the earth’s equator), are very different from one full moon to the
next. In other words, all full moons are NOT created equal. The same
rationale also applies to quarter, half or whatever moon phase is present.
Using hunter compiled data in free ranging deer herds, Woods analyzed
information on 6,009 tree stand hours, in which 5,686 deer were seen
and 784 harvested. Other hunts from around the country were also added
to the original sample size and checked with various moon orbit characteristics.
Finally, after a lot of hard work all the statistical comparisons started
to make sense. The end result is called a Deer Activity Index (DAI).
The DAI rates daytime deer activity into seven categories
ranging from “4” to “10”. A ten reflects the highest deer activity,
while a “4” indicates the lowest. It is important to note that when
the DAI comes up with a “4”, there will still be deer activity, although it will probably be a slow day afield.
To satisfy any skepticism concerning the statistical analysis, Woods’
research showed that on days with an “8”, “9” or “10” (high DAI) value at a research site in South Carolina, hunters saw 76 percent more deer than on “4”, “5” or “6” (low DAI) value days. Similar data occurred
at a hunt club in New York. Hunters observed 54 percent more deer on
the high DAI days than on low DAI days.
Similar results occurred for hunters who participated in drives!
When the hunters at the club in New York started to drive deer, the
data indicated 26 percent more deer on high DAI days compared to low
DAI days. Intuitively, one would suspect no measurable or significant
changes with drives, but this was not the case.
I asked Woods his thoughts on hunting during the midday hours while
using the DAI. Woods stated, “I always stay in the woods as long as
my schedule allows when I’m hunting. However, given similar conditions,
I average observing significantly more deer during midday hours on “8”,
“9” and “10” days compared to “4”, “5” and “6” days.”
Although there have been many studies correlating weather conditions
to deer movement patterns, none of them is entirely consistent with one
another. Although not significant, barometric pressure seems to be
the only weather parameter that is somewhat consistent with deer movements.
As veteran hunters know, this generally occurs with an onset of a storm.
Woods noticed this occurring during Hurricane Hugo in 1989. On the
day prior to the storm (DAI value “4”), he saw nothing unusual. However,
the day of the storm (which was also rated a “4”), Woods observed deer
feeding under every persimmon tree, even mature bucks. Obviously, extreme
weather conditions can have an influence on the DAI.
Can you accurately predict when deer are going to move from one day
to the next? Exciting new research can help you predict the good
days from the bad by using a Deer Activity Index (DAI). This hunter
uses his DAI every day and swears by its accuracy! Unlike the
unpredictable weather patterns or food supplies, DAI values are
given to you far in advance.
How is the DAI affected by extreme hunting pressure? Woods states,
“Unusually high pressure can override the DAI, but under normal hunting
situations deer activities correspond very well.” Can you use the DAI
values to determine the rut? The answer is NO. Findings suggest that
there is no difference between the values when deer actually breed.
In other words, breeding may be just as likely on a “4” day as on a
“10”. If this doesn’t make sense, remember the DAI is for daytime activity
only. On “4” days, breeding can occur, although it most probably occurs at night.
Exceptions do occur: in well-managed herds with older age structures and balanced sex ratios, the rut probably outweighs any DAI. However,
this is not the case in many parts of the country. As a result, the
DAI is probably a better indicator of total deer movements in areas
that experience a prolonged “trickle” rut.
Why does the DAI work? Some believe it has something to do with the
moon’s gravitational pull. We know that the moon’s gravitational pull
is different every day and that its effects are felt through daily high and
low tides. Some argue all living creatures are affected by the moon…just
ask any bartender or police officer. Whatever the reason, the fact
remains that the DAI is the only statistical tool we have to predict daytime deer activity.
Woods believes, “I can now help hunters and researchers schedule
their field time more efficiently by picking the best days to hunt.
This should be especially helpful for those who have limited opportunities
to hunt.” Testimony comes from letters sent to him stating,
“By eliminating the least active days, my wife is happier due to the
additional time I was able to spend with her,” or “It was almost as
if the deer had made the activity schedule themselves and mailed it
out to us hunters."
It’s important to note that the DAI does NOT tell you where to hunt;
rather, it is a tool that indicates your most probable days for deer activity.
Obviously, hunting know-how is still paramount to being consistently
successful. The bottom line is this: if you have an hour to hunt, then
hunt. But if you have the DAI, you can shift your limited time
afield to those key “8” to “10” days.
The DAI may be altered by extremes in weather conditions,
hunting pressure, food availability and/or the rut. However,
Woods believes “normal” environmental factors are generally
not enough to totally override the DAI. Since 1991, the DAI
has been accurate 69-75 percent of the time…not bad! DAI values
are available on a large poster calendar for $12.95 by calling
[888-760-3337] or sending a check or money order to DAI, P.O.
Box 36056, Birmingham, AL 35236-6056 (allow 3-4 weeks for shipping).
Winand is a whitetail biologist from Randallstown, MD.
He is a staff writer for Bowhunter as well as Deer and Deer Hunting.
|
2026-01-20T21:51:08.036793
|
463,526
| 4.374528
|
http://en.wikipedia.org/wiki/Sunrise
|
Sunrise or sun up is the instant at which the upper edge of the Sun appears over the eastern horizon in the morning. The term can also refer to the entire process of the Sun crossing the horizon and its accompanying atmospheric effects.
Although the Sun appears to "rise" from the horizon, it is actually the Earth's motion that causes the Sun to appear. The illusion of a moving Sun results from Earth observers being in a rotating reference frame; this apparent motion is so convincing that most cultures had mythologies and religions built around the geocentric model, which prevailed until astronomer Nicolaus Copernicus first formulated the heliocentric model in the 16th century.
Architect Buckminster Fuller proposed the terms "sunsight" and "sunclipse" to better represent the heliocentric model, though the terms have not entered into common language.
Beginning and end
Astronomically, sunrise occurs for only an instant: the moment at which the upper limb of the Sun appears tangent to the horizon. However, the term sunrise commonly refers to periods of time both before and after this point:
- Twilight, the period during which the sky is light but the Sun is not yet visible (morning), or has just passed out of visibility (evening). The beginning of morning twilight is called dawn.
- The period after the Sun rises during which striking colors and atmospheric effects are still seen.
Sunrise occurs before the Sun actually reaches the horizon because the Sun's image is refracted by the Earth's atmosphere. The average amount of refraction is 34 arcminutes, though this amount varies based on atmospheric conditions.
Also, unlike most other solar measurements, sunrise occurs when the Sun's upper limb, rather than its center, appears to cross the horizon. The apparent radius of the Sun at the horizon is 16 arcminutes.
Time of day
The timing of sunrise varies throughout the year and is also affected by the viewer's longitude and latitude, altitude, and time zone. These changes are driven by the axial tilt of Earth, daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. The analemma can be used to make approximate predictions of the time of sunrise.
In the late winter and spring, sunrise as seen from temperate latitudes occurs earlier each day, reaching its earliest time near the summer solstice; the exact date varies by latitude. After this point, the sunrise time gets later each day, reaching its latest sometime around the winter solstice. The offset between the dates of the solstice and the earliest or latest sunrise time is caused by the eccentricity of Earth's orbit and the tilt of its axis, and is described by the analemma, which can be used to predict the dates.
Variations in atmospheric refraction can alter the time of sunrise by changing its apparent position. Near the poles, the time-of-day variation is exaggerated, since the Sun crosses the horizon at a very shallow angle and thus rises more slowly.
Accounting for atmospheric refraction and measuring from the leading edge slightly increases the average duration of day relative to night. The sunrise equation, however, which is used to derive the time of sunrise and sunset, uses the Sun's physical center for calculation, neglecting atmospheric refraction and the non-zero angle subtended by the solar disc.
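As a rough illustration of how such a calculation is commonly done, the Python sketch below applies the standard sunrise equation with the widely used -0.833 degree altitude convention, which folds the 34 arcminutes of refraction and the Sun's 16 arcminute apparent radius mentioned above into a single correction. The function and its example inputs are illustrative assumptions; a real almanac calculation would also need the solar declination and the equation of time for the date in question.

import math

def sunrise_hour_angle(latitude_deg, declination_deg, altitude_deg=-0.833):
    """Hour angle (degrees) between local solar noon and sunrise.

    altitude_deg = -0.833 folds in roughly 34' of refraction plus the
    Sun's ~16' apparent radius, so the upper limb touches the horizon.
    """
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    alt = math.radians(altitude_deg)
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    if cos_h > 1:
        return None  # Sun never rises on this date (polar night)
    if cos_h < -1:
        return None  # Sun never sets on this date (midnight sun)
    return math.degrees(math.acos(cos_h))

# Example: latitude 51.5 N at an equinox (declination ~0) gives ~91.3 degrees,
# i.e. sunrise about 6 h 5 min before local solar noon (15 degrees per hour).
print(sunrise_hour_angle(51.5, 0.0))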
Location on the horizon
Neglecting the effects of refraction and the Sun's non-zero size, whenever and wherever sunrise occurs, it is always in the northeast quadrant from the March equinox to the September equinox and in the southeast quadrant from the September equinox to the March equinox. Sunrises occur due east on the March and September equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunrise on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma.
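The quadrant behaviour described above can also be captured with a simple spherical-trigonometry approximation. The sketch below is a different, simpler estimate than the analemma method mentioned in the text: it neglects refraction and the Sun's finite size, and it measures azimuth clockwise from true north.

import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Approximate azimuth of sunrise, ignoring refraction and the solar disc."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    cos_az = math.sin(dec) / math.cos(lat)
    if abs(cos_az) > 1:
        return None  # no ordinary sunrise at this latitude and date
    return math.degrees(math.acos(cos_az))

print(sunrise_azimuth(40.0, 0.0))     # equinox: 90.0 degrees (due east)
print(sunrise_azimuth(40.0, 23.44))   # June solstice: ~58.7 (northeast quadrant)
print(sunrise_azimuth(40.0, -23.44))  # December solstice: ~121.3 (southeast quadrant)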
Rayleigh scattering by smaller particles
Pure sunlight is white in color, containing a spectrum of colors from violet to red. When sunlight interacts with atmospheric particles much smaller than the wavelength of visible light, a phenomenon known as Rayleigh scattering occurs. In this process, light is scattered in various directions, with shorter wavelengths (violet, blue, and green) being scattered more strongly than longer ones (orange and red).
Because of this effect, the Sun generally appears yellow when observed on Earth, since some of the shorter wavelengths are scattered into the surrounding sky. This also makes the sky appear increasingly blue farther away from the Sun. During sunrise and sunset, the longer path through the atmosphere results in the removal of even more violet and blue light from the direct rays, leaving weak intensities of orange to red light in the sky near the Sun.
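The quantitative heart of Rayleigh scattering is its steep wavelength dependence, roughly proportional to one over the fourth power of the wavelength. A small sketch (the wavelengths are chosen only for illustration) shows why blue is stripped from the direct beam so much faster than red:

# Rayleigh scattering strength scales roughly as 1 / wavelength**4, so the
# short-wavelength end of the spectrum is removed from the direct rays much
# faster than the long-wavelength end. The wavelengths below are illustrative.
blue_nm = 450.0
red_nm = 700.0

ratio = (red_nm / blue_nm) ** 4
print(f"Light at {blue_nm:.0f} nm is scattered about {ratio:.1f} times more "
      f"strongly than light at {red_nm:.0f} nm")
# ~5.9x: over the long atmospheric path at sunrise or sunset this leaves
# mostly orange and red in the direct sunlight.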
Mie scattering by larger particles
The intense reds and peach colors in brilliant sunrises come from Mie scattering by atmospheric dust and aerosols, like the water droplets that make up clouds. We only see these intense reds and peach colors at sunrise and sunset, because it takes the long pathlengths of sunrise and sunset through a lot of air for Rayleigh scattering to deplete the violets and blues from the direct rays. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. These larger particles, with sizes comparable to and longer than the wavelength of light, scatter light by mechanisms treated by the Mie theory.
Mie scattering does not depend heavily on wavelength, but it has the largest effect when an observer views the light directly (such as toward the Sun), rather than looking in other directions. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light).
Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, whereas volcanic ejecta lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets) can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric sulfuric acid clouds to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high-altitude clouds serve to reflect strongly-reddened sunlight still striking the stratosphere after sunset down to the surface.
Sunrise vs. Sunset colors
Sunset colors are sometimes more brilliant than sunrise colors because evening air typically contains more large particles, such as clouds and smog, than morning air. These particles glow orange and red due to Mie scattering during sunsets and sunrises because they are illuminated with the longer wavelengths that remain after Rayleigh scattering.
If the concentration of large particles is too high (such as during heavy smog), the color intensity and contrast are diminished and the lighting becomes more homogeneous. When very few particles are present, the reddish light is more concentrated around the Sun and is not spread across and away from the horizon.
Optical illusions and other phenomena
- Atmospheric refraction causes the Sun to be seen while it is still below the horizon.
- The Sun appears larger at sunrise than it does while higher in the sky, in a manner similar to the moon illusion.
- The Sun appears to rise above the horizon and circle the Earth, but it is actually the Earth that is rotating, with the Sun remaining fixed. This effect results from the fact that an observer on Earth is in a rotating reference frame.
- Occasionally a false sunrise occurs, demonstrating a very particular kind of Parhelion belonging to the optical phenomenon family of halos.
- Sometimes just before sunrise or after sunset a green flash can be seen. This is an optical phenomenon in which a green spot is visible above the sun, usually for no more than a second or two.
See also
- Day length
- Earth's shadow, visible at sunrise
- False sunrise
- Sunrise equation
|
2026-01-25T10:15:04.070223
|
695,138
| 4.228963
|
http://www.nps.gov/history/nr/travel/amistad/slavetrade.htm
|
An estimated 12 million Africans were transported across the Atlantic to the Western Hemisphere from 1450 to 1850. Of this number, about five percent were brought to British North America and, later, to the United States, most of them arriving between 1680 and 1810. A small number of Africans went first to the British West Indies and then to North America. 1
Africans were present while North and South America were explored and expropriated as European colonies (1500s-1700s), but their roles and status varied from Mexico to Brazil to the Carolinas and New Amsterdam. 2 Bonded labor, common both in Europe and Africa, declined in Europe while it became more important in Africa after trade with Europe was established. At the end of the 14th century Europeans, primarily the Portuguese and the Spanish, were exploring the west coast of Africa, looking both for trade opportunities and trade routes to the East. In their interaction with African merchants they began to export small numbers of slaves to their European homelands. With the exploration and eventual European settling of the New World, however, the trade in African slaves increased rapidly. Initially Europeans brought only small numbers of Africans to the New World. Yet as the need for labor grew with increased agricultural, mining, mercantile and other business interests, so too did the number of black slaves, the vast majority of whom were males. Brazil and the Caribbean had the largest number of imports and for the longest period of time, until the 1880s. Although most of the figures for the Atlantic slave trade system are imprecise, it is possible to estimate that Brazil received at least 4 million slaves and the islands of the Caribbean, colonized by the French, Dutch, English, Danish and Spanish, as well as Spain's mainland possessions, received at least 5.5 million. The mainland United States, as colonies and nation, imported about 450,000 slaves over a 250 year period. Slavery in this country began, then, as part of a long history of international trade in goods and people in Europe and in Africa. 3
Europeans divided the slave trade into three geographic regions--Upper Guinea, Lower Guinea and Angola. More than three-fifths of the slaves brought to the Chesapeake were from the Gold Coast or the Bight of Biafra. Many Sierra Leoneans went to Carolina, where they were outnumbered by Angolans. Senegambians were prominent in both the Carolinas and Louisiana. Rivalries between ethnic and tribal groups, raids by North Africans and local soldiers, and piracy on the rivers of the African coast provided the majority of captured Africans. 4
Traditionally, the entry of Africans into British North America is dated from the 1619 sale of some 20 blacks from a Dutch ship in Virginia. Although there were undoubtedly other Africans in those regions which later became part of the United States, slavery as it developed in British North America and was continued in the American republic can be traced to what happened in the Chesapeake in the 1600s. For the first few decades, the status of Africans was uncertain. Some were treated as indentured servants and freed after a term of service, often 14 years. Others were kept on in servitude because their labor was needed, and it was too tempting for aspiring planters not to take advantage of the vulnerable black laborers. By the 1640s, court decisions began to reflect a different standard for Africans than for white servants and to accept the concept of lifetime black servitude. In the 1660s, Virginia decreed that a child followed the status of its mother, thus making lifetime servitude inheritable. A series of court decisions from the 1660s forward locked slavery into place in the Chesapeake and its existence was not questioned in the later development of the Carolinas. 5 Georgia resisted briefly and then accepted the institution. Slave law to the north of the Chesapeake did not differ significantly. 6
By the time a body of law regarding slavery was firmly in place, a number of free blacks who had escaped permanent bondage through indenture lived throughout the colonies. They married--other free blacks, slaves, American Indians, occasionally European servant women--and raised families. This, in addition to African sailors and free black arrivals from the West Indies, constituted the core of the free black class in the colonies. 9
When British North America severed ties with England, the slave trade between West Africa, the British West Indies, and North America was also officially severed, but colonial American merchant shipping was prepared to expand its role and replace the British. At the same time, in the Revolutionary Era, the public debate in favor of liberty from England strengthened arguments against the slave trade and human bondage. When legal codes were changed during the American Revolution, both the Continental Congresses and the individual states took the opportunity to condemn and restrict the slave trade. Reasons for condemning the slave trade varied. It was increasingly attacked as a moral evil by religious and benevolent societies; parts of the South feared slave insurrection if the numbers of Africans grew to be much greater than the white population; it appeared that the enslaved population could sustain itself and increase in numbers without significant importations. To end the slave trade, however, was not necessarily to favor an end to slavery. Here the colonies divided.
Since the Americans had argued for natural rights in their Declaration of Independence, there was some sentiment for ending the slave trade, although less political will for ending slavery. Ultimately, the Constitution did not follow up on the implications for "liberty" offered in the Declaration of Independence. The Constitutional compromise of 1787 put an end to the slave trade by 1808, but the Constitution and the Fugitive Slave Law of 1793 confirmed the rights of slaveholders to their property. Section 2, Article 4 of the Constitution referred to slavery without naming it when it said, "No person held to service or labour in one state, under the laws thereof, escaping into another, shall, in consequence of any law or regulation therein, be discharged from such service or labour, but shall be delivered up on claim of the party to whom such service or labour may be due."
After the American Revolution, Abolition Societies were formed in every part of the United States. The American antislavery movement was modeled on the English antislavery movement from the adoption of the Constitution in 1787 until the 1820s. English reformers, led by Quakers, evangelicals and certain politicians, had organized in 1787 to abolish English participation in the international slave trade. The British reformers ended the slave trade by 1807 and ended slavery by 1833, with compensation to owners. The Americans followed the British example of advocating the gradual and compensated abolition of slavery. But it was the activities of American free blacks and the resistance to slavery by American slaves that provided the movement with its most tireless workers and its best reason for persisting.
Above essay excerpted from the Underground Railroad Resources in the United States National Historic Landmarks Theme Study. Related information about The African Squadron: The U.S. Navy and the Slave Trade, 1820-1862 can be found on the Mystic Seaport, The Museum of America and the Sea website.
1 Phillip Curtin, The Atlantic Slave Trade: A Census (Madison: University of Wisconsin Press, 1969), 18.
2 For a summary of these diverse experiences, see Brenda Stevenson, "From Bondage to Freedom: Slavery in America," in Underground Railroad , (Handbook 156, Division of Publications, National Park Service, U.S. Department of the Interior, Washington, D.C., 1998.)
3 John Thornton. Africa and Africans in the Making of the Atlantic World, 1400-1680 . (New York: Cambridge University Press, 1992), xv. See also Donald R. Wright, African Americans in the Colonial Era: From African Origins Through the American Revolution (Arlington Heights, Ill.: Harland Davidson, Inc., 1990).
4 James Rawley, The Transatlantic Slave Trade: A History (New York: W.W. Norton, 1981), 17-18. See also Philip Curtin, Steven Feierman, Leonard Thompson and Jan Vansina, African History (New York: Longman, 1991); Patrick Manning, Slavery and African Life: Occidental, Oriental, and African Slave Trades . (Cambridge, UK: Cambridge University Press, 1990). 62-85.
5 Edmund Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (New York: W.W. Norton, 1975), 299-311; Winthrop Jordan, The White Man's Burden: Historical Origins of Racism in the United States (New York: Oxford University Press, 1974), 26-54.
6 James Oliver Horton and Lois E. Horton, In Hope of Liberty: Culture, Community and Protest Among Northern Free Blacks, 1700-1860 (New York: Oxford University Press, 1977), 5-12.
7 Thornton, 116-125.
8 Rawley, The Transatlantic Slave Trade, 247.
9 For the origins of northern free blacks, see James Oliver Horton and Lois E. Horton, In Hope of Liberty , Ch. 1-2. For a Chesapeake example, see T.H. Breen and Steven Innis, " Mynne Owne Ground": Race and Freedom on Virginia's Eastern Shore, 1640-1676 (New York: Oxford University Press, 1980).
10 Curtin, The Atlantic Slave Trade, 83; Daniel C. Littlefield, "The Slave Trade to Colonial South Carolina: A Profile," South Carolina Historical Magazine 91 (1990): 68-99; Susan Westbury, "Slaves of Colonial Virginia: Where They Came From," William and Mary Quarterly 3rd Series 42 (1985): 228-237.
Comments or Questions
|
2026-01-28T23:30:31.483442
|
1,052,660
| 3.910065
|
http://physics.aps.org/articles/v5/25
|
Water is one of the most abundant molecules in “icy” planets and moons in our solar system and probably in extrasolar planets, such as icy “super Earths” with masses several times the mass of Earth, and “hot Neptunes” with masses comparable to or smaller than the mass of Neptune, which is about 17 times the mass of Earth. Planetary “ice” is a mixture of H2O, CH4, and NH3, whether frozen solid on the surface or as a fluid in the hot interior of an “icy” planet or moon. Ice in Neptune and Uranus, for example, exists only as a fluid. Ice is mostly H2O, and generally treated as such, which is why the equation of state (EOS) of water is so important for modeling icy planets. Because of their large sizes and low thermal conductivities, planetary interior pressures range up to several hundred gigapascals (GPa) and temperatures reach several thousand kelvin. As reported in Physical Review Letters, Marcus Knudson at Sandia National Laboratories, New Mexico, and colleagues have now made substantial advances in probing the properties of water under such conditions.
Researchers at Los Alamos National Laboratory developed an EOS of water at high pressure and temperature decades ago, which is one of many EOS’s archived in a database called Sesame. Sesame was started with relatively little experimental data and is used widely for planetary modeling. Initial shock data for water extended up to only modest pressures; interior pressures in both Neptune and Uranus exceed that range, and are much higher in many newly discovered icy planets. Because of recent advances in shock-compression techniques, experimentalists can now measure EOS data of water at pressures and temperatures up to those in the cores of the icy giant planets Neptune and Uranus. Knudson et al. have made such measurements, and they have used their data to construct an EOS of water that can now be used by the planetary physics and chemistry community to construct accurate models of deep interiors of icy planets and moons.
More specifically, space flyby missions to Neptune and Uranus have measured their gravitational fields, from which mass-density distributions are derived. By knowing accurate EOS models for various likely constituents, the combination of models that best matches the measured total planetary mass and mass distribution provides a good indication of the likely chemical composition of the interiors. At present this technique works only for a few general classes of materials, namely light material, medium-weight material (ice, rock, and their mixtures), and heavy material. This selection process is aided by knowledge of whether or not a planet has a magnetic field, which is discussed below.
For decades, extreme pressures, densities, internal energies, and temperatures have been achieved in water by hypervelocity impact and by irradiation with an intense radiation pulse. These energy sources produce fast, high-pressure pulses in highly condensed matter, often within less than a nanosecond (ns), and last for durations that depend on the source. These time scales are sufficiently long to achieve thermal equilibrium in most materials and sufficiently brief that the process is adiabatic (heat of compression is retained during the experimental lifetime). In addition, corrosion by the ionic fluids so produced has insufficient time to occur. Hydrodynamic waves produced by such fast processes are called shock waves. Energy deposited in these fast processes goes into compression and into dissipation in the forms of heating and disorder. The locus of states produced by a sequence of shock pressures in a material with fixed initial density is called a shock-compression curve, or a Rankine-Hugoniot curve, or simply a Hugoniot. Uniform shock states are achieved throughout the bulk sample, or uniform values can be obtained by modest corrections to measured data.
Knudson et al. achieved high shock pressures with impacts and measured EOS data points of water up to pressures in the range of icy-planet interiors. In a Hugoniot EOS experiment, the initial material densities, the velocity of the impactor, the velocity of the shock wave induced by the impact, and the velocity of the mass just behind the shock front are measured. Knudson measured these quantities with optical interferometry; basically, velocity is proportional to the number of interferometric fringes measured. Shock pressure, density, and internal energy are then calculated from the measured initial mass densities and velocities by conservation of momentum, mass, and energy, respectively, across the front of a shock wave. The conservation equations are called the Rankine-Hugoniot equations. Temperature of shock-compressed matter is determined experimentally by measurements of gray-body thermal-emission spectra, provided the medium in front of a thermally radiating shock front is sufficiently transparent, or it is calculated theoretically.
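To make the conservation-law bookkeeping in the preceding paragraph concrete, the Python sketch below evaluates the Rankine-Hugoniot jump conditions: given the initial density, the shock velocity, and the particle (mass) velocity behind the front, the shocked pressure, density, and specific internal energy follow from conservation of momentum, mass, and energy. The numerical inputs are placeholders for illustration, not values from Knudson et al.

def hugoniot_state(rho0, us, up, p0=0.0, e0=0.0):
    """Shocked state from the Rankine-Hugoniot jump conditions.

    rho0 : initial density (kg/m^3)
    us   : shock-wave velocity (m/s)
    up   : particle (mass) velocity behind the shock front (m/s)
    p0   : initial pressure (Pa), negligible next to shock pressures
    e0   : initial specific internal energy (J/kg)
    """
    rho = rho0 * us / (us - up)                          # conservation of mass
    p = p0 + rho0 * us * up                              # conservation of momentum
    e = e0 + 0.5 * (p + p0) * (1.0 / rho0 - 1.0 / rho)   # conservation of energy
    return p, rho, e

# Illustrative, made-up numbers: water at 998 kg/m^3 hit by a 20 km/s shock
# with a 12 km/s particle velocity is compressed about 2.5-fold at ~240 GPa.
p, rho, e = hugoniot_state(998.0, 20e3, 12e3)
print(f"P = {p / 1e9:.0f} GPa, rho = {rho:.0f} kg/m^3, E = {e / 1e6:.0f} MJ/kg")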
Knudson et al. also measured optical reflectivity of a laser beam reflected from a moving shock front. Electrical conductivity of shock-compressed water can then be estimated from measured reflectivities. Electrical conductivities of water are needed to calculate planetary magnetic fields by magnetohydrodynamic dynamo calculations of fields produced by convecting, conducting fluid water mixed with rock . At extreme pressures and temperatures in planets, water dissociates into several chemical species. Thus, in general, if a planet has a magnetic field, it has a constituent of its interior that is conductive and perhaps metallic at interior pressures and temperatures. This latter condition also facilitates identification of likely chemical constituents of planets.
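One way to see how a reflectivity measurement constrains conductivity is to combine a simple Drude-type dielectric model with the Fresnel formula. The sketch below is only a toy estimate under those assumptions, not the analysis used by Knudson et al.; the probe wavelength and the bound-electron dielectric constant are illustrative choices.

import cmath
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def normal_incidence_reflectivity(sigma, wavelength_nm, eps_bound=1.8):
    """Toy reflectivity of a conducting fluid at normal incidence.

    sigma is a dc electrical conductivity (S/m); eps_bound approximates the
    bound-electron dielectric constant of water (n ~ 1.33). The conductivity
    adds an imaginary part to the dielectric function, and the Fresnel
    relation converts the resulting complex index into a reflectivity.
    """
    omega = 2 * math.pi * 3e8 / (wavelength_nm * 1e-9)  # probe angular frequency
    eps = eps_bound + 1j * sigma / (EPS0 * omega)       # complex dielectric function
    n = cmath.sqrt(eps)                                 # complex refractive index
    return abs((n - 1) / (n + 1)) ** 2                  # Fresnel formula, normal incidence

# Ordinary transparent water reflects ~2 percent; a poor-metal conductivity of
# order 1e5 S/m raises the reflectivity to roughly 17 percent in this toy model.
for sigma in (0.0, 1e5):
    print(sigma, round(normal_incidence_reflectivity(sigma, 532.0), 3))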
The magnitude and lifetime of a shock wave generated by impact depend sensitively on the velocity, density, and aspect ratio (diameter/thickness) of an impactor plate. Its aspect ratio should be sufficiently large to eliminate edge effects during experimental lifetime, which is determined by impactor size and mass. Impactor mass is important because it affects kinetic energy imparted to an impactor and the fraction of kinetic energy that goes into velocity. Achievement of ever-higher shock pressures by impact has required a series of shock drivers with ever-higher kinetic energies or radiation intensities.
Historically, water has been single-shocked to successively higher pressures: with planar shock waves generated with chemical explosives, then with a two-stage light-gas gun [4, 6], with underground nuclear explosions [7, 8], and with a pulsed giant laser by Lawrence Livermore National Laboratory at the Omega Laser. Now, this work of Knudson et al. pushes higher still with a giant pulsed electrical current at the Z Accelerator at Sandia National Laboratories. Today, only giant pulsed lasers and impactor plates accelerated with the Z machine to very high velocities can achieve the highest shock pressures in water. The Z machine has achieved not only very high pressures but also small experimental error bars on pressure, compression, and energy, which essentially eliminate previous ambiguities concerning the correct Hugoniot of water in this newly accessible regime. Knudson’s small error bars are a major contribution of this work.
The highest shock pressure was generated by the impact onto a sample holder of an aluminum plate travelling at hypervelocity and carrying megajoules of kinetic energy. The Z Accelerator, shown in Fig. 1, produces a current pulse behind the impactor that rises rapidly to a peak current of many mega-amps. The pulsed magnetic field not only produces a magnetic pressure on the rear surface of the impactor plate, it also heats the impactor plate by magnetic flux diffusion. The magnetic drive pressure is limited because it must be produced by a current pulse that maintains the aluminum layer near the impact surface as a solid. This minimizes heating of the impactor behind the solid region, which keeps the impact surface flat and stable. The result is a density variation that does not compromise use of the Hugoniot equations and fast hydrodynamics to interpret measured velocities in terms of thermodynamic variables.
Knudson’s EOS of shock-compressed water will lead to improved modeling of deep interiors of icy planets both within and beyond our solar system, and demonstrates a strong likelihood that many icy planets outside our solar system have magnetic fields. The measured reflectivities increase with pressure and temperature, and these reflectivities of a fluid correspond to electrical conductivities typical of poor metals (the minimum conductivity of a metal), which are optimum values for supporting planetary magnetic fields.
Interesting work for the future includes (i) measuring the complex index of refraction of shocked water and other planetary fluids to be able to derive more accurate values of electrical conductivities from reflectivity measurements; (ii) determining experimentally the relative contributions to optical emission from the shocked fluid and the shocked windows containing the fluids; and (iii) measuring temperatures of single- and double-shocked water.
- D. J. Stevenson, Space Sci. Rev. 152, 651 (2010).
- W. B. Hubbard, Science 275, 1279 (1997).
- M. D. Knudson, M. P. Desjarlais, R. W. Lemke, T. R. Mattsson, M. French, N. Nettelmann, and R. Redmer, Phys. Rev. Lett. 108, 091102 (2012).
- W. J. Nellis, Rep. Prog. Phys. 69, 1479 (2006).
- J. M. Walsh and M. H. Rice, J. Chem. Phys. 26, 815 (1957).
- A. C. Mitchell and W. J. Nellis, J. Chem. Phys. 76, 6273 (1982).
- M. A. Podurets, G. V. Simakov, R. F. Trunin, L. V. Popov, and B. N. Moiseev, Sov. Phys. JETP 35, 375 (1972).
- R. F. Trunin, Shock Compression of Condensed Materials (Cambridge University Press, Cambridge, 1998).
- P. M. Celliers et al., Phys. Plasmas 11, L41 (2004).
- B. Schwarzschild, Phys. Today 56 No. 7, 19 (2003).
|
2026-02-03T12:01:45.421292
|
669,280
| 3.732995
|
http://wiki.linuxquestions.org/wiki/GFDL:Microkernel
|
A microkernel is a minimal form of computer operating system kernel providing a set of primitives, or system calls, to implement basic operating system services such as address space management, thread management, and inter-process communication. All other services, those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers.
One innovation of the Unix operating system is the use of a large number of small programs that can be strung together to complete a task with a pipe, as opposed to using a single larger program that includes all of the same functionality. The result is more flexibility and improved development; since each program is small and dedicated to a single role, it is much easier to understand and debug.
Under Unix, the "operating system" consists of many of these utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handle the file system and other common "high level" tasks that most programs share, and, perhaps most importantly, schedules access to hardware to avoid conflicts if two programs attempt to simultaneously access the same resource or device. In order to mediate such access, the kernel was given special rights on the system and led to the division between user-space and kernel-space.
Early operating system kernels were rather small, partly because computer memories were small. As the capability of computers grew, the number of devices the kernel had to control also grew. Early versions of UNIX had kernels of quite modest size, even though those kernels contained device drivers and file system managers. When address spaces increased from 16 to 32 bits, kernel design was no longer cramped by the hardware architecture, and kernels began to grow.
Berkeley UNIX (BSD) began the era of the "big kernel". In addition to operating a basic system consisting of the CPU, disks and printers, BSD started adding additional file systems, a complete TCP/IP networking system, and a number of "virtual" devices that allowed the existing programs to work invisibly over the network.
This growth trend continued for several decades, resulting in UNIX, Linux, and Microsoft Windows kernels with millions of lines of code in the kernel. Current versions of Linux, Red Hat 7.1 for instance, contain about 2.5 million lines of source code in the kernel alone (of about 30 million in total), while Windows XP is estimated at twice that.
Microkernels tried to reverse the growing size of kernels and return to a system in which most tasks would be completed by smaller utilities. Unix attempted to model the world as files, using pipes to move data between them. In an era when a "normal" computer consisted of a hard disk for storage and a printer for input/output, this model worked quite well as most I/O was "linear". The introduction of interactive terminals required only minor "adjustments" to this model; while the display itself was no longer strictly linear, the series of interactions between the user's input and the computer's output remained fairly similar to older systems.
However, modern systems including networking and other new devices no longer seemed to map as cleanly onto files. For instance, trying to describe a window being driven by mouse control in an "interrupt driven" fashion simply doesn't seem to map at all onto the 1960's batch-oriented model. Work on systems supporting these new devices in the 1980s led to a new model; the designers took a step back and considered pipes as a specific example of a much more general concept: inter-process communications, or IPC. IPC could be used to emulate Unix-style pipes, but it could also be used for practically any other task, such as passing data at high speeds between programs. Systems generally refer to one end of the IPC channel as a port.
With IPC the operating system can once again be built up of a number of small programs exchanging data through their ports. Networking can be removed from the kernel and placed in a separate user-space program, which is then called by other programs on the system. All hardware support is handled in this fashion, with programs for networking, file systems, graphics, etc.
Servers are programs like any others, although the kernel grants them privileges to interact with parts of memory that are otherwise off limits to most programs. This allows the servers to interact directly with hardware. A "pure" microkernel-based operating system would generally start a number of servers while booting, servers for handling the file system, networking, etc.
The system then functions as if it has a full Unix kernel; the fact that the networking support is being "handed off" is invisible. Instead of a single six-million-line kernel, there is a series of smaller programs. Additionally, users can choose capabilities as needed and run only those programs, tailoring the system to their needs. For instance, an isolated machine could be instructed not to start the networking server, thereby freeing those resources. The same sort of change to a traditional kernel, also known as a monolithic kernel or monokernel, is very difficult due to the high level of interconnectedness between parts of the system.
The role of the kernel in such a system is limited. In addition to providing basic task management (starting and stopping other programs), it provides the IPC system and security. When booting, the kernel starts up a series of servers to handle the hardware on the system, granting those servers additional rights as needed. New programs, those being started by the user, use the IPC system to access hardware, calling the kernel to pass along messages after being checked for rights and validity. To handle these tasks, ports introduce filesystem-like endpoints into the IPC system, complete with rights for other programs to use them. For instance, a network server would hold the write permissions to the networking hardware, and keep a number of ports open for reading to allow other programs to call it. Other programs could not take over the networking hardware without the kernel specifically granting this access, and only after the networking server agreed to give up those rights.
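As a toy illustration of ports carrying rights, the sketch below models this mediation in Python. It is not the API of Mach, L4, QNX, or any real kernel; it only shows the pattern the paragraph describes: servers own ports, the kernel records which tasks hold which rights, and every message is validated before it is delivered.

from collections import deque

class ToyKernel:
    """Toy model of microkernel-style IPC: ports, rights, and message delivery."""

    def __init__(self):
        self.ports = {}   # port name -> queue of pending messages
        self.rights = {}  # (task, port name) -> set of rights such as {"send"}

    def create_port(self, owner, name):
        self.ports[name] = deque()
        self.rights[(owner, name)] = {"send", "receive"}

    def grant(self, task, name, right):
        self.rights.setdefault((task, name), set()).add(right)

    def send(self, task, name, message):
        # The kernel checks rights before moving data between address spaces.
        if "send" not in self.rights.get((task, name), set()):
            raise PermissionError(f"{task} may not send to port {name}")
        self.ports[name].append((task, message))

    def receive(self, task, name):
        if "receive" not in self.rights.get((task, name), set()):
            raise PermissionError(f"{task} may not receive on port {name}")
        return self.ports[name].popleft() if self.ports[name] else None

kernel = ToyKernel()
kernel.create_port("net_server", "net")        # the network server owns its port
kernel.grant("browser", "net", "send")         # the kernel grants a client send rights
kernel.send("browser", "net", "open example")  # delivered through the kernel, not shared memory
print(kernel.receive("net_server", "net"))     # ('browser', 'open example')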
The "collection of servers" model offers many advantages over traditional operating systems. With the majority of code in well-separated programs, development on such a system becomes considerably easier. Developing new networking stacks on a traditional monolithic kernel required the entire kernel to be recompiled and rebooted, hard-crashing the machine and forcing a reboot if there is a bug. With a microkernel there is less chance that an updated networking system would do anything other than inconvenience the user and require that one program to be relaunched. It also offers considerably more security and stability for the same reasons. Additionally the kernel itself becomes smaller — later versions of Mach were only 44,000 lines of code.
Additionally, many "crashes" can be corrected for by simply stopping and restarting the server. In a traditional system, a crash in any of the kernel-resident code would result in the entire machine crashing, forcing a reboot. However, part of the system state is lost with the failing server, and it is generally difficult to continue execution of applications, or even of other servers with a fresh copy. For example, if a server responsible for TCP/IP connections is restarted, applications could be told the connection was "lost" and reconnect to the new instance of the server. However, other system objects, like files, do not have these convenient semantics, are supposed to be reliable, not become unavailable randomly and keep all the information written to them previously.
In order to make all servers restartable, some microkernels have concentrated on adding database-like techniques such as transactions, replication, and checkpointing between servers in order to preserve essential state across single-server restarts. A good example of this is ChorusOS, which was targeted at high-availability applications in the telecommunications world. Chorus included features to allow any "properly written" server to be restarted at any time, with clients using those servers being paused while the server brought itself back into its original state.
Essential components of a microkernel
The minimum set of services required in a microkernel seems to be address space management, thread management, inter-process communication, and timer management. Everything else can be done in a user program, although in a minimal microkernel, some user programs may require special privileges to access I/O hardware. A few operating systems approach this ideal, notably QNX and IBM's VM.
A key component of a microkernel is a good inter-process communication system. Since many services will be performed by user programs, good means for communications between user programs are essential, far more so than in monolithic kernels. The design of the inter-process communication system makes or breaks a microkernel. To be effective, the inter-process communication system must not only have low overhead, it must interact well with CPU scheduling.
Start up, or booting, of a microkernel can be difficult. The kernel alone may not contain enough services to start up the machine. Thus, either additional code for startup, such as key device drivers, must be placed in the kernel, or means must be provided to load an appropriate set of service programs during the boot process.
Some microkernels are designed for high security applications. EROS and KeyKOS are examples. Part of secure system design is to minimize the amount of trusted code; hence, the need for a microkernel. Work in this direction, with the notable exception of systems for IBM mainframes such as KeyKOS and IBM's VM, has not resulted in widely deployed systems.
Microkernels need a highly efficient way for one process to call another, in a way similar to a subroutine call or a kernel call. The traditional performance problems with microkernels revolve around the costs of such calls. Microkernels must do extra work to copy data between servers and application programs, and the necessary interprocess communication between processes results in extra context switch operations. The components of that cost are thus copying cost and context switch cost.
Attempts have been made to reduce or eliminate copying costs by using the memory management unit, or MMU, to transfer the ownership of memory pages between processes. This approach, which is used by Mach, adds complexity but reduces the overhead for large data transfers. L4 adds a lightweight mechanism using registers if the amount of data being passed is small, which can dramatically improve performance, both in terms of copying, as well as avoiding cache misses in the CPU's cache. On the other hand, QNX does all IPC by direct copying, incurring some extra copying costs but reducing complexity and code size.
Systems that support virtual memory and page memory out to disk create additional problems for interprocess communication. Unless both the source and destination areas are currently in memory, copying must be delayed, or staged through kernel-managed memory. Copying through kernel memory adds an extra copy cost and requires extra memory. Delaying copying for paging delays complicates the interprocess communication system. QNX ducks this problem entirely by not supporting paging, which is an appropriate solution for a hard real-time system like QNX.
Reducing context switch cost requires careful design of the interaction between interprocess communication and CPU scheduling. Historically, UNIX interprocess communication has been based on the UNIX pipe mechanism and the Berkeley sockets mechanism used for networking. Neither of these mechanisms has the performance needed for a usable microkernel. Both are unidirectional I/O-type operations, rather than the subroutine-like call-and-return operations needed for efficient user to server interaction. Mach has very general primitives which tend to be used in a unidirectional manner, resulting in scheduling delays. The Vanguard microkernel supported the "chaining" of messages between servers, which reduced the number of context switches in cases where a message required several servers to handle the request. A number of other microkernels "wrote down" information about the caller, allowing the message to be returned without having to look up the client.
Other microkernels used a variety of more advanced techniques to avoid both of these problems. One solution is to allow the operating system to optionally "promote" certain programs, notably servers, to run inside the kernel's memory space, a technique known as co-location. This can dramatically reduce the IPC overhead, reducing it to something similar to a normal procedure call. This solution does require additional complexity in the kernel's scheduler however, as it must now be able to schedule programs running "within" it, as well as normal programs running in other spaces. Similar complexity is being added to most kernels for other reasons, notably multiprocessor support. Co-location also reduces the number of context switches dramatically, at least in the case where the operating system as a whole interacts heavily with other co-located servers.
The question of where to put device drivers owes more to history than design intent. In the mainframe world, where I/O channels have memory management hardware to control device access to memory, drivers need not be entirely trusted. The Michigan Terminal System (MTS), in 1967, had user-space drivers, the first operating system to be architected in that way.
Minicomputers and microcomputers have not, with a few exceptions, interposed a memory management unit between devices and memory. (Exceptions include the Apollo/Domain workstations of the early 1980s.) Since device drivers thus had the ability to overwrite any area of memory, they were clearly trusted programs, and logically part of the kernel. This led to the traditional driver-in-the-kernel style of UNIX, Linux, and Windows.
As peripheral manufacturers introduced new models, driver proliferation became a headache, with thousands of drivers, each able to crash the kernel, available from hundreds of sources. This unsatisfactory situation is today's mainstream technology.
With the advent of multiple-device network-like buses such as USB and FireWire, more operating systems are separating the driver for the bus interface device and the drivers for the peripheral devices. The latter are good candidates for moving outside the kernel. So a basic feature of microkernels is becoming part of monolithic kernels.
Recently (2006) a debate has started about the potential security benefits of the microkernel design.
Many attacks on computer systems take advantage of bugs in various pieces of software. For instance, one of the common attacks is the buffer overflow, in which malicious code is "injected" by asking a program to process some data, and then feeding in more data than it stated it would send. If the receiving program does not specifically check the amount of data it received, it is possible that the extra data will be blindly copied into the receiver's memory. This code can then be run under the permissions of the receiver. This sort of bug has been exploited repeatedly, including a number of recent attacks through web browsers.
To see how a microkernel can help address this, first consider the problem of having a buffer overflow bug in a device driver. Device drivers are notoriously buggy, but nevertheless run inside the kernel of a traditional operating system, and therefore have "superuser" access to the entire system. Malicious code exploiting this bug can thus take over the entire system, with no boundaries to its access to resources. For instance, an attack on the networking stack over the internet could then ask the file system to delete everything on the hard drive, and no security check would be applied because the request is coming from inside the kernel. Even if such a check were made, the malicious code could simply copy data directly into the target drivers, as memory is shared among all the modules in the kernel.
A microkernel system is somewhat more resistant to these sorts of attacks for two reasons. For one, an identical bug in a server would allow the attacker to take over only that program, not the entire system. This isolation of "powerful" code into separate servers helps contain potential intrusions, notably as it allows the CPU's memory management unit to check for any attempt to copy data between the servers.
But a more important reason for the additional security is that the servers are isolated in smaller code libraries, with well-defined interfaces. That means that one can audit the code, as its smaller size makes this easier to do (in theory) than if the same code were simply one module in a much larger system. This doesn't mean that the code is any more secure, per se, but that it should contain fewer bugs in general. This not only makes the system more secure, but more stable as well.
Key to the argument is the fact that a microkernel "automatically" isolates high-privilege code in protected memory because they run in separate servers. This isolation could likely be applied to a traditional kernel as well. However, it is precisely this mechanism that forces data to be copied between programs, leading to the microkernel's generally slower performance. In the past, outright performance was the main concern of most programs. Today this is no longer quite as powerful an argument as it once was, as security problems become endemic in a well-connected world.
But securing the kernel by no means guarantees system security. For instance, if a bug remained in the system's web browser that allowed an attack, that attack could still legally ask the file system to erase the drives via the normal IPC messages. Securing against these sorts of "reasonable requests" is considerably more difficult unless a very complex system of rights is available. Even with this capability, the complexity of the interconnections between various programs in the system makes it difficult to apply security checks that are themselves free of bugs.
Examples of microkernels and operating systems based on microkernels:
- Chorus microkernel
- LSE/OS (a nanokernel)
- KeyKOS (a nanokernel)
- The L4 microkernel family, used in TUD:OS, GNU Hurd.
- Spring operating system
- Symbian OS
|
2026-01-28T14:12:33.278154
|
979,214
| 3.796178
|
http://edweb.sdsu.edu/courses/edtec670/Cardboard/Card/C/C.rummy.html
|
Linda is currently employed as a software engineer for
SofTech, Inc. She has been in the EdTec program for longer
than she cares to admit, and hopes to graduate next year.
For more advanced learners, the following additional objectives would be:
This card game would be used by the students after some basic C language syntax has been introduced. It is intended to be used during lab time or outside of class for practice and/or remediation purposes. The students may start with a "subset" of the cards, with more cards being introduced as more topics are covered. This subset of cards is simply the deck of cards, with some of the cards removed by either the instructor or the learners. For example, if "for loops" haven't been discussed, the "for" cards may be removed. The deck is designed to be extremely adaptable in this way.
A card game is useful for this type of content for several reasons. The subject matter is rather dry, and the elements of play and competition provided by a game could make the task more palatable, while providing necessary practice. The learners would have incentive to pay attention to the picayune details of syntax in order to ensure that their opponents aren't trying to slide by with an illegal statement. The practice can take place when the computer isn't available, and can help the students reduce their reliance on the compiler to find syntax errors for them, thus reducing the number of compiler passes needed and increasing productivity.
First, choose a dealer.
Variation To keep the game interesting for more advanced learners, the players might also pay attention to the logic of the statements; 10 points scored by the first player (including the offending player) who points out legal but dangerous or inherently useless statements.
Card Design Examples of various card types are shown below. Card a, the semicolon, is a typical example of the one element card. Card b, the parenthesis, is an example of a card which may be used right side up or upside down, to form either the left or right parenthesis. Card c, the numeral 1, may be used as is for an integer value, or combined with decimals and other numerals to make float values. It may also be used as a character value, or as part of a string. It doesn't matter, so long as the context supports its meaning. Card d stands for any type of variable, while Card e may be used only for an integer variable. Card f is an example of a card which combines two elements that are commonly used together. Each card, simple or complex, is worth the same number of points.
Deck Design Each card displays one or more elements which can be combined with other cards to form complete C statements. As shown above, some cards may represent the name of a variable or function, or a combination of an identifier with customary accompanying punctuation, while other cards may simply display one character or numeral. Each card counts the same as every other, no matter what is displayed on it. The more complex cards may foster the construction of more elaborate statements, while the simpler cards are more versatile.
The first prototype deck lacked sufficient numbers of semicolons, parentheses, and numbers, making valid combinations overly difficult to create. Several trials are needed to determine the optimum balance of card elements to keep the game from being too easy or too difficult. Another issue is the possibility of using a subset of the deck, editing out concepts which haven't yet been discussed in class. This would make the game easier in another way, by making the proportion of supporting cards (semicolons, etc.) larger relative to the total number of cards. No specific subsets are planned by the designer; it is left entirely up to the users (instructor or players) to determine a meaningful deck for their game, dependent on the knowledge level of the learners.
A likely addition to the game is a set of card racks to hold the players' hands. The cards were designed with the values displayed in the corners to facilitate viewing; however, with up to 20 cards in a hand it's a challenge just to hold the cards, let alone view them all at once. Another possibility is that the players could lay down their statements once they are completed rather than waiting until the end of the hand.
|
2026-02-02T10:15:32.465462
|
57,776
| 3.947661
|
http://structuralarchaeology.blogspot.com/2012/01/hadrians-first-wall-part-2-of-3.html
|
The Vallum is a unique linear earthwork to the south of Hadrian’s Wall. When it was constructed, it ran in an unbroken line, following close to the course of the Wall, from Newcastle to Bowness-on-Solway, a distance of about 112km.
Where possible, the Vallum lies close to the rear of the Wall. However, unlike some sections of the Wall, it is laid out in very long straight sections, avoiding steep gradients, with a few very deliberate corners. In the famous central section where the Wall runs along the crags of the Whin Sill, the Vallum stays in the Valley. Where it crosses very soft ground, as at White Moss in Cumbria, the ditch is not dug in, but created between mounds of material, The ditch is uninterrupted except for causeways that have been found opposite the gates of the main Wall forts, and perhaps some milecastles.
Dating The Vallum
The Vallum closely followed the Turf Wall west of Birdoswald Fort. When this Wall was replaced in stone on a new course further north, probably late in Hadrian’s reign, there was no attempt to move the Vallum.
What was the Vallum for?
The Vallum is unique, which is always a problem for archaeology, where insights so often come from comparing things. While most commentators have drawn attention to the road-like nature of its layout, and noted its possible function in communications, they have sought its true function elsewhere. Little has changed in our understanding of this earthwork, as Tony Wilmott, a leading English Heritage archaeologist and Wall specialist, made clear recently:
“The straight lengths in which the Vallum is laid out are consistent with distances of the uninterrupted view of a surveyor. Changes in direction tend to occur where a new viewpoint is required to obtain another long straight view. This is the system used to lay out roads. It has been argued that as the Vallum was surveyed as a road would be, it must have functioned as a road. This does not follow, and if we accept that a road would require metalling, and metalling on the berms is sporadic, and sometimes doubtful, we have to discount all of the suggested variations of a function related to lateral communication.
Richmond’s (1930) statement of the function of the Vallum remains valid:
‘the Vallum takes its place as a prohibited zone delimiting the south side of the military area, an unmistakable belt in which an obstacle is provided by the great ditch. Neither commerce nor interference with the soldiery could take place across it unchecked.’ ”
“ It was not a strictly military defensive system, for the ditch is not of military shape, nor are the symmetrical banks defensive; furthermore, the Vallum takes no account of commanding ground. Rather it is a barrier and line of demarcation defining the rear of the Wall-zone, and preventing entry except at fixed points.”
“The Vallum allowed army units to segregate themselves from civilians and to use the zone between the two barriers for grazing military pack animals and cavalry horses.”
The Vallum as an unfinished road
The first task here is to trace furrows, ripping up the maze of paths, and then excavate a deep trench in the ground.
The second comprises refilling the trench with other material to make a foundation for the road build-up.
The ground must not give way nor must bedrock or base be at all unreliable when the paving stones are trodden.
Next the road metalling is held in place on both sides by kerbing and numerous wedges.
Digging the Vallum
- Men with baskets;
- Animals, probably asses/donkeys with baskets [panniers];
- Carts or wagons, probably pulled by asses or oxen.
Since the ground along the steep sides of the trench is potentially unstable, the spoil can only be extracted from either in front of, or behind, the excavation, and thus two possible methods of construction suggest themselves:
Two models for spoil extraction from the Vallum
Reverse extraction: In this method, the spoil is loaded downwards into carts or baskets using the trench to take the spoil out backwards. If the trench face is reduced in spots, all but the lowest can be easily loaded into a cart or baskets, and potentially, the draught animals do the hard work of lifting the material by means of a ramp at the start of a section of the works. In addition, by dumping the material ahead of the trench, it is possible that as the excavation goes forward, the spoil heap moves backward, and so the distance between the two and the time taken to move the spoil remains fairly constant.
The first method relies on human power to lift the spoil from the trench, whereas the second uses draught power for part of the task, but moves the spoil a greater distance and requires the transport to be able to turn in the trench.
The Marginal Mound
The intermittent presence of this feature to the south of the Vallum ditch has probably prompted more debate than anything else. It was once the basis of the belief that the Vallum had been recut. A connection has also been suggested between the backfilling of the Vallum to create causeways, and the presence of the marginal mound.
"When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
- Laid out in straight lengths;
- Gentle corners;
- Avoids steep or uneven gradients;
- Avoids soft ground;
- Where it had to cross soft ground, the central trench is formed between earth banks;
- Follows close behind the wall and forts;
- Starts from a bridgehead at Newcastle;
- It has been suggested there were originally gaps in the northern spoil mound corresponding to milecastles.
Part 3: The construction of Hadrian's First Wall here
|
2026-01-19T05:08:31.004458
|
992,634
| 3.970706
|
http://gwhs-sfusd-ca.schoolloop.com/cms/page_view?d=x&piid=&vpid=1296917473557
|
Huckleberry Finn Webquest
Huckleberry Finn is an American Classic revolving around the adventures of a boy named Huck and a runaway slave named Jim, who travel down the Mississippi River on a raft. A study of the character of Huck Finn will provide insight into understanding American literature as well as American social and political views of the past and present.
In 1860 America is in turmoil. Slavery is the controversy. Northerners believe that slavery should be brought to an end and Southerners believe that slavery should be kept, even expanded.
Although slavery was abolished only a few years later in the U.S., it still exists today in many countries. Slavery was one of the major causes of one of the deadliest wars in U.S. history, the Civil War. Thousands would die before this disagreement would be put to rest.
The period of rebuilding in the South after the Civil War was called Reconstruction. This era had a profound impact on Afro-Americans and race relations, which impacts our society to this day.
In this Webquest you will explore, analyze, construct and evaluate important events that have occurred during the Reconstruction Period, which followed the Civil War. You will focus on The Jim Crow Laws, Minstrel Shows, Slavery, Carpetbaggers, use of the “N” word and how it has affected society then and now.
The class will be divided into groups of four to five students. Each member of the group will pick a role from the list of scenarios below.
Once you have chosen your role, look at the websites that have been assigned to your scenario.
Once you feel that you have gathered enough information about your role proceed to work with your group on your presentation to the class.
From the time the novel was published it has been very controversial. When it was first published, many people opposed its progressive view of slavery and the depiction of a young boy helping a slave escape from the bonds of slavery. Although views of slavery have dramatically changed since this novel was first written, it continues to be considered controversial and it is banned from some public schools. Today, many people oppose the novel because of the use of the word “nigger” and the degrading images of African Americans in slavery. Although the novel is controversial, it is widely considered an American classic.
You will take on the role of parents who oppose the inclusion of The Adventures of Huckleberry Finn in the 11th grade curriculum and are also bothered by it even being on the school’s library shelf. You will prepare a presentation to the Board of Education that will enable the Board to make a decision regarding banning the book from the 11th grade curriculum for the next school year.
You will take on the role of newspaper reporters covering the responses by critics and the public to The Adventures of Huckleberry Finn when it was first published in 1884. You will interview people from different walks of life and write about their reactions to the book in the local newspaper.
Questions to address for your project
Why is the teaching and reading of Huck Finn so controversial?
How have criticisms of the book changed from its 1885 publication to now?
What is racism and is Huck Finn racist?
Should Huck Finn remain required core literature in American Literature classes?
When, if ever, is censorship justified?
Censorship Then and Now
Slavery in Mark Twain's books
Mark Twain in his Time
Reviews of Huckleberry Finn
Reconstruction was a difficult time in America for political leaders as well as ordinary people. Newly freed slaves held great hopes. They were emancipated and were ready to sing freedom's song. Defeated southern farmers held bitter resentment. During this period of Reconstruction, the Republican Party, which was based in the North, extended its organization to the South. The party gained control of the Southern States and granted civil rights to blacks, including the right to vote. It also worked to establish public schools and to increase opportunities for ordinary Southern whites. Carpetbaggers was a term of scorn and hostility used by Southerners to describe Northerners active in the Republican Party in the South. Some of the carpetbaggers were unprincipled and corrupt seeking private gain. However, many came to the South for honorable reasons.
You are a Radical Republican Carpetbagger who has just arrived in the South. You are looking for political and financial opportunities. You will be writing in a diary about your daily experiences dealing with hostile Southern Democrats.
Questions to address for your project
1. What is the Radical Republicans’ plan to readmit the south into the Union?
2. How did this differ from President Johnson’s plan?
3. Why did Radical Republicans want to make it difficult for Southern States to be readmitted into the Union?
You are a Democratic Southern Politician dealing with newly arrived Carpetbaggers from the North. You are preparing a speech about the evils of the Republican Party and the newly arrived Carpetbaggers.
Questions to address in your speech
1. What was the south like when Radical Republicans moved there to become politicians?
2. What were conditions like for Freedmen?
3. What did Radical Republicans do to help Freedmen?
4. Who were the Radical Republicans?
5. What is remarkable/special/ or reprehensible about them?
6. What was their life like?
7. What obstacles did they face? Why?
8. Where were they born?
9. Why did these people act or behave in a certain way?
10. How did these people think the south should be readmitted into the Union?
The term Jim Crow is believed to have originated around 1830 when a white, minstrel show performer, Thomas "Daddy" Rice, blackened his face with charcoal paste or burnt cork and danced a ridiculous jig while singing the lyrics to the song, "Jump Jim Crow." Rice created this character after seeing (while traveling in the South) a crippled, elderly black man (or some say a young black boy) dancing and singing a song ending with these chorus words:
"Weel about and turn about and do jis so,
Eb'ry time I weel about I jump Jim Crow."
The word Jim Crow became a racial slur synonymous with black, colored, or Negro in the vocabulary of many whites; and by the end of the century acts of racial discrimination toward blacks were often referred to as Jim Crow laws and practices.
The rise of the minstrel show coincided with the growing abolitionist movement in the North. Northerners were concerned for the oppressed blacks of the South, but most of them had no idea how these slaves lived day-to-day. The minstrels provided the North with a kind of knowledge of the blacks, albeit a greatly romanticized and exaggerated one. Slaves were shown as happy, cheerful simpletons, always ready to sing and dance and to please their master. The message to Northern audiences was clear: don't worry about the slaves; they are happy with their lot in life.
You are a political satirist during the Reconstruction Period creating cartoons for a museum display. You are presently doing a series on Minstrel Shows and the Jim Crow Era.
When creating your cartoons the following questions should be addressed:
1. What is the event or issue that inspired this political cartoon?
2. Are there symbols in the cartoon? What do they represent?
3. What kinds of ideas are included in political cartoons?
4. Are there people in the cartoon? Who are they, and what do they represent?
5. What is the subject of the cartoon?
6. What is the cartoonist's opinion on the subject?
7. What is the objective of a political cartoon?
Your role as a museum curator is to give the class a tour of the museum and be prepared to explain to the class the meaning of each cartoon.
When showing the exhibit to the class the above questions should be addressed.
This Webquest has offered you an opportunity to examine issues of censorship, racism, and education in American history, as you deepen your understanding of the Adventures of Huckleberry Finn, the Reconstruction Era, and ultimately consider the price we pay for freedom of expression and freedom from segregation.
|
2026-02-02T14:46:06.450059
|
437,280
| 3.634141
|
http://www.cprogramming.com/tutorial/3d/rotationMatrices.html
|
Rotations in Three Dimensions: 3D Rotation Matrices
Okay, so I assume going into this tutorial that you know how to perform matrix multiplication. I don't care to explain it, and it's available all over the Internet. However, once you know how to perform that operation, you should be good to go for this tutorial.
The way presented for doing rotations in the last tutorial wasn't really a good one. It works just fine in two dimensions, but as soon as you want to rotate around the X or Y-axes, it becomes more difficult. Sure, it's easy to make equations that will represent a rotation on any one of those axes, but just go ahead and try to make equations that will represent changes on three axes at once. If you manage to pull that off, make sure to let us know. Meanwhile, I'll present a way to do the rotations with matrices.
Matrices might seem scary, especially to someone who has never used them before. However, they really aren't too difficult to use. They're also very powerful. The first thing to note is that it is possible to use a vector to specify a point in 3d space. Basically, every point is a displacement from the origin by a certain amount, which is described by the vector. Vectors are useful for lots of other things as well, and perhaps someday I'll write about some of those. Meanwhile, we'll just use them for storing points.
A vector can be multiplied by a matrix, and after the multiplication, you'll get a new vector. This may seem useless, but when you multiply the vector by the right matrix, you'll get a point that has been transformed by the matrix. This can mean rotated on any axis (including arbitrary ones! that will come later), translated, or both. You see, the thing with matrices is this: If you have one matrix representing rotation on the X axis, and another matrix representing rotation on the Y axis, you can multiply them together to get a new matrix which represents the rotation on both axes. However, in case you didn't catch this in any tutorials you read about matrices, please note that if A and B are matrices, A*B != B*A. This will cause us some problems later, but for now, just keep it in the back of your mind.
I'm not going to derive these matrices that I'm about to give you here. One reason is that it's been done other places. Another reason is that it would involve me explaining a lot of things that have also been explained better elsewhere. The most important reason is because I can't. However, that doesn't matter. You don't need to be able to derive these matrices; you just need to know how to use them. You'll also need to know which coordinate system you're using: left-handed or right-handed. OpenGL uses right-handed coordinates; DirectX uses left-handed coordinates. This is also explained in detail elsewhere, so I won't go into it. If you're not using OpenGL or DirectX, either figure out what your API uses, or if you're writing your own or something, pick one and stick with it. There will be no turning back.
* Where (phi) represents the rotation about the X axis, (theta) represents the rotation about the Y axis, and (psi) represents the rotation about the Z axis. Those really aren't as complicated as they look. And for those of you wondering why I didn't store all of those in 3*3 matrices, just hold on ;) That's coming later. For the purposes of this tutorial, I'm going to try to avoid picking a coordinate system, so that it will be equally useful for both OpenGL and DirectX programmers. We'll call the rotation matrix for the X axis matRotationX, the rotation matrix for the Y axis matRotationY, and the rotation matrix for the Z axis matRotationZ.
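Since the matrix diagrams from the original page don't survive in this text version, here is a minimal sketch in C of what the three rotation matrices look like when written out as 4*4 arrays. The type and function names (Matrix4, makeRotationX, and so on) are made up for illustration rather than taken from any particular API, and the signs on the sine terms follow the common right-handed convention; a left-handed system flips them.

#include <math.h>
#include <string.h>

typedef struct { float m[4][4]; } Matrix4;

/* Start from the identity matrix, then fill in the rotation terms. */
static Matrix4 identity(void)
{
    Matrix4 r;
    memset(&r, 0, sizeof r);
    r.m[0][0] = r.m[1][1] = r.m[2][2] = r.m[3][3] = 1.0f;
    return r;
}

/* Rotation about the X axis by phi radians. */
Matrix4 makeRotationX(float phi)
{
    Matrix4 r = identity();
    r.m[1][1] = cosf(phi);  r.m[1][2] = -sinf(phi);
    r.m[2][1] = sinf(phi);  r.m[2][2] =  cosf(phi);
    return r;
}

/* Rotation about the Y axis by theta radians. */
Matrix4 makeRotationY(float theta)
{
    Matrix4 r = identity();
    r.m[0][0] =  cosf(theta);  r.m[0][2] = sinf(theta);
    r.m[2][0] = -sinf(theta);  r.m[2][2] = cosf(theta);
    return r;
}

/* Rotation about the Z axis by psi radians. */
Matrix4 makeRotationZ(float psi)
{
    Matrix4 r = identity();
    r.m[0][0] = cosf(psi);  r.m[0][1] = -sinf(psi);
    r.m[1][0] = sinf(psi);  r.m[1][1] =  cosf(psi);
    return r;
}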
By multiplying the vector representing a point by one of these matrices (with the values properly filled in), you can rotate the point around any axis. However, you'll probably want to allow rotation about all three axes. You could multiply the vector by one matrix, then multiply it by the next matrix, then multiply it by the next matrix... but that would produce some very slow code, because you would be performing far too many operations for each point. Matrices can be combined, which will save you some very valuable time in your code. We'll call the matrix which represents all your rotations matRotationTotal, and here's the way to generate it:
matRotationTotal = matRotationX * matRotationY * matRotationZ
After that, you can simply transform each point with matRotationTotal, and the point will be rotated about all three axes. When you need to change the amount of rotation, rebuild matRotationX, matRotationY, and matRotationZ, and then multiply them together to get the new matRotationTotal. Pretty easy, and it gets points rotating around in three dimensions. (Note: If you don't know how to multiply matrices, or if you don't know how to multiply a vector by a matrix, consult a basic tutorial on matrix math. To give you a hint, a vector can be represented by a 1*4 matrix...)
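To make the combining step concrete, here is one hedged way the multiplication and the point transform might be coded by hand; matMultiply and transformPoint are illustrative helper names, not library calls, and the code continues the hypothetical Matrix4 sketch above.

/* Continues the sketch above: assumes the Matrix4 type and the
   <math.h>/<string.h> includes already shown. */

typedef struct { float x, y, z; } Vector3;

/* Multiply two 4*4 matrices: result = a * b. Remember that order matters. */
Matrix4 matMultiply(const Matrix4 *a, const Matrix4 *b)
{
    Matrix4 r;
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += a->m[row][k] * b->m[k][col];
            r.m[row][col] = sum;
        }
    return r;
}

/* Transform a point, treated as a column vector with w = 1, so the fourth
   column of the matrix acts as a translation. */
Vector3 transformPoint(const Matrix4 *m, Vector3 p)
{
    Vector3 out;
    out.x = m->m[0][0]*p.x + m->m[0][1]*p.y + m->m[0][2]*p.z + m->m[0][3];
    out.y = m->m[1][0]*p.x + m->m[1][1]*p.y + m->m[1][2]*p.z + m->m[1][3];
    out.z = m->m[2][0]*p.x + m->m[2][1]*p.y + m->m[2][2]*p.z + m->m[2][3];
    return out;
}

/* Usage: build matRotationTotal once per change of rotation, then reuse it:
   Matrix4 rx = makeRotationX(phi), ry = makeRotationY(theta), rz = makeRotationZ(psi);
   Matrix4 tmp = matMultiply(&rx, &ry);
   Matrix4 matRotationTotal = matMultiply(&tmp, &rz);
   Vector3 rotated = transformPoint(&matRotationTotal, point);               */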
That wraps up this tutorial. Go implement this if you think you're ready. Alternatively, read the next tutorial first.
Previous: The basics of 3D Rotations
Next: Rotation About An Arbitrary Axis
Back to Graphics tutorials index
|
2026-01-24T23:51:04.008138
|
319,741
| 3.661594
|
http://www.native-languages.org/quiche.htm
|
Quiche Indian Language
Quiche is a Mayan language spoken by more than a million people in Guatemala. There are many dialects of Quiche, and some linguists consider some of them to be separate languages. The Quiche languages are agglutinative and display fairly free word order: although Quiche sentences are often verb-initial, as in most Mayan languages, SVO word order is also commonly used.
Our list of vocabulary words in the Quiche language, with comparison to words in other Mayan languages.
Quiche Maya Pronunciation Guide:
How to pronounce Quiche words.
Quiche Animal Words:
Illustrated glossary of animal words in the Quiche language.
Quiche Body Parts:
Online and printable worksheets showing parts of the body in Quiche.
Worksheet showing color words in Quiche.
Quiche Language Resources – Mayan Placenames in Guatemala:
Chart of Guatemalan place names in K'iche' and other Mayan languages.
Human Rights: Quiche:
Translation of the Universal Declaration of Human Rights into Quiche.
New Testament in Quiche Maya; Genesis in Joyabaj Quiche; Language Museum: Central Quiche; Language Museum: West Central Quiche:
Quiche translations of Bible passages.
Cunen Quiche Prayer; West Central Quiche Prayers:
Quiche translation of the Lord's Prayer and other Christian prayers.
Information on the Quiche language, including grammar, a text, and a linguistic map of Guatemala. Page in Spanish.
Article on the Quiche language including alphabet, grammar, and links.
House of Languages: Quiché:
Information about Quiche language usage.
Cunén K'iche' Central K'iche' Eastern K'iche' Joyabaj K'iche' San Andres K'iche' Coastal K'iche':
Demographic information about Quiche from the Ethnologue of Languages.
Kiché Language Tree:
Theories about Quiche's language relationships compiled by Linguist List.
Quiché Language Structures:
Quiche linguistic profile and academic bibliography.
Mayan Language Books For Purchase:
Sales of language materials in K'iche and other Mayan languages.
Article on Quiche history and culture.
K'iche': A Study in the Sociology of Language:
Book on Quiche Maya sociolinguistics and language use.
English translation of the Quiche Mayan epic.
Rabinal Achi: A Mayan Drama of War and Sacrifice:
English translation of a classical Quiche Mayan play.
We Were Taught to Plant Corn Not to Kill
Essays and artwork by Guatemalan Mayas about the 20th-century violence that rocked their communities.
The Ancient Maya:
Excellent historical overview of ancient Mayan civilization in general.
Evolving list of books about Mayas and Native Americans in general.
Links, References, and Additional Information
Encyclopedia articles on the Quiche Indians.
La Cultura Quiché El Origen de Pueblo Quiche Los Quiché Idioma K'iche':
Information about the Quiches in Spanish.
Back to our Native American nation list
Back to our Native American Indians websites for kids
Would you like to sponsor our work on the Quiche language?
Native Languages of the Americas website © 1998-2009 Contacts and FAQ page
|
2026-01-23T02:47:27.384350
|
894,958
| 4.015826
|
http://www.autismlearningfelt.com/2010/11/a-new-method-for-teaching-autistic-kids.html
|
Anyone who has an autistic child knows that one of the most difficult hurdles to overcome is communication. Without the ability to communicate effectively with a child, teaching them (in order that they might one day function at a high enough level to live their own lives) can be an extremely difficult and frustrating process for everyone involved. Not only do these children need extra help to comprehend academic materials, they often encounter social issues that can hold them back and isolate them, as well. This becomes even more compounded by the fact that many teachers are operating with larger class sizes and fewer resources, making it even harder to devote singular attention to the kids who need it the most. However, this issue is becoming more and more important as the number of kids diagnosed with autism spectrum disorder continues to rise. But according to a study done by researchers at the University of Missouri, there could be a workable solution that changes the way classrooms are run.
The program was developed by Janine Stichter, a professor at the MU College of Education whose focus is special education, and a team of researchers under her. The team studied 27 students between the ages of 11 and 14 and utilized cognitive behavioral principles to target social deficits in an effort to increase social functionality and thereby improve the ability of the students to communicate effectively (based on the premise that impaired social function hampers the ability to communicate and comprehend). The team therefore undertook to expand the social knowledge and teach acceptable social performance to the students.
It should be noted that this study really only included students who displayed high-functioning autism (HFA) or Asperger’s Syndrome (AS) in an effort to target those that appear to have a desire to be social but simply lack the knowledge and skills to behave correctly. The team used the issues inherent in this particular group (including understanding the thoughts and feelings of others, recognizing facial, gestural, and verbal expressions and putting them into context, and controlling impulses) to formulate a system by which these students might learn social competency. And the system has two parts.
The first aspect of the program relies on intervention. Teachers first need to be able to recognize which students are attempting to behave socially, but failing. Following this, teachers will implement a course of treatment (that can theoretically be carried on in a classroom setting) that helps students to recognize social signals (in the form of facial expressions, bodily gestures, or verbal clues), share ideas and feelings, take turns, and enhance problem-solving skills. The best part about this program is that it not only targets autistic children, it also includes the whole class in the process so that everyone is learning and improving together as well as helping those who are affected by autism to feel like a part of the group. This social programming has the potential not only to improve social function for these children, but also to increase their communication skills and provide a supportive and cooperative environment for learning.
While this study was conducted in a classroom setting, it occurred as an after-school program. The researchers concluded that it was possible to integrate the program into a real classroom with similar results (although they noted that further studies would be required to prove this theory). However, the study itself produced encouraging results; parents across the board reported significant improvement in social functioning of all students involved in the program. If the study is continued and expanded, we may soon see a new way of teaching children with autism spectrum disorders, and indeed, entire classrooms of diverse students.
For more information on this study, see the following article: Stichter, J.P., Herzog, M.J., Visovsky, K., Schmidt, C., Randolph, J., Schultz, T. & Gage, N. (in press). Social competence intervention for youth with Asperger Syndrome and high-functioning autism: An initial investigation. Journal of Autism and Developmental Disorders. Published February 17, 2010.
Kyle Simpson writes for Security Labels where you can find custom security labels and stickers.
|
2026-02-01T06:46:11.965786
|
521,383
| 3.675586
|
http://www.nature.nps.gov/geology/parks/peco/index.cfm
|
National Historical Park
The most well-known of mining operations in the upper Pecos River watershed was the Tererro Mine located on a 19 acre site adjacent to the Pecos River, approximately 14 miles north of Pecos National Historical Park.
In 1881, a prospector named J.J. Case discovered an ore body containing relatively high grade copper, lead, and zinc ores near the confluence of Willow Creek with the Pecos River. Over the next two decades several shafts were dug and adits opened to explore these ore bodies, though the appropriate milling and metallurgical processes were not then available to exploit these resources economically (McDuff, 1993).
Subsequent discoveries of silver and gold ore, as well as significant advances in milling and metallurgical technologies for refining complex ores, altered the economics enough to make mining feasible by the mid-1920s. On January 1, 1927, the American Metal Company (a predecessor of Cyprus Amax Minerals Company) began mining production at the Tererro Mine, which soon supported the largest payroll in the State of New Mexico (McDuff, 1993).
From 1927 through May, 1939, approximately 2,293,000 tons of ore from the Tererro Mine were loaded on an aerial tramway and transported 12 miles to the El Molino mill located along Alamitos Creek near Pecos, New Mexico. This ore was refined to produce more than 440,000,000 pounds of zinc; 138,000,000 pounds of lead; 19,000,000 pounds of copper; 5,000,000 ounces of silver; and 178,000 ounces of gold. It was one of the better multiple ore producers operating in the United States from the late 1920s through the 1930s (McDuff, 1993).
Today, the land surface at the abandoned Tererro Mine consists of numerous unstabilized spoil and overburden piles located on a 19 acre site along Willow Creek and in the floodplain of the Pecos River.
Willow Creek flows across a portion of the waste piles before emptying into the Pecos River resulting in elevated zinc, lead, copper, and cadmium concentrations in Willow Creek and in the Pecos River below (Sinclair, 1990). O'Brien (1991) found elevated copper, zinc, selenium, and lead levels in brown trout (Salmo trutta) downstream from this site. In addition, the Lisboa Springs State Fish Hatchery, located 11.5 miles downstream from the Tererro Mine, has experienced major fish die-offs in raceways utilizing water from the Pecos River. It is possible that one of the factors related to these die-offs may be contaminants leaching from waste tailings piles near the site of the former Tererro Mine. These tailings are believed to be a source of high concentrations of zinc and other heavy metals periodically measured in surface flows and biota in the Pecos River (U.S. Fish and Wildlife Service, 1993).
While some minor drainage diversion work has recently been completed to divert surface runoff away from the waste rock piles, alternatives for site stabilization and remediation are currently being evaluated by Cyprus Amax Minerals Company and the New Mexico Environment Department (Johnnie Green, Cyprus Amax Minerals Company, and Stephen Wust, New Mexico Environment Department, pers. comm.).
McDuff, L.F. 1993. Terrero - the history of a New Mexico mining town. Suisun, California. 189 pp.
O'Brien, T.F. 1991. Investigation of trace element contamination from Tererro Mine waste, San Miguel County, New Mexico. Albuquerque, New Mexico: U.S. Fish and Wildlife Service. 47 pp.
Sinclair, S. 1990. Screening site inspection of Tererro Mine, San Miguel County, New Mexico. Santa Fe, New Mexico: New Mexico Environmental Improvement Division. 17 pp.
U.S. Fish and Wildlife Service. 1993. Investigation of potential causes of periodic fish mortalities at Lisboa Springs State Fish Hatchery, Pecos, New Mexico. Albuquerque, New Mexico: New Mexico Ecological Services State Office. 75 pp.
Photo by David Muench
The general park map handed out at the visitor center is not available. For information about topographic maps, geologic maps, and geologic data sets, please see the geologic maps page.
A geology photo album has not been prepared for this park. For information on other photo collections featuring National Park geology, please see the Image Sources page.
Currently, we do not have a listing for a park-specific geoscience book. The park's geology may be described in regional or state geology texts.
Parks and Plates: The Geology of Our National Parks, Monuments & Seashores.
Lillie, Robert J., 2005.
W.W. Norton and Company.
9" x 10.75", paperback, 550 pages, full color throughout
The spectacular geology in our national parks provides the answers to many questions about the Earth. The answers can be appreciated through plate tectonics, an exciting way to understand the ongoing natural processes that sculpt our landscape. Parks and Plates is a visual and scientific voyage of discovery!
Ordering from your National Park Cooperative Associations' bookstores helps to support programs in the parks. Please visit the bookstore locator for park books and much more.
For information about permits that are required for conducting geologic research activities in National Parks, see the Permits Information page.
The NPS maintains a searchable data base of research needs that have been identified by parks.
A bibliography of geologic references is being prepared for each park through the Geologic Resources Evaluation Program (GRE). Please see the GRE website for more information and contacts.
NPS Geology and Soils Partners
Association of American State Geologists
Geological Society of America
Natural Resource Conservation Service - Soils
U.S. Geological Survey
Currently, we do not have a listing for any park-specific geology education programs or activities.
General information about the park's education and interpretive programs is available on the park's education webpage. For resources and information on teaching geology using National Park examples, see the Students & Teachers pages.
|
2026-01-26T10:20:43.235790
|
388,014
| 3.898822
|
http://www.history.com/topics/committees-of-correspondence
|
Committees of Correspondence were the American colonies' first institution for maintaining communication with one another. They were organized in the decade before the Revolution, when the deteriorating relationship with Great Britain made it increasingly important for the colonies to share ideas and information. In 1764, Boston formed the earliest Committee of Correspondence, writing to other colonies to encourage united opposition to Britain's recent stiffening of customs enforcement and prohibition of American paper money. The following year New York formed a similar committee to keep the other colonies notified of its actions in resisting the Stamp Act. This correspondence led to the holding of the Stamp Act Congress in New York City. Nine of the colonies sent representatives, but no permanent intercolonial structure was established. In 1772, a new Boston Committee of Correspondence was organized, this time to communicate with all the towns in the province, as well as with "the World," about the recent announcement that Massachusetts's governor and judges would hereafter be paid by--and hence accountable to--the Crown rather than the colonial legislature. More than half of the province's 260 towns formed committees and replied to Boston's communications.
In March 1773, the Virginia House of Burgesses proposed that each colonial legislature appoint a standing committee for intercolonial correspondence. Within a year, nearly all had joined the network, and more committees were formed at the town and county levels. The exchanges that followed helped build a sense of solidarity, as common grievances were discussed and common responses agreed upon. When the First Continental Congress was held in September 1774, it represented the logical evolution of the intercolonial communication that had begun with the Committees of Correspondence.
The Reader's Companion to American History. Eric Foner and John A. Garraty, Editors. Copyright © 1991 by Houghton Mifflin Harcourt Publishing Company. All rights reserved.
|
2026-01-24T04:26:39.813056
|
767,772
| 3.866572
|
http://www.education.com/study-help/article/pretest10/
|
Introductory Writing Practice Quiz
Introductory Writing Practice Quiz
This quiz contains 30 questions that will test your knowledge of grammar and writing. The test should take about 30 minutes to complete. It will provide you with an accurate sense of your existing knowledge of grammar and writing, and serve as a guide to which areas of these subjects you need to learn better.
1. Every sentence you write must include, at the very least, which of the following parts?
a. subject, predicate, and object
b. noun as a subject
c. subject and predicate
d. noun and pronoun
2. Proper nouns are the parts of speech that
a. must always be capitalized.
b. always describe people.
c. always begin the sentence.
d. can be mistaken for verbs.
3. The most important function of verbs in most sentences is
a. to explain who is doing the action.
b. to describe the action.
c. to help define the subject.
d. to complete a sentence.
4. Which word is often used as a helping verb?
5. Adverbs are words that modify which parts of speech?
a. verbs, adjectives, or other adverbs
b. pronouns and nouns
c. nouns and verbs
d. verbs only
6. Which sentence uses the correct predicate?
a. The dog walk quickly.
b. The cat purred softly.
c. The snake done slither.
d. The kangaroos has jumped.
7. Which of the following word groups is a sentence fragment?
a. Writing well is often difficult for students.
b. But learning to write essays and poems.
c. Driving a car is also difficult to learn.
d. Running a marathon is perhaps the most difficult of all.
8. Which of the following sentences is a complex sentence?
a. While tapping her foot, the teacher demanded the students get to work.
b. The boys ran and the girls hopped.
c. The rules of English grammar are rarely the favorite topic of most classrooms.
d. James tried very hard to succeed at completing the test quickly.
9. Which of the following word groups is a dependent clause?
a. Nancy fell sound asleep.
b. At seven o'clock in the morning.
c. The teacher kept talking.
d. Exercising is exhausting.
10. Which of the following word groups is an independent clause?
a. Sammy loved pickles more than he loved salami.
b. When Jeannie made the sandwiches.
c. If she made them properly.
d. Eating pickles and ice cream.
11. Which of the following is a correct definition of a compound-complex sentence?
a. two independent clauses joined by and
b. two independent clauses and one dependent clause
c. one independent clause and one dependent clause
d. one independent clause and two dependent clauses
12. Which sentence below is correctly punctuated?
a. The day after tomorrow, luckily, is the day we will take the test.
b. The day after tomorrow, luckily; is the day we will take the test.
c. The day after tomorrow; luckily is the day we will take the test.
d. The day after tomorrow luckily is the day we will take the test.
13. Which sentence below contains a grammatical error?
a. The boys in the class wanted to eat there lunch at 11:30 A.M.
b. The girls in the class wanted them to sit quietly for another 30 minutes.
c. The teacher told her class to stop fighting over such a silly issue.
d. The lunch hour got to be a very important topic for all of them.
14. Which of these sentences uses pronouns correctly?
a. Who is the best speller in the class?
b. The teacher told me and her to go to the white board.
c. My aunt is whom I like best of all the relatives.
d. My aunt invited him and I to go to the movies.
15. Which of these sentences is correctly punctuated?
a. Smiling sweetly, the teacher explained the assignment, including its due date.
b. Smiling, sweetly the teacher explained the assignment including its due date.
c. Smiling sweetly the teacher explained; the assignment including its due date.
d. Smiling sweetly, the teacher explained, the assignment including its due date.
16. The best place for an essay's thesis statement is
a. in the second or third paragraph.
b. in the first or second paragraph.
c. in the last paragraph.
d. wherever it makes the most sense.
17. Determining the identity of your reader is important because
a. knowing will help you get a better grade.
b. knowing will help you write with more focus.
c. knowing will help you write faster.
d. knowing will help you establish your point of view.
18. All essays should contain
a. at least three paragraphs.
b. five paragraphs.
c. as many as the writer determines is appropriate.
d. as many as the assignment specifies.
19. An introduction should contain
a. background information.
b. a lively anecdote.
c. a thesis statement.
d. a welcoming statement.
20. Which is the correct order of steps in the writing process?
a. brainstorming, drafting, revising
b. planning, revising, editing
c. brainstorming, editing, revising
d. planning, proofreading, editing
21. Which is a correct definition of a thesis?
a. the way a writer introduces an essay
b. an essay that is 350–500 words long
c. the main idea of an essay
d. the prompt for an essay
22. Support for your essay can come from
a. personal experience.
c. both a and b
d. none of the above
23. Which of the following is a major benefit of writing an outline?
a. An outline will help you figure out what you think.
b. An outline will tell you how long your essay should be.
c. An outline will help you find grammatical errors.
d. An outline will let you know if your thesis is workable or weak.
24. What is the most common essay organizational pattern?
a. main idea, arguments for and against, conclusion
b. introduction, body, conclusion
c. introduction, comparison and contrast, solution
d. main idea, examples, conclusion
25. Which of these is the correct definition of an expository essay?
a. Expository essays explain the differences between two things.
b. Expository essays are personal essays.
c. Expository essays explain a topic or a process.
d. Expository essays ask questions and then answer them.
26. The conclusion of an essay should
a. restate the introduction's main idea.
b. provide a new strong idea.
c. leave the reader wondering.
d. suggest a future topic.
27. Writing a first draft should occur when?
a. before identifying your conclusion
b. before doing interviews
c. before settling on a thesis
d. after writing an outline
28. What is the main problem in the following sentence? The teacher handed out the test papers before she told us what we were supposed to write in the essay it was part of the standardized test that every grade has to take.
a. It is not punctuated correctly.
b. It lacks a main idea.
c. It uses more words than it needs to.
d. It is a run-on sentence.
29. Which of the answer choices best describes the problem with the following paragraph? Global warming is an important subject. Plants and animals are disappearing or dying. The atmosphere is really polluted, and we need to pay attention.
a. poor punctuation
b. lack of sentence structure variety
c. lack of complex sentences
d. grammatical errors
30. How would you describe the organizational strategy of the paragraph from question 29?
b. exposition of ideas
c. general to specific
d. compare and contrast
|
2026-01-30T03:14:42.031632
|
25,261
| 3.732643
|
http://samhaskell.co.uk/blogs/sam?page=5
|
Our cells generate most of the energy they need in tiny structures inside them called mitochondria, which can be thought of as the cells' powerhouses. Mitochondria have their own DNA, independent of the cell's nuclear genome, which is compellingly similar to the DNA of bacterial genomes. What this suggests is that billions of years ago, mitochondria were not just components of our cells, but were in fact unicellular organisms in their own right. According to this hypothesis – the endosymbiotic theory – mitochondria (and possibly some other organelles) originated as free-living bacteria which later became incorporated inside other cells in a symbiotic relationship.
Like man-made powerhouses, mitochondria produce hazardous by-products as well as useful energy. They are the main source of free radicals in the body – hugely reactive particles which cause damage to all cellular components through oxidative stress. They attack the first thing they come across, which is usually the mitochondrion itself. This hazardous environment has put the genes located in the mitochondrion at risk of mutational damage, and over many years of evolutionary pressure the mitochondrial DNA has gradually moved into the cell's nucleus, where it is comparatively well-protected from the deleterious effects of free-radicals alongside all of the cell's other DNA. This is called allotopic expression, and it has moved all but thirteen of the mitochondrion's full complement of at least one thousand genetic instructions for proteins into the 'bomb-shelter' of the nucleus.
However, the remaining thirteen genes in the mitochondrion itself are subject to the ravages of free-radicals, and are likely to mutate. Mutated mitochondria, as Aubrey de Grey has identified, may indirectly accelerate many aspects of ageing, not least when their mutation causes them to no longer produce the required energy for the cell, in turn impairing the cell's functionality. In order to combat the down-stream ageing damage as a consequence of mitochondrial mutation, de Grey believes that the mitochondrial DNA damage itself needs to be repaired or rendered harmless.
His characteristically bold solution to this problem is to put the mutations themselves beyond use by creating backup copies of the remaining mitochondrial genetic material and storing them in the safety of the cell's nucleus. Allotopically expressed here, like the rest of the mitochondrial DNA, any deletions in the mitochondrial DNA can be safely overwritten by the backup master copy, which is much less likely to mutate hidden away from the constant bombardment of free radicals. There are several difficulties to this solution, not least the fact that the remaining proteins are extremely hydrophobic and so don't 'want' to be moved at all, and additionally the code disparity between the language of the mitochondrial DNA and the nuclear DNA which makes a simple transplantation without translation impossible.
Even if this engineered solution to the problem proves impracticable, at the very least the theory is sound. If we can devise a way to systematically defend our mitochondria from their own waste products, we will drastically reduce the number of harmful free radicals exported throughout our bodies, thereby preventing a lot of the damage that distinguishes the young from the old, extending and improving the quality of our lives as a result.
Dr Aubrey de Grey, a gerontologist from Cambridge, believes that ageing is a disease that can be cured. Like man-made machines, de Grey sees the human body as a system which ages as the result of the accumulation of various types of damage. And like machines, de Grey argues that this damage can be periodically repaired, potentially leading to an indefinite extension of the system's functional life. De Grey believes that just as a mechanic doesn't need to understand precisely how the corrosive processes of iron oxidation degrades an exhaust manifold beyond utility in order to successfully repair the damage, so we can design therapies that combat human ageing without understanding the processes that interact to contribute to our ageing. All we have to do is understand the damage itself.
De Grey is confident that he has identified future technologies that can comprehensively remove the molecular and cellular lesions that degrade our health over time, technologies which will one day overcome ageing once and for all. In order to pursue the active development and systematic testing of these technologies, de Grey has made it part of his mission to break the 'pro-ageing trance' that he sees as a widespread barrier to raising the funding and stimulating the research necessary to successfully combat ageing. De Grey defines this trance as a psychological strategy that people use to cope with ageing, fuelled by the incorrect belief that ageing is forever unavoidable. This trance is coupled with the general wisdom that anti-ageing therapies can only stretch out the years of debilitation and disease which accompany the end of most lifetimes. De Grey contends that by repairing the pathologies of ageing we will in fact be able to eliminate this period completely, postponing it with new treatments for indefinitely longer periods so that no-one ever catches up with the damage caused by their own ageing.
To get over our collective 'trance' it is worth realising that this meme has made perfect psychological sense until very recently. Given the traditional assumption that ageing cannot be countered, delayed or reversed, it has paid to make peace with such a seemingly immutable fact, rather than wasting one's life preoccupied with worrying about it. If we follow de Grey's rationale that the body is a machine that can be repaired and restored, we have to accept that there are potential technologies that can effectively combat ageing, and thus the trance can no longer be rationally maintained.
Telomeres are repetitive DNA sequences which cap the ends of chromosomes, protecting them from damage and potentially cancerous breakages and fusings. They act as disposable buffers, much as the plastic aglets at the end of shoelaces prevent fraying. Each time a cell divides, the telomeres get shorter as DNA sequences are lost from the end. When telomeres reach a certain critical length, the cell is unable to make new copies of itself, and so organs and tissues that depend on continued cell replication begin to senesce. The shortening of telomeres plays a large part in ageing (although not necessarily a causal one), and so advocates of life extension are exploring the possibility of lengthening telomeres in certain cells by searching for ways to selectively activate the enzyme telomerase, which maintains telomere length by adding newly synthesized DNA code to their ends. If we could induce certain parts of our bodies to express more telomerase, the theory goes, we will be able to live longer, healthier lives, slowing down the decline of ageing.
Every moment we're fighting a losing battle against our telomeric shortening; at conception our telomeres consist of roughly 15,000 DNA base pairs, shrinking to 10,000 at birth when the telomerase gene becomes largely deactivated. Without the maintenance work of the enzyme our telomeres reduce in length at a rate of about 50 base pairs a year. When some telomeres drop below 5,000 base pairs, their cells lose the ability to divide, becoming unable to perform the work they were designed to carry out, and in some cases also releasing chemicals that are harmful to neighbouring cells. Some particularly prominent cell-types that are affected by the replicative shortening of telomeres include the endothelial cells lining blood vessels leading to the heart, and the cells that make the myelin sheath that protects our brain's neurons. Both brain health and heart health are bound to some degree to the fate of cells with a telomeric fuse. The correlation between telomere length and biological ageing has motivated a hope that one day we will be able to prevent and perhaps reverse the effects of replicative senescence by optimally controlling the action of telomerase.
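As a back-of-envelope check on the figures quoted above, and nothing more than that, the implied timescale can be worked out in a few lines of code; the numbers are the article's round estimates, not a biological model.

#include <stdio.h>

int main(void)
{
    const double at_birth  = 10000.0;  /* base pairs at birth, per the estimate above  */
    const double threshold =  5000.0;  /* length below which cells stop dividing       */
    const double per_year  =    50.0;  /* base pairs lost per year without telomerase  */

    /* (10,000 - 5,000) / 50 = roughly 100 years of replicative headroom. */
    printf("~%.0f years until the threshold\n", (at_birth - threshold) / per_year);
    return 0;
}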
The complexity of synthesizing proteins for specific purposes is so great that predicting the amino acid sequences necessary to generate desired behaviour is a huge challenge. Mutations far away from the protein’s active site can influence its function, and the smallest of changes in the structure of an enzyme can have a large impact on its catalytic efficacy – a key concern for engineers creating proteins for industrial applications. Even for a small protein of only 100 amino acids long there are more possible sequences than there are atoms in the universe.
What this means is that an exhaustive search through the space of all possible proteins for the fittest protein for a particular purpose is essentially unachievable, just as a complete search through all possible chess games to decide the absolutely optimal next move is computationally impractical. This is true both for scientists and for nature. This means that even though evolution has been searching the space of all possible proteins for billions of years for solutions to survival, it has in fact explored only a minute corner of all possible variations. All evolved solutions are likely to be 'good enough' rather than the absolute optimum – it just so happens that the ones already 'discovered' are sufficient to create and maintain the diversity and richness of life on planet earth.
New ways of efficiently searching this vast space of possible sequences will reveal proteins with properties that have never before existed in the natural world, and which will hopefully provide answers to many of our most pressing problems. Directed evolution not only provides a faster way of searching this space than many other methods, but it also leaves a complete 'fossil record' of the evolutionary changes that went into evolving a specific protein, providing data on the intermediate stages which will offer insight after detailed study into the relationship between protein sequence and function. Unlike natural evolution, directed evolution can also explore sequences which aren't directly biologically relevant to a single organism's survival, providing a library of industrially relevant proteins, and perhaps one day creating bacteria capable of answering worldwide problems caused by pollution and fossil fuel shortage.
Neo-evolution is factorially faster than normal evolutionary processes. Our genetically engineered organisms have already neo-evolved – shortcutting traditional evolution to produce desirable results without the costly time-delay of selection over hundreds or thousands of generations. Higher-yielding and insecticide-resistant crops have been engineered through the painstaking modification of individual genes, achieving better results than years of selective breeding in a fraction of the time. Genetic engineering of humans, both embryonic and those already alive, will perhaps one day bring the benefits of this new type of evolution to our bodies.
At the moment, we simply do not understand how DNA sequences encode useful functions, and so genetic engineering remains a tremendously costly and laborious process. It cost $25 million and took 150 person-years to engineer just a dozen genes in yeast to cause it to produce an antimalarial drug, and commercial production has yet to begin. The amount of time and money required to effect a beneficial result through genetic engineering – even if it involves relatively simple changes to only a dozen genes – is so costly that the transformative idea of neo-evolved humans has been kept at a safe distance.
But there are other ways to neo-evolve that might make the possibility of too-good-to-miss genetic enhancements in humans a reality before long. Earlier this year, for instance, the National Academy of Engineering awarded its Draper Prize to Francis Arnold and Willem Stemmer for their independent work towards 'directed evolution', a technique which harnesses the power of traditional evolution in a highly optimized environment to accelerate the evolution of desirable proteins with properties not found in nature. Rather than attempt to manually code the strings of individual DNA letters necessary to effect a particular trait, directed evolution and its associated 'evolution machines' take a prototype 'parent' gene, create a library of genetic variants from it and apply selection pressures to screen for the strains that produce the desired trait, iterating this process with the best of each batch until the strongest remain. This was first evidenced in 2009, when geneticist Harris Wang used directed evolution to create new proteins in E. coli bacteria that would produce more of the pigment that makes tomatoes red than was previously possible.
To achieve this genetic modification without manually fine-tuning each gene, Wang synthesized 50,000 DNA strands which contained modified sequences of genes that produce the pigment, and multiplied them in his evolution machine. After repeating the process 35 times with the results of each cycle fed into the next, he produced some 15 billion new strains, each with a different combination of mutations in the target pigment-producing genes. Of these new strains, some produced up to five times as much pigment as the original strain, more than the entire biosynthesis industry had ever achieved. The process took days rather than years.
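Read as an algorithm, the loop Wang ran is simple: build a library of variants of a parent gene, screen them, and let the best variant seed the next round. The following is a hypothetical sketch of that loop in code; the fitness function is a stand-in for a real laboratory assay, and the sequence length and library size are toy values rather than the 50,000-strand library described above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SEQ_LEN  60      /* toy gene length                          */
#define LIBRARY  1000    /* variants generated per round             */
#define ROUNDS   35      /* iterations, echoing Wang's 35 cycles     */

static const char BASES[] = "ACGT";

/* Stand-in fitness: count a motif we pretend correlates with pigment
   production. In the lab this would be a measured phenotype. */
static int fitness(const char *seq)
{
    int score = 0;
    for (const char *p = seq; (p = strstr(p, "ACG")) != NULL; p++)
        score++;
    return score;
}

/* Make one child variant by copying the parent and applying a point mutation. */
static void mutate(const char *parent, char *child)
{
    strcpy(child, parent);
    child[rand() % SEQ_LEN] = BASES[rand() % 4];
}

int main(void)
{
    char parent[SEQ_LEN + 1], child[SEQ_LEN + 1], best[SEQ_LEN + 1];
    for (int i = 0; i < SEQ_LEN; i++) parent[i] = BASES[rand() % 4];
    parent[SEQ_LEN] = '\0';

    for (int round = 0; round < ROUNDS; round++) {
        int bestScore = fitness(parent);
        strcpy(best, parent);
        for (int i = 0; i < LIBRARY; i++) {        /* build and screen the library */
            mutate(parent, child);
            int s = fitness(child);
            if (s > bestScore) { bestScore = s; strcpy(best, child); }
        }
        strcpy(parent, best);                      /* best variant seeds next round */
        printf("round %d: best fitness %d\n", round + 1, bestScore);
    }
    return 0;
}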
There are three distinct possibilities for how technological and medical advancement will impact future human evolution. The first contingency is that the human species will undergo no further natural selection, because we may have already advanced to a position of evolutionary equipoise, where our technologies have artificially preserved genes that would otherwise have been removed by natural selection; evolution no longer has a chance to select. As a species we already control our environment to such an extent that traditional evolutionary pressures have been functionally alleviated – we adapt the environment to us rather than the other way around. Indeed, local mobility and international migration allow populations to genetically integrate to such a degree that the isolation necessary for evolution to take place may in fact no longer be possible.
The second possibility is that we will continue to evolve in the traditional way, through inexorable selection pressures exerted by the natural environment. The isolation necessary to allow the impact of any environmental changes to be selected for in the population will now be on the planetary scale, enabled by colonization of distant space.
The third possibility is that we will evolve in an entirely new way, guided not by unconscious natural forces but by our own conscious design decisions. In this neo-evolution we would use genetic engineering to eliminate diseases like diabetes, protect against strokes and reduce the risks of cancer. We would be compressing a natural process which takes hundreds of thousands of years into single generations, making evolutionarily advantageous adjustments ourselves.
From an economic perspective, cheating is a simple cost-benefit analysis, where the probability of being caught and the severity of punishment must be weighed against how much stands to be gained from cheating. Behavioural economist Dan Ariely has conducted experimental studies to test whether there are predictable thresholds for this balance, and how they can be influenced.
In one study, Ariely gave participants twenty maths problems with only five minutes to solve them. At the end of the time period, Ariely paid each participant one dollar for each correctly answered question; on average people solved four questions and so received four dollars. Ariely tempted some members of the study to cheat, by asking them to shred their paper, keep the pieces and tell him how many questions they answered correctly. Now the average number of questions solved went up to seven; and it wasn't because a few people cheated a lot, but rather that everyone cheated a little.
Hypothesizing that we each have a “personal fudge factor”, a point at which we can still feel good about ourselves despite having cheated, Ariely ran another experiment to examine how malleable this standard was. Before tempting participants to cheat, Ariely asked them to recall either ten books they had read at school or the Ten Commandments. Those who had tried to recall the Commandments – and nobody in the sample managed to get them all – did not cheat at all when given the opportunity, even those who could hardly remember any of the Commandments. When self-declared atheists were asked to swear on the Bible before being tempted to cheat in the task, they did not cheat at all. Cheating was also completely eradicated by asking students to sign a statement to the effect that they understood that the survey fell under the “MIT Honor Code”, despite MIT having no such code.
In an additional variant of the same experiment, Ariely tried to increase the fudge factor and to encourage cheating. A third of participants were told to hand back their results paper to the experimenters, a third were told to shred it and ask for X number of dollars for X completed questions, and a third were told to shred their results and ask for X tokens. For this last group, tokens were handed out, and the participants would walk a few paces to the side and exchange their tokens for dollars. This short disconnect between cash and token was enough for cheating rates to double in this last group.
Putting these results in a social context, Ariely ran yet another variant of the experiment, to see how people would react when they saw examples of other people cheating in their group. Subjects were given envelopes filled with money, and at the end of the experiment they were told to pay back money for the questions that they did not complete. An actor was planted in the group, without the knowledge of the other participants. After thirty seconds the actor stood up and announced that he had finished all of the questions. He was told that the experiment was completed for him, and that he could go home, keeping the contents of the envelope. Whether cheating then rose or fell depended on the shirt the actor was wearing: Carnegie Mellon participants cheated more when he was identified as a fellow Carnegie Mellon student, and less when his shirt identified him as a University of Pittsburgh student.
Ariely's results show that the probability of getting caught doesn't influence the rate of cheating so much as the norms for cheating influence behaviour: if people in your own group cheat, you are more likely to cheat as well. If a person from outside of your group cheats, the personal fudge factor increases, and the likelihood of cheating drops, just as it did with the Ten Commandments experiment, reminding people of their own morality.
The stock market combines a worrying cocktail of features from these experiments. It deals with 'tokens', stocks and derivatives and not 'real' money. Stocks are many steps removed from real money, and for long portions of time. This encourages cheating. Any enclaves of cheating will be reinforced by people mirroring the behaviours of those around them, and this is precisely what happened in the Enron scandal.
Here is a syllogism that is deeply embedded in Western society. Welfare is maximized by maximizing individual freedom. Individual freedom is maximized by maximizing choice. Therefore, welfare increases with more choice.
Supermarkets are an embodiment of this belief. They are symbols of affluence and empowerment conferred through their superabundance of choice. The range of products they offer is dizzying – so disorienting, in fact, that too many options have a paralyzing effect, making it very difficult to choose at all, a fact that undermines the belief that maximizing choice has unqualified beneficial effects.
If we finally do manage to make a decision and overcome this paralytic effect, too much choice diminishes the satisfaction that can be gained compared with choices made between fewer options. This is because if the choice you make leaves you feeling dissatisfied in any way it is easy to simulate the myriad of other choices that could have been better. These imagined alternatives, conjured from the myriad real alternatives, can induce regret which dilutes the satisfaction from your choice, even if it was a good one. The wider the range of options, the easier it becomes to regret even the smallest disappointment in your decision.
A wider range of choice also makes it easier to imagine the attractive features of the alternatives that have been rejected, once more diminishing the sense of satisfaction with the chosen alternative. This phenomenon is known as the opportunity cost, the sacrificial loss of other opportunities when a choice is made: choosing to do one thing is choosing not to do many other things. Many of these other choices will have attractive features which will make whatever you have chosen less attractive, no matter how good it really is.
The maximization of choice leads to an escalation of expectations, where the best that can ever be hoped for is that a decision meets expectations. In a world of extremely limited choice, pleasant surprises are possible. In a world of unlimited choice, perfection becomes the expectation: you could always have made a better choice. When there is only one choice on offer, the responsibility for the outcome of that 'choice' is outside of your control, and so any disappointment resulting from that decision can safely be blamed on external factors. But when you have to choose between hundreds of options it becomes much easier to blame oneself if anything is less than perfect. It is perhaps no coincidence that as choice has proliferated and standards have risen in the past few generations, so has the incidence of clinical depression and suicide.
What this means is that there is a critical level of choice. Some societies have too much, others patently too little. At the point at which there is too much choice in a critical proportion of our lives, our welfare is no longer improved. Too much choice is paralytic and dissatisfying, and too little is impoverishing. We don't want perfect freedom and nor do we want the absence of it; somewhere there is an optimal threshold, and affluent, materialist societies have probably already passed it.
Our uniquely large pre-frontal cortex enables us to simulate experiences, allowing us to compare potential futures and make judgements based on these simulations. The difficulty in deciding which of several simulations we prefer arises because we are surprisingly poor at analyzing what makes us happy. Seemingly obvious questions such as 'would you prefer to become paraplegic or win the lottery?' are obscured by the extraordinary fact that one year after each event, both groups report being equally happy with their lives. A preference for one alternative over another can be measured in its ability to confer happiness, and, contrary to all of our impulses, there can be no rational preference in this example when considered over a sufficiently long time-period, as there is no reported qualitative difference between the two levels of happiness after a single year.
This is a result of the impact bias, the tendency of our emotional simulator to overestimate the intensity of future emotional states, making you believe that the difference between two outcomes is greater than it really is. In short, things that we would unthinkingly consider important, like getting a promotion or not, passing an exam or not, or gaining or losing a romantic partner, frequently have far less impact, of a much lower intensity and a much shorter duration, than we expect them to have. Indeed, in an astonishing study published in 1996, it was found that even major life traumas had no effect on subjective well-being (with very few exceptions) if they had not occurred in the past three months.[1]
The reason for this remarkable ability is that our views of the world change to make us feel better about whatever environment we find ourselves in over a period of time. Everything is relative, and we make happiness where we would otherwise believe there to be none. To truncate a well-known quotation from Milton, “The mind is its own place, and in itself can make a heaven of hell”. Daniel Gilbert, Professor of Psychology at Harvard, calls this 'synthesizing happiness'.
Synthetic happiness differs from 'natural' happiness in that natural happiness is what we feel when we get what we wanted, and synthetic happiness is what we (eventually) feel when we don't get what we wanted. The mistake we make is believing that synthetic happiness is inferior to natural happiness. This mistake is perpetuated by a society driven by an economic system which relies on people believing that getting what you want makes you happier than not getting what you want ever could. We can resist this falsehood by remembering that we possess within ourselves the ability to synthesize the commodity that we always pursue, and that we consistently overrate the emotional differences between two choices.
[1] Suh, Eunkook, Ed Diener, and Frank Fujita. "Events and Subjective Well-being: Only Recent Events Matter." Journal of Personality and Social Psychology 70.5 (1996): 1091-1102.
Optical illusions are a visual proof of a built-in irrationality in the way we reason. In some illusions we can be shown two lines of equal lengths and yet perceive one to be longer than the other. Even when we see visual proof that the lines are in fact of equal length, it's impossible to overcome the sense that the lines are different – it's as if we cannot learn to override our intuitions. In the case of optical illusions, our intuition is fooled in a repeatable, predictable fashion, and there is not much we can do about it without modifying the illusion itself, either by measuring it or by obscuring some part of it.
Dan Ariely, a behavioural psychologist currently teaching at Duke University, reminds us that optical illusions are a big deal. Vision is one of the best things that we do – we are evolutionarily designed to be good at it, and a large part of our brain is dedicated to being good at it, larger than is dedicated to anything else. The fact that we make such consistent mistakes, and are repeatedly fooled by optical illusions should be troubling. If we make mistakes in vision, what kind of mistakes will we make in those things that we have no evolutionary reason to be any good at? In new and elaborate environments like financial markets, we don't have a specialized part of the brain to help us, and we don't have a convenient visual illustration with which to easily demonstrate the mistakes we make. Is our sense of our decision making abilities ever consistently compromised?
Ariely suggests that we are victims of decision-making illusions in much the same way we are victims of optical illusions. When answering a survey, for instance, we feel like we are making our own decisions, but many of those decisions in fact lie with the person who designed the form. This is strikingly shown by the disparity in the percentage of people in different European countries who indicated that they would be interested in donating their organs after death, as illustrated by a 2004 paper by Eric Johnson and Daniel Goldstein. Consent rates in France, Belgium, Hungary, Poland, Portugal and Austria were over 99%, whilst the UK, Germany and Denmark all had rates of below 20%. This huge difference didn't arise due to strong cultural differences, but through a simple difference in the way the question on the form was presented. In countries with a low consent rate, the question was presented as an opt-in choice, as in 'Check the box if you wish to participate in the organ donor programme'. People didn't check the box, leaving the form in its 'default' state. Those presented with the inverse question, an explicit opt-out rather than an explicit opt-in, also left the box unchecked. Both groups tended to accept whatever default position the form tacitly suggested. The two types of forms created strongly separated groups of consenting and non-consenting donors across the countries, differing by nearly 60% as a direct result of how the question was phrased.
This is just one example of how we can reliably be led into making a choice that isn't a choice at all, suggesting that our awareness of our own cognitive abilities isn't quite as complete as perhaps we would like. Like Laurie Santos, Ariely recognizes this in-built limitation, and stresses that the more we understand these cognitive limitations, the better we will be able to design and improve our world.
|
2026-01-18T17:53:32.873343
|
37,908
| 3.602807
|
http://www.drdobbs.com/a-per-thread-singleton-class/184401516
|
A Singleton class is used when there must be exactly one instance of a class, and it must be accessible to all users from a well-known access point. To ensure that exactly one instance of a class is created, the class itself is made responsible for keeping track of its sole instance. Most Singleton classes ensure this by intercepting requests to create new objects. Whenever there is a request for an instance of a class, the class checks to see if an instance already exists. If the instance exists, then the class returns it, else a new instance is created, which is returned in response to the current and subsequent requests.
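The sidebar describing how a traditional Singleton class works is not reproduced with this text. As a rough sketch of the pattern just described (the class and member names are illustrative, not taken from the article's listings), a conventional process-wide Singleton in C++ might look like this:

```cpp
// Illustrative sketch only: a classic process-wide Singleton.
// The class itself intercepts creation requests and hands back the sole instance.
class Singleton {
public:
    static Singleton* GetObject() {
        // Create the instance on the first request; return it on all later requests.
        if (instance == 0)
            instance = new Singleton();
        return instance;
    }
private:
    Singleton() {}                  // private constructor: nobody else can create one
    static Singleton* instance;     // the one-and-only instance for the process
};

Singleton* Singleton::instance = 0;
```

Note that this classic form is not thread-safe, which is part of what motivates the per-thread variant discussed next.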
Need for a Per-Thread Singleton
The traditional Singleton class allows exactly one instance per process (see sidebar, How a Traditional Singleton Class Works), but in the multithreaded world of today, often you need an instance of a class for every thread in the process. For example, many Windows NT services are implemented as a server in a client-server environment. Many of these services create a separate thread to process an incoming request from a client. In most of these implementations, the different threads can run completely independent of each other. Usually each of these threads creates one or more objects, which contain the context information specific to the thread. Examples of the contents of these objects may be sockets on which the thread communicates with the client, thread-specific logging information, etc. Most of the functions that are executed as part of the thread execution need access to these objects in order to do their job. Short of passing these objects as a parameter to every function, you need a way to allow all the functions to access these objects. In a multithreaded environment, it is not possible to have this object as a global variable. A traditional Singleton class also does not work here, as you need multiple instances per process and, to be precise, a single instance per thread. This is where a per-thread Singleton class can be helpful.
Per-Thread Singleton Attributes
A thread-specific Singleton class should have the following properties:
- exactly one instance per thread
- a global point of access
- a way to destroy the thread-specific instance
The first two properties are similar to those of a traditional Singleton class. The third is usually not an issue for the traditional Singleton class since the solitary instance is created and stored as a static member variable in the class, which goes away when the program exits. However, the same strategy doesn't work for a thread-specific Singleton class. Because of the way the per-thread Singleton class is implemented, the thread-specific instance needs to be explicitly destroyed (more on this later).
Ensuring One Instance per Thread
You need a way to store instances of the class on a per thread basis. You cannot store them as static members as there is only one copy of a static member per process. The Windows NT/2000 operating system has a really cool feature called TLS (Thread Local Storage) that can help here (see the sidebar, TLS). TLS is a method by which each thread in a multithreaded process may allocate locations in which to store thread-specific data. These locations are referenced by TLS indexes, which are unique in a process. Using the same index in different threads in a process, you can retrieve data that is local to each of the threads.
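The TLS sidebar is likewise not reproduced here, but the Win32 calls involved are TlsAlloc, TlsSetValue, TlsGetValue and TlsFree. A minimal, hypothetical usage sketch (the helper names StoreThreadData and LoadThreadData are invented for illustration, and error checking is omitted) looks roughly like this:

```cpp
#include <windows.h>

// Allocate one TLS index for the whole process (typically done once at start-up).
// TlsAlloc returns TLS_OUT_OF_INDEXES on failure.
DWORD tlsIndex = TlsAlloc();

void StoreThreadData(void* data) {
    // Each thread stores its own pointer under the same index.
    TlsSetValue(tlsIndex, data);
}

void* LoadThreadData() {
    // Each thread gets back only the pointer it stored itself.
    return TlsGetValue(tlsIndex);
}

void ReleaseIndex() {
    // Release the index when no thread needs it any more.
    TlsFree(tlsIndex);
}
```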
Global Point of Access
Similar to the traditional Singleton classes, the per-thread Singleton classes provide a GetObject type function, which is the only publicly available interface to access an instance of the class. GetObject takes care of returning the thread-specific instance of the class to the caller.
Listing 1 shows an implementation of a per-thread Singleton class. The implementation uses Win32 APIs for TLS. This class looks similar to the traditional Singleton class and has the following things in common:
- The constructor of the class is private.
- The class has a public, static member function called GetObject to get an instance of the class.
Although this class also has a static member variable like the traditional Singleton class, the purpose of this member is very different. This variable stores the TLS index, which can be used to retrieve the thread-specific instance of this class. Whenever you need a class object, you can call the function ThreadSingleton::GetObject. This function checks the TLS index to see if an instance of the class already exists for the current thread. If the instance exists, this function just returns the instance; else it creates a class object, stores it in the TLS index, and returns the newly created object. Any further calls to get the object just retrieve the object from the TLS index and return that instance. This way, functions executing in different threads can all access an instance of the object created for their own thread.
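Listing 1 is not included with this text. The following is a speculative reconstruction of the approach described above using the Win32 TLS calls; apart from GetObject and GetThreadID, which the article names, the member names and details are assumptions, and error handling is omitted:

```cpp
#include <windows.h>

class ThreadSingleton {
public:
    // Global point of access: returns the calling thread's own instance,
    // creating it on that thread's first call.
    static ThreadSingleton* GetObject() {
        ThreadSingleton* obj =
            static_cast<ThreadSingleton*>(TlsGetValue(tlsIndex));
        if (obj == 0) {                     // first call on this thread
            obj = new ThreadSingleton();
            TlsSetValue(tlsIndex, obj);     // remember it for this thread only
        }
        return obj;
    }

    DWORD GetThreadID() const { return threadId; }

private:
    ThreadSingleton() : threadId(GetCurrentThreadId()) {}  // private constructor
    DWORD threadId;

    // One TLS index shared by all threads; each thread stores its own pointer under it.
    static DWORD tlsIndex;
};

// In a real program the index would be obtained once and checked for failure.
DWORD ThreadSingleton::tlsIndex = TlsAlloc();
```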
As shown in Listing 1, there are two threads:
- The main thread, which gets created when the function main is entered.
- The second thread, which is created using the Win32 API CreateThread inside the function main.
Both the threads create an instance of the ThreadSingleton class using the member function GetObject. You can see that both the threads get their own copy of the ThreadSingleton object by the different thread IDs returned by the function GetThreadID.
Destroying the Per-Thread Instance
When the thread exits, the instance stored in the TLS area doesn't get deleted automatically. This instance needs to be explicitly destroyed. To destroy the instance, the class provides another static member function, ThreadSingleton::DestroyObject. To destroy the thread-specific instance, you need to ensure that this function gets called, exactly once, when the thread is exiting. The easiest way to do this is to declare an object of a class that I call ThreadSingletonDestroyer at the top of the ThreadMain function. You'll make use of the property that a thread exits whenever ThreadMain exits and that the exit of ThreadMain results in calling the destructor of the ThreadSingletonDestroyer object declared in ThreadMain. To destroy the thread-specific instance, you will call the function ThreadSingleton::DestroyObject in the destructor of the ThreadSingletonDestroyer class, thereby ensuring that the thread-specific instance is automatically deleted when the thread exits (see Listing 1). DestroyObject is a private function of the ThreadSingleton class, preventing anybody other than the ThreadSingletonDestroyer class, which is a friend, from calling it.
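Again as a sketch rather than the article's actual Listing 1, and assuming the ThreadSingleton sketch above is extended with a private static DestroyObject member and a `friend class ThreadSingletonDestroyer;` declaration, the destroyer idiom might look roughly like this:

```cpp
// Assumes the ThreadSingleton sketch above, extended with
//     friend class ThreadSingletonDestroyer;
// and a private static member roughly like:
//     static void DestroyObject() {
//         delete static_cast<ThreadSingleton*>(TlsGetValue(tlsIndex));
//         TlsSetValue(tlsIndex, 0);
//     }

class ThreadSingletonDestroyer {
public:
    // The destructor runs when ThreadMain returns, i.e. when the thread exits,
    // so the thread-specific instance is deleted exactly once.
    ~ThreadSingletonDestroyer() { ThreadSingleton::DestroyObject(); }
};

DWORD WINAPI ThreadMain(LPVOID)
{
    ThreadSingletonDestroyer cleaner;                    // declared at the top
    ThreadSingleton* ctx = ThreadSingleton::GetObject(); // this thread's instance
    // ... thread-specific work using ctx ...
    return 0;
}   // 'cleaner' is destroyed here, destroying this thread's instance
```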
The per-thread Singleton class provides a convenient way to access thread-specific data. No longer do you need to pass thread-specific context data to all the functions that need it; the data is just one well-known function call away. The ThreadSingleton class interface is very similar to the traditional Singleton class interface, making it easy to use and understand. A benefit of the class is that it also hides operating-system-specific mechanisms for accessing thread-specific data. For example, many operating systems support thread-specific storage, but the usage differs from one system to another. With this class, it is easy to tailor the implementation to use operating-system-specific ways to store and retrieve thread-specific data, while the interface remains the same and the caller does not need to change.
Puneesh Chaudhry is a principal software engineer for EMC Corporation in Milford, MA. He has a B.E. in Computer Science from Delhi College of Engineering. His interests include backup and other storage technologies. He can be reached at email@example.com.
|
2026-01-18T22:15:08.070741
|
911,667
| 3.974869
|
http://www.flmnh.ufl.edu/science-stories/2012/08/01/museum-researchers-name-new-ancient-camels-from-panama-canal-excavation/
|
By Danielle Torrent
When it comes to camels, it’s difficult not to think of the Old World depicted through Arabian Nights – Bedouin travelers crossing vast, radiant deserts by day, through dark, star-spotted Arabian nights. But according to the fossil record, the ancestors of modern camels were creatures of the New World.
Based on fossils from the North American Great Plains, the earliest-known camels dwelled in the American savannah about 35 million years ago. It was a time before the formation of the Isthmus of Panama, when the continents of North and South America were still separated by the oceanic waters. But despite the separation of the continents, recent Panama Canal excavations by Florida Museum of Natural History researchers show ancient camels similar to those in North America also thrived in Central America 20 million years ago.
“We’re discovering this fabulous new diversity of animals that lived in Central America that we didn’t even know about before,” said Florida Museum vertebrate paleontology curator Bruce MacFadden, co-principal investigator on the National Science Foundation grant funding the Panama Canal project. “Prior to this discovery, they [ancient camels] were unknown south of Mexico.”
In the first published description of a fossil mammal discovered since the project began in 2009, Florida Museum researchers identified two new species of ancient extinct camels that inhabited the area about 20 million years ago. The research extends the distribution of mammals to their southernmost point in the ancient tropics, some of the most diverse areas today about which little is known historically because lush vegetation prevents paleontological excavations. The study appeared online in the Journal of Vertebrate Paleontology February 28, 2012.
“People think of camels as being in the Old World, but their distribution in the past is different than what we know today,” said MacFadden, a study co-author. “The ancestors of llamas originated in North America and then when the land bridge formed about 4 to 5 million years ago, they dispersed into South America and evolved into the llama, alpaca, guanaco and vicuña.”
Based on analysis of partial jaws and teeth and comparisons with relatives from Florida, Texas and Mexico, researchers described two species of floridatragulines, ancient camels that are also the oldest mammals found in Central America: Aguascalientia panamaensis and Aguascalientia minuta. Distinguished from each other mainly by their size, the two ancient camels belong to an evolutionary branch of the camel family that may be separate from the one that gave rise to modern camels because of their complicated morphology, which includes different proportions of teeth and elongated jaws.
“Some descriptions say these are ‘crocodile-like’ camels because they have more elongated snouts than you would expect,” said lead author Aldo Rincon, a UF geology doctoral student. “They were probably browsers in the forests of the ancient tropics. We can say that because the crowns are really short.”
Rincon discovered the fossils in the Las Cascadas formation nearly single-handedly, unearthing pieces of a jaw belonging to the same animal over a span of two years, he said. Rincon then worked with his graduate adviser, associate curator of vertebrate paleontology and study co-author Jonathan Bloch, to assess the specimens and describe their morphology.
“When I came back to the museum, I started putting everything together and realized, ‘Oh wow, I have a nearly complete jaw,’ ” Rincon said. “This is one of the nicest specimens because it shows the interior dentition, which was previously unknown for this type of camel.”
The study shows that despite Central America’s close proximity to South America, there was no connection between continents because mammals in the area 20 million years ago all had North American origins. The Isthmus of Panama formed about 15 million years later and the fauna crossed to South America 2.5 to 3 million years ago, MacFadden said.
Barry Albright, a professor of earth science at the University of North Florida who studied the early Miocene fauna of the Gulf Coast Plain, said he was surprised by the similarity of the Central American fauna.
“To me, it's slightly unexpected,” Albright said. “That's a large latitudinal gradient between the Gulf Coastal Plain and Panama, yet we're seeing the same mammals, so perhaps that tells us something about climate over that interval of time and dispersal patterns of some mammals over that interval of time. It's interesting to see a fauna that far south really similar to a fauna in the Gulf Coastal Plain and even mammals up in the Great Plains.”
Camels belong to a group of even-toed ungulates that includes cattle, goats, sheep, deer, buffalo and pigs. Other fossil mammals discovered in Panama from the early Miocene have been restricted to those also found in North America at the time. While researchers are sure the ancient camels were herbivores that likely browsed in forests, they are still analyzing seeds and pollen to better understand the environment of the ancient tropics.
Discovery of the fossils was made possible through the NSF-funded Panama Canal Partnerships for International Research and Education project, which supports excavation of the Panama Canal during construction to widen and straighten the channel and build new locks. The construction is expected to continue through 2014. The $3.8 million-project supports development of partnerships between the U.S. and Panama and engagement of the next generation of scientists in paleontological and geological discoveries along the canal.
Study co-authors include Catalina Suarez and Carlos Jaramillo of the Smithsonian Tropical Research Institute.
|
2026-02-01T12:16:59.059181
|
792,494
| 4.09237
|
http://www.mtv.com/artists/redshift-00/
|
This article is about the astronomical phenomenon. For other uses, see Redshift (disambiguation).
In physics, redshift happens when light or other electromagnetic radiation from an object moving away from the observer is increased in wavelength, or shifted to the red end of the spectrum. In general, whether or not the radiation is within the visible spectrum, "redder" means an increase in wavelength - equivalent to a lower frequency and a lower photon energy, in accordance with, respectively, the wave and quantum theories of light.
Some redshifts are an example of the Doppler effect, familiar in the change in the apparent pitch of a siren and in the frequency of the sound waves emitted by speeding vehicles; a Doppler redshift occurs whenever a light source moves away from an observer. Cosmological redshift is seen due to the expansion of the universe, and sufficiently distant light sources (generally more than a few million light years away) show redshift corresponding to the rate of increase in their distance from Earth. Finally, gravitational redshifts are a relativistic effect observed in electromagnetic radiation moving out of gravitational fields. Conversely, a decrease in wavelength is called blueshift and is generally seen when a light-emitting object moves toward an observer or when electromagnetic radiation moves into a gravitational field.
Although knowledge of redshifts and blueshifts has been applied to develop several terrestrial technologies (such as Doppler radar and radar guns), redshifts are most famously seen in the spectroscopic observations of astronomical objects.
A special relativistic redshift formula (and its classical approximation) can be used to calculate the redshift of a nearby object when spacetime is flat. However, in many contexts, such as black holes and Big Bang cosmology, redshifts must be calculated using general relativity. Special relativistic, gravitational, and cosmological redshifts can be understood under the umbrella of frame transformation laws. There exist other physical processes that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from true redshift and are not generally referred to as such (see section on physical optics and radiative transfer).
The history of the subject began with the development in the 19th century of wave mechanics and the exploration of phenomena associated with the Doppler effect. The effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. The hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot in 1845. Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler-Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.
The earliest occurrence of the term "red-shift" in print (in this hyphenated form), appears to be by American astronomer Walter S. Adams in 1908, where he mentions "Two methods of investigating that nature of the nebular red-shift". The word doesn't appear unhyphenated until about 1934 by Willem de Sitter, perhaps indicating that up to that point its German equivalent, Rotverschiebung, was more commonly used.
Beginning with observations in 1912, Vesto Slipher discovered that most spiral nebulae had considerable redshifts. Slipher first reports on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he states, "... the early discovery that the great Andromeda spiral had the quite exceptional velocity of -300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well." Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae" (now known to be galaxies in their own right) and the distances to them with the formulation of his eponymous Hubble's law. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the famous Friedmann equations. They are today considered strong evidence for an expanding universe and the Big Bang theory.
Measurement, characterization, and interpretation:
The spectrum of light that comes from a single source (see idealized spectrum illustration top-right) can be measured. To determine the redshift, one searches for features in the spectrum such as absorption lines, emission lines, or other variations in light intensity. If found, these features can be compared with known features in the spectrum of various chemical compounds found in experiments where that compound is located on earth. A very common atomic element in space is hydrogen. The spectrum of originally featureless light shone through hydrogen will show a signature spectrum specific to hydrogen that has features at regular intervals. If restricted to absorption lines it would look similar to the illustration (top right). If the same pattern of intervals is seen in an observed spectrum from a distant source but occurring at shifted wavelengths, it can be identified as hydrogen too. If the same spectral line is identified in both spectra but at different wavelengths then the redshift can be calculated using the table below. Determining the redshift of an object in this way requires a frequency- or wavelength-range. In order to calculate the redshift one has to know the wavelength of the emitted light in the rest frame of the source, in other words, the wavelength that would be measured by an observer located adjacent to and comoving with the source. Since in astronomical applications this measurement cannot be done directly, because that would require travelling to the distant star of interest, the method using spectral lines described here is used instead. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum).
Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called z. If λ represents wavelength and f represents frequency (note, λf = c where c is the speed of light), then z is defined by the equations:
Calculation of redshift, $z$:
- Based on wavelength: $z = \dfrac{\lambda_{\mathrm{obsv}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}$, or equivalently $1 + z = \dfrac{\lambda_{\mathrm{obsv}}}{\lambda_{\mathrm{emit}}}$
- Based on frequency: $z = \dfrac{f_{\mathrm{emit}} - f_{\mathrm{obsv}}}{f_{\mathrm{obsv}}}$, or equivalently $1 + z = \dfrac{f_{\mathrm{emit}}}{f_{\mathrm{obsv}}}$
After z is measured, the distinction between redshift and blueshift is simply a matter of whether z is positive or negative. See the formula section below for some basic interpretations that follow when either a redshift or blueshift is observed. For example, Doppler effect blueshifts (z < 0) are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. Conversely, Doppler effect redshifts (z > 0) are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions.
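As a simple illustrative calculation (the numbers are chosen for convenience, not taken from the text): if a spectral line emitted at a rest wavelength of 121.6 nm (hydrogen Lyman-alpha) is observed at 486.4 nm, then

$$z = \frac{486.4\ \mathrm{nm} - 121.6\ \mathrm{nm}}{121.6\ \mathrm{nm}} = 3,$$

a positive value, hence a redshift.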
In general relativity one can derive several important special-case formulae for redshift in certain special spacetime geometries, as summarized in the following table. In all cases the magnitude of the shift (the value of z) is independent of the wavelength.
- Minkowski space (flat spacetime): $1 + z = \sqrt{\dfrac{1 + v/c}{1 - v/c}}$, giving $z \approx \dfrac{v}{c}$ for small $v$, for motion completely in the radial direction; $1 + z = \dfrac{1}{\sqrt{1 - v^2/c^2}}$ for motion completely in the transverse direction.
- FLRW spacetime (expanding Big Bang universe): $1 + z = \dfrac{a_{\mathrm{now}}}{a_{\mathrm{then}}}$
- Any stationary spacetime (e.g. the Schwarzschild geometry): $1 + z = \sqrt{\dfrac{g_{tt}(\text{receiver})}{g_{tt}(\text{source})}}$ (for the Schwarzschild geometry, with the receiver far from the mass, $1 + z = \dfrac{1}{\sqrt{1 - \dfrac{2GM}{rc^2}}}$, where $r$ is the radial Schwarzschild coordinate of the source)
If a source of the light is moving away from an observer, then redshift (z > 0) occurs; if the source moves towards the observer, then blueshift (z < 0) occurs. This is true for all electromagnetic waves and is explained by the Doppler effect. Consequently, this type of redshift is called the Doppler redshift. If the source moves away from the observer with velocity v, which is much less than the speed of light ($v \ll c$), the redshift is given by

$$z \approx \frac{v}{c}$$
where c is the speed of light. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency.
A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. A complete derivation of the effect can be found in the article on the relativistic Doppler effect. In brief, objects moving close to the speed of light will experience deviations from the above formula due to the time dilation of special relativity which can be corrected for by introducing the Lorentz factor γ into the classical Doppler formula as follows:

$$1 + z = \left(1 + \frac{v}{c}\right)\gamma$$
This phenomenon was first observed in a 1938 experiment performed by Herbert E. Ives and G.R. Stilwell, called the Ives-Stilwell experiment.
Since the Lorentz factor is dependent only on the magnitude of the velocity, this causes the redshift associated with the relativistic correction to be independent of the orientation of the source movement. In contrast, the classical part of the formula is dependent on the projection of the movement of the source into the line-of-sight which yields different results for different orientations. If θ is the angle between the direction of relative motion and the direction of emission in the observer's frame (zero angle is directly away from the observer), the full form for the relativistic Doppler effect becomes:

$$1 + z = \frac{1 + \frac{v}{c}\cos\theta}{\sqrt{1 - \frac{v^2}{c^2}}}$$

and for motion solely in the line of sight (θ = 0°), this equation reduces to:

$$1 + z = \sqrt{\frac{1 + \frac{v}{c}}{1 - \frac{v}{c}}}$$

For the special case that the light is approaching at right angles (θ = 90°) to the direction of relative motion in the observer's frame, the relativistic redshift is known as the transverse redshift, and a redshift:

$$1 + z = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$
is measured, even though the object is not moving away from the observer. Even when the source is moving towards the observer, if there is a transverse component to the motion then there is some speed at which the dilation just cancels the expected blueshift and at higher speed the approaching source will be redshifted.
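As a purely illustrative application of these formulae (the velocity is an assumed value, not taken from the text): for a source receding radially at $v = 0.5c$,

$$1 + z = \sqrt{\frac{1 + 0.5}{1 - 0.5}} = \sqrt{3} \approx 1.73, \qquad z \approx 0.73,$$

noticeably larger than the classical estimate $z \approx v/c = 0.5$, the difference being the relativistic time-dilation correction.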
Expansion of space:
Main article: http://en.wikipedia.org/wiki/Metric_expansion_of_space
In the early part of the twentieth century, Slipher, Hubble and others made the first measurements of the redshifts and blueshifts of galaxies beyond the Milky Way. They initially interpreted these redshifts and blueshifts as due solely to the Doppler effect, but later Hubble discovered a rough correlation between the increasing redshifts and the increasing distance of galaxies. Theorists almost immediately realized that these observations could be explained by a different mechanism for producing redshifts. Hubble's law of the correlation between redshifts and distances is required by models of cosmology derived from general relativity that have a metric expansion of space. As a result, photons propagating through the expanding space are stretched, creating the cosmological redshift.
There is a distinction between a redshift in cosmological context as compared to that witnessed when nearby objects exhibit a local Doppler-effect redshift. Rather than cosmological redshifts being a consequence of relative velocities, the photons instead increase in wavelength and redshift because of a feature of the spacetime through which they are traveling that causes space to expand. Due to the expansion increasing as distances increase, the distance between two remote galaxies can increase at more than 3×10⁸ m/s, but this does not imply that the galaxies move faster than the speed of light at their present location (which is forbidden by Lorentz covariance).
The observational consequences of this effect can be derived using the equations from general relativity that describe a homogeneous and isotropic universe.
To derive the redshift effect, use the geodesic equation for a light wave, which is

$$ds^2 = 0 = -c^2\,dt^2 + \frac{a^2(t)}{1 - kr^2}\,dr^2$$

where
- $ds$ is the spacetime interval,
- $dt$ is the time interval,
- $dr$ is the spatial interval,
- $c$ is the speed of light,
- $a(t)$ is the time-dependent cosmic scale factor, and
- $k$ is the curvature per unit area.

For an observer observing the crest of a light wave at a position $r = 0$ and time $t = t_{\mathrm{now}}$, the crest of the light wave was emitted at a time $t = t_{\mathrm{then}}$ in the past and a distant position $r = R$. Integrating over the path in both space and time that the light wave travels yields:

$$c\int_{t_{\mathrm{then}}}^{t_{\mathrm{now}}} \frac{dt}{a(t)} = \int_{0}^{R} \frac{dr}{\sqrt{1 - kr^2}}$$

In general, the wavelength of light is not the same for the two positions and times considered due to the changing properties of the metric. When the wave was emitted, it had a wavelength $\lambda_{\mathrm{then}}$. The next crest of the light wave was emitted at a time

$$t = t_{\mathrm{then}} + \frac{\lambda_{\mathrm{then}}}{c}.$$

The observer sees the next crest of the observed light wave with a wavelength $\lambda_{\mathrm{now}}$ arrive at a time

$$t = t_{\mathrm{now}} + \frac{\lambda_{\mathrm{now}}}{c}.$$

Since the subsequent crest is again emitted from $r = R$ and is observed at $r = 0$, the following equation can be written:

$$c\int_{t_{\mathrm{then}} + \lambda_{\mathrm{then}}/c}^{t_{\mathrm{now}} + \lambda_{\mathrm{now}}/c} \frac{dt}{a(t)} = \int_{0}^{R} \frac{dr}{\sqrt{1 - kr^2}}$$

The right-hand sides of the two integral equations above are identical, which means

$$c\int_{t_{\mathrm{then}} + \lambda_{\mathrm{then}}/c}^{t_{\mathrm{now}} + \lambda_{\mathrm{now}}/c} \frac{dt}{a(t)} = c\int_{t_{\mathrm{then}}}^{t_{\mathrm{now}}} \frac{dt}{a(t)}$$

For very small variations in time (over the period of one cycle of a light wave) the scale factor is essentially a constant ($a = a_{\mathrm{now}}$ today and $a = a_{\mathrm{then}}$ previously). This yields

$$\frac{\lambda_{\mathrm{now}}}{a_{\mathrm{now}}} = \frac{\lambda_{\mathrm{then}}}{a_{\mathrm{then}}}$$

which can be rewritten as

$$\frac{\lambda_{\mathrm{now}}}{\lambda_{\mathrm{then}}} = \frac{a_{\mathrm{now}}}{a_{\mathrm{then}}}.$$

Using the definition of redshift provided above, the equation

$$1 + z = \frac{a_{\mathrm{now}}}{a_{\mathrm{then}}}$$

is obtained. In an expanding universe such as the one we inhabit, the scale factor is monotonically increasing as time passes, thus, z is positive and distant galaxies appear redshifted.
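As an illustrative special case (not an observed value): if the light now being received was emitted when the scale factor was half its present value, $a_{\mathrm{then}} = a_{\mathrm{now}}/2$, then $1 + z = 2$, so $z = 1$ and every wavelength arrives stretched to twice its emitted value.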
Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time-redshift relation. Denote a density ratio as Ω0:

$$\Omega_0 = \frac{\rho}{\rho_{\mathrm{crit}}}$$

with ρcrit the critical density demarcating a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per thousand liters of space. At large redshifts one finds:

$$t(z) \approx \frac{2}{3 H_0 \sqrt{\Omega_0}}\,(1 + z)^{-3/2}$$
where H0 is the present-day Hubble constant, and z is the redshift.
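As a purely illustrative check of this relation (values assumed, not from the text): with $\Omega_0 = 1$ and $H_0 \approx 70$ km/s/Mpc (so $1/H_0 \approx 14$ billion years), an object observed at $z = 3$ is seen as it was at a cosmic time of roughly $t \approx \frac{2}{3} \times 14\ \mathrm{Gyr} \times 4^{-3/2} \approx 1.2$ billion years after the Big Bang.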
Distinguishing between cosmological and local effects:
For cosmological redshifts of z < 0.01 additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble Law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of space. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, "Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..." Steven Weinberg clarified, "The increase of wavelength from emission to absorption of light does not depend on the rate of change of a(t) [here a(t) is the Robertson-Walker scale factor] at the times of emission or absorption, but on the increase of a(t) in the whole period from emission to absorption."
Popular literature often uses the expression "Doppler redshift" instead of "cosmological redshift" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equation, which is instead characterized by special relativity; thus a recession velocity v > c is impossible for a Doppler redshift while, in contrast, v > c is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of light. More mathematically, the viewpoint that "distant galaxies are receding" and the viewpoint that "the space between galaxies is expanding" are related by changing coordinate systems. Expressing this precisely requires working with the mathematics of the Friedmann-Robertson-Walker metric.
If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted.
Main article: http://en.wikipedia.org/wiki/Gravitational_redshift
In the theory of general relativity, there is time dilation within a gravitational well. This is known as the gravitational redshift or Einstein Shift. The theoretical derivation of this effect follows from the Schwarzschild solution of the Einstein equations which yields the following formula for redshift associated with a photon traveling in the gravitational field of an uncharged, nonrotating, spherically symmetric mass:

$$1 + z = \frac{1}{\sqrt{1 - \dfrac{2GM}{rc^2}}}$$

where
- $G$ is the gravitational constant,
- $M$ is the mass of the object creating the gravitational field,
- $r$ is the radial coordinate of the source (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and
- $c$ is the speed of light.
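As a rough illustrative estimate (the numbers are assumed, standard values for the Earth): for light escaping from the Earth's surface to a distant observer, $\frac{2GM}{rc^2} \approx 1.4 \times 10^{-9}$, giving $z \approx 7 \times 10^{-10}$, an extremely small shift, consistent with the statement below that the effect is very small but measurable.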
This gravitational redshift result can be derived from the assumptions of special relativity and the equivalence principle; the full theory of general relativity is not required.
The effect is very small but measurable on Earth using the Mössbauer effect and was first observed in the Pound-Rebka experiment. However, it is significant near a black hole, and as an object approaches the event horizon the red shift becomes infinite. It is also the dominant cause of large angular-scale temperature fluctuations in the cosmic microwave background radiation (see Sachs-Wolfe effect).
Observations in astronomy:
The redshift observed in astronomy can be measured because the emission and absorption spectra for atoms are distinctive and well known, calibrated from spectroscopic experiments in laboratories on Earth. When the redshift of various absorption and emission lines from a single astronomical object is measured, z is found to be remarkably constant. Although distant objects may be slightly blurred and lines broadened, it is by no more than can be explained by thermal or mechanical motion of the source. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. Alternative hypotheses and explanations for redshift such as tired light are not generally considered plausible.
Spectroscopy, as a measurement, is considerably more difficult than simple photometry, which measures the brightness of astronomical objects through certain filters. When photometric data is all that is available (for example, the Hubble Deep Field and the Hubble Ultra Deep Field), astronomers rely on a technique for measuring photometric redshifts. Due to the broad wavelength ranges in photometric filters and the necessary assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations. However, photometry does at least allow a qualitative characterization of a redshift. For example, if a sun-like spectrum had a redshift of z = 1, it would be brightest in the infrared rather than at the yellow-green color associated with the peak of its blackbody spectrum, and the light intensity would be reduced in the filter by a factor of four, $(1 + z)^2$. Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.)
In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts have enabled astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries, a method first employed in 1868 by British astronomer William Huggins. Similarly, small redshifts and blueshifts detected in the spectroscopic measurements of individual stars are one way astronomers have been able to diagnose and measure the presence and characteristics of planetary systems around other stars and have even made very detailed differential measurements of redshifts during planetary transits to determine precise orbital parameters. Finely detailed measurements of redshifts are used in helioseismology to determine the precise movements of the photosphere of the Sun. Redshifts have also been used to make the first measurements of the rotation rates of planets, velocities of interstellar clouds, the rotation of galaxies, and the dynamics of accretion onto neutron stars and black holes which exhibit both Doppler and gravitational redshifts. Additionally, the temperatures of various emitting and absorbing objects can be obtained by measuring Doppler broadening - effectively redshifts and blueshifts over a single emission or absorption line. By measuring the broadening and shifts of the 21-centimeter hydrogen line in different directions, astronomers have been able to measure the recessional velocities of interstellar gas, which in turn reveals the rotation curve of our Milky Way. Similar measurements have been performed on other galaxies, such as Andromeda. As a diagnostic tool, redshift measurements are one of the most important spectroscopic measurements made in astronomy.
The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about z = 1089 (z = 0 corresponds to present time), and it shows the state of the Universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang.
The luminous point-like cores of quasars were the first "high-redshift" (z > 0.1) objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies.
For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about the year 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law.
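As a purely illustrative application of Hubble's law (values assumed, not from the text): taking $H_0 \approx 70$ km/s/Mpc, a galaxy at a distance of 100 Mpc recedes at roughly $v = H_0 d \approx 7{,}000$ km/s, so in the low-redshift regime where the linear law holds, $z \approx v/c \approx 0.023$.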
Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a Fingers of God effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass to light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter.
The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble "constant", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content.
While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, recent observations of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate.
See also: List of most distant objects by type
Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest confirmed spectroscopic redshift of a galaxy is that of UDFy-38135539 at a redshift of z = 8.6, corresponding to just 600 million years after the Big Bang. The previous record was held by IOK-1, at a redshift z = 6.96, corresponding to just 750 million years after the Big Bang. Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift z ≈ 7.6 and the next highest being z ≈ 7.0. The most distant observed gamma ray burst was GRB 090423, which had a redshift of z ≈ 8.2. The most distant known quasar, ULAS J1120+0641, is at z = 7.1. The highest known redshift radio galaxy (TN J0924-2201) is at a redshift z = 5.2 and the highest known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at z = 6.42.
Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs).
The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a comoving distance of more than 46 billion light years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of . Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of ) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of .
Main article: http://en.wikipedia.org/wiki/Redshift_survey
With the advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been formed to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect.
The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the Universe, measuring redshifts for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. The Sloan Digital Sky Survey (SDSS) is ongoing as of 2013 and aims to measure the redshifts of around 3 million objects. SDSS has recorded redshifts for galaxies as high as 0.8, and has been involved in the detection of quasars beyond z = 6. The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a high-redshift complement to SDSS and 2dF.
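As an illustration of how a redshift survey combines angular positions with redshifts to build a three-dimensional map, the sketch below converts hypothetical (right ascension, declination, redshift) catalogue entries into Cartesian comoving coordinates. For brevity it uses the low-redshift approximation D ≈ cz/H0; the catalogue values and the Hubble constant are illustrative assumptions only.

import numpy as np

H0 = 70.0              # assumed Hubble constant, km/s/Mpc
c_light = 299792.458   # speed of light, km/s

def survey_to_cartesian(ra_deg, dec_deg, z):
    """Convert (RA, Dec, z) to comoving x, y, z in Mpc using D ~ c*z/H0 (valid only at low z)."""
    d = c_light * np.asarray(z, dtype=float) / H0   # radial comoving distance, Mpc
    ra = np.radians(ra_deg)
    dec = np.radians(dec_deg)
    x = d * np.cos(dec) * np.cos(ra)
    y = d * np.cos(dec) * np.sin(ra)
    zc = d * np.sin(dec)
    return np.column_stack([x, y, zc])

# A hypothetical three-galaxy mini-catalogue (illustrative values)
ra = np.array([150.1, 150.4, 210.8])       # degrees
dec = np.array([2.2, 2.5, 5.1])            # degrees
redshift = np.array([0.020, 0.021, 0.080])
print(survey_to_cartesian(ra, dec, redshift))

Clustering statistics of the resulting point set (correlation functions, power spectra) are what surveys such as 2dF and SDSS use to characterize the large-scale structure.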
Effects due to physical optics or radiative transfer:
The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases the shifts correspond to a physical energy transfer to matter or other photons rather than being due to a transformation between reference frames. These shifts can be due to such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as "redshifts" and "blueshifts", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as "reddening" rather than "redshifting" which, as a term, is normally reserved for the effects discussed above.
In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated z is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and z is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well.
In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening; similarly, Rayleigh scattering causes the atmospheric reddening of the Sun seen at sunrise and sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line-of-sight.
For a list of scattering processes, see Scattering.
Emergency Response Activity (ERA)
Aim of an atmospheric transport model in connection with numerical weather prediction (NWP) model (scientific information)
3.1 The portion of the atmosphere where the earth's surface (land or water) has a direct influence is called the Atmospheric Boundary Layer (ABL). Since most pollution releases occur in that layer (except for aircraft emissions, volcanic eruptions or high-level bomb blasts), it is important to review some fundamental concepts about the ABL structure.
3.1.1 The main feature of the Atmospheric Boundary Layer is the turbulent nature of the flow. Turbulence reinforces mixing mechanisms and tends to homogenize the properties of the atmospheric fluid much more quickly than would a laminar flow. For example, turbulent mixing is an important factor in preventing local accumulation of anthropogenic pollutants.
3.1.2 The meteorological parameters are affected by the earth's surface through dynamical processes (friction of the air over the surface) and through thermal processes (heating or cooling of the air in contact with the ground).
At the top of the ABL, in the free atmosphere, the wind speed is approximately geostrophic. At the surface, the wind speed reduces to zero over land, and matches the speed of the surface currents over water. Hence a wind shear exists over the depth of the ABL, which dynamically produces turbulence. The stronger the wind aloft, the more intense the generated turbulence. This mechanical turbulence produces a flux of momentum from the atmosphere to the surface of the earth. When there is a difference between the temperature of the surface and the temperature of the air, there is a transfer of energy between the two bodies, and a heat flux is created within the ABL. These fluxes are very different, depending on the vertical temperature gradient. Close to the surface, there exists a layer where the fluxes of heat and momentum are nearly constant with height; this layer is called the surface boundary layer (SBL) or, more simply, the surface layer. In that layer, frictional effects are dominant compared to pressure and Coriolis forces. The scaling length is z0, the roughness length, which is the height above the ground where the wind is assumed to vanish in order to take into account the rough elements of the surface. Generally three states of the ABL are distinguished: neutral, unstable and stable.
3.1.3 In the neutral ABL, the temperature of the surface is equal to the temperature of the air. A truly neutral ABL (potential temperature uniform throughout the whole ABL, only mechanical turbulence) is infrequent. Within the SBL the wind follows a logarithmic vertical profile. Above the SBL, the Coriolis force becomes important and the wind turns (clockwise in the Northern Hemisphere and anticlockwise in the Southern Hemisphere) with altitude. The wind increases with height to become equal to the geostrophic wind in the free atmosphere, both in direction and velocity, at the top of the ABL.
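As a concrete illustration of the logarithmic profile mentioned above, the following sketch (in Python) evaluates the standard neutral surface-layer relation u(z) = (u*/k) ln(z/z0); the friction velocity and roughness length used here are illustrative assumptions.

import numpy as np

KAPPA = 0.4     # von Karman constant
u_star = 0.3    # friction velocity, m/s (assumed)
z0 = 0.1        # roughness length, m (assumed, e.g. low vegetation)

def neutral_wind(z):
    """Mean wind speed at height z in a neutral surface layer (logarithmic profile)."""
    return (u_star / KAPPA) * np.log(z / z0)

for z in (2.0, 10.0, 50.0, 100.0):
    print(f"z = {z:6.1f} m  ->  u ~ {neutral_wind(z):4.2f} m/s")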
3.1.4 In the unstable ABL, the temperature of the surface is greater than the temperature of the air (the surface heats the air). Buoyancy forces compound the mechanical effects and intense turbulence is generated. This layer can be divided into three zones: first, the surface layer (typically tens of meters) with a superadiabatic gradient of temperature and a rather strong wind shear; second, a mixed layer where the potential temperature and winds are almost constant with height; and third, an entrainment zone where there is a temperature inversion which caps the ABL. In the capping inversion zone, the turbulence is damped by a strong stable stratification. Some coherent structures can frequently be identified within the unstable ABL, such as convective cells or warm parcels, and can be studied as a separate subject. The unstable ABL generally occurs during the day, when solar heating is important.
3.1.5 In the stable ABL, the temperature of the surface is lower than the temperature of the air (the air heats the surface). The thermal effects in that case counteract the motions induced by mechanical turbulence. The surface layer shows a subadiabatic gradient of temperature over a depth of roughly ten meters. Pollutants released in that layer remain near the source. The wind is generally weak near the surface and a maximum (low-level jet) is often found at the top of the temperature inversion zone. The stable ABL usually occurs during the night. In those conditions, the stable ABL is capped by an unstable layer, a remnant of the ABL produced during the previous day, in which only the dynamical turbulence remains.
3.1.6 Generally the definition of the height of the ABL is rather arbitrary. For example, the height of the ABL can be identified as the altitude where the mean vertical turbulent fluxes become "negligible". Fortunately, the ABL is often capped by a temperature inversion zone; in that case h is equal to zi, the altitude of the bottom of this capping inversion layer. The typical height of the ABL is about 1500 meters. The diurnal variation of the ABL is illustrated in figure 3 (Stull, 1988).
3.1.7 Many theoretical studies of the ABL have been done. The Ekman model (1902) is interesting because it provides a simple estimate of the wind variation over the whole ABL. Ekman assumed a three-way balance among the Coriolis force, the pressure gradient force and the frictional forces due to turbulent motion. It was further assumed that these frictional forces were proportional to the vertical shear of the mean horizontal wind, that the proportionality factor (the eddy diffusivity coefficient) was constant, and that the pressure gradient forces were constant in the ABL (i.e. constant geostrophic wind). With these assumptions the equations of motion lead to the well-known Ekman wind spiral (see Appendix 1, referenced below). It predicts an angle of 45° between the surface wind and the geostrophic wind. Typically, the observed angle is about 10° in unstable conditions, 15 to 20° in neutral conditions, and 30 to 50° in stable conditions. It is in the stable ABL that the Ekman model applies best.
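The Ekman solution described above has a simple closed form; the short sketch below (in Python) evaluates it for an assumed geostrophic wind and a constant eddy diffusivity, and prints how the wind vector turns and strengthens with height. All parameter values are illustrative assumptions.

import numpy as np

G = 10.0       # geostrophic wind, m/s, taken along the x axis (assumed)
f = 1.0e-4     # Coriolis parameter, 1/s (typical mid-latitude value)
K = 5.0        # constant eddy diffusivity, m^2/s (assumed, as in Ekman's model)
gamma = np.sqrt(f / (2.0 * K))

def ekman_wind(z):
    """Ekman-spiral wind components at height z for a geostrophic wind along x."""
    u = G * (1.0 - np.exp(-gamma * z) * np.cos(gamma * z))
    v = G * np.exp(-gamma * z) * np.sin(gamma * z)
    return u, v

for z in (10, 100, 300, 600, 1000):
    u, v = ekman_wind(z)
    turning = np.degrees(np.arctan2(v, u))   # angle between the wind and the geostrophic wind
    print(f"z = {z:5d} m: u = {u:5.2f} m/s, v = {v:5.2f} m/s, turning = {turning:5.1f} deg")

Near the surface the turning angle tends to the 45° predicted by the model, which, as noted above, is larger than the angles usually observed.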
3.1.8 A widely accepted model to describe the surface layer is provided by the Similarity Theory, which states that the mean and turbulent properties in the layer depend only on the height z and three governing parameters: a buoyancy parameter, the surface wind stress and the surface heat flux. These three governing parameters define a length scale, the Monin-Obukhov length L, a velocity scale u*, and a temperature scale θ*. Wind and temperature profiles are given as universal functions of dimensionless combinations of these scaling parameters together with the roughness parameter z0 (see Appendix 2 for more details, referenced below).
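For reference, the scaling parameters of the similarity theory are commonly written as sketched below (in LaTeX notation), with κ the von Kármán constant, g gravity, and primes denoting turbulent fluctuations; the exact forms used operationally may differ slightly and are detailed in Appendix 2.

\[ u_* = \left( \overline{u'w'}^{\,2} + \overline{v'w'}^{\,2} \right)^{1/4}, \qquad
   \theta_* = -\frac{\overline{w'\theta'}}{u_*}, \qquad
   L = -\frac{u_*^3\,\overline{\theta}}{\kappa\, g\, \overline{w'\theta'}} \]

and the dimensionless wind shear in the surface layer is expressed as a universal function of z/L:

\[ \frac{\kappa z}{u_*} \frac{\partial \overline{u}}{\partial z} = \phi_m\!\left( \frac{z}{L} \right). \]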
3.2 With these basic principles, it is now possible to consider the modelling of the transport and diffusion of a pollutant in the atmosphere. All ATMs are governed by the "advection-diffusion equation", which is the equation of continuity for the concentration of pollutant C.
3.2.1 The advection-diffusion equation states that the time variation of the concentration of pollutant at a point depends on several different physical processes (see Appendix 3, referenced below); a schematic form of the equation is sketched after this list. These processes are:
a) advection or transport by the mean wind.
b) diffusion or mixing by unresolved turbulent wind eddies; in reality it is also a transport process, occurring at scales which cannot be fully resolved and which must be parameterized in some fashion. The combined processes of advection and diffusion are commonly referred to as dispersion.
c) emission describing the processes by which pollutants are released in the atmosphere.
d) depletion describing the processes by which pollutants are removed from the atmosphere. These generally take into account the effects of clouds and precipitation (wet scavenging), radioactive decay, and deposition on the ground due to the various capturing properties of the surface (dry deposition).
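Schematically, and only as a sketch (operational ATMs parameterize each term differently; see Appendix 3), these four processes can be collected into a single equation for the concentration C, with u the mean wind, K the eddy-diffusivity tensor, S the emission term and the bracketed term lumping the depletion processes (radioactive decay constant λ, wet and dry removal rates Λ_wet and Λ_dry):

\[ \frac{\partial C}{\partial t} + \nabla \cdot \left( \mathbf{u}\, C \right)
   = \nabla \cdot \left( \mathbf{K}\, \nabla C \right) + S
   - \left( \lambda + \Lambda_{\mathrm{wet}} + \Lambda_{\mathrm{dry}} \right) C \]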
3.2.2 There are several types of models to simulate the Long Range Transport and Diffusion of pollutants in the atmosphere: they mainly fall into two classes, Lagrangian ATMs and Eulerian ATMs.
Lagrangian models describe fluid elements that follow the instantaneous wind flow. They include all models in which plumes can be broken down into segments, puffs or particles. The advection is directly simulated by computing the trajectories of the plume elements as they move in the mean wind field. In models where the plume is modelled by a relatively low number of elements (puffs or plume segments), diffusion is usually simulated by a Gaussian model applied to each plume element, with the standard deviation calculated taking into account the ABL structure. Some ATMs use a very large number of particles, and diffusion is modelled by adding a "semi-random" component to the large-scale wind, using Monte Carlo techniques. The probability density function for the random component, which simulates the atmospheric turbulence, is also dependent on the state of the ABL. The trajectory of each particle is calculated using these pseudovelocities, and concentrations are calculated by counting the number of particles within a certain volume.
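A minimal sketch (in Python) of the Monte Carlo particle approach described above: each particle is advected by the mean wind and given a semi-random displacement representing unresolved turbulence, and concentration is estimated by counting particles in a volume. The wind, time step and turbulent velocity scale are illustrative assumptions, not values from any operational ATM.

import numpy as np

rng = np.random.default_rng(0)

n_particles = 10000
dt = 60.0                        # time step, s (assumed)
u_mean = np.array([5.0, 1.0])    # mean horizontal wind, m/s (assumed)
sigma_turb = 0.5                 # turbulent velocity scale, m/s (assumed, ABL-dependent)

x = np.zeros((n_particles, 2))   # all particles start at the source (0, 0)

for _ in range(120):             # two hours of transport
    x += u_mean * dt                                      # advection by the resolved mean wind
    x += rng.normal(0.0, sigma_turb, size=x.shape) * dt   # semi-random turbulent displacement

# crude "concentration": particles inside a 1 km x 1 km box around the plume centre
centre = u_mean * 120 * dt
inside = np.all(np.abs(x - centre) < 500.0, axis=1)
print("particles in the box:", int(inside.sum()))

In a real Lagrangian ATM the statistics of the random component depend on the local ABL state, and the counting volume corresponds to the model's output grid.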
Eulerian models directly solve the diffusion equation at every point of a grid, using numerical techniques that allow specific treatments for each physical process (finite difference method, splitting, finite elements method ...). The turbulent fluxes are commonly assumed to be proportional to the mean gradient according to the K gradient theory (first order closure). The horizontal and vertical K coefficients are generally dependent on the ABL structure. Precautions have to be taken in order to minimize the artificial diffusion frequently introduced by the numerical approximations.
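For comparison, a minimal Eulerian sketch (in Python): one explicit time step of a one-dimensional advection-diffusion equation on a grid, using first-order upwind advection and centred diffusion. The grid spacing, wind and K value are illustrative assumptions; note that the upwind scheme itself introduces artificial numerical diffusion, which is exactly the kind of effect mentioned above that operational models try to minimize.

import numpy as np

nx, dx, dt = 200, 1000.0, 50.0   # number of cells, grid spacing (m), time step (s) -- assumed
u, K = 5.0, 50.0                 # wind speed (m/s) and eddy diffusivity (m^2/s) -- assumed

C = np.zeros(nx)
C[10] = 1.0                      # unit "puff" released in a single grid cell

def step(C):
    """One explicit step: first-order upwind advection plus centred diffusion."""
    Cn = C.copy()
    Cn[1:] -= u * dt / dx * (C[1:] - C[:-1])            # upwind advection (assumes u > 0)
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx ** 2
    Cn += K * dt * lap                                   # turbulent diffusion (K-gradient closure)
    return Cn

for _ in range(600):             # advance 600 steps (about 8 hours of transport)
    C = step(C)

print("peak concentration:", round(float(C.max()), 4), "in cell", int(np.argmax(C)))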
3.3 The modelling of the source of emission, the source term description, is a crucial part of ATMs. In most cases, the processes by which pollutants are injected in the atmosphere (explosion, fire, high pressure jet, etc.) happen at scales well below those which are resolved by ATMs. The source effects have to be parameterized; the type of parameterization will depend on whether the ATM is Lagrangian or Eulerian.
3.3.1 Information on the initial release height and its vertical extent is essential. This is illustrated in figure 4, which shows three air mass trajectories starting at the same time but at different heights. Two of them are in the ABL (500 m and 1000 m), the third one is in the free atmosphere. The results are very different, even for the "in ABL" trajectories. In this extreme case, a release in the lower layer (< 500 m) would have moved south then westward along the France-Spain border towards the Atlantic ocean. A release around 1000 m would have gone straight west over the Atlantic. In the free atmosphere, however, a release would have moved eastward at the beginning, then southward over the Mediterranean sea. Deciding on the proper countermeasures in this particular situation would not have been very easy if no estimate of the vertical extent of the release had been available.
3.3.2 The time scenario is also of major importance. Of course the state of the atmosphere changes considerably with time: frontal passages, movement of pressure systems, diurnal evolution of the ABL, etc. These will profoundly affect the evolution of the pollution cloud. For example, if a front moves over the source area, wet deposition could be a major factor for ground contamination; the deposition areas would be completely different depending on whether the release had begun before or after the arrival of the front. Furthermore, the transport/diffusion processes would also be very different and the pollutant plumes would reach different regions.
3.3.3 If the vertical structure and the time scenario of the source term are well described, a rough estimate of the amount of pollutant is generally enough to decide on suitable countermeasures: protection of the population, food restrictions, etc. In certain cases, the area of maximum air concentration and the deposition areas need only be qualitatively known, and a more accurate estimation of the plume intensity would come out of ground measurements. If accurate estimates of the amount of pollutant released are available (which seems unlikely in an emergency), the ATM could yield outputs of qualitative and quantitative interest.
3.3.4 Information on the radiological species released is important because parameters such as dry deposition velocity, scavenging ratio, and half life are dependent on the type of pollutant; all of the depletion terms of the diffusion equation are directly related to the nature of the released elements.
3.4 Generally, ATMs used during emergencies are diagnostic or "off-line" models, in order to allow for a fast and timely response. The dispersion calculations are not performed within a full-scale NWP model; rather, the ATMs are stand-alone models which must be provided with meteorological data from NWP models as input. There is therefore an impact of NWP models on the ATMs. NWP models provide data on a grid with a specific scale, and not all the information produced by NWP models is necessarily available. ATMs can only simulate phenomena of the same scale as the input data mesh, and sub-grid-scale phenomena have to be parameterized. That is the main reason why processes such as convection or scavenging are treated in a cruder fashion in operational ATMs than in research ATMs. ATMs are of course dependent on the quality of the input meteorological data. A source of uncertainty is the precipitation field. NWP models generally only provide rain fluxes at the ground, so the depth of the wet layer must be estimated by the ATMs. The results for wet deposition may not be very accurate, even when the precipitation areas are well estimated. ATMs will reproduce, and sometimes amplify, the errors of the NWP models. The ATMES experiment (Klug et al., 1992), an evaluation of different ATMs for the Chernobyl accident, showed that the evolution of a pollution cloud can be depicted fairly well when analysed/observed meteorological fields are used. However, there is a deterioration of the models' performance when forecast meteorological fields are used. That is why an evaluation of the NWP forecasts by senior meteorologists is essential. Experienced forecasters can advise the ATM specialists about the quality of the forecast meteorological fields, so that the quality of the outputs of ATMs can be assessed.
3.5 ATMs generally provide two kinds of outputs: air concentrations of pollutant (in units per cubic meter) at different time steps and different levels, and wet and dry deposition (in units per square meter) at different time steps. Following the Montréal workshop on users' requirements, RSMCs are required to provide charts of time-integrated pollutant concentration within the 500-meter layer above the ground, charts of total deposition (wet plus dry) accumulated from the beginning of the release to the end of the simulation, and air mass trajectories originating from three different levels.
3.5.1 Air mass trajectories represent the motion of an air parcel within the three dimensional wind field. These trajectories can reveal interesting information about the vertical structure of the atmosphere and the differences in the flow at 500m, 1500m and 3000m in the vicinity of the source of emission. It can also help explain the dispersing plume shapes. Trajectories can also provide information about differences in the predicted wind fields from different meteorological models.
3.5.2 The time-integrated pollutant concentration parameter is obtained by computing the mean air concentration over the first 500 meters at each time step, and then integrating it over a predefined period. The results (in becquerel seconds per cubic meter) can then be easily related to the doses received by a human being who remains at a given point during the considered period.
3.5.3 The total cumulated deposition parameter represents, for a radiological pollutant, the activity which is present at the ground at the end of the simulation. On this chart, dry deposition due to the uptake of pollutant by the ground and wet deposition due to precipitation are added. This chart represents the impact at the ground of the radiological event.
Appendix 1: The Ekman wind spiral
Appendix 2: The Surface Boundary Layer and the Similarity Theory
Appendix 3: The advection diffusion equation
Also see :
STULL, R.B., 1988. An Introduction to Boundary Layer Meteorology. Kluwer Academic Publishers, Boston.
KLUG W., Graziani G., Gripa G., Pierce D and C. Tassone, 1992. Evaluation of Long Range Atmospheric Transport Models Using Environmental Radioactivity Data From The Chernobyl Accident: The ATMES Report. Elsevier Science Publishers Ltd., London and New York.
The wave equation is an important partial differential equation which generally describes all kinds of waves, such as sound waves, light waves and water waves. It arises in many different fields, such as acoustics, electromagnetics, and fluid dynamics. Variations of the wave equation are also found in quantum mechanics and general relativity.
The general form of the wave equation for a scalar quantity u is:
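\[ \frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u \]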
Here c is a fixed constant, the speed of the wave's propagation (for a sound wave in air this is about 330 m/s; see speed of sound). For the vibration of a string this can vary widely: on a spiral spring (a Slinky) it can be as slow as a meter per second.
u = u(x, t) is the amplitude, a measure of the intensity of the wave at a particular location x and time t. For a sound wave in air u is the local air pressure; for a vibrating string it is the physical displacement of the string from its rest position. ∇² is the Laplace operator with respect to the location variable(s) x. Note that u may be a scalar or vector quantity.
The basic wave equation is a linear differential equation, which means that the amplitude of two interacting waves is simply the sum of the waves. This also means that the behavior of a wave can be analyzed by breaking up the wave into components. The Fourier transform breaks up a wave into sinusoidal components and is useful for analyzing the wave equation.
The one-dimensional form can be derived by considering a flexible string, stretched between two points on the x-axis. It is
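\[ \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2} \]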
The general solution to this is a Fourier series: an infinite sum of sine and cosine waves. If the domain of the equation is infinite with no boundary conditions, then D'Alembert's method can be used to solve it.
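In that case the solution can be written, in d'Alembert's form, as a superposition of a right-travelling and a left-travelling wave, with the two profile functions F and G determined by the initial displacement and initial velocity:

\[ u(x, t) = F(x - ct) + G(x + ct). \]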
In two dimensions, expanding the Laplacian gives:
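\[ \frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \]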
An example of the solution to the 2-D wave equation is the motion of a tightly-stretched drumhead. In this case, rather than sinusoids, the solutions are combinations of Bessel functions.
The wave equation is the prototypical example of a hyperbolic partial differential equation.
More realistic differential equations for waves allow for the speed of wave propagation to vary with the frequency of the wave, a phenomenon known as dispersion. Another common correction is that, in realistic systems, the speed can also depend on the amplitude of the wave, leading to a nonlinear wave equation.
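One common way of writing such an equation is simply to let the propagation speed depend on the amplitude u:

\[ \frac{\partial^2 u}{\partial t^2} = c(u)^2 \nabla^2 u \]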
The elastic wave equation in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
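\[ \rho\, \frac{\partial^2 \mathbf{u}}{\partial t^2}
   = \mathbf{f} + (\lambda + 2\mu)\, \nabla (\nabla \cdot \mathbf{u})
   - \mu\, \nabla \times (\nabla \times \mathbf{u}) \]

where: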
- λ and μ are the so-called Lamé moduli describing the elastic properties of the medium,
- ρ is density,
- f is the source function (driving force),
- and u is the displacement.
Note that in this equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation.
- Linear Wave Equations at EqWorld: The World of Mathematical Equations.
- Nonlinear Wave Equations at EqWorld: The World of Mathematical Equations.
Technically speaking there are no multi-dimensional arrays in Perl, but you can use arrays in Perl to act as if they had more than one dimension.
In Perl each element of an array can be a reference to another array, but syntactically they would look like a two-dimensional array.
Creating a matrix in Perl
Let's see the following code:
#!/usr/bin/perl
use strict;
use warnings;

my @matrix;
$matrix[0][0] = 'zero-zero';
$matrix[1][1] = 'one-one';
$matrix[1][2] = 'one-two';
We just created an array called @matrix. It is a regular one-dimensional array in Perl, but we accessed it as if it was two dimensional.
What do you think the following line will do?

print $matrix;
I know, it was a trick question. The program won't even compile. You will get the following error:
Global symbol "$matrix" requires explicit package name at ... line .. Execution of ... aborted due to compilation errors.
If you read about the global symbol requires explicit package name error, you will see that it basically means you have not declared a variable. In this case $matrix. Indeed, if you read how to access array elements in Perl, you will see that you'd access the first element of the @matrix by using $matrix[0]. Notice the square brackets after the variable name!
There are 3 things here that can be a bit confusing:
@matrix, $matrix[0] and $matrix. The first two are related. The third is unrelated. The first one is an array. The second one is an element of that array and the third one is an unrelated scalar. If you declare an array such as @matrix you can automatically use $matrix[0] to access its first element, but if you'd also like to use the scalar $matrix you'd need to declare that separately.
That leads us to a warning. While Perl is OK with you having the exact same variable name for an array and for a scalar, it is strongly recommended you don't have them in the same code. It can just confuse the reader.
After this short detour, let's go back to our example:
Let's see what the following would print:
print "$matrix[0]\n";     # ARRAY(0x814dd90)
print "$matrix[0][0]\n";  # zero-zero
print "$matrix[1][1]\n";  # one-one
The first line prints ARRAY(0x814dd90). As I mentioned, Perl does not have multi-dimensional arrays. What you see here is that the first element of the @matrix array is a reference to an internal, so-called anonymous array that holds the actual values. The ARRAY(0x814dd90) is the address of that internal array in memory. You can't do much with this, except knowing that you probably need to "de-reference" that address. In our case that de-referencing is done by the addition of another pair of square brackets.
That way you can get back the original values we put in the array.
Visualizing a multi-dimensional array
There is a module called Data::Dumper, which comes with Perl and can provide a reasonably readable view of the matrix we created. In order to use it, you first need to load it to memory with the use statement. Then you call the Dumper function, passing it a reference to the data structure. The back-slash \, just before the @matrix, creates a reference to the array. The Dumper function serializes the data structure and returns a string, which is then printed by the print function.
use Data::Dumper qw(Dumper); print Dumper \@matrix;
The output will look like this:
$VAR1 = [ [ 'zero-zero' ], [ undef, 'one-one', 'one-two' ] ];
The $VAR1 at the beginning is just a standard name Data::Dumper uses. You can disregard it for now. The rest of the output shows 3 pairs of square brackets. The outermost pair represents the main array we call @matrix. The first internal pair holds a single value (zero-zero). This represents the first row in the matrix. The second internal pair has 3 values. The first one is undef; this is the place of $matrix[1][0], where we have not assigned a value. The other two we assigned.
Two dimensional array or what?
As you can see this resembles a two-dimensional array, but its shape is not rectangular, as you would expect from a matrix. The first row has only one element while the second row has 3 (even if one of them is undef).
In a similar way there could be elements in the @matrix array that have no other dimension. For example, we could write:
$matrix[2] = 'two';
which would change the output of Data::Dumper to this:
$VAR1 = [ [ 'zero-zero' ], [ undef, 'one-one', 'one-two' ], 'two' ];
Here the outer array has 3 elements. The first two are the internal arrays, and the 3rd one is a simple scalar.
So one of the "rows" in the "matrix" does not even have a dimension.
More than 2 dimensions?
What if we add the following code?
$matrix[1][3][0] = 130;
$matrix[1][3][1] = 131;
The Dumper output will look like this:
$VAR1 = [ [ 'zero-zero' ], [ undef, 'one-one', 'one-two', [ 130, 131 ] ], 'two' ];
Look, the second internal array now has a 4th element and that element itself is an array (or rather a reference to an array).
An array in Perl can have any number of "dimensions" and it does not need to form a "regular" shape. Each element can have an internal array. And each element of the internal array can have its own internal array and so on.
Data::Dumper can help us figure out what is in such a data structure.
|
2026-02-05T03:38:13.316625
|
1,008,250
| 3.54806
|
http://psychology.wikia.com/wiki/Cognitivism_(psychology)
|
In psychology, cognitivism is a theoretical approach to understanding the mind, which argues that mental function can be understood by quantitative, positivist and scientific methods, and that such functions can be described as information processing models.
Cognitivism has two major components, one methodological, the other theoretical. Methodologically, cognitivism adopts a positivist approach and the belief that psychology can be (in principle) fully explained by the use of experiment, measurement and the scientific method. This is also largely a reductionist goal, with the belief that individual components of mental function (the 'cognitive architecture') can be identified and meaningfully understood. The second is the belief that cognition consists of discrete, internal mental states (representations or symbols) whose manipulation can be described in terms of rules or algorithms.
Cognitivism became the dominant force in psychology in the late 20th century, replacing behaviorism as the most popular paradigm for understanding mental function. Cognitive psychology is not a wholesale refutation of behaviorism, but rather an expansion that accepts that mental states exist. The shift was due to increasing criticism of behaviorist models towards the end of the 1950s. For example, Chomsky argued that language could not be acquired purely through conditioning, and must be at least partly explained by the existence of internal mental states.
Criticisms of psychological cognitivism
Cognitivism has been criticised in a number of ways.
Phenomenologists and hermeneutic philosophers have criticised the positivist approach of cognitivism for reducing individual meaning to what they perceive as measurements stripped of all significance. They argue that by representing experiences and mental functions as measurements, cognitivism is ignoring the context (cf contextualism) and, therefore, the meaning of these measurements. They believe that it is this personal meaning of experience gained from the phenomenon as it is experienced by a person (what Heidegger called being in the world) which is the fundamental aspect of our psychology that needs to be understood: therefore they argue that a context free psychology is a contradiction in terms. They also argue in favour of holism: that positivist methods cannot be meaningfully used on something which is inherently irreducible to component parts. Hubert Dreyfus has been the most notable critic of cognitivism from this point of view. Humanistic psychology draws heavily on this philosophy, and practitioners have been among the most critical of cognitivism.
In the 1990s, various new theories emerged that challenged cognitivism and the idea that thought was best described as computation. Some of these new approaches, often influenced by phenomenological and post-modernist philosophy, include situated cognition, distributed cognition, dynamicism, embodied cognition. Some thinkers working in the field of artificial life (for example Rodney Brooks) have also produced non-cognitivist models of cognition.
The idea that mental functions can be described as information processing models has been criticised by philosopher John Searle and mathematician Roger Penrose who both argue that computation has some inherent shortcomings which cannot capture the fundamentals of mental processes.
- Penrose uses Gödel's incompleteness theorem (which states that there are mathematical truths which can never be proven in a sufficiently strong mathematical system; any sufficiently strong system of axioms will also be incomplete) and Turing's halting problem (which states that there are some things which are inherently non-computable) as evidence for his position.
- Searle has developed two arguments. The first (well known through his Chinese Room thought experiment) is the 'syntax is not semantics' argument: a program is just syntax, understanding requires semantics, therefore programs (hence cognitivism) cannot explain understanding. The second, which he now prefers but which is less well known, is his 'syntax is not physics' argument: nothing in the world is intrinsically a computer program except as applied, described or interpreted by an observer. Either everything can be described as a computer (and trivially a brain can), in which case this does not explain any specific mental processes, or there is nothing intrinsic in a brain that makes it a computer (program). Both points, he claims, refute cognitivism.
The issues that interest cognitive psychologists centre on the inner mechanisms of human thought and the processes of knowing. Cognitive psychologists have attempted to shed light on mental structures, such as what is stored and how it is represented in the brain, and on mental processes, such as how information is integrated and retrieved. The theoretical assumptions of cognitive psychology help instructional systems in the design of efficient processing strategies for learners to acquire knowledge, e.g. mnemonic devices to reduce the load on short-term memory, rehearsal strategies to maintain information, and the use of metaphors and analogies to relate the meaning of new information to prior knowledge.
- cognitive psychology
- cognitive science
- critical psychology
- symbol grounding
- Important publications in cognitivism
- Costall, A. and Still, A. (eds) (1987) Cognitive Psychology in Question. Brighton: Harvester Press Ltd. ISBN 0710810571
- Searle, J. R. "Is the Brain a Digital Computer?" APA Presidential Address.
Clara cells are dome-shaped cells with short microvilli found in the small airways (bronchioles) of the lungs. Clara cells are found in the ciliated simple epithelium. These cells may secrete glycosaminoglycans to protect the bronchiole lining. Bronchiolar cells gradually increase in number as the number of goblet cells decreases.
They are also known as "club cells" (see Name) and "bronchiolar exocrine cells".
"Clara cells" were originally described by their namesake, Max Clara in 1937. Clara was born in South Tyrol in 1899 and died in 1966. He was a Nazi doctor who used tissue from executed victims of the Third Reich for his research at Leipzig, including the work that led to his discovery of Clara cells. Some scholars believe that the eponymous name of these cells should be changed because of the ethical controversy surrounding the discovery of the cells but other scholars disagree and think that the name should remain because it is a testament to a time when medicine crossed an ethical line. The term "Clara" will be used parenthetically for a 2-year period.
In May 2012, the Respiratory Journal Editors group (comprising the editors of most of the major respiratory journals, including the journals of the American Thoracic Society, the European Respiratory Society and the American College of Chest Physicians agreed to adopt a name change policy starting on January 1, 2013. The terms Clara cell and Clara cell secretory protein will be replaced with club cell and club cell secretory protein, respectively.
One of the main functions of Clara cells is to protect the bronchiolar epithelium. They do this by secreting a small variety of products, including Clara cell secretory protein (CCSP) and a solution similar to the component of the lung surfactant. They are also responsible for detoxifying harmful substances inhaled into the lungs. Clara cells accomplish this with cytochrome P450 enzymes found in their smooth endoplasmic reticulum. Clara cells also act as stem cells, multiplying and differentiating into ciliated cells to regenerate the bronchiolar epithelium.
The respiratory bronchioles represent the transition from the conducting portion to the respiratory portion of the respiratory system. The narrow channels are usually less than 2 mm in diameter and they are lined by a simple cuboidal epithelium, consisting of ciliated cells and non-ciliated Clara cells, which are unique to bronchioles. In addition to being structurally diverse, Clara cells are also functionally variable. One major function they carry out is the synthesis and secretion of the material lining the bronchiolar lumen. This material includes glycosaminoglycans, proteins such as lysozymes, and conjugation of the secretory portion of IgA antibodies. These play an important defensive role, and they also contribute to the degradation of the mucus produced by the upper airways. The heterogeneous nature of the dense granules within the Clara cell's cytoplasm suggests that they may not all have a secretory function. Some of them may contain lysosomal enzymes, which carry out a digestive role, either in defense: Clara cells engulf airborne toxins and break them down via their cytochrome P-450 enzymes (particularly CYP4B1, which is only present in Clara cells) present in their smooth endoplasmic reticulum; or in the recycling of secretory products. Clara cells are mitotically active cells. They divide and differentiate to form both ciliated and non-ciliated epithelial cells.
Role in disease
Clara cells contain Tryptase clara, which is believed to be responsible for cleaving the hemagglutinin surface protein of influenza A virus, thereby activating it and causing the symptoms of flu. When the l7Rn6 protein is disrupted in mice, these mice display severe emphysema at birth as a result of disorganization of the Golgi apparatus and formation of aberrant vesicular structures within Clara cells. Malignant Clara cells are also seen in bronchioalveolar carcinoma of the lung.
- Atkinson JJ, Adair-Kirk TL, Kelley DG, Demello D, Senior RM (2008). "Clara cell adhesion and migration to extracellular matrix". Respir. Res. 9 (1): 1. doi:10.1186/1465-9921-9-1. PMC 2249579. PMID 18179694.
- Peter J. Papadakos; Burkhard Lachmann (29 August 2007). Mechanical Ventilation: Clinical Applications and Pathophysiology. Elsevier Health Sciences. pp. 74–. ISBN 978-0-7216-0186-1. Retrieved 27 May 2011.
- Winkelmann, Andreas; Noack, Thorsten (2010). "The Clara cell - a "Third Reich eponym"?". European Respiratory Journal 36 (4): 722–7. doi:10.1183/09031936.00146609. PMID 20223917.
- Pringle, Heather (2010). "The Dilemma of Pernkopf's Atlas". Science 329 (5989): 274–275. doi:10.1126/science.329.5989.274-b. PMID 20647444.
- Irwin, RS; Augustyn N, French CT, Rice J, Tedeschi V, Welch SJ (2013). "Spread the word about the journal in 2013: from citation manipulation to invalidation of patient-reported outcomes measures to renaming the Clara cell to new journal features". Chest 143: 1–5. PMID 23276834.
- Akram, KM; Lomas NJ, Spiteri MA, Forsyth NR (2013). "Club cells inhibit alveolar epithelial wound repair via TRAIL-dependent apoptosis". Eur Respir J 41: 683–694. doi:10.1183/09031936.00213411. PMID 22790912.
- Taubenberger JK (August 1998). "Influenza virus hemagglutinin cleavage into HA1, HA2: No laughing matter". Proc. Natl. Acad. Sci. U.S.A. 95 (17): 9713–5. doi:10.1073/pnas.95.17.9713. PMC 33880. PMID 9707539.
- Fernández-Valdivia R, Zhang Y, Pai S, Metzker ML, Schumacher A (January 2006). "l7Rn6 Encodes a Novel Protein Required for Clara Cell Function in Mouse Lung Development". Genetics 172 (1): 389–99. doi:10.1534/genetics.105.048736. PMC 1456166. PMID 16157679.
Anatomy of the cerebellum
Drawing of the human brain, showing cerebellum and pons
Vertical cross-section of the human cerebellum, showing folding pattern of the cortex, and interior structures
The anatomy of the cerebellum can be viewed at three levels. At the level of large-scale anatomy, the cerebellum consists of a tightly folded and crumpled layer of cortex, with white matter underneath, several deep nuclei embedded in the white matter, and a fluid-filled ventricle in the middle. At the intermediate level, the cerebellum and its auxiliary structures can be decomposed into several hundred or thousand independently functioning modules or "microzones". At the microscopic level, each module consists of the same small set of neuronal elements, laid out with a highly stereotyped geometry.
The cerebellum is located at the bottom of the brain, with the large mass of the cerebral cortex above it and the portion of the brainstem called the pons in front of it. It is separated from the overlying cerebrum by a layer of leathery dura mater; all of its connections with other parts of the brain travel through the pons. Anatomists classify the cerebellum as part of the metencephalon, which also includes the pons; the metencephalon in turn is the upper part of the rhombencephalon or "hindbrain". Like the cerebral cortex, the cerebellum is divided into two hemispheres; it also contains a narrow midline zone called the vermis. A set of large folds are conventionally used to divide the overall structure into ten smaller "lobules". Because of its large number of tiny granule cells, the cerebellum contains more neurons than the rest of the brain put together, but it only takes up 10% of total brain volume. The cerebellum receives nearly 200 million input fibers; in contrast, the optic nerve is composed of a mere one million fibers.
The unusual surface appearance of the cerebellum conceals the fact that the bulk of the structure is made up of a very tightly folded layer of gray matter, the cerebellar cortex. It has been estimated that if the human cerebellar cortex were completely unfolded (which could not be done without tearing it in several places), it would give rise to a layer of neural tissue about 1 meter long and 10 centimeters wide—a total surface area of 500-1000 square cm, all packed within a volume of 100-150 cubic cm. Underneath the gray matter of the cortex lies white matter, made up largely of myelinated nerve fibers running to and from the cortex. Embedded within the white matter—which is sometimes called the arbor vitae (Tree of Life) in the cerebellum because of its branched, tree-like appearance—are four deep cerebellar nuclei.
The cerebellum can be divided according to three different criteria: gross anatomical, phylogenetical, and functional.
Gross anatomical divisions
On gross inspection, three lobes can be distinguished in the cerebellum: the flocculonodular lobe, the anterior lobe (rostral to the "primary fissure"), and the posterior lobe (dorsal to the "primary fissure"). The latter two can be further divided in a midline cerebellar vermis and lateral cerebellar hemispheres.
Phylogenetic and functional divisions
The cerebellum can also be divided in three parts based on both phylogenetic criteria (the evolutionary age of each part) and on functional criteria (the incoming and outgoing connections each part has and the role played in normal cerebellar function). From the phylogenetically oldest to the newest, the three parts are:
|Functional denomination (phylogenetic denomination)||Anatomical parts||Role|
|Vestibulocerebellum (Archicerebellum)||Flocculonodular lobe (and immediately adjacent vermis)||The vestibulocerebellum regulates balance and eye movements. It receives vestibular input from both the semicircular canals and from the vestibular nuclei, and sends fibres back to the medial and lateral vestibular nuclei. It also receives visual input from the superior colliculi and from the visual cortex (the latter via the pontine nuclei, forming a cortico-ponto-cerebellar pathway). Lesions of the vestibulocerebellum cause disturbances of balance and gait.|
|Spinocerebellum (Paleocerebellum)||Vermis and intermediate parts of the hemispheres ("paravermis")||The spinocerebellum regulates body and limb movements. It receives proprioception input from the dorsal columns of the spinal cord (including the spinocerebellar tract) and the trigeminal nerve, as well as from visual and auditory systems. It sends fibres to deep cerebellar nuclei which in turn project to both the cerebral cortex and the brain stem, thus providing modulation of descending motor systems. The spinocerebellum contains sensory maps as it receives data on the position of various body parts in space: in particular, the vermis receives fibres from the trunk and proximal portions of limbs, while the intermediate parts of the hemispheres receive fibres from the distal portions of limbs. The spinocerebellum is able to elaborate proprioceptive input in order to anticipate the future position of a body part during the course of a movement, in a "feed forward" manner.|
|Cerebrocerebellum (Neocerebellum, Pontocerebellum)||Lateral parts of the hemispheres||The neocerebellum is involved in planning movement and evaluating sensory information for action. It receives input exclusively from the cerebral cortex (especially the parietal lobe) via the pontine nuclei (forming cortico-ponto-cerebellar pathways), and sends fibres mainly to the ventrolateral thalamus (in turn connected to motor areas of the premotor cortex and primary motor area of the cerebral cortex) and to the red nucleus (in turn connected to the inferior olivary nucleus, which links back to the cerebellar hemispheres). The neocerebellum is involved in planning movement that is about to occur and has purely cognitive functions as well.|
Much of what is understood about the functions of the cerebellum stems from careful documentation of the effects of focal lesions in human patients who have suffered from injury or disease or through animal lesion research.
As explained in more detail in the Function section, the cerebellum differs from most other brain areas in that the flow of neural signals through it is almost entirely unidirectional: there are virtually no backward connections between its neuronal elements. Thus the most logical way to describe the cellular structure is to begin with the inputs and follow the sequence of connections through to the outputs.
The deep nuclei of the cerebellum act as the main centers of communication, and the four different nuclei of the cerebellum (dentate, interpositus, fastigial, and vestibular) receive and send information to specific parts of the brain. In addition, these nuclei receive both inhibitory and excitatory signals from other parts of the brain which in turn affect the nucleus's outgoing signals.
The cytoarchitecture (cellular organization) of the cerebellum is highly uniform, with connections organized into a rough, three-dimensional array of perpendicular circuit elements. This organizational uniformity makes the nerve circuitry relatively easy to study. To envision this "perpendicular array", one might imagine a tree-lined street with wires running straight through the branches of one tree to the next.
There are three layers to the cerebellar cortex; from outer to inner layer, these are the molecular, Purkinje, and granular layers. The function of the cerebellar cortex is essentially to modulate information flowing through the deep nuclei. The microcircuitry of the cerebellum is schematized in Figure 5. Mossy and climbing fibers carry sensorimotor information into the deep nuclei, which in turn pass it on to various premotor areas, thus regulating the gain and timing of motor actions. Mossy and climbing fibers also feed this information into the cerebellar cortex, which performs various computations, resulting in the regulation of Purkinje cell firing. Purkinje neurons feed back into the deep nuclei via a potent inhibitory synapse. This synapse regulates the extent to which mossy and climbing fibers activate the deep nuclei, and thus control the ultimate effect of the cerebellum on motor function. The synaptic strength of almost every synapse in the cerebellar cortex has been shown to undergo synaptic plasticity. This allows the circuitry of the cerebellar cortex to continuously adjust and fine-tune the output of the cerebellum, forming the basis of some types of motor learning and coordination. Each layer in the cerebellar cortex contains the various cell types that comprise this circuitry.
This outermost layer of the cerebellar cortex contains two types of inhibitory interneurons: the stellate and basket cells. It also contains the dendritic arbors of Purkinje neurons and parallel fiber tracts from the granule cells. Both stellate and basket cells form GABAergic synapses onto Purkinje cell dendrites.
The middle layer contains only one type of cell body—that of the large Purkinje cell. Purkinje cells are the primary integrative neurons of the cerebellar cortex and provide its sole output. Purkinje cell dendrites are large arbors with hundreds of spiny branches reaching up into the molecular layer (Fig. 6). These dendritic arbors are flat—nearly all of them lie in planes—with neighboring Purkinje arbors in parallel planes. Each parallel fiber from the granule cells runs orthogonally through these arbors, like a wire passing through many layers. Purkinje neurons are GABAergic—meaning they have inhibitory synapses—with the neurons of the deep cerebellar and vestibular nuclei in the brainstem. Each Purkinje cell receives excitatory input from 100,000 to 200,000 parallel fibers. Parallel fibers are said to be responsible for the simple (all or nothing, amplitude invariant) spiking of the Purkinje cell.
Purkinje cells also receive input from the inferior olivary nucleus via climbing fibers. A good mnemonic for this interaction is the phrase "climb the other olive tree", given that climbing fibers originate from the contralateral inferior olive. In striking contrast to the 100,000-plus inputs from parallel fibers, each Purkinje cell receives input from exactly one climbing fiber; but this single fiber "climbs" the dendrites of the Purkinje cell, winding around them and making a large number of synapses as it goes. The net input is so strong that a single action potential from a climbing fiber is capable of producing a "complex spike" in the Purkinje cell: a burst of several spikes in a row, with diminishing amplitude, followed by a pause during which simple spikes are suppressed.
Just underneath the Purkinje layer are the Lugaro cells whose very long dendrites travel along the boundary between the Purkinje and the granular layers.
The innermost layer contains the cell bodies of three types of cells: the numerous and tiny granule cells, the slightly larger unipolar brush cells and the much larger Golgi cells. Mossy fibers enter the granular layer from their main point of origin, the pontine nuclei. These fibers form excitatory synapses with the granule cells and the cells of the deep cerebellar nuclei. The granule cells send their T-shaped axons—known as parallel fibers—up into the superficial molecular layer, where they form hundreds of thousands of synapses with Purkinje cell dendrites. The human cerebellum contains on the order of 60 to 80 billion granule cells, making this single cell type by far the most numerous neuron in the brain (roughly 70% of all neurons in the brain and spinal cord, combined). Golgi cells provide inhibitory feedback to granule cells, forming a synapse with them and projecting an axon into the molecular layer.
Relationship with cerebral cortex
The local field potentials of the neocortex and cerebellum oscillate coherently at 6–40 Hz in awake, behaving animals. These oscillations appear to be under the control of output from the cerebral cortex. This output is thought to be mediated by layer 5/6 neurons of the neocortex that project either to the pons or to the inferior olive. Via the pons, the signal travels through mossy fibers that synapse with granule and Golgi neurons, with the granule cells then targeting Purkinje neurons via their excitatory parallel fibers; via the inferior olive, it travels through excitatory climbing fiber inputs to Purkinje neurons. The cerebellum returns this output to the cerebral cortex through the ventrolateral thalamus, completing the loop.
The SCA branches off the lateral portion of the basilar artery, just inferior to its bifurcation into the posterior cerebral artery. Here, it wraps posteriorly around the pons (to which it also supplies blood) before reaching the cerebellum. The SCA supplies blood to most of the cerebellar cortex, the cerebellar nuclei, and the superior cerebellar peduncles.
The AICA branches off the lateral portion of the basilar artery, just superior to the junction of the vertebral arteries. From its origin, it branches along the inferior portion of the pons at the cerebellopontine angle before reaching the cerebellum. This artery supplies blood to the anterior portion of the inferior cerebellum, the middle cerebellar peduncle, and to the facial (CN VII) and vestibulocochlear nerves (CN VIII). Obstruction of the AICA can cause paresis, paralysis, and loss of sensation in the face; it can also cause hearing impairment. Moreover, it could cause an infarct of the cerebellopontine angle. This could lead to hyperacusia (dysfunction of the stapedius muscle, innervated by CN VII) and vertigo (wrong interpretation from the vestibular semi-circular canal's endolymph acceleration caused by alteration of CN VIII).
The PICA branches off the lateral portion of the vertebral arteries just inferior to their junction with the basilar artery. Before reaching the inferior surface of the cerebellum, the PICA sends branches into the medulla, supplying blood to several cranial nerve nuclei. In the cerebellum, the PICA supplies blood to the posterior inferior portion of the cerebellum, the inferior cerebellar peduncle, the nucleus ambiguus, the vagus motor nucleus, the spinal trigeminal nucleus, the solitary nucleus, and the vestibulocochlear nuclei.
Variations among vertebrates
There is considerable variation in the size and shape of the cerebellum in different vertebrate species. It is generally largest in cartilaginous and bony fish, birds, and mammals, but somewhat smaller in reptiles. The large paired and convoluted lobes found in humans are typical of mammals, but the cerebellum is generally a single median lobe in other groups, and is either smooth or only slightly grooved. In mammals, the neocerebellum is the major part of the cerebellum by mass, but in other vertebrates, it is typically the spinocerebellum.
In amphibians, lampreys, and hagfish the cerebellum is little developed; in the latter two groups it is barely distinguishable from the brain-stem. Although the spinocerebellum is present in these groups, the primary structures are small paired nuclei corresponding to the vestibulocerebellum.
Cerebellar peduncles
The cerebellum follows a rule of "threes", with three major input and output peduncles (fiber bundles). These are the superior (brachium conjunctivum), middle (brachium pontis), and inferior (restiform body) cerebellar peduncles.
- Superior: While some afferent fibers from the anterior spinocerebellar tract are conveyed to the anterior cerebellar lobe via this peduncle, most of its fibers are efferents; the superior cerebellar peduncle is thus the major output pathway of the cerebellum. Most of the efferent fibers originate within the dentate nucleus, which in turn projects to various midbrain structures including the red nucleus, the ventral lateral/ventral anterior nucleus of the thalamus, and the medulla. The dentatorubrothalamocortical (dentate nucleus > red nucleus > thalamus > premotor cortex) and cerebellothalamocortical (cerebellum > thalamus > premotor cortex) pathways are two major pathways that pass through this peduncle and are important in motor planning.
- Middle: This is composed entirely of afferent fibers originating within the pontine nuclei as part of the massive corticopontocerebellar tract (cerebral cortex > pons > cerebellum). These fibers descend from the sensory and motor areas of the cerebral neocortex and make the middle cerebellar peduncle the largest of the three cerebellar peduncles.
- Inferior: This carries many types of input and output fibers that are mainly concerned with integrating proprioceptive sensory input with vestibular motor functions such as balance and posture maintenance. Proprioceptive information from the body is carried to the cerebellum via the dorsal spinocerebellar tract, which passes through the inferior cerebellar peduncle and synapses within the paleocerebellum; vestibular information projects onto the archicerebellum. The climbing fibers of the inferior olive also run through the inferior cerebellar peduncle. In addition, this peduncle carries information directly from Purkinje cells out to the vestibular nuclei in the dorsal brainstem, located at the junction between the pons and medulla.
Input reaches the cerebellum from three main sources, which fall into two categories: mossy fibers and climbing fibers. Mossy fibers originate either from the pontine nuclei, clusters of neurons in the pons that carry information from the contralateral cerebral cortex, or from the spinocerebellar tract, whose origin lies in the ipsilateral spinal cord; climbing fibers arise from the inferior olive. Most of the output from the cerebellum initially synapses onto the deep cerebellar nuclei before exiting via the three peduncles. The most notable exception is the direct inhibition of the vestibular nuclei by Purkinje cells.
Development
During the early stages of embryonic development, the brain starts to form in three distinct segments: the prosencephalon, mesencephalon, and rhombencephalon. The rhombencephalon is the most caudal (toward the tail) segment of the embryonic brain; it is from this segment that the cerebellum develops. Along the embryonic rhombencephalic segment develop eight swellings, called rhombomeres. The cerebellum arises from two rhombomeres located in the alar plate of the neural tube, a structure that eventually forms the brain and spinal cord. The specific rhombomeres from which the cerebellum forms are rhombomere 1 (Rh.1) caudally (near the tail) and the "isthmus" rostrally (near the front).
Two primary regions are thought to give rise to the neurons that make up the cerebellum. The first is the ventricular zone in the roof of the fourth ventricle; this area produces the Purkinje cells and deep cerebellar nuclear neurons, which are the primary output neurons of the cerebellar cortex and cerebellum. The second germinal zone (cellular birthplace) is known as the rhombic lip; its neurons migrate by human embryonic week 27 to the external granular layer. This layer of cells, found on the exterior of the cerebellum, produces the granule neurons, which then migrate inward to form a layer known as the internal granule layer. The external granular layer ceases to exist in the mature cerebellum, leaving only granule cells in the internal granule layer. The cerebellar white matter may be a third germinal zone in the cerebellum; however, its function as a germinal zone is controversial.
http://www.drpeterjdadamo.com/wiki/wiki.pl/Mitochondria (id 193,493; edu_score 3.764491; 2026-02-01T15:03:15.853509)
In cell biology, a mitochondrion (plural mitochondria; from Greek mitos, thread, + khondrion, granule) is an organelle, variants of which are found in most eukaryotic cells. Mitochondria are sometimes described as "cellular power plants," because their primary function is to convert organic materials into energy in the form of ATP via the process of oxidative phosphorylation. Usually a cell has hundreds or thousands of mitochondria, which can occupy up to 25% of the cell's cytoplasm. Mitochondria usually have their own DNA (mtDNA); according to the generally accepted endosymbiotic theory, they were originally derived from external organisms.
A mitochondrion contains outer and inner membranes composed of phospholipid bilayers studded with proteins, much like a typical cell membrane. The two membranes, however, have very different properties.
The outer mitochondrial membrane, which encloses the entire organelle, contains numerous integral proteins called porins, which contain a relatively large internal channel (about 2-3 nm) that is permeable to all molecules of 5,000 daltons or less [Alberts, 1994]. Larger molecules can only traverse the outer membrane by active transport. The outer mitochondrial membrane is composed of about 50% phospholipids by weight and contains a variety of enzymes involved in such diverse activities as the elongation of fatty acids, oxidation of epinephrine (adrenaline), and the degradation of tryptophan.
The inner membrane contains proteins with three types of functions [Alberts, 1994]:
- those that carry out the oxidation reactions of the respiratory chain
- ATP synthase, which makes ATP in the matrix
- specific transport proteins that regulate the passage of metabolites into and out of the matrix.
The inner membrane contains more than 100 different polypeptides and has a very high protein-to-phospholipid ratio (more than 3:1 by weight, which works out to roughly 1 protein for every 15 phospholipids). Additionally, the inner membrane is rich in an unusual phospholipid, cardiolipin, which is usually characteristic of bacterial plasma membranes. Unlike the outer membrane, the inner membrane does not contain porins and is highly impermeable; almost all ions and molecules require special membrane transporters to enter or exit the matrix.
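The conversion from the mass ratio to the molecule ratio is easy to sanity-check. The short Python sketch below is illustrative only; the average masses assumed for a membrane protein and a phospholipid are round placeholder values, not figures taken from the text.

```python
# Back-of-the-envelope check of the "1 protein per 15 phospholipids" figure,
# assuming illustrative average masses (~35 kDa per protein, ~775 Da per lipid).
protein_mass_fraction = 0.75      # 3:1 protein-to-phospholipid ratio by weight
lipid_mass_fraction = 0.25
avg_protein_da = 35_000.0         # assumed average membrane-protein mass
avg_lipid_da = 775.0              # assumed average phospholipid mass

proteins_per_gram = protein_mass_fraction / avg_protein_da
lipids_per_gram = lipid_mass_fraction / avg_lipid_da
print(f"~{lipids_per_gram / proteins_per_gram:.0f} phospholipids per protein")
# -> roughly 15, consistent with the ratio quoted above
```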
The mitochondrial matrix
The matrix is the space enclosed by the inner membrane. The matrix contains a highly concentrated mixture of hundreds of enzymes, in addition to the special mitochondrial ribosomes, tRNA, and several copies of the mitochondrial DNA genome. Of the enzymes, the major functions include oxidation of pyruvate and fatty acids, and the citric acid cycle. [Alberts, 1994]
Mitochondrial structure: 1) inner membrane, 2) outer membrane, 3) crista, 4) matrix.
Thus, mitochondria possess their own genetic material, and the machinery to manufacture their own RNAs and proteins. (See: protein synthesis). This nonchromosomal DNA encodes a small number of mitochondrial peptides (13 in humans) that are integrated into the inner mitochondrial membrane, along with polypeptides encoded by genes that reside in the host cell's nucleus.
The inner mitochondrial membrane is folded into numerous cristae (see diagram above), which expand the surface area of the inner mitochondrial membrane, enhancing its ability to generate ATP. In typical liver mitochondria, for example, the surface area, including cristae, is about five times that of the outer membrane. Mitochondria of cells which have greater demand for ATP, such as muscle cells, contain even more cristae than typical liver mitochondria.
Although the primary function of mitochondria is to convert organic materials into cellular energy in the form of ATP, mitochondria play an important role in many metabolic tasks, such as:
- Apoptosis (programmed cell death)
- Glutamate-mediated excitotoxic neuronal injury
- Cellular proliferation
- Regulation of the cellular redox state
- Heme synthesis
- Steroid synthesis
- Heat production (enabling the organism to stay warm).
Some mitochondrial functions are performed only in specific types of cells. For example, mitochondria in liver cells contain enzymes that allow them to detoxify ammonia, a waste product of protein metabolism. A mutation in the genes regulating any of these functions can result in a variety of mitochondrial diseases.
Reproduction and gene inheritance
Mitochondria replicate their DNA and divide mainly in response to the energy needs of the cell; in other words their growth and division is not linked to the cell cycle. When the energy needs of a cell are high, mitochondria grow and divide. When the energy use is low, mitochondria are destroyed or become inactive. At cell division, mitochondria are distributed to the daughter cells more or less randomly during the division of the cytoplasm. Mitochondria divide by binary fission similar to bacterial cell division. Unlike bacteria, however, mitochondria can also fuse with other mitochondria. Sometimes new mitochondria are synthesized in centers that are rich in proteins and polyribosomes needed for their synthesis.
Mitochondrial genes are not inherited by the same mechanism as nuclear genes. At fertilization of an egg by a sperm, the egg nucleus and sperm nucleus each contribute equally to the genetic makeup of the zygote nucleus. In contrast, the mitochondria, and therefore the mitochondrial DNA, usually come from the egg only. At fertilization, a single sperm enters the egg along with the mitochondria that it uses to provide the energy needed for its swimming behavior. However, the mitochondria provided by the sperm are targeted for destruction very soon after entry into the egg. The egg itself contains relatively few mitochondria, but it is these mitochondria that survive and divide to populate the cells of the adult organism. Mitochondria are therefore usually inherited purely down the female line.
http://www.psychologicalscience.org/index.php/publications/observer/2009/september-09/graphing-literacy-in-the-psychology-major.html (id 575,789; edu_score 3.600607; 2026-01-21T05:38:56.066156)
Florence Nightingale and the Creation of a Beautiful Display of Data
Figure 1. Florence Nightingale’s Polar Area (often referred to as the cox comb) graph depicted “The Causes of Mortality in the Army in the East” in the years 1854-55. The twelve sections represent the months in a year. The size of the section representing each month indicates the number of people who died in that particular month. The colors correspond to the different causes of death. (This adaptation of Nightingale’s graph is courtesy of Worth Publishers)
“The most enthusiastic statistician of all” (Porter, 1986, p. 67) among her energetic 19th-century peers was Florence Nightingale, the first woman admitted as a Fellow of the Royal Statistical Society in England. Her popular legacy, of course, is the nursing legend. When the Crimean War broke out, Nightingale directed the entire nursing operation at the war front for the British Army. Her legend began to grow as she instituted practices of basic hygiene, such as changing the bed sheets when new patients entered the hospital. She documented every change that she made so that she could identify what worked, and she succeeded in dramatically reducing the mortality rate (Goldie, 1997).
Florence Nightingale’s response to bureaucratic resistance was statistics, sometimes accompanied by withering sarcasm (Gill, 2005). Her response began with a simple innovation: She kept systematic records of what happened to patients. (We sometimes liken Florence Nightingale to the Count on Sesame Street, obsessively counting every variable she encountered.) Her simple act of using descriptive statistics to catalog daily life in the hospital had huge consequences and is credited by some as having saved the British Army during the Crimean War.
What can we learn from Florence Nightingale’s use of statistics and graphs?
Graphing literacy: Learning the tools of graph construction. In recent years, a number of proficiencies have been championed in academia: writing, oral communication, and even numeracy have been integrated into university curricula, including in psychology. But so far, graphing literacy has been ignored as a mainstream literacy. Unfortunately, graphing often is perceived as a tedious, unimportant, or easily learned task. Yet, Florence Nightingale’s story is just one of many that demonstrate the illumination that a beautiful graph can shed on its subject.
To attain high levels of clarity, the first step is to emphasize to the student the importance of following the rules for creating a graph — only then can students learn to break them with style and purpose. Conventions in graph construction have evolved precisely so that others can easily “read” the story the graph is telling without having to refer to any accompanying text. Within the psychology major, statistics and research methods are ideal courses in which to embed thorough instruction in graphing conventions, but a quick list of guidelines can be covered in almost any psychology course. Indeed, we have become convinced that the need for graphing literacy represents a critical skill that extends across disciplines and is a natural development of advances in computers, art, and the sciences. The first time that a graph shows up within a course’s curriculum — in the text, a journal article, or a student’s presentation — distribute this simple seven-item checklist and have students size up the graph.
- Is there a clear, specific title?
- Do both axes have labels that identify the variables? Do all labels read left-to-right — even the one on the y-axis? If possible, has the graph-maker avoided a key that labels variables in a box separate from the graph?
- Are all terms on the graph the same terms that are used in the text that the graph accompanies? Have all abbreviations been eliminated?
- Are the units of measurement included in the title or data labels?
- Do the values on the axes either go down to zero, or have cut marks (double slashes) to indicate that they do not go down to zero?
- Are colors (preferably shades of grey) used in a simple, clear way?
- Has all chartjunk — moiré vibrations, grids, and ducks — been eliminated?
Figure 2. Some graphs allow therapists to compare the actual rate of improvement of a client in therapy with the expected rate given that client’s characteristics. (Adapted from Lueger et al., 2001; Courtesy of Worth Publishers)
The last item admonishes against “chartjunk,” a term coined by Edward Tufte (2001) that includes any extraneous features in the graph. Moiré vibrations refer to any of the patterns that computers provide as options to fill in bars; stick with subdued, solid colors. Grids refer to background patterns, almost like graph paper, on which the data representations, such as bars, are superimposed; these should never be in a final version of a graph. Ducks are features of the data that have been dressed up to be something other than merely data — for example, dollar signs superimposed above bars representing income; avoid these. Like learning to write clearly, learning how to construct graphs also requires the creators to think clearly and to re-examine what they really intend to communicate.
Including graph-related activities in class will teach students to become more automatic in their critical review of visual presentations that they encounter in their everyday lives. We’ll present three effective activities in this article.
Activity 1: Using any graph that comes up in class, walk through the seven-item checklist described above. After a quick run-through of these items, ask students to paraphrase the story that the graph is telling. Have students work in pairs, and look only at the graph and its title — not at any written materials that might elaborate on the graphs — and write a sentence or two that capture the message of the graph in their own words. This task should be easy with a well-constructed graph. If it’s hard, ask students how the graph creator might have altered the visual display to reduce ambiguity and communicate more effectively. Ask what Florence Nightingale would have done.
Florence Nightingale’s creative graph construction: Persuasive innovation and life-saving tool
Nightingale’s careful record keeping did not accomplish her mission on its own. She needed a graph to tell the story. Florence Nightingale invented a “polar-area” diagram, often now referred to as a “cox comb” graph, so named because it resembled the shape of a rooster’s head. A recreation of this circular chart is shown in Figure 1, and it includes causes of death, numbers of deaths, and months of the year. This graph told her story more clearly and eloquently than descriptive statistics alone ever could.
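For readers who want to experiment with the form, here is a minimal sketch of a polar-area chart in Python with matplotlib. The monthly counts are invented placeholders, not Nightingale's data; for simplicity the wedge radius, rather than its area (as in the original), is scaled to the count, and the colors follow the scheme described later in this article (blue for preventable disease, red for wounds, black for other causes).

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder monthly death counts (NOT Nightingale's actual figures).
months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
preventable = np.array([110, 95, 140, 230, 320, 410, 520, 560, 610, 700, 650, 420])
wounds = np.array([10, 15, 20, 55, 80, 90, 95, 80, 60, 50, 40, 30])
other = np.array([30, 25, 35, 60, 70, 80, 90, 85, 75, 65, 55, 45])

theta = np.linspace(0.0, 2 * np.pi, len(months), endpoint=False)
width = 2 * np.pi / len(months)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
# Stack the three causes within each monthly wedge.
ax.bar(theta, preventable, width=width, color="tab:blue", label="Preventable disease")
ax.bar(theta, wounds, width=width, bottom=preventable, color="tab:red", label="Wounds")
ax.bar(theta, other, width=width, bottom=preventable + wounds, color="black", label="Other")
ax.set_xticks(theta)
ax.set_xticklabels(months)
ax.set_yticklabels([])        # radial tick labels would just add chartjunk here
ax.set_title("Causes of mortality by month (placeholder data)")
ax.legend(loc="lower right", fontsize="small")
plt.show()
```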
Nightingale used innovation to create a compelling graph. Perhaps more importantly, she was active in her choices about how to present her story. Far too often, graphmakers are passive, accepting the choices they are offered. When creating graphs with software, we must remember that the software developers made a number of decisions for us. For example, a common default is to include a vertical label on the y-axis, requiring the consumer of the graph to turn her or his head to read the label. However, with most software, we can click on the label to choose a horizontal orientation. To tell our stories well, we must assert control over the script.
Activity 2: Provide data to your students and have them use software (e.g., Excel, SPSS) to create a graph, passively accepting the default choices of the software developer. You can use archival data (e.g., the mean numbers of season wins for the Red Sox and Yankees over the last 10 years) or student-generated data (e.g., mean scores for women and men on a measure such as those on outofservice.com). Have students critique the default graph using the seven-item checklist. Then have students “play” with the graph, changing defaults to make the graph adhere to these criteria. Have students swap graphs with one another and critique the improved graphs according to the criteria.
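As a concrete illustration of Activity 2, the sketch below assumes Python with matplotlib as the graphing software (Excel or SPSS would serve just as well); the team names and win totals are hypothetical. It shows how default choices can be overridden to satisfy the checklist: a specific title, a horizontal y-axis label, a zero baseline, solid grey bars, and no background grid.

```python
import matplotlib.pyplot as plt

teams = ["Red Sox", "Yankees"]   # hypothetical archival example
mean_wins = [92.3, 95.1]         # hypothetical 10-year mean season wins

fig, ax = plt.subplots()
ax.bar(teams, mean_wins, color="0.5")   # solid grey bars, no moire fill patterns

# Checklist items: clear title, labeled axes with units,
# horizontal y-axis label, baseline at zero, no grid or chartjunk.
ax.set_title("Mean Season Wins, Red Sox vs. Yankees (hypothetical data)")
ax.set_xlabel("Team")
ax.set_ylabel("Mean wins\nper season", rotation=0, ha="right")
ax.set_ylim(0, 110)                     # axis starts at zero, so no cut marks needed
ax.grid(False)                          # no background grid

plt.tight_layout()
plt.show()
```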
Harnessing the story-telling power of graphs: Misleading or clarifying? You and your students will likely encounter graphs organically in your course, whether in the text or in a student presentation. However, you can introduce particularly compelling or especially egregious graphs to illuminate psychological topics in any course. Here are two compelling applications of new ways to visualize data, followed by a misleading graph.
First, psychotherapy is typically discussed in several courses, including Introduction to Psychology, Psychopathology, Psychotherapy, and Psychological Testing courses. Clinical psychology researchers have developed graphing techniques to help therapists predict when the therapy process appears to be leading to a poor outcome by delineating an expected rate of recovery for a specific client, based on the characteristics of the client and her symptoms (Howard et al., 1996). A rendering of one such graph is shown in Figure 2 (Lueger et al., 2001). Based on her characteristics and symptoms, this patient is expected to show initial quick improvement, followed by more gradual, but steady, improvement. Her actual course of therapy, however, is a rapid decline, followed by a plateau, then a rapid improvement. This graph allows a therapist to determine how a client’s actual rate of improvement compares to what would be expected for other individuals with similar characteristics. If therapy progresses more slowly than expected, then both the client and her therapist may be spurred to take action by the discrepancy in the graph. This graph clearly demonstrates that after her initial sharp decline, this client seems to be progressing toward the expected treatment response for someone like her. The therapist and client alike can use this graph to compare the client’s progress to what is expected and initiate important discussions within therapy.
Figure 3. Chevrolet looks superior to its competitors in terms of years on the road, but only until you see that the y-axis starts at 95%! All of these brands do well by this measure.
A second use of visualizing data integrates psychology and geography because geographic information systems (GIS) enable us to layer psychological data sets on top of maps. Arousal mapping, for example, allows us to visually compare “fear surfaces” of perceived versus real danger in a park, neighborhood, or campus. We also can vividly portray the epidemiology of anything we can measure and locate on a map, such as the rates of depression relative to geographical features. GIS can even provide acoustic feedback to non-sighted persons about their spatial environment, an innovation developed on the UC Santa Barbara campus with both practical and theoretical significance, including how we interpret various features of graph construction (see Golledge, 2002; Goodchild, 2000; Goodchild, 2006; Heth & Cornell, 2007; Hirtle, 1998; Montello, 2002). APA now offers an advanced training workshop in how geographic information systems can be applied to the behavioral sciences.
Along with compelling graphs, misleading graphs make great teaching tools and immediately sharpen students’ ability to think critically. For example, graphs designed to persuade (a topic often taught in Introduction to Psychology, Social Psychology, or Organizational Psychology) are often like the ad in Figure 3 that states “more than 98 percent of all Chevy trucks sold in the last 10 years are still on the road.” Before you run out and buy a Chevy truck, it pays to check out the y-axis, which begins at 95 percent. A trivial difference appears to be quite dramatic even though at least 95 percent of each of these brands of trucks is still on the road 10 years later. You can build a file of graphs from your own everyday reading that can be introduced into your courses and ask students to use the checklist to redesign a graph. Or just ask them what Florence Nightingale would think about this particular graph.
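A quick way to demonstrate the effect in class is to plot the same numbers twice, once with the truncated axis and once with a zero baseline. The percentages and competitor brands below are hypothetical stand-ins, not the actual figures from the ad.

```python
import matplotlib.pyplot as plt

# Hypothetical "still on the road after 10 years" percentages.
brands = ["Chevy", "Ford", "Toyota", "Nissan"]
percent = [98.3, 97.0, 96.8, 95.7]

fig, (ax_trunc, ax_full) = plt.subplots(1, 2, figsize=(8, 3))
for ax, ymin, label in [(ax_trunc, 95, "y-axis starts at 95%"),
                        (ax_full, 0, "y-axis starts at 0%")]:
    ax.bar(brands, percent, color="0.6")
    ax.set_ylim(ymin, 100)
    ax.set_title(label)
    ax.set_ylabel("Trucks still\non road (%)", rotation=0, ha="right")
plt.tight_layout()
plt.show()
```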
Activity 3: After introducing wonderful or horrible graphs of your own, students often start bringing in graphs that they encounter. Encourage this behavior by rewarding students with extra credit for bringing in a particularly bad or good graph, along with their critiques using the seven-item checklist and a brief summary of the graph’s message. Time permitting, have students present their graphs in class. Alternately, set up a Web site (e.g., on blackboard.com or another courseware tool) to post students’ graphs and critiques.
Florence Nightingale’s mission: Using statistics and graphs in her service to others: Florence Nightingale’s graph was a powerfully persuasive tool. Perhaps it was her public relations savvy that inspired her to use the color red to represent deaths due to wounds, blue to represent deaths due to preventable causes, and black to represent deaths due to all other causes. The visual portrayal of so much blue compared to so little red, even in war time, forced the British government into a wide variety of health reforms. Many more people were dying due to preventable causes than were dying in battle! It was a brilliant, life-saving graph that was created out of her passion to let the data speak.
Perhaps Florence’s “calling” provided such strength of character, or perhaps she had learned something vital from the frustrations of her youth. Her response to bureaucratic frustration and resistance was not wasted emotion… but impassioned statistics! History has become both kinder to and more precise about Florence Nightingale, a pioneering woman who was recognized then, and increasingly now, as the “passionate statistician” (Cook, 1913).
http://historymatters.gmu.edu/d/101 (id 937,161; edu_score 3.644149; 2026-01-27T06:16:07.621883)
Statistics on women’s work in the early 20th century were invariably misleading: most women worked, but only a minority were formally in the wage labor force. Nowhere was the discrepancy between the domestic ideal and the reality of women’s work lives wider than in rural America. In 1913, when the U.S. Department of Agriculture decided to investigate and document the lives of farm women, it discovered a vast reservoir of discontent. The report, reproduced here, was culled from letters responding to a questionnaire sent to the wives of farmers and commented on all aspects of rural life, especially the enormous burden of labor that these officially non-working women were expected to carry out.
INTRODUCTION
The Secretary of Agriculture, on October 1, 1913, addressed a letter to the housewives of 55,000 crop correspondents, asking them to suggest ways in which the United States Department of Agriculture could render more direct service to the farm women of the United States. This inquiry was prompted by the following extract from a letter addressed to the Secretary by Mr. Clarence Poe, Raleigh, N. C., under date of July 9, 1913:
Have some bulletins for the farmer’s wife as well as for the farmer himself. The farm woman has been the most neglected factor in the rural problem and she has been especially neglected by the National Department of Agriculture. Of course, a few such bulletins are printed, but not enough.
Although the department had issued many bulletins and publications designed to give farm women practical aid in household operations, and to assist them in poultry raising, butter making, gardening, and other farm activities commonly discharged by women, Mr. Poe’s suggestion seemed to merit careful investigation.
Moreover, at the time that Mr. Poe wrote, the Smith-Lever Act, providing for cooperative agricultural extension work, was under discussion by the Congress with prospects of an early passage. This act as drafted, and since passed, provided for “the giving of instruction and practical demonstrations in agriculture and home economics.” This, it was seen, would call on this department to cooperate with the States in furnishing a new type of instruction specifically designed to aid farm women in their important tasks of homemaking and domestic manufacturing. For this reason it seemed especially important to seek information as to the things in which the rural women most needed cooperative assistance. . . .
The following is the text of the Secretary’s letter:
DEPARTMENT OF AGRICULTURE,
OFFICE OF THE SECRETARY,
Washington, D. C., October 1, 1913.
TO HOUSEWIVES IN THE HOMES OF THE OFFICIAL CROP CORRESPONDENTS.
LADIES: The Department of Agriculture is in receipt of a letter in which the writer said: “The farm woman has been the most neglected factor in the rural problem, and she has been especially neglected by the National Department of Agriculture.”
This letter was written not by a woman, but by a broad-minded man so thoroughly in touch with the agricultural and domestic needs of the country that his opinions have great weight.
The Department of Agriculture certainly wishes to render directly to the women of the United States the full aid and service which their important place in agricultural production warrants.
Because we believe that these women themselves are best fitted to tell the department how it can improve its service for them, I respectfully request that you give careful thought to this matter. Then please communicate your ideas to me in the inclosed franked envelope.
Your answers may state your own personal views, or, even better, you may first discuss the question with your women neighbors or in your church societies or women’s organizations and submit an answer representing the combined opinions of the women of your entire community. You are, of course, at liberty to criticize freely, but I would especially urge that you try to make your suggestions constructive ones that we can at once put into effect. All of your suggestions will be carefully read and considered by Government specialists. Many of them will be carried out at once; others as soon as the information sought can be gathered and the necessary machinery for its distribution made ready. Such suggestions as call for revision of existing laws or additional legislation will be referred to the proper committees of the Senate and the House of Representatives.
Answers to this inquiry should reach me not later than November 15, 1913. All answers should be written on only one side of the paper and should be as concise as it is possible to make them.
In order to serve the women of the country, the department from time to time will insert in the weekly issue of the News Letter to official Crop Correspondents special paragraphs or special supplement pages of direct interest to women.
D. F. HOUSTON,
The replies began to arrive from the Eastern States during the second week in October, though the bulk of the answers reached Washington after November 1. Straggling replies came in up to Christmas, and in these were included a number of letters from farm women and other women who formerly lived on the farm, but are residing in cities, who had not been directly addressed but who had learned of the inquiry from the public prints. In all, 2,241 replies were received, and of these 216 were either acknowledgments, statements that the writer could make no suggestions, or irrelevant replies that had no bearing on the general subject. The number of women directly represented, however, is much larger than the tally of the letters would indicate, as many writers transmitted opinions of their neighbors or of women’s clubs, granges, or church organizations. The letters received were in all forms—carefully typewritten statements, notes scribbled on the back or margin of the Secretary’s letter, or painstakingly written on scraps of wrapping paper. Not a few wrote on the margin of the Secretary’s letter that no blank for answer had been inclosed, and this, in connection with the makeshift note paper of others, seems to indicate that on some farms, at least, the ordinary conveniences for correspondence are regarded as luxuries.
In a number of cases the letters were signed by men who wrote either on their own initiative or recorded their wives' views. The pleasant feature of the replies from men was that the vast majority of them seemed to recognize that the women on the farms do not always receive their full due and that improvements are needed to free them from unnecessary drudgery and to make their lives happier, less lonely, and more endurable. Letters from men expressing selfish or narrow views of the rural woman’s place, or resenting the department’s endeavor to serve them, were entirely exceptional. Wherever the writer is a man that fact is indicated in connection with any excerpts from the letters which appear on subsequent pages. Extracts not so marked are from letters written by women. . . .
Because of the interesting human note found in many of the letters, the editors determined to let the writers tell their own story by publishing verbatim extracts from many of the letters, rather than attempting to make a statistical summary of their contents. . . .
Many of the writers asked that their letters be treated confidentially, and for this reason all are published anonymously, with the omission in certain cases of specific allusions which would make possible the identification of the writer. . . .
SCOPE OF THE REPORTS
The present report deals only with letters which discuss the social and labor needs of farm women. Under these headings are included references to better roads, telephones, and mail service as important factors in the social life of the country, and to the long hours and methods of women’s work, which, on many farms, increase isolation and leave little leisure or energy for recreation or intellectual activities. Later reports will deal with (1) the domestic needs of farm women, (2) the educational needs of farm women, and (3) the economic needs of farm women, as indicated by the writers of the different sections.
WOMAN’S LABOR
LONG HOURS AND OVERWORK
The long hours of labor and the overworking of women on the farms form the major part of many letters. Several of the writers stated that it was impossible for them to get any kind of domestic help, even in time of sickness, and commented on the difference between the country home and the city home, where day workers can be obtained in emergencies. Some saw a solution for this difficulty in properly directed immigration. Others suggested inducing the surplus from the overcrowded sections to enter domestic employment on the farm. Coupled closely with this complaint is the fact that conditions of farm life tend to make the younger generation leave the farms and seek employment in city factories and urban occupations, thus making it more difficult for the overworked farm woman to employ the daughter of a neighbor as her assistant.
A large number speak of the extra work put upon women by the employment of large numbers of field laborers who have to be housed and fed; and one or two, while stating that farm help no longer comes from the neighboring farms, object seriously to introducing into their families the rough element now hired. Others seem not to object to the work, but state that under present marketing conditions the returns they receive from the sale of garden truck, poultry and eggs, and milk and butter, do not constitute a legitimate wage.
Many letters from Southern States complain of the heavy work that women have to do in the fields. Cotton hoeing and picking are frequently mentioned as one of the chief hardships. This field work, it is said, leaves the woman no time for anything else.
The following are some of the significant extracts from letters dealing with these phases of the subject:
“One great trouble, perhaps the greatest, is the fact that here in New England whatever help is employed on the farm must to some extent be taken into the house. Formerly the ‘hired man’ was the son of a neighbor or perhaps the cousin or relative of the proprietor, so was not so bad; but now the help that it is possible to obtain is usually a very undesirable member of the household besides being another for the housewife to provide food for. I see no remedy for this. I know several cases where farms that have been for several generations in one family are being sold because no really efficient help can be obtained either indoors or out.”
“Too little attention has been given to the part and importance of the woman on the farm. Probably this is so because of the ideal which prevails but which now gives some promise of change. This ideal assigns to the farm woman almost constant work that is heavy, and provides for her too few (if any) and insufficient conveniences and improvements for doing her work. She goes at it largely as a matter of brawn, exercising too meagerly her intelligent thought. But often too the man is held as tightly to his daily routine and fails to have time for thinking how he may do his work by better methods—or improve the conditions of the farm woman whose part in rural economy is rated too low. This ideal, in the second place, provides too inadequately for the farm woman’s leisure and cultivation of interest in other things than her daily routine of household cares. Her sphere of thought and activity is frequently limited and often her work is drudgery.”
“It seems to me that the farmers‘ wives’ work is more laborious than the farmers'. The farmer has one day in seven for comparative rest, but Sunday is often the hardest day in the week, especially during the summer, for the farmer’s wife.”
"It may be summed up in two words—drudgery and economy. These seem to pursue her from the time she signs her name to the mortgage that is given in the purchase of the farm until that other time when, weary and worn, she gives up the unequal struggle and is laid at rest. This interest (paid on farm mortgages) robs the farm woman of much.
“We bought an 11-acre farm; my husband was a good dairyman and a first class butter maker, but we could scarcely pay taxes and interest and live, until I took up crochet work. I managed thus to pay $200 on the mortgage every year, but the strain was too great, and overwork ruined my health—but the mortgage was paid. Meantime I have had only one new hat in eight years and one secondhand dress, earned by lace work. We are of the better class and have to keep up appearances, but the struggle is heartbreaking and health destroying. We have worked night and day. Our two sons have had to give up a higher education to work, and both have decided mechanical and constructive ability.”
“Suggest some feasible plan for caring for the farm help without making them a part of the family. Many of them are dirty, vulgar, profane, and drunken, yet they eat at table with us; our children listen to and become familiar with their drunken babblings. Our privacy is destroyed, our tastes and sense of decency are outraged. We are forced to wait upon and clean up after men who would not be allowed to enter the houses of men of any other vocation. Do not misunderstand; the farmers' wives care little for social status. It is not because they are hired men that we wish them banished, but because oftentimes they are personally unworthy.”
“Lack of proper literature and time to read it; almost impossible to employ girls or women to help with housework. How to provide board and lodging for farm laborers without taking them into the home and table with the family (they often being very undesirable foreigners and tramps who only work for a few days to earn money for drink).”
“I have in mind a small, delicate woman, with a family of small children, that does all her own housework, milks four or five cows, cooks for extra help, carries from a spring all the water — no time to read a paper or book. Late to bed and early to rise, yet neither he nor she has any idea they could make her burden easier.”
A man: “The average farmer’s wife is unable to devote her best energies to the bigger problems of farm housekeeping owing to the fact that she is obliged to be more or less of a drudge. Surely among the vast numbers of immigrant girls to this country there must be some who would welcome an opportunity to identify themselves with a well-kept home; thus to be taught to become economical and progressive farm women. If the Government could establish and maintain a bureau, with agencies at the principal landing stations, to this end it would work a great benefit to the farm women. I think the farm woman in many instances overburdened with work and the care of a family; so much so that many of the farmers' daughters are looking upon farm life with a shudder.”
A man: “Under present conditions it is impossible most times to get help even in case of sickness. The farm wife can not reach a laundry or a bakery, nor can the husband and his help get their meals at a hotel or restaurant, as can be done oftentimes in the city. She is depended on to feed her own people, and often to appear hospitable and generous. She feels she must be in readiness to feed wayfarers that are hardly able to reach a hotel and very much wish to dine at a farmhouse.”
“My first complaint is hard work, no profits, and an exceedingly small sum upon which to live and supply her children. If city people could see the farmers' wives and children work and sweat in the fields in June, July, and August, when they are going to the beach or some summer resort or to cool in the mountains, they would not wonder at our complaint. When our work is over we could go, too, if we got any profits.”
A man: “It is the wife of the tenant and poor farmer who needs help. She has a hard row to hoe. She has very few labor-saving implements, no electrical or gasoline power, but does nearly all her work by ‘main strength and awkwardness.’ Thousands rise at 4 am and peg away until 10 pm. That game finally puts her down and out. The union man and ‘industrial worker’ does his eight-hour stint and then agitates for shorter hours and more pay, but the wife of the tenant or poor farmer has no time to ‘agitate,’ strike, or walk out. Her pay is plain board and clothing. Very few ever see a State fair, get a week’s vacation, or even an auto ride. She is a slave to long hours of work and her husband is a slave to the landlord, for whom he works two-fifths or one-half his time, and who is determined to have every dime, peck, or pound of his rent.”
“I am not writing as a practical farm woman but as one who has recently lived three years on a large farm. Those three years gave me an opportunity to observe and understand the hardships and isolation, the waste of time with the tiresome traveling back and forth, constant contact with uncongenial laborers and many other unpleasant features. I do not complain so much of the labor. Work is honorable and health-giving There were weeks when I accomplished more than many other farmers' wives. The work of the farmer’s wife is hard, but unless she takes part in the more laborious operations in the field and stables, I think it is not more so than many other women in town who have their homes to keep up and take boarders or sew or in some way assist in providing for the family.”
“The necessity of taking the farm men into the family is the most unfortunate of any condition of the home. Labor is scarce and the farmer must take such labor as he can get, often changing several times during the year, with rough and uncouth fellows who have to sit with the family at table and evenings, and their manner and language make this intimate association undesirable for all members of the family, especially the boys and girls. I think they should have a men’s room where they can sit and have a separate table for eating. When it so happened that we had to give the men meals, I gave them a separate table with food neatly prepared and we thought they liked it better than eating with the family.”
“The woman in town can always hire some one to help by the day at least, but in the country that is not so—if she hires help she must make a companion of the girl and often take her along when she goes to town. There is no family privacy in the farm home where help is kept. The average farmer, or better than the average, does not care for the privacy of life. I can see no possible way of improving the home life and giving the family life more thought unless the farmer can afford a home for his men.”
A man: “On the large farms where men must be boarded at the farm home, and where it is hard to get servants and where there may be several small children, the wife and mother is to be pitied. It seems the owner should build small cottages for his help as the Southerners did for their slaves and thus keep work from the home. Where the wife has to cook for hands such good packers' goods as possible should be used.”
“The farm work which has to be done is nothing but drudgery for the whole family from the age of 12 years and upward and it has least pay for our services. We have on an average from ten to fifteen thousand dollars invested in our farms and personal property and we have to work from 12 to 13 hours a day to make a living.”
this way: On the farm her husband managed to work 200 acres without a hired man most of the time with her help. In the city he worked as a day laborer for $2.50 per day and she kept 12 boarders and took care of her two children.”
A man: “From the experience of 30 years in the store business in rural parts of northern Minnesota, I do not hesitate to say that over one-half of the total work done on the farm has been done by the women of the house, besides they have done all their cooking and mending and have raised the families.”
“The majority of farmers' wives are simply overburdened with summer company from the city, either relatives or friends, who if they were forced to change places with us would soon realize what it meant to be considered the one to make life lovely for them during the long hot days of harvest, haying, and all. We think articles written on this subject might bring them to realize we are not machines of perpetual motion with no chance of a feeling of physical exhaustion.”
"There is almost every kind of machinery and utensils made to lighten her work, as well as machinery to lighten man’s work. Therefore the fault must lie somewhere else. To my mind a great deal of it is their own choosing. I think the marked clause in an editorial taken from the Chicago Tribune of November 10, 1913, tells quite a story, and I know that it is a true statement of affairs in a great many cases. The marked sentence is: ‘The average farmer, says this bulletin (referring to bulletin of Wisconsin country life conference), has until recently been interested in his crops, cattle, and a bank account more than he was in the comfort of his wife and children.’ I am glad to say, however, that the more progressive class of farmers are putting in modern homes. In a great many—I could say the majority—of new houses built, gas or electric lights, heating plants, running water, with modern plumbing, etc., are being installed.
“If it were not for the long, hard hours with poor remuneration the majority of farmers' wives would be content. We are told to be more sociable - have picnics and merrymakings so as to be content with our lot. Why, we can hardly find time, as we are, to visit a neighbor, and are too tired on Sunday for church. A good rest would be a more cheerful prospect than any picnic. While city women are having parties and the children doing nothing but attending school, all hands on the farm are at work.”
“I have been a farmer’s wife for 30 years and have never had a vacation.”
“The laws are all right for the women. In my mind the worst trouble is with the women themselves. They spy around and talk of each other, making remarks about a speck of dirt or any disorder in a neighbor’s house. Awfully nice housekeeping is the tyrant the women bow down to. It is a poor excuse of a woman who can not get help from her husband. I read somewhere of a woman who asked her husband for a wringer and sewing machine the first summer of their marriage. On being refused she hired out in harvest to make the money. After that lesson she had every labor-saving appliance she saw fit to ask for. I serve good, clean, wholesome food to the men folk, hire my washing, and do not scrub my kitchen floor.”
"Every one is urging the farmer to raise crops. Now all this means extra help for the woman to cook for, since all these crops have to be attended, harvested, and marketed. From one to four extra men to board during the hottest part of the year is the rule, provided you can get the men. We would not complain if we could see the bank account growing in proportion to the work, or if there were any permanent improvement in our surroundings, but a good many of us are beginning to ask, ‘Who gets the benefits of all the hurry and work necessary to produce the big crops?’ I heard a very practical farmer say last summer, when the corn was drying up, that he did not care, for he had noticed that he always made more on a bad crop year than on a good one. He was judging entirely by financial results and not taking into account the difference in labor to himself and his family.
“This question was brought up at a women’s meeting recently, and all agreed that they were tired of this continual urging the farmer to so-called better farming, since it only meant more work for the whole family with no real gain. These were not dissatisfied women, but just average Middle-West farmers' wives and daughters who can help with the milking or take a team to the field if the hired man leaves suddenly or the exigencies of the case demand it—women physically and mentally alive, who feel the joy of achievement. Better homes and better living generally in the country will do more than all the back-to-the land jargon. Farmers should be induced to pay more attention to the house, garden, and orchard, for these are badly neglected and will continue to be so long as all the time and attention are given to stock and field crops. With farm homes once made attractive, the high cost of living will settle itself. People will come back and raise their own living partly from choice and partly from necessity.”
“I am living in the most prosperous—at least said to be the richest county of Virginia, in the beautiful Shenandoah Valley. Most of the women live the lives of slaves—slaves to their farms and families. Help is hard to obtain and keep. The strong, hearty woman doesn’t mind the work, and there are a great many of this class. The delicate, broken-down, and overworked are filling the hospitals.”
A man: “The women here carry water one-quarter mile and go one mile to milk.”
A man: “On the cotton farms the women and children generally hoe out the cotton, putting it to a stand and cleaning the row of grass and weeds. Then in the fall of the year the women and children pick out the bulk of the cotton crop. This is the life of the average tenant of the South. The poor tenant mothers are deserving of the sympathy and encouragement of all. It often happens that our best and most prosperous farmers come from these poor children who have been taught to labor and learn the cost of a dollar, but the mothers toil on with no hope of anything except to raise their children.”
“In almost all of the one-crop, cotton-growing sections the labor question is narrowed down to the farmer, his wife, and children. The wife, if able to work, regardless of condition, makes a full hand at whatever the occasion demands—plowing, hoeing, chopping, putting down fertilizer, picking cotton, etc. The same is required of the children almost regardless of age, sex, or condition. In many cases this seems unavoidable. Poverty is the word that covers the condition.”
A man: “The woman does 50 per cent of all the work on the farm except at the plow, such as cleaning up the land, hoeing the corn, potatoes, cabbages, and beans, etc.—the woman does the same as the man—in gathering the corn, potatoes, etc. The woman does the work at 50 cents per day and will ask for the work, while the men hands can’t be employed on the farm for less than $1 a day. I employ women when I can’t get men hands, and at half the cost.”
“The two greatest problems that confront women in the rural districts are overmuch work and little strength. We need domestic help. We do not claim all wisdom in doing things, yet our knowledge surpasses our strength to do the many different tasks incumbent upon us in farm life.”
“To look at the careworn, tired faces, and bent forms of the ‘bride of a few years’ in our hill sections, where servants are scarce, we realize at once our personal and National neglect and are astounded at the enormity of it.”
“The woman living on farms, in addition to bearing and caring for her children, does her own housework—cooking, and washing the clothes once a week, and then works in the fields during the months of May, June, and July, which is the hoeing season, and in September, October, November, and December, which is the cotton-picking season.”
A man: “I wonder if the gentleman has ever seen a woman plowing cotton with oxen and what he would think if he knew that this woman’s husband was working at a sawmill several miles away, and it was her duty to get up and cook his breakfast so he can be at his work at 6—and yet this is a common sight in the rural districts. What is needed, and what can help this life? Go to that man and show him that the life he is living is wrong. What power can raise them from the neglected position this gentleman sees them in? I would answer: Educate the man who is her husband.”
A man: “After consulting some of the women in this part of the woods I find that a majority want a law passed to this effect: Make it read that any man who marries a girl in the rural districts who requests or allows his wife to go to the field and work as a hand in making or gathering a crop be subject to a fine and imprisonment. The claim is that it is injurious to the offspring of such to be in the hot sun and laboring in the same.”
A man: "The long hours of labor that the farmer’s wife has to contend with and the constant drudgery ought to be mitigated — if there was a system of education taken up by the government, which has been to some extent already, to explain to the farmers that the extremely long hours and constant drudgery is not economy and does not necessarily work toward prosperity in a financial way; that conservation of strength, energy, and health brings the best results; and that it doesn’t pay to work such long hours and have no recreation to break the monotony of hard, constant labor. If farmers generally would not make the day’s work longer than 8 hours, or at the least 10 hours, it would give some chance for recreation and rest, and a half holiday on Saturday if it was practiced generally would no doubt generally afford the necessary recreation and rest. When the housewife’s labor commences at 5 o’clock in the morning and continues until bedtime, no wonder they get dissatisfied with their lot in life and break down in health and often suffer from nervous prostration on account of this unreasonable method and unhealthful practice of so many long hours.
A man: “They claim they have to work from sunup to sundown, hoeing, picking cotton in the mud and dew. I saw a man and his wife, while I was walking around among them. Their baby was fastened up in the house 400 yards from them. They said it stayed there from morning until dinner and from dinner until night (while the man and his wife were working in the field). I find some of them in bad shape.”
“If we had time out of the cotton patch to learn how to can fruit for the market so we could can our fruit as it ripens, even if we only got pay for our labor, we would be no worse off and the world much better.”
A man: “In this county and my own neighborhood a large number of women and girls can not read or write and in some families no one can read, so you see it is hard to get better conditions until they are educated. Fifty per cent of the women and girls are picking cotton today and neglecting young children and household duties.”
"One evil for which a remedy should be supplied is the demand made upon the farmer’s wife by the transient. By this I mean the peddler, the book agent, the seller of nursery stock, the insurance man. the lightning-rod man, the hunters—in fact, grafters of all sorts, together with the man who has legitimate business with the farmer and who finds it too convenient and economical to force himself upon the hospitality of the farm home. Many a farmer’s wife is forced to be a country hotel keeper without pay, and if on rare occasions a man with a conscience does pay, it is no compensation, even at regular hotel rates, for the extra washing, cleaning, and cooking thrust upon an already tired woman. With all the modern appliances for lightening labor, the farm woman has many more tasks than the ordinary home-keeper. Then why should the idea prevail that it is only in the natural order of things that she should work, work, work, and that any one at any time may thrust himself into her home unannounced and demand that she wait upon him? This condition causes girls to desert the farm. It steals their leisure time for which they had planned reading, music, driving, or visiting. They wonder why there is no privacy in the farm home when in the city, town, or village the home is sacred to its owners and their friends
A man: “I believe in less big dinners on Sundays. Why make a slave of a woman on Sunday?”
"Boarding the help on the farm, where there is much
Source: United States Department of Agriculture, Report No. 103: Social and Labor Needs of Farm Women. Extracts from letters received from farm women in response to an inquiry ’How the U.S. Department of Agriculture can better meet the needs of farm housewives,’ with special reference to the provision of instruction and practical demonstrations in home economics under the Act of May 8, 1914, providing for cooperative agricultural extension work, etc. (Washington: Government Printing Office, 1915), 5–10, 42–55.
|
2026-02-01T20:33:31.087634
|
622,065
| 3.614034
|
http://www.pal-item.com/usatoday/article/3324251
|
Tourists swim in the Pacific Ocean at sunset on Easter Island in 2010. Scientists say that parts of the Pacific are warming at a rate 15 times faster than ever before. / Martin Bernetti, AFP/Getty Images
Although the temperature of the Earth's atmosphere may have hit the "pause" button recently - with little global warming measured over the past few years - that hasn't been the case with the oceans.
In a study out today in the journal Science, researchers say that the middle depths of a part of the Pacific Ocean have warmed 15 times faster in the past 60 years than they did during the previous 10,000 years.
Most of the heat that humanity has put into the atmosphere since the 1970s from greenhouse gas emissions has likely been absorbed by the oceans, according to the most recent report from the Intergovernmental Panel on Climate Change, a United Nations-sponsored group of scientists that issues reports every few years about the effects of global warming.
"Increases in ocean heat content and temperature are robust indicators of global warming during the past several decades," according to today's Science study.
"We're pumping heat into the ocean at a faster rate over the past 60 years," said study lead author Yair Rosenthal, a climate scientist at Rutgers University. "We may have underestimated the efficiency of the oceans as a storehouse for heat and energy," he added. "It may buy us some time - how much time, I don't really know. But it's not going to stop climate change."
"It's not so much the magnitude of the change, but the rate of change," noted study co-author Braddock Linsley, a Columbia University climate scientist. "We're experimenting by putting all this heat in the ocean without quite knowing how it's going to come back out and affect climate."
He said that in the past six decades the temperature of the Pacific Ocean water studied (from the surface to about 2,200 feet below) has increased by about one-third of a degree Fahrenheit. (The specific area studied was in the Pacific near Indonesia, chosen because it is a representative sample of Pacific Ocean water.) While the amount of warming might seem small in the scheme of things, it is the rate of warming that is so alarming, Linsley said.
The researchers found that Pacific Ocean water generally cooled over the past 10,000 years until about 800 years ago, when temperatures started to rise slowly. (Temperatures then fell again during the so-called Little Ice Age, from the mid-1500s to the mid-1800s.) It's been only in the past few decades, though, that the rate has dramatically increased.
The Earth's atmosphere has been about the same temperature for the past 15 years or so, providing fuel for skeptics of man-made global warming. However, this study, along with other recent research, finds that heat absorbed by the planet's oceans has increased significantly.
Obviously, there were no thermometers taking measurements of ocean temperatures over the past few thousand years (instrument records from buoys go back only to the 1960s). So scientists had to use "proxy" sources to measure temperature. In this case, it was fossils of ancient marine life - little shelled animals known as foraminifera - that could be analyzed to reconstruct the climates in which they lived over millennia.
"This is a relatively new way of measuring past temperature data," Rosenthal noted.
How long will this pause in atmospheric temperature last? It may depend on natural variability in the Pacific: when the La Niña climate pattern (cooler-than-average Pacific Ocean water) switches and the Pacific reverts to a warmer-than-usual El Niño phase, global temperatures will likely shoot up again, along with the rate of warming, said Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research in Boulder, Colo., who was not involved in the research.
"With global warming, you don't see a gradual warming from one year to the next," Trenberth said. "It's more like a staircase. You trot along with nothing much happening for 10 years and then suddenly you have a jump and things never go back to the previous level again."
|
2026-01-27T21:42:04.121974
|
808,862
| 4.071455
|
http://www.chemistryexplained.com/elements/A-C/Carbon.html
|
Carbon is an extraordinary element. It occurs in more different forms than any other element in the periodic table. The periodic table is a chart that shows how chemical elements are related to each other. More than ten million compounds of carbon are known. No other element, except for hydrogen, occurs in even a fraction of that number of compounds.
As an element, carbon occurs in a striking variety of forms. Coal, soot, and diamonds are all nearly pure forms of carbon. Carbon also occurs in a form, discovered only recently, known as fullerenes or buckyballs. Buckyball carbon holds the promise for opening a whole new field of chemistry (see accompanying sidebar).
Carbon occurs extensively in all living organisms as proteins, fats, carbohydrates (sugars and starches), and nucleic acids.
Carbon belongs to Group 14 (IVA) of the periodic table.
Carbon is such an important element that an entirely separate field of chemistry is devoted to this element and its compounds. Organic chemistry is the study of carbon compounds.
Discovery and naming
Humans have been aware of carbon since the earliest of times. When cave people made a fire, they saw smoke form. The black color of smoke is caused by unburned specks of carbon. The smoke may have collected on the ceiling of their caves as soot.
Later, when lamps were invented, people used oil as a fuel. When oil burns, carbon is released in the reaction, forming a sooty covering on the inside of the lamp. That form of carbon became known as lampblack. Lampblack was also often mixed with olive oil or balsam gum to make ink. And ancient Egyptians sometimes used lampblack as eyeliner.
One of the most common forms of carbon is charcoal. Charcoal is made by heating wood in the absence of air so it does not catch fire. Instead, it gives off water vapor, leaving pure carbon. This method for producing charcoal was known as early as the Roman civilization (509 B.C.-A.D. 476).
French physicist René Antoine Ferchault Reaumur (1683-1757) believed carbon might be an element. He studied the differences between wrought iron, cast iron, and steel. The main difference among these materials, he said, was the presence of a "black combustible material" that he knew was present in charcoal.
Carbon was officially classified as an element near the end of the eighteenth century. In 1787, four French chemists wrote a book outlining a method for naming chemical substances. The name they used, carbone, is based on the French word for charcoal, charbon, which in turn comes from the Latin carbo.
Carbon exists in a number of allotropic forms. Allotropes are forms of an element with different physical and chemical properties. Two allotropes of carbon have crystalline structures: diamond and graphite. In a crystalline material, atoms are arranged in a neat orderly pattern. Graphite is found in pencil "lead" and ball-bearing lubricants. Among the non-crystalline allotropes of carbon are coal, lampblack, charcoal, carbon black, and coke. Carbon black is similar to soot. Coke is nearly pure carbon formed when coal is heated in the absence of air. Carbon allotropes that lack crystalline structure are amorphous, or without crystalline shape.
The allotropes of carbon have very different chemical and physical properties. For example, diamond is the hardest natural substance known. It has a rating of 10 on the Mohs scale. The Mohs scale is a way of expressing the hardness of a material. It runs from 1 (for talc) to 10 (for diamond). The melting point of diamond is about 3,700°C (6,700°F) and its boiling point is about 4,200°C (7,600°F). Its density is 3.50 grams per cubic centimeter.
On the other hand, graphite is a very soft material. It is often used as the "lead" in lead pencils. It has a hardness of 2.0 to 2.5 on the Mohs scale. Graphite does not melt when heated, but sublimes at about 3,650°C (6,600°F). Sublimation is the process by which a solid changes directly to a gas when heated, without first changing to a liquid. Its density is about 1.5 to 1.8 grams per cubic centimeter. The numerical value for these properties varies depending on where the graphite originates.
The amorphous forms of carbon, like other non-crystalline materials, do not have clear-cut melting and boiling points. Their densities vary depending on where they originate.
Carbon does not dissolve in or react with water, acids, or most other materials. It does, however, react with oxygen. It burns in air to produce carbon dioxide (CO2) and carbon monoxide (CO). The combustion (burning) of coal gave rise to the Industrial Revolution (1700-1900).
Another highly important and very unusual property of carbon is its ability to form long chains. It is not unusual for two atoms of an element to combine with each other. Oxygen (O2), nitrogen (N2), hydrogen (H2), chlorine (Cl2), and bromine (Br2) are a few of the elements that can do this. Some elements can make even longer strings of atoms. Rings of six and eight sulfur atoms (S6 and S8), for example, are not unusual.
Carbon has the ability to make virtually endless strings of atoms. If one could look at a molecule of almost any plastic, for example, a long chain of carbon atoms attached to each other (and to other atoms as well) would be evident. Carbon chains can be even more complicated. Some chains have side chains hanging off them.
There is almost no limit to the size and shape of molecules that can be made with carbon atoms. (See accompanying diagrams.)
Buckyballs are a recently discovered form of pure carbon. These spheres are made up of exactly 60 linked carbon atoms.
Occurrence in nature
Carbon is the sixth most common element in the universe and the fourth most common element in the solar system. It is the second most common element in the human body after oxygen. About 18 percent of a person's body weight is due to carbon.
Carbon is the 17th most common element in the Earth's crust. Its abundance has been estimated to be between 180 and 270 parts per million. It rarely occurs as a diamond or graphite.
Both allotropes are formed in the earth over millions of years, when dead plant materials are squeezed together at very high temperatures. Diamonds are usually found hundreds or thousands of feet beneath the earth's surface. Africa has many diamond mines.
Carbon also occurs in a number of minerals. Among the most common of these minerals are the carbonates of calcium (CaCO3) and magnesium (MgCO3). Carbon also occurs in the form of carbon dioxide (CO2) in the atmosphere. Carbon dioxide makes up only a small part of the atmosphere (about 300 parts per million), but it is a crucial gas. Plants use carbon dioxide in the atmosphere in the process of photosynthesis. Photosynthesis is the process by which plants convert carbon dioxide and water to carbohydrates (starches and sugars). This process is the source of life on Earth.
Carbon also occurs in coal, oil, and natural gas. These materials are often known as fossil fuels. They get that name because of the way they were formed. They are the remains of plants and animals that lived millions of years ago. When they died, they fell into water or were trapped in mud. Over millions of years, they slowly decayed. The products of that decay process were coal, oil, and natural gas.
Some forms of coal are nearly pure carbon. Oil and natural gas are made primarily of hydrocarbons, which are compounds made of carbon and hydrogen.
Three isotopes of carbon occur in nature, carbon-12, carbon-13, and carbon-14. One of these isotopes, carbon-14, is radioactive. Isotopes are two or more forms of an element. Isotopes differ from each other according to their mass number. The number written to the right of the element's name is the mass number. The mass number represents the number of protons plus neutrons in the nucleus of an atom of the element. The number of protons determines the element, but the number of neutrons in the atom of any one element can vary. Each variation is an isotope.
Five artificial radioactive isotopes of carbon are known also. A radioactive isotope is one that breaks apart and gives off some form of radiation. Artificial radioactive isotopes can be made by firing very small particles (such as protons) at atoms. These particles stick in the atoms and make them radioactive.
Carbon-14 has some limited applications in industry. For example, it can be used to measure the thickness of objects, such as sheets of steel. The steel must always be the same thickness.
In this process, a small sample of carbon-14 is placed above the conveyor belt carrying the steel sheet. A detection device is placed below the sheet. The detection device counts the amount of radiation passing through the sheet. If the sheet gets thicker, less radiation gets through. If the sheet gets thinner, more radiation gets through. The detector records how much radiation passes through the sheet. If the amount becomes too high or too low, the conveyor belt is turned off. The machine making the sheet is adjusted to produce steel of the correct thickness.
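As a rough illustration of the logic such a gauge applies, here is a minimal Python sketch; the count rates and limits are invented for the example, and a real gauge would be calibrated for the particular sheet and radiation source.

# Illustrative sketch of the threshold check a radiation thickness gauge performs.
# The numbers are hypothetical; a real gauge is calibrated for its sheet and source.
LOW_COUNT = 900    # fewer counts per second than this -> sheet too thick (assumed limit)
HIGH_COUNT = 1100  # more counts per second than this -> sheet too thin (assumed limit)

def check_sheet(counts_per_second: float) -> str:
    """Decide what to do based on how much radiation reaches the detector."""
    if counts_per_second < LOW_COUNT:
        return "stop belt: sheet too thick"
    if counts_per_second > HIGH_COUNT:
        return "stop belt: sheet too thin"
    return "thickness within tolerance"

print(check_sheet(1050))  # thickness within tolerance
print(check_sheet(850))   # stop belt: sheet too thick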
The most important use of carbon-14 is in finding the age of old objects (see accompanying sidebar for more information).
Diamond, graphite, and other forms of carbon are taken directly from mines in the earth. Diamond and graphite can also be made in laboratories. Synthetic diamonds, for example, are made by placing pure carbon under very high pressures (about 800,000 pounds per square inch) and temperatures (about 2,700°C). The carbon is heated and squeezed in the same way organic material is heated and squeezed in the earth. Today, about a third of all diamonds used are synthetically produced.
When an organism is alive, it takes in carbon dioxide from the air around it. Most of that carbon dioxide is made of carbon-12, but a tiny portion consists of carbon-14. So the living organism always contains a very small amount of radioactive carbon, carbon-14. A detector next to the living organism would record radiation given off by the carbon-14 in the organism.
When the organism dies, it no longer takes in carbon dioxide. No new carbon-14 is added, and the old carbon-14 slowly decays into nitrogen. The amount of carbon-14 slowly decreases as time goes on. Over time, less and less radiation from carbon-14 is produced. The amount of carbon-14 radiation detected for an organism is a measure, therefore, of how long the organism has been dead. This method of determining the age of an organism is called carbon-14 dating.
The decay of carbon-14 allows archaeologists (people who study old civilizations) to find the age of once-living materials. Measuring the amount of radiation remaining indicates the approximate age.
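The arithmetic behind this method follows directly from the half-life of carbon-14, about 5,730 years. The short Python sketch below estimates an age from the fraction of the original carbon-14 that remains; it is a simplified model that ignores the calibration corrections real laboratories apply.

import math

HALF_LIFE_C14 = 5730.0  # years; the commonly quoted half-life of carbon-14

def age_from_fraction(fraction_remaining: float) -> float:
    """Estimate how long ago an organism died from the fraction of its
    original carbon-14 that is still present (0 < fraction <= 1)."""
    return HALF_LIFE_C14 * math.log(1.0 / fraction_remaining, 2)

# A sample retaining 25% of its original carbon-14 is about two
# half-lives old: roughly 11,460 years.
print(round(age_from_fraction(0.25)))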
There are many uses for carbon's two key allotropes, diamond and graphite. Diamonds are one of the most beautiful and expensive gemstones in the world. But they also have many industrial uses. Because they are so hard they are used to polish, grind, and cut glass, metals, and other materials. The bit on an oil-drilling machine may be made of diamonds. The tool used to make thin tungsten wires is also made of diamonds.
Synthetic diamonds are more commonly used in industry than in jewelry. Industrial diamonds do not have to be free of flaws, as do jewelry diamonds.
Graphite works well as pencil lead because it rubs off easily. It is also used as a lubricant. Graphite is added to the space between machine parts that rub against each other. The graphite allows the parts to slide over each other smoothly.
Graphite is also used as a refractory. Refractory material can withstand very high temperatures by reflecting heat away from itself. Refractory materials are used to line ovens used to maintain high temperatures.
Graphite is used in nuclear power plants. A nuclear power plant converts nuclear energy to electrical power. Graphite acts as a moderator by slowing down the neutrons used in the nuclear reaction.
Graphite is used to make black paint, in explosives and matches, and in certain kinds of cathode ray tubes, like the ones used in television sets.
Amorphous forms of carbon have many uses. These include the black color in inks, pigments (paints), rubber tires, stove polish, typewriter ribbons, and phonograph records.
One form of carbon is known as activated charcoal. The term activated means that the charcoal has been treated to make it extremely porous, giving it a very large surface area. In this form, charcoal can remove impurities from liquids that pass through it. For example, activated charcoal removes color and odor from oils and water solutions.
Carbon dioxide (CO2) is used to make carbonated beverages (it's the fizz in soda pop and beer), in fire extinguishers, and as a propellant in aerosol products. A propellant is a gas that pushes liquids out of a spray can, such as those used for deodorant or hair spray. Carbon dioxide can also be frozen to a solid called dry ice. It is widely used as a way of keeping objects cold.
Carbon monoxide (CO) is another compound formed between carbon and oxygen. Carbon monoxide is a very toxic gas produced when something burns in a limited amount of air. Carbon monoxide is always formed when gasoline burns in the engine of an automobile and is a common part of air pollution. Old heating units can produce carbon monoxide. This colorless and odorless gas can cause headaches, illness, coma, or even death.
Carbon monoxide has a few important industrial uses. It is often used to obtain a pure metal from the ore of that metal; for example, carbon monoxide reduces iron ore to iron in a blast furnace: Fe2O3 + 3 CO → 2 Fe + 3 CO2.
It would take a very large book to describe all the uses of organic compounds, which are divided into a number of families. An organic family is a group of organic compounds with similar structures and properties. The largest organic family is the hydrocarbons, compounds that contain only carbon and hydrogen. Methane, or natural gas (CH4), ethane (C2H6), propane (C3H8), ethylene (C2H4), and benzene (C6H6) are all hydrocarbons.
Hydrocarbons are used as fuels. Gas stoves burn natural gas, which is mostly methane. Propane gas is a popular camping fuel, used in small stoves and lanterns. Another important use of hydrocarbons is in the production of more complicated organic compounds.
Buckyballs and nanotubes
In the 1980s, chemists discovered a new allotrope of carbon. The carbon atoms in this allotrope are arranged in a sphere-like form of 60 atoms. The form resembles a building invented by American architect Buckminster Fuller (1895-1983). The building is known as a geodesic dome.
Each of the points on the dome is occupied by one carbon atom. The discoverers named the new form of carbon buckminsterfullerene in honor of Fuller. That name is too long to use in everyday conversation so it is usually shortened to fullerene or buckyball.
The discovery of the fullerene molecule was very exciting to chemists. They had never seen a molecule like it. They have been studying ways of working with this molecule. One interesting technique has been to cut open just one small part of the molecule. Then they cut open a small part on a second molecule. Finally, they join the two buckyballs together. They get a double-buckyball.
Repeating this process over and over could result in triple-buckyballs, quadruple-buckyballs, and so on. As this process is repeated, the buckyball becomes a long narrow tube called a nanotube. Nanotubes are long, thin, and extremely tiny tubes somewhat like a drinking straw or a long piece of spaghetti.
Scientists are beginning to find ways of using nanotubes. One idea is to run a thin chain of metal atoms through the center of a nanotube. This allows it to act like a tiny electrical wire. Nanotubes may completely change many devices that will be made in the future.
Other organic families contain carbon, hydrogen, and oxygen. Methyl alcohol (wood alcohol) and ethyl alcohol (grain alcohol) are the most common members of the alcohol family.
Methyl alcohol is used to make other organic compounds and as a solvent (a substance that dissolves other substances). Ethyl alcohol is used for many of the same purposes. It is also the alcohol found in beer, wine, and hard liquor, such as whiskey and vodka.
All alcohols are poisonous but some alcohols are more poisonous than others. If not drunk in moderation, alcoholic beverages can damage the body and brain. And, if drunk in large quantities, they can cause death. Methyl alcohol is more toxic than ethyl alcohol. People who have drunk methyl alcohol by mistake have died.
The list of everyday products made from organic compounds is very long. It includes drugs, artificial fibers, dyes, artificial colors and flavors, food additives, cosmetics, plastics of all kinds, detergents, synthetic rubber, adhesives, antifreeze, pesticides and herbicides, synthetic fuels, and refrigerants.
Carbon is essential to life. Nearly every molecule in a living organism contains carbon. The study of carbon compounds that occur in living organisms is called biochemistry (bio- = life + -chemistry ).
Carbon can also have harmful effects on organisms. For example, coal miners sometimes develop a disease known as black lung. The name comes from the appearance of the miner's lungs. Instead of being pink and healthy, the miner's lungs are black. The black color is caused by coal dust inhaled by the miner. The longer a miner works digging coal, the more the coal dust is inhaled. That worker's lungs become more and more black.
Color is not the problem with black lung disease however. The coal dust in the lungs blocks the tiny holes through which oxygen gets into the lungs. As more coal dust accumulates, more holes are plugged up, making it harder for the miner to breathe. Many miners eventually die from black lung disease because they lose the ability to breathe.
Carbon monoxide poisoning is another serious health problem. Carbon monoxide is formed whenever coal, oil, or natural gas burns. For example, the burning of gasoline in cars and trucks produces carbon monoxide. Today, almost every person in the United States inhales some carbon monoxide every day.
Small amounts of carbon monoxide are not very dangerous. But larger amounts cause a variety of health problems. At low levels, carbon monoxide causes headaches, dizziness, nausea, and loss of balance. At higher levels, a person can lose consciousness. At even higher levels, carbon monoxide can cause death.
|
2026-01-30T19:19:53.887263
|
446,372
| 3.698313
|
http://catalog.flatworldknowledge.com/bookhub/reader/3798?e=campbell_1.0-ch01_s01
|
1.1 Spatial Thinking
- The objective of this section is to illustrate how we think geographically every day with mental maps and to highlight the importance of asking geographic questions.
At no other time in the history of the world has it been easier to create or to acquire a map of nearly anything. Maps and mapping technology are literally and virtually everywhere. Though the modes and means of making and distributing maps have been revolutionized with recent advances in computing like the Internet, the art and science of map making date back centuries. This is because humans are inherently spatial organisms, and in order for us to live in the world, we must first somehow relate to it. Enter the mental map.
Mental or cognitive maps are psychological tools that we all use every day. As the name suggests, mental maps are maps of our environment that are stored in our brains. We rely on our mental maps to get from one place to another, to plan our daily activities, or to understand and situate events that we hear about from our friends, family, or the news. Mental maps also reflect the amount and extent of geographic knowledge and spatial awareness that we possess. To illustrate this point, pretend that a friend is visiting you from out of town for the first time. Using a blank sheet of paper, take five to ten minutes to draw a map from memory of your hometown that will help your friend get around.
What did you choose to draw on your map? Is your house or where you work on the map? What about streets, restaurants, malls, museums, or other points of interest? How did you draw objects on your map? Did you use symbols, lines, and shapes? Are places labeled? Why did you choose to include certain places and features on your map but not others? What limitations did you encounter when making your map?
This simple exercise is instructive for several reasons. First, it illustrates what you know about where you live. Your simple map is a rough approximation of your local geographic knowledge and mental map. Second, it highlights the way in which you relate to your local environment. What you choose to include and exclude on your map provides insights about what places you think are important and how you move through your place or residence. Third, if we were to compare your mental map to someone else’s from the same place, certain similarities emerge that shed light upon how we as humans tend to think spatially and organize geographical information in our minds. Fourth, this exercise reveals something about your artistic, creative, and cartographic abilities. In this respect, not only are mental maps unique, but also the way in which such maps are drawn or represented on the page is unique too.
To reinforce these points, consider the series of mental maps of Los Angeles provided in Figure 1.1 "Mental Map of Los Angeles A".
Figure 1.1 Mental Map of Los Angeles A
Figure 1.2 Mental Map of Los Angeles B
Figure 1.3 Mental Map of Los Angeles C
Take a moment to look at each map and compare the maps with the following questions in mind:
- What similarities are there on each map?
- What are some of the differences?
- Which places or features are illustrated on the map?
- From what you know about Los Angeles, what is included or excluded on the maps?
- What assumptions are made in each map?
- At what scale is the map drawn?
Each map is probably an imperfect representation of one’s mental map, but we can see some similarities and differences that provide insights into how people relate to Los Angeles, maps, and more generally, the world. First, all maps are oriented so that north is up. Though only one of the maps contains a north arrow that explicitly informs viewers of the geographic orientation of the map, we are accustomed to most maps having north at the top of the page. Second, all but the first map identify some prominent features and landmarks in the Los Angeles area. For instance, Los Angeles International Airport (LAX) appears on two of these maps, as do the Santa Monica Mountains. How the airport is represented or portrayed on the map, for instance, as text, an abbreviation, or symbol, also speaks to our experience using and understanding maps. Third, two of the maps depict a portion of the freeway network in Los Angeles, and one also highlights the Los Angeles River and Ballona Creek. In a city where the “car is king,” how can any map omit the freeways?
What you include and omit on your map, by choice or not, speaks volumes about your geographical knowledge and spatial awareness—or lack thereof. Recognizing and identifying what we do not know is an important part of learning. It is only when we identify the unknown that we are able to ask questions, collect information to answer those questions, develop knowledge through answers, and begin to understand the world where we live.
Asking Geographic Questions
Filling in the gaps in our mental maps and, more generally, the gaps in our geographic knowledge requires us to ask questions about the world where we live and how we relate to it. Such questions can be simple with a local focus (e.g., “Which way is the nearest hospital?”) or more complex with a more global perspective (e.g., “How is urbanization impacting biodiversity hotspots around the world?”). The thread that unifies such questions is geography. For instance, the question of “where?” is an essential part of the questions “Where is the nearest hospital?” and “Where are the biodiversity hotspots in relation to cities?” Being able to articulate questions clearly and to break them into manageable pieces are very valuable skills when using and applying a geographic information system (GIS).
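To see how such a question becomes a manageable computation, here is a small, hypothetical Python sketch that answers “Where is the nearest hospital?” for a handful of made-up coordinates using the haversine great-circle distance; the locations are purely illustrative and not drawn from any real dataset.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical hospital locations (name, latitude, longitude).
hospitals = [
    ("General", 34.05, -118.25),
    ("Mercy", 34.10, -118.30),
    ("St. Anne", 33.95, -118.20),
]

me = (34.02, -118.28)  # a hypothetical "where am I?" location
nearest = min(hospitals, key=lambda h: haversine_km(me[0], me[1], h[1], h[2]))
print("Nearest hospital:", nearest[0])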
Though there may be no such thing as a “dumb” question, some questions are indeed better than others. Learning how to ask the right question takes practice and is often more difficult than finding the answer itself. However, when we ask the right question, problems are more easily solved and our understanding of the world is improved. There are five general types of geographic questions that we can ask and that GIS can help us to answer. Each type of question is listed here and is also followed by a few examples (Nyerges 1991). [Nyerges, T. 1991. “Analytical Map Use.” Cartography and Geographic Information Systems (formerly The American Cartographer) 18: 11–22.]
Questions about geographic location (the position of a phenomenon on the surface of the earth):
- Where is it?
- Why is it here or there?
- How much of it is here or there?
Questions about geographic distribution (how phenomena are spread across the surface of the earth):
- Is it distributed locally or globally?
- Is it spatially clustered or dispersed?
- Where are the boundaries?
Questions about geographic association (how things are related to each other in space):
- What else is near it?
- What else occurs with it?
- What is absent in its presence?
Questions about geographic interaction (the linkages and relationships between places):
- Is it linked to something else?
- What is the nature of this association?
- How much interaction occurs between the locations?
Questions about geographic change (the persistence, transformation, or disappearance of phenomena on the earth):
- Has it always been here?
- How has it changed over time and space?
- What causes its diffusion or contraction?
These and related geographic questions are frequently asked by people from various areas of expertise, industries, and professions. For instance, urban planners, traffic engineers, and demographers may be interested in understanding the commuting patterns between cities and suburbs (geographic interaction). Biologists and botanists may be curious about why one animal or plant species flourishes in one place and not another (geographic location/distribution). Epidemiologists and public health officials are certainly interested in where disease outbreaks occur and how, why, and where they spread (geographic change/interaction/location).
A GIS can assist in answering all these questions and many more. Furthermore, a GIS often opens up additional avenues of inquiry when searching for answers to geographic questions. Herein is one of the greatest strengths of the GIS. While a GIS can be used to answer specific questions or to solve particular problems, it often unearths even more interesting questions and presents more problems to be solved in the future.
- Mental maps are psychological tools that we use to understand, relate to, and navigate through the environment in which we live, work, and play.
- Mental maps are unique to the individual.
- Learning how to ask geographic questions is important to using and applying GISs.
- Geographic questions are concerned with location, distributions, associations, interactions, and change.
- Draw a map of where you live. Discuss the similarities, differences, styles, and techniques on your map and compare them with two others. What are the commonalities between the maps? What are the differences? What accounts for such similarities and differences?
- Draw a map of the world and compare it to a world map in an atlas. What similarities and differences are there? What explains the discrepancies between your map and the atlas?
- Provide two questions concerned with geographic location, distribution, association, interaction, and change about global warming, urbanization, biodiversity, economic development, and war.
|
2026-01-25T03:25:20.680466
|
646,230
| 3.562835
|
http://www.cornellcollege.edu/politics/courses/allin/371/20129/syllabus.shtml
|
Department of Politics
AUGUST 30, 2012
When Europeans first arrived in North America, they viewed it as a continental wilderness populated by wild men (Native Americans) and wild beasts. By the late 19th century the continent had been tamed, and Americans were increasingly interested in preserving vestiges of their wilderness past. The purest expression of that impulse has been the political movement to preserve and protect large tracts of undeveloped federal lands in a National Wilderness Preservation System. Today there are 756 designated wilderness areas totaling more than 109 million acres. The Boundary Waters Canoe Area Wilderness is one of the oldest, one of the most heavily used, and probably the most famous of them all. As such, it is the ideal venue for an exploration of the politics and policy of wilderness preservation in America.
Our course will explore the wilderness concept, the history of wilderness preservation in the United States, the impact of wilderness designation on national parks, national forests, and other public lands, and the host of controversies that inevitably arise when government agencies are directed to "preserve natural conditions." What is wilderness? Is preserving wilderness possible? Does wilderness preservation waste resources? To what extent should land managers interfere with natural forces? Should forest fires be allowed to burn? Should predatory animals be reintroduced? What is the appropriate place of people in wilderness areas? To what degree should we try to make the wilderness safe for visitors? To what extent should visitors be regulated to protect wilderness? Should concessions be made to Native Americans whose ancestors once called these "wilderness areas" home? Science is indispensable to thinking seriously about many of these questions, but ultimately the choices to be made are political choices. We will try to understand who is making these choices, how and why.
Portions of this syllabus and some of your reading assignments are available in the portable document format (PDF). PDF files have advantages that might appeal to you: (a) You can save a PDF file to your hard drive and view it without being connected to the Internet; (b) PDF files have page breaks, so you can print selected pages if you like. PDF is also the dominant file type used for delivering facsimiles of paper documents, like court opinions and legislative reports, over the Internet. To read PDF files on your personal computer you need the Adobe Acrobat Reader, which you can download without charge from the publisher's web site. This software is already loaded on most college-owned computers.
Feedback: Whether or not you are asked to complete a standardized course evaluation, I am interested in your comments and suggestions for improvement of the course, the readings, the assignments and this course description. Feel free to send comments as you think of them. E-mail: firstname.lastname@example.org.
Instructor: Craig W. Allin, Room 113, College Hall. Telephone: Office, (895-) 4278; Mobile, (319) 431-1100. If I do not answer the phone, leave me a message or send e-mail to email@example.com.
Office Hours on Campus: If I'm not in class with you, you can probably find me in my office. Feel free to make an appointment or just show up. To help you find me, the most current version of my schedule is available for your electronic inspection over the campus network if you are using Microsoft Outlook (not Outlook Express or Outlook Web Access). From 8:00 a.m. to noon and 1:00 p.m. to 3:00 p.m., Cheryl Dake, faculty secretary for South Hall (ext. 4283) can consult my calendar and make appointments for you.
Office Hours off Campus: Sunrise to one hour past sunset.
E-Mail Attachments: Please deliver your paper by means of e-mail attachment. Please save your paper in Microsoft Word (*.doc or *.docx) or Rich Text Format (*.rtf), which is supported by virtually all word processing programs. Attach your file to an e-mail addressed to me. If you are unfamiliar with e-mail attachments, please ask for help.
Class Meetings: TBA, when on campus. Consult Course Calendar & Assignments for times.
Focus & Approach: My intention is to
expose you to the politics of wilderness preservation while
allowing you to experience first-hand the Boundary Waters
Canoe Area Wilderness, which in terms of history, law and
politics is probably most important wilderness area in the
United States. In order to do that, we will, for the most
part, abandon the classroom, our computers, and our Internet
connections. You and I will read, think, present and discuss.
For a week we will live and travel together in the wilderness,
sharing the intellectual responsibilities of students, the
management responsibilities of federal officials, and the
domestic responsibilities associated with cooking, cleaning
and camp life generally.
READING. We will work from two books and a number of scholarly articles and primary documents. Some of the materials listed below will be assigned to everybody, and some to an individual. Many are relevant to your policy paper assignment, and some are general references which will be available to be borrowed from Craig. Remember, there will be no opportunity to print these documents once we leave campus.
Readings for Discussion & Policy Paper [a work in progress]
CASE MEMORANDUM: One goal of this course is to look at a single case closely enough so that you can begin to understand how the three branches of government interact to create the public policy that we so often attribute to "the government." The "chain-of-lakes" issue in the management of the BWCAW is a good example. Your reading for days 2 and 4 will familiarize you with some of the legal documents that govern management of the BWCAW. In an essay not to exceed 1000 words, please summarize the origins, development, and resolution of the chain-of-lakes issue giving attention to the participants in the process, the legal issues that arose, and the current status of public policy with respect to the issue.
PRE-TRIP EXAM: There will be one exam in the course, sometime before we begin our wilderness trip. It will test your mastery of the assigned readings.
CAMPFIRE TALK: Out in the wilderness, we will have limited opportunity for reading and discussion. Your only formal "academic" responsibility during our wilderness trip is one "campfire" talk. You will be assigned an article or book chapter, which you will read, contemplate, and discuss with the group one evening after dinner. Each of you will need a photocopy of your assignment (in a ziploc bag). You will not need your textbook or any other official course reading on the canoe trip. Your job at the campfire talk is (a) to report what you read and to summarize its major points, (b) to relate your selection to the assigned readings we have all done, to the specific circumstances of the BWCAW, and/or to our field experiences, (c) to share the lessons you learned from the selection, and (d) to answer questions from the other participants in the seminar. You should plan on talking for 15 to 20 minutes excluding the time you spend answering questions.
POST-TRIP POLICY PAPER: The final policy paper addresses management of flammable materials in that portion of the BWCA Wilderness that suffered the massive blowdown on July 4, 1999. The assignment is described in detail below.
Assignment: This is a public policy course, so you should not be surprised that you will be writing a short public policy paper: 1500-2500 words. Of course, you will need to read, think, assess, and write persuasively, but this paper will be unlike any public policy paper you have previously written for me.
First, you will be writing a policy analysis rather than a policy proposal. Major events – some would say natural calamities – have taken place, and public policy decision-makers have responded to them. Your job is to assess decisions that have already been made rather than to propose a policy for the future.
Second, because this is a field course with limited time on campus and limited access to the library, I have shortened and streamlined this assignment in a variety of ways. There will be no need to do research in the conventional sense or to submit a topic and bibliography. The topic has been chosen for you, and I have done the initial research, located your sources and prepared your bibliography. I will not ask you to submit an outline of contentions, but you should study carefully the material and links provided below to be sure that the paper you write is in good form. You should be thinking about the paper from the first day of class, as you read your assignments and experience the Boundary Waters Canoe Area Wilderness. Mark your texts, take notes, and jot down your ideas throughout the course. You will write the paper after we return to Cornell, and it will be due on the final day of class. There will be no opportunity to rewrite.
Introduction: "Heavy rain and straight-line winds in excess of 90 miles per hour hit north central and northeastern Minnesota on July 4, 1999, blowing down trees and causing severe flooding. On the the Superior National Forest, 477,000 acres (more than 600 square miles) were impacted by the blowdown, including one-third of the Boundary Waters Canoe Area Wilderness (BWCAW)" [www.superiornationalforest.org]. This was an amazing freak weather event. Click here for a discussion by scientists from the National Oceanic and Atmospheric Administration. For contemporaneous news coverage consult these articles in the Minneapolis Star-Tribune.
With literally millions of trees blown down, there was and is a significantly increased risk of a major forest fire in the Boundary Waters Canoe Area Wilderness. Such a fire would be very expensive to fight, and it might be difficult or impossible to extinguish. (When giant fires broke out in Yellowstone National Park in 1988, they were not extinguished until the first snows fell despite the efforts of more than 10,000 firefighters and the expenditure of more than $100 million.) The BWCA had such a fire in late summer of 2011. The Pagami Creek Wildfire began unobtrusively on August 18, and remained completely unremarkable for four weeks. On September 12, the fire exploded, burning over an area of 93,000 acres – in one day! [photos] (If this blowup had happened on the same date in 2006, the Wilderness Politics Class would have been there!) Did the Forest Service make a huge mistake in management of the Pagami Creek Fire [the local newspaper thinks so] or is this simply an example of nature at work in a wilderness area? The BWCA Wilderness is supposed to be managed as a wilderness, an area where nature is allowed to take its course. The July 4 storm was certainly a natural event, and the Pagami Creek Fire was started by lightning.
Topic: Your topic is wilderness fire management, specifically: did the United States Forest Service mismanage the Pagami Creek Fire and/or the post-blowdown fuel situation that preceded the fire? In answering this question I expect you to take into account United States Forest Service preparation for the possibility of a major fire in the years since the great blowdown; Forest Service management of the Pagami Creek Fire per se [see official United States Forest Service post-fire reports available here]; the nature of the BWCA ecosystem; the philosophy of wilderness preservation; and the various laws, policies and management guidelines applicable to the BWCAW.
Bibliography: Your bibliography is the list of sources printed above. Feel free to cut and paste it into your paper. Since it is in Turabian format, your citations should use the same format. In Turabian format a citation to a general idea or to an entire work must include the last name(s) of the author(s) and the year of publication. For more specific facts and all quotations, the page number must be included. Information contained in the body of the sentence is not repeated in the citation. Consult the examples below.
Policy Assessment & Outline of Contentions: Your policy assessment must answer the question posed above: "did the United States Forest Service mismanage the Pagami Creek Fire and/or the post-blowdown fuel situation that preceded the fire?" Please note that articulating a good thesis for your policy assessment will require you to have already completed your reading and thinking. Your paper should present your thesis followed by supporting arguments and evidence. The best way to think about the supporting arguments is as a hierarchy of contentions. Before you organize your contentions into an outline, consult A Good Argument Is a Hierarchy of Contentions.
Policy Paper: Your policy assessment and supporting arguments will be presented in a formal paper with appropriate manuscript format, proper citations, etc. Remember, answering the question you have been assigned requires you to take a position and make a case for it. Your policy assessment is the conclusion that you have drawn after examining all of the evidence. Your job in the paper is to state that conclusion and explain how you reached it. In other words, you need to set forth arguments and facts that demonstrate persuasively that your assessment is not merely an opinion but rather the correct assessment based upon the totality of the facts. Papers that take a position and argue a case are very common at all levels in law, business, journalism, and government. They may be called briefs (law), decision memoranda (business), editorials (journalism), or policy papers (government). Whatever they are called, good ones have certain characteristics.
Please deliver your policy paper in the form of a single e-mail attachment. Consult POLICY PAPERS: How to Succeed for more detailed instructions. To view a sample policy paper written for another course click here.
|
2026-01-28T05:59:52.696333
|
895,017
| 4.079771
|
http://www.developerfusion.com/article/145371/an-internal-application-message-bus-in-vbnet/
|
Modern programming languages and multi-core CPUs offer very efficient multi-threading. Using multithreading can improve performance and responsiveness of an application, but working with threads is quite difficult as they can make things much more complicated. One way to organise threads so that they co-operate without tripping over each other is to use a messaging mechanism to communicate between them.
Why use messages?
Messages provide a good model for communication between independent processes because we humans use them all the time. We naturally coordinate and co-operate by sending messages to each other. The messages can be synchronous (like a conversation) or asynchronous (like an email or a letter) or broadcast (like a radio programme). The messaging paradigm is easy for us to imagine and understand. In particular, it provides a natural way for us to think about how things interact. We can easily imagine a process that is responsible for a particular action, which is started when it receives a message. The message may contain information needed for the action. When the action is complete, the process can report the result by sending another message. We can imagine a simple, independent process, all it has to do is wait for the arrival of a message, carry out a task, and send a message saying it has finished. What could be simpler?
What is a message bus?
A message bus provides a means of message transmission in which the sender and receiver are not directly connected. For example, Ethernet is a message bus – all senders put all their messages on the bus and, locally at least, all receivers receive every message.
When messages are sent on a bus, there needs to be a way for the receiver(s) to select the messages they need to process. There are various ways to do this, but for the message bus implemented in this article we allow a sender to label a message with a sender role, a subject and a message type. These may appear to be quite arbitrary properties, but they fit in with the way in which the bus is used and provide a straightforward way for receivers to filter messages so that they process only those messages that are relevant.
This form of messaging is usually referred to as a Publish and Subscribe model.
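To make the idea concrete before we look at the VB.NET classes, here is a minimal, hypothetical publish-and-subscribe sketch in Python. It is not part of the bus described in this article: delivery here is synchronous and single-threaded for brevity, and all the names are invented. It only illustrates how labelling each message with a sender role, a subject and a type lets a receiver filter the broadcast stream.

# Hypothetical publish/subscribe sketch (not the article's VB.NET classes).
# Every subscriber sees every published message; its own filter decides delivery.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Message:
    sender_role: str
    subject: str
    msg_type: str
    content: str = ""

class Bus:
    def __init__(self):
        self._subscribers: List[Tuple[Callable[[Message], bool], Callable]] = []

    def subscribe(self, accepts: Callable[[Message], bool], callback):
        self._subscribers.append((accepts, callback))

    def publish(self, msg: Message):
        for accepts, callback in self._subscribers:
            if accepts(msg):
                callback(msg)

bus = Bus()
bus.subscribe(lambda m: m.sender_role == "clock" and m.msg_type == "time",
              lambda m: print("clock said:", m.content))
bus.publish(Message("clock", "hourchange", "time", "10>11"))   # delivered
bus.publish(Message("logger", "rotate", "admin", "daily"))     # filtered out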
How our bus will work
These are the essential characteristics of our message bus:
- The message bus operates within a single application, to send messages between independent worker threads.
- Any worker thread in the application can access the message bus.
- Any worker thread may send and receive messages using the bus.
- Messages are broadcast, so every receiver that is listening will get every message.
- The bus does not store messages so a receiver will not get any messages that were sent before it connects to the bus.
- The thread that sends a message is separated from the thread(s) that receive it, so sending and receiving are always asynchronous.
- A receiver can set a filter to select only relevant messages for delivery – subscribing to a subset of the messages sent on the bus.
- Worker processes that send and receive messages are not held up by other worker threads when they do so. We want our senders and receivers to be working at their tasks without having to wait for messages to be delivered and processed by other threads.
Classes of the message bus
These are the classes which make up the bus:
- cBus – the base class of the bus and all the other classes. This class is never instantiated directly, but holds class (Shared) variables and methods that provide some core functions of the bus.
- cBusLink – a component that provides the mechanism for delivering messages to receivers.
- cThread – a component that provides and controls a thread for use within the sender and receiver classes.
- cSender – the class that is used by senders to put messages into the message bus. Each worker process that sends messages uses a cSender object of its own.
- cReceiver – the class that is used by worker processes to subscribe and take delivery of messages from the bus.
- A filter class, used to apply subscription filters to incoming messages within a cReceiver.
- cMessage – objects of this class are sent and received. In our system the message content is a string, but the class could be extended through inheritance to provide richer content.
cBus and cBusLink – the core of the message bus
cBus is the base class for all the other classes in the implementation.
cBus is a virtual class – it is never itself instantiated. It contains only one class member, oBusLink, a shared instance of cBusLink. oBusLink is protected, which means it is accessible only to derived (child) classes of cBus. Both classes, which are central to the whole message bus, are very simple (see Listing 1).
Listing 1 – cBus and cBusLink classes
Public Class cBus

    '// ///////////////////////////////////////
    '// The BusLink class is used only as a means of
    '// propagating publication of a message from
    '// senders to receivers.
    Protected Class cBusLink

        '// Event published with new message
        Public Event NewMessage(ByVal oMessage As cMessage)

        '// Event published when bus is stopped
        Public Event StopBus()

        '// Flag to indicate that the bus has been
        '// stopped. Provides orderly shutdown
        Private bStopped As Boolean = False

        '// Method to publish a message
        Public Sub PublishMessage(ByVal oMessage As cMessage)
            If bStopped Then Exit Sub
            RaiseEvent NewMessage(oMessage)
        End Sub

        '// Method to stop the bus, for orderly shutdown
        Public Sub StopBusNow()
            bStopped = True
            RaiseEvent StopBus()
        End Sub

    End Class

    '// Global shared single instance of cBusLink
    '// used to send messages to all receivers
    Protected Shared oBusLink As New cBusLink

    '// Global shared flag indicating the bus has
    '// been stopped
    Protected Shared bStopped As Boolean = False

    '// ///////////////////////////////////////
    '// ID generator is used by other classes to
    '// generate unique sequence numbers
    Protected Class cIDGenerator
        Private _ID As Long = 0
        Public Function NextID() As Long
            _ID += 1
            Return _ID
        End Function
    End Class

    '// ////////////////////////////////////
    '// Public method to stop the bus before
    '// closedown. Ensures orderly closedown.
    Public Shared Sub StopBusNow()
        bStopped = True
        oBusLink.StopBusNow()
    End Sub

End Class
cBusLink is at the core of the message bus and is responsible for delivering messages to every recipient through the NewMessage event. As we shall see later, every cReceiver object holds a reference to a single shared cBusLink object and they all subscribe to its NewMessage event. When this event is fired, every cReceiver object is given a reference to the new message.
Objects of the cMessage class carry the message data from sender to recipient. In our implementation, the class has only a single string payload (see Listing 2) – but you can implement sub-types of cMessage with additional properties and methods for more sophisticated communication between senders and receivers.
Listing 2 – cMessage class
Public Class cMessage
    Inherits cBus

    '// /////////////////////////////////
    '// This class is a container for allocating
    '// unique message ids to each message
    Private Shared _oMsgID As New cIDGenerator

    '// Properties of the message, accessible to derived
    '// classes
    Protected _SenderRole As String = ""
    Protected _SenderRef As String = ""
    Protected _Subject As String = ""
    Protected _Type As String = ""
    Protected _Content As String = ""

    '// Message ID is private, it cannot be changed,
    '// even by derived classes
    Private _MsgID As Long

    '// /////////////////////////////
    '// Default constructor used only for
    '// derived classes
    Protected Sub New()
        _MsgID = _oMsgID.NextID
    End Sub

    '// /////////////////////////////
    '// Public constructor requires key message
    '// properties to be supplied. The message
    '// cannot be modified thereafter.
    Public Sub New(ByVal Sender As String, _
                   ByVal Subject As String, _
                   ByVal Type As String, _
                   Optional ByVal Content As String = "")
        _SenderRole = Sender
        _Subject = Subject
        _Type = Type
        _Content = Content
        _MsgID = _oMsgID.NextID
    End Sub

    '// /////////////////////////////////////////////////
    '// Property accessors - all read-only so values
    '// cannot be changed by any recipient.
    Public ReadOnly Property SenderRole() As String
        Get
            Return _SenderRole
        End Get
    End Property

    Public ReadOnly Property Subject() As String
        Get
            Return _Subject
        End Get
    End Property

    Public ReadOnly Property Type() As String
        Get
            Return _Type
        End Get
    End Property

    Public ReadOnly Property MsgID() As Long
        Get
            Return _MsgID
        End Get
    End Property

    Public ReadOnly Property Content() As String
        Get
            Return _Content
        End Get
    End Property

End Class
This class implementation is mostly straightforward, but some aspects are worth looking at more closely:
- The class inherits cBus to gain access to the protected class cIDGenerator which is declared in the base class.
- All the variables that store property values, except for MsgID, are declared Protected so that they can be accessed in a child class. MsgID is declared Private so its value cannot be changed by a child class.
cSender and its counterpart
cReceiver do all the hard work.
cSender is the class used by a worker thread to add messages to the bus. Before we look under the hood, let’s examine the public members of the class that a sending process will use.
First, a worker process that wants to send messages must create an instance of
cSender, providing the sender’s role as a parameter. The role allows for the possibility that there might be multiple worker threads performing the same role within the application. A recipient can filter messages based on the role of the sender, but does not need to know that there is more than one sender acting in that role.
Dim oSender As New cSender("clock")
Once instantiated, the
cSender object can be used to send messages on the bus:
Dim oMsg As New cMessage("clock", "hourchange", "time", "10>11")
oSender.SendMessage(oMsg)
In this case, the message has the type "time", the subject "hourchange" and the content "10>11".
Under the hood, the cSender implementation uses a queue to separate the sender process from the bus. When the worker thread sends a message, it is written to the injector queue, from where it is picked up by a separate injector thread and published through the bus link.
The injector runs on a separate thread, so that placing a message on the bus does not hold up the worker process. The injector thread is provided by a
cThread object which runs only when messages are waiting in the injector queue.
cThread is described in more detail below.
The implementation of the
cSender class is shown in Listing 3.
Listing 3 – cSender Class
Public Class cSender Inherits cBus '// ////////////////////////////////////////// '// Queue of messages waiting to be injected '// into the message bus. Each sender has its '// own private injector queue Private _oMsgQ As New System.Collections.Generic.Queue(Of cMessage) '// ///////////////////////////////////////// '// Reference to the global BusLink instance, used '// only to pick up the BusStopped event published '// by the bus when stopped. Private WithEvents oMyBusLink As cBusLink '// ///////////////////////////////////////// '// Event to inform owner the bus has stopped Public Event Stopped() '// Sender role, used to identify the sender and '// provide the key for filtering messages '// at the receiver. Private _Role As String Public ReadOnly Property Role() As String Get Return _Role End Get End Property #Region "Construct and destruct" '// ////////////////////////////////////////// '// Constructor with role (mandatory) Public Sub New(ByVal sRole As String) _Role = sRole '// Set the reference to the buslink to the '// shared instance of the single buslink. We '// need this reference to pick up the stop event oMyBusLink = oBusLink End Sub '// ////////////////////////////////////////////// '// This method is called when the bus is closed down Private Sub oBusLink_StopBus() Handles oMyBusLink.StopBus SyncLock _oMsgQ RaiseEvent Stopped() End SyncLock End Sub #End Region #Region "Sending messages" '// ///////////////////////////////////////// '// Method used by worker thread to place a '// new default cMessage object on the injector '// queue. Public Function SendNewMessage(ByVal Type As String, _ ByVal Subj As String, _ Optional ByVal Ref As String = "", _ Optional ByVal Content As String = "") As cMessage If BusStopped Then Return Nothing Dim oM As New cMessage(_Role, Type, Subj, Ref, Content) SendMessage(oM) Return oM End Function '// ////////////////////////////////////////// '// Method used by worker thread to place message '// object on the injector queue. Public Sub SendMessage(ByVal pMessage As cMessage) If BusStopped Then Exit Sub '// We do not allow Nothing to be sent If pMessage Is Nothing Then '// Do nothing '// We could throw an error here Else SyncLock _oMsgQ _oMsgQ.Enqueue(pMessage) '// Start the thread only if '// one message on the queue. If _oMsgQ.Count = 1 Then _oInjectorThread.Start() End If End SyncLock End If End Sub '// //////////////////////////////////////// '// Holds up the caller thread until all the messages '// have been injected into the bus Public Sub Flush() Do Until _oMsgQ.Count = 0 Threading.Thread.Sleep(2) Loop End Sub #End Region #Region "Message Injector" '// ////////////////////////////////////////// '// Functions run by the thread for injecting messages '// into the bus. The thread runs only when at '// least one message is waiting in the injector queue. Private WithEvents _oInjectorThread As New cThread '// ////////////////////////////////////////// '// Injector Thread fires Run event to place '// messages on the queue Private Sub _oInjectorThread_Run() Handles _oInjectorThread.Run InjectMessagesNow() End Sub '// /////////////////////////////////////////// '// When the injector thread runs, this function '// is called to push all the queued messages into '// the bus. Private Sub InjectMessagesNow() Dim oM As cMessage '// Loop until all messages in the '// queue have been injected into the '// bus. Do '// Check if stopped flag was set while '// going round loop. 
If BusStopped Then Exit Sub '// Get the next message off the '// injector queue SyncLock _oMsgQ If _oMsgQ.Count > 0 Then oM = _oMsgQ.Dequeue() Else oM = Nothing End If '// Release the lock so that the worker '// process can add new messages to '// the queue while we are publishing '// this message on the bus End SyncLock If oM Is Nothing Then '// Queue is empty, so finish the '// loop Exit Do End If '// Now we have got the message, we can '// send it using the single global '// cBusLink which is instantiated in the '// base class cBus. SyncLock oBusLink oBusLink.PublishMessage(oM) End SyncLock Loop End Sub #End Region Protected Overrides Sub Finalize() '// Close down the injector thread _oInjectorThread.StopThread() MyBase.Finalize() End Sub End Class
SendMessage is used by a worker process to place messages on the injector queue. The queue class is not thread-safe, so SyncLock is used to protect the queue from simultaneous use by another thread. The injector thread is started only when a message is added to an empty queue, and this fires the injector thread's Run event.
The private method
_oInjectorThread_Run handles the injector thread
Run event. The method takes all the waiting messages from the injector queue, placing them in turn on the bus by using the BusLink’s
PublishMessage method. When the method exits, the thread is blocked within
cThread until another message is placed on the empty queue. If a message is added to the injector queue while an earlier message is being sent on the bus, it will be included in the sending loop without needing the
Run event to fire again.
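Putting the sender pieces together, a worker thread might use cSender along these lines (a sketch only; the role, type, subject and content strings are illustrative):

Dim oSender As New cSender("monitor")

'// SendNewMessage builds and queues a cMessage and returns it
Dim oMsg As cMessage = oSender.SendNewMessage("status", "startup")

'// SendMessage queues a ready-made cMessage object
oSender.SendMessage(New cMessage("monitor", "heartbeat", "status", "OK"))

'// Block the worker until the injector thread has emptied the queue,
'// for example just before the worker finishes
oSender.Flush()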
Objects of the cReceiver class are used by worker processes to receive messages from the bus.
The process that creates the
cReceiver object can choose to set filters so that only relevant messages are delivered. More detail on filtering is given below.
When the receiver object connects to the bus, it sets its own private member variable
_BusLinkRef to refer to the shared member oBusLink.
_BusLinkRef is declared
WithEvents so that the
NewMessage event of the
cBusLink can be handled.
The thread that owns the receiver can set a
cFilter object on the receiver. Then every message received through the
NewMessage event is checked against the filter and, if it passes, it is added to the receiver’s incoming message queue, waiting to be delivered. The filter can be changed during the run.
Messages are delivered and processed in one of three ways:
- The worker thread calls GetNextMessage to return the next message from the queue. If there are no messages waiting, the method returns Nothing.
- The worker thread calls DeliverWaitingMessages to deliver all queued messages through the MessageReceived event. The events are raised on the worker thread.
- The creator/owner calls the StartAsync method to request that the receiver object provides a separate worker thread to raise the MessageReceived event when new messages arrive. The event is raised on a thread provided by a cThread object.
Using DeliverWaitingMessages means that the receiver's worker thread must set up its own processing loop, for example by having its own timer to repeat the loop. This is appropriate when, for example, the thread needs to interact with the GUI; a Timer component on a form could provide the thread.
In contrast, using
StartAsync means that the
cReceiver object will create its own internal worker thread that raises the MessageReceived event.
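The sketch below shows all three delivery styles in one place; in a real worker you would normally pick just one of them, and the class name, role and handler used here are illustrative only:

Public Class cClockWatcher

    '// Declared WithEvents so the MessageReceived event can be handled below
    Private WithEvents _oReceiver As New cReceiver

    Public Sub Open()
        '// Only accept messages sent by the "clock" role
        _oReceiver.Filter = New cRoleEquals("clock")
        _oReceiver.Connect()

        '// 1. Polling: pull the next waiting message, if any
        Dim oM As cMessage = _oReceiver.GetNextMessage()
        If oM IsNot Nothing Then Console.WriteLine("Polled: " & oM.Subject)

        '// 2. Pull delivery: raise MessageReceived on this thread for
        '//    every queued message (typically called from a timer loop)
        _oReceiver.DeliverWaitingMessages()

        '// 3. Asynchronous delivery: the receiver's own cThread raises
        '//    MessageReceived as messages arrive
        _oReceiver.StartAsync()
    End Sub

    '// Handler invoked once for each delivered message
    Private Sub OnMessage(ByVal oMessage As cMessage) _
        Handles _oReceiver.MessageReceived
        Console.WriteLine(oMessage.Subject & ": " & oMessage.Content)
    End Sub

End Class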
Listing 4 – cReceiver class
Public Class cReceiver Inherits cBus '// ////////////////////////////////////// '// Id generator for all cReceiver objects Private Shared _oRecId As New cIDGenerator '// ////////////////////////////////////// '// Event used to deliver a message to the '// message handler function Public Event MessageReceived(ByVal oMessage As cMessage) '// ////////////////////////////////////// '// Event used to indicate the bus has stopped, '// used to ensure orderly shutdown of the bus Public Event Stopped() Public ReadOnly Property IsStopped() As Boolean Get Return BusStopped End Get End Property '// ////////////////////////////////////// '// Message queue holding the messages '// waiting to be delivered Private _MQueue As New System.Collections.Generic.Queue(Of cMessage) '// /////////////////////////////////////////// '// Filter set by the recipient to select '// messages. Fileter can be by specific role(s), '// subjects(s) or type(s) or using more specialised '// filters. Filters can be changed at any time. The '// default no filter allows all messages through. Public Filter As cFilter = Nothing '// ////////////////////////////////////////// '// Reference to the single global buslink '// so that the receiver can pick up published '// messages from the bus Private WithEvents _BusLinkRef As cBusLink '// Flag to indicate that this object has been '// finalised and is closing. Private _Closing As Boolean = False Private _RaiseStopEvent As Boolean = False '// ///////////////////////////////////////// '// Unique identifier of this receiver object Private _ID As Long '// ///////////////////////////////////////// '// Counts of number of messages received '// and delivered Private _BCount As Long = 0 ' Messages from the Bus Private _RCount As Long = 0 ' Messages received onto the queue Private _DCount As Long = 0 ' Messages delivered to the worker '// ////////////////////////////////// '// Constructor Public Sub New() _ID = _oRecId.NextID End Sub '// /////////////////////////////////// '// Establishes connection to the bus so that '// message delivery can start Public Sub Connect() '// ///////////////////////////////////////// '// Set the buslink variable to refer to the '// shared buslink so that it delivers '// messages through the event handler _BusLinkRef = oBusLink '// NOTE: oBus is a direct reference to '// the protected shared class member. End Sub '// //////////////////////////////////////// '// Breaks the connection with the bus '// so that messages are no longer '// received. Public Sub Disconnect() _BusLinkRef = Nothing End Sub '// ///////////////////////////////// '// Accessor methods for the readonly '// properties Public ReadOnly Property BCount() As Long '// Bus message count Get Return _BCount End Get End Property Public ReadOnly Property RCount() As Long '// Received message count Get Return _RCount End Get End Property Public ReadOnly Property DCount() As Long '// Delivered message count Get Return _DCount End Get End Property Public ReadOnly Property QCount() As Long '// Queued (waiting) message count Get If _MQueue IsNot Nothing Then Return _MQueue.Count Else Return 0 End If End Get End Property Public ReadOnly Property ID() As Long '// Unique ID number of this receiver Get Return _ID End Get End Property Public Function MessagesWaiting() As Boolean '// Helper property returns true if there '// are messages waiting Return QCount > 0 End Function #Region "Message arrival" '// ////////////////////////////////// '// This method handles the new message '// event from the bus. 
The message is '// queued for delivery. Private Sub oBusLink_NewMessage( _ ByVal oMessage As cMessage _ ) Handles _BusLinkRef.NewMessage '// Discard message if closing, or the bus has stopped If _Closing Then Exit Sub If BusStopped Then Exit Sub _BCount += 1 '// //////////////////////////// '// Check against the filter. '// The message must be included by the filter '// otherwise it will not be delivered. Select Case True Case Filter Is Nothing, Filter.bInclude(oMessage) '// /////////////////////////////// '// New message has passed the filter, so '// add it to the message queue waiting '// for delivery to the worker process. AddToQueue(oMessage) End Select End Sub '// //////////////////////////////// '// Method used to add messages '// to the message queue when they arrive '// from the message bus. Private Sub AddToQueue(ByVal oMessage As cMessage) '// //////////////////////////////////////////// '// Check if the queue exists - if not, then '// exit without adding a message. If _MQueue Is Nothing Then Exit Sub '// //////////////////////////////////////////// '// Check if closing or stopped, if so exit If BusStopped Then Exit Sub If _Closing Then Exit Sub Dim bStartDelivery As Boolean '// //////////////////////////////////////////// '// SyncLock the queue to guarantee exclusive '// access, then add the message SyncLock _MQueue _RCount += 1 _MQueue.Enqueue(oMessage) '// //////////////////////////////////////////////// '// We start the delivery thread if async AND '// this is the first message in the queue bStartDelivery = _AsyncMode And _MQueue.Count = 1 End SyncLock '// ////////////////////////////// '// Check if we need to start the delivery thread '// which we do only in async mode and if this is '// the first message in the queue If bStartDelivery Then _DeliveryThread.Start() End If End Sub #End Region #Region "Message delivery" '// //////////////////////////////// '// '// Message delivery can be made in these '// ways: '// * Asynchronously on a provided thread '// - call StartAsync to enable this '// - messages are delivered through MessageReceived event '// '// * By a call from the worker thread '// - use GetNextMessage to retrieve the message '// '// GetNextMessage returns the next '// message as the function result. '// It returns Nothing if '// there is no message in the queue '// '// //////////////////////////////// '// Delivery thread is used with asynch delivery only Private WithEvents _DeliveryThread As cThread = Nothing Private _AsyncMode As Boolean = False '//////////////////////////////////// '// Starts Asynchronous delivery through the NewMessage event. '// Called by the creator/owner to initiate a new thread delivering '// messages from this receiver. Public Sub StartAsync() '// Do nothing if closing, stopped or already in asyinc mode. If _Closing Then Exit Sub If BusStopped Then Exit Sub If _AsyncMode Then Exit Sub _AsyncMode = True '// Create and start the delivery thread. If _DeliveryThread Is Nothing Then _DeliveryThread = New cThread _DeliveryThread.Start() End Sub '// /////////////////////////////////////////////// '// Picks up the next message from the queue '// if any and returns it. Returns Nothing '// if there is no message. 
Public Function GetNextMessage() As cMessage '// Do not return anything if closing or stopped If _Closing Then Return Nothing If BusStopped Then Return Nothing Dim oM As cMessage '// Lock the queue and get the next message SyncLock _MQueue If _MQueue.Count > 0 Then oM = _MQueue.Dequeue _DCount += 1 Else oM = Nothing End If End SyncLock '// Return the message (if any) Return oM End Function '// /////////////////////////////////////////////// '// This event handler is called when the thread runs '// - only when messages are waiting to be delivered in '// async mode Private Sub _DeliveryThread_Run() Handles _DeliveryThread.Run DeliverWaitingMessages() End Sub '// /////////////////////////////////////////////// '// Delivers all the messages in the incoming '// message queue using the MessageReceived event Public Sub DeliverWaitingMessages() '// Raise the stop event if the bus has been stopped If BusStopped Then '// Inform the delivery thread If _RaiseStopEvent Then RaiseEvent Stopped() _RaiseStopEvent = False End If Exit Sub End If '// Do nothing if closing If _Closing Then Exit Sub '// The queue may be nothing , so simply '// exit and try again on the cycle If _MQueue Is Nothing Then Exit Sub Dim oM As cMessage '// Retrieve all the messages and deliver them '// using the message received event. Do '// Lock the queue before dequeuing the message SyncLock _MQueue If _MQueue.Count > 0 Then oM = _MQueue.Dequeue Else oM = Nothing End If End SyncLock '// /////// '// After releasing the lock we '// can deliver the message. If oM IsNot Nothing Then _DCount += 1 RaiseEvent MessageReceived(oM) End If '// If the queue was not empty then loop back for the '// next message Loop Until oM Is Nothing End Sub #End Region #Region "Stats Report" '//////////////////////////////////////////////// '// This sub simply publishes a message of '// stats about this receiver. Public Sub StatsReport() If BusStopped Then Exit Sub Dim sRpt As String sRpt = "Report from Receiver #" & Me.ID sRpt &= "|BUS=" & _BCount sRpt &= "|REC=" & _RCount sRpt &= "|DEL=" & _DCount sRpt &= "|Q=" & _MQueue.Count sRpt &= "|Closing=" & _Closing Dim s As New cSender("Receiver#" & ID) s.SendNewMessage("STATS", "STATS", sRpt) s.Flush() s = Nothing End Sub #End Region '// /////////////////////////////////// '// Handler for the stopbus event. Do '// not deliver any more messages once the '// bus has been stopped. Private Sub oBusLinkRef_StopBus() Handles _BusLinkRef.StopBus _Closing = True '_DeliveryTimer = Nothing _AsyncMode = False _RaiseStopEvent = True End Sub '// //////////////////////////////////// '// Finalise to tidy up resources when being disposed Protected Overrides Sub Finalize() _DeliveryThread.StopThread() _Closing = True _AsyncMode = False _MQueue = Nothing MyBase.Finalize() End Sub End Class
The cThread class provides a thread and the control methods needed to block and release the thread as required.
By default, the thread is blocked. The class provides a method,
Start, which unblocks the thread. The thread immediately raises the
Run event to carry out the processing required, and then blocks again until the
Start method is called again, when it repeats the Run event.
In our message bus,
cThread is used in
cSender to inject messages onto the bus, and in
cReceiver to deliver messages, when operating in Async mode. In both of these classes the
Run event handler picks messages off a queue until it is empty, then exits. It is quite likely that new messages are added to the queue while the handler is running, and these are picked up in the handler loop. Eventually, the queue is empty and if Start has not been called again, the thread blocks until it is.
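As a concrete (and purely illustrative) example of this pattern, a consumer class can drive cThread in the same way that cSender and cReceiver do; the queue and the work done here are assumptions for the sketch, not part of the library:

'// Illustrative consumer of cThread (not part of the library)
Public Class cQueueWorker

    '// Work items waiting to be processed
    Private _oQ As New System.Collections.Generic.Queue(Of String)

    '// The worker thread; its Run event fires each time Start() unblocks it
    Private WithEvents _oThread As New cThread

    '// Called by the owning thread to queue an item of work
    Public Sub Add(ByVal sItem As String)
        SyncLock _oQ
            _oQ.Enqueue(sItem)
            '// Wake the thread only when the queue was previously empty
            If _oQ.Count = 1 Then _oThread.Start()
        End SyncLock
    End Sub

    '// Runs on the cThread-provided thread: drain the queue, then
    '// return so that the thread blocks again until the next Start()
    Private Sub _oThread_Run() Handles _oThread.Run
        Do
            Dim sItem As String
            SyncLock _oQ
                If _oQ.Count = 0 Then Exit Do
                sItem = _oQ.Dequeue()
            End SyncLock
            Console.WriteLine("Processed " & sItem)
        Loop
    End Sub

End Class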
The implementation of the class is shown in Listing 5.
Listing 5 – cThread class
'// Thread, ThreadState, EventWaitHandle and EventResetMode
'// come from the System.Threading namespace
Imports System.Threading

Public Class cThread
    Inherits cBus

    Private WithEvents _BusLinkRef As cBusLink = oBusLink

    Private Shared iThreadCount As Long = 0

    '// Event fired to execute the thread's
    '// assigned processes.
    Public Event Run()

    '// Thread object provides the thread
    Private _Thread As New Thread(AddressOf RunThread)

    '// Signal object to block the thread
    '// when there are no messages to be delivered
    Private _Signal As New EventWaitHandle(False, EventResetMode.AutoReset)

    '// Flag to indicate thread has been stopped
    Private bThreadStopped As Boolean = False

    '// Start the thread on creation of the object
    Public Sub New()
        _Thread.Start()
    End Sub

    '// Start called by owner to
    '// unblock this thread.
    Public Sub Start()
        If _Thread.ThreadState = ThreadState.Unstarted Then _Thread.Start()
        SyncLock Me
            _Signal.Set()
        End SyncLock
    End Sub

    '// Stop called by owner to close
    '// down thread
    Public Sub StopThread()
        bThreadStopped = True
        _Signal.Set()
    End Sub

    '// Method executed by the thread. This is
    '// a repeated loop until the bus is stopped
    Private Sub RunThread()
        Do
            '// The signal blocks the thread until
            '// it is released by the Start method
            _Signal.WaitOne()
            If bThreadStopped Then
                Exit Sub
            End If
            '// Raise the thread event that will
            '// do the work.
            RaiseEvent Run()
        Loop
    End Sub

    Private Sub _BusLinkRef_StopBus() Handles _BusLinkRef.StopBus
        StopThread()
    End Sub

End Class
cFilter objects are used by
cReceiver to apply filtering to incoming messages. The base
cFilter class is declared MustInherit, so it cannot be instantiated. It is only by defining a child class that applies some filtering logic that messages get filtered. This is how it works:
- The base class, cFilter, defines a Protected MustOverride method, bMatches, which takes a cMessage object as a parameter. In a child class this method is overridden to implement specific filtering logic.
- cFilter defines a Public method, bInclude, which takes a message object as a parameter and returns True if the message is to be included, and False if not. This is the method used by cReceiver to check if a message passes the filter. Apart from testing its own bMatches value, this method also contains the logic to check other cFilter objects that have been attached in And / Or collections.
- Four further methods, And_, Or_, And_Not and Or_Not, provide the means to add other filter objects to the And/Or collections of this filter.

Using the And_, Or_ and related methods makes it easy to build compound logical tests from basic filter components. For example, if I have two filter objects FilterA and FilterB, they can be combined as FilterA.Or_(FilterB) or FilterA.And_(FilterB). It is also possible to combine several chains of filters. For example, FilterA.And_(FilterB.Or_Not(FilterC)) implements the filter condition A and (B or not C).
Actual filtering classes implemented
Various specialised classes of
cFilter are implemented to provide filtering on sender role, message type and subject. These include, for example,
cSubjectEquals, cRoleEquals and cTypeContains. As their names suggest, these filters check that the key fields of the message match or contain a given string.
A worker process that uses
cReceiver can apply filters to incoming messages simply by setting the Filter property of the receiver:
Dim oReceiver As New cReceiver
oReceiver.Filter = New cRoleEquals("monitor")
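Filters can also be combined using the And_, Or_ and related methods described above; for instance (a sketch only, with illustrative role, type and subject strings):

Dim oRole As New cRoleEquals("clock")
Dim oType As New cTypeEquals("time")
Dim oSubj As New cSubjectContains("tick")

'// Accept messages where the role is "clock" AND the type is "time"
'// AND the subject does NOT contain "tick"
oReceiver.Filter = oRole.And_(oType).And_Not(oSubj)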
Inside cFilter and its derived classes
The cFilter class defines the protected MustOverride method
bMatches. The derived classes override
bMatches, providing the appropriate code to determine the match. For example, in the case of the
cSubjectContains class, the overriding
bMatches method is:
Protected Overrides Function bMatches(ByVal oMessage As cMessage) As Boolean
    Return oMessage.Subject.Contains(FilterString)
End Function
If you need to have a more specialised filtering mechanism in your application, it is easy to define a derived class of
cFilter that implements whatever logic you need in bMatches.
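For instance, a hypothetical cContentContains filter (not one of the classes in Listing 6) that matches on the message Content could look like this:

Public Class cContentContains
    Inherits cFilter

    Public FilterString As String

    Public Sub New(ByVal sFilter As String)
        FilterString = sFilter
    End Sub

    '// Match when the message content contains the filter string
    Protected Overrides Function bMatches( _
        ByVal oMessage As cMessage) As Boolean
        Return oMessage.Content.Contains(FilterString)
    End Function

End Class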
Listing 6 – cFilter class and derived classes
'// The filter base class is used to implement '// message filtering on incoming messages '// at each receiver. Filters can be grouped in '// AND and OR groups - the message is '// included if it matches all filters in the '// AND group or any filter in the OR group. Public MustInherit Class cFilter Inherits cBus '// A collection of filters which this filter must AND '// with to allow the message through Private oAnds As New System.Collections.Generic.List(Of cFilter) '// A collection of filters which this filter must OR '// with to allow the message through Private oOrs As New System.Collections.Generic.List(Of cFilter) '// Check if the message is included by this filter Public Function bInclude(ByVal oMessage As cMessage) As Boolean Dim bResult As Boolean '// First, test this filter alone bResult = bMatches(oMessage) Dim oFF As cFilter '// If this filter matches, then check all the '// ANDs to see if they also match If bResult Then For Each oFF In oAnds bResult = oFF.bMatches(oMessage) '// As soon as we find the first failure to '// match we know the result is a non-match '// for this filter and all its ANDs If Not bResult Then Exit For Next End If '// If all the ANDS were true, then the whole result '// is true regardless of the OR result. If bResult Then Return True '// The ANDs did not match, so now '// we find if any one OR matches, and if so '// the result is true For Each oFF In oOrs bResult = oFF.bInclude(oMessage) If bResult Then Return True Next oFF '// No match on any of the ORS, so '// the message does not match this filter Return False End Function '// /////////////////////////////////// '// This method must be overridden in child '// classes to implement the matching test. Protected MustOverride Function bMatches( _ ByVal omessage As cMessage) As Boolean '// /////////////////////////////////// '// These methods add a given filter to the '// ANDs or ORs collections to build filtering '// logic. Public Function And_(ByVal oFilter As cFilter) As cFilter oAnds.Add(oFilter) Return Me End Function Public Function Or_(ByVal ofilter As cFilter) As cFilter oOrs.Add(ofilter) Return Me End Function Public Function Or_Not(ByVal ofilter As cFilter) As cFilter oOrs.Add(Not_(ofilter)) Return Me End Function Public Function And_Not(ByVal oFilter As cFilter) As cFilter oAnds.Add(Not_(oFilter)) Return Me End Function '// '// /////////////////////////////////////// '// /////////////////////////////////////// '// Class and function to provide negation '// of a filter condition Private Class cNot Inherits cFilter Private oNotFilter As cFilter Public Sub New(ByVal oFilter As cFilter) oNotFilter = oFilter End Sub Protected Overrides Function bMatches(ByVal omessage As cMessage) As Boolean Return Not oNotFilter.bMatches(omessage) End Function End Class Private Function Not_(ByVal oFilter As cFilter) As cFilter Return New cNot(oFilter) End Function '// '// ///////////////////////////////////////////// End Class #Region "Filter implementations" '// ///////////////////////////////////////// '// Derived specialised classes for implementing '// different specific filters. 
Public Class cTypeContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type.Contains(FilterString) End Function End Class Public Class cTypeEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type = FilterString End Function End Class Public Class cRoleContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.SenderRole.Contains(FilterString) End Function End Class Public Class cRoleEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.SenderRole = FilterString End Function End Class Public Class cSubjectContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Subject.Contains(FilterString) End Function End Class Public Class cSubjectEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Subject = FilterString End Function End Class Public Class cRoleTypeSubjectFilter Inherits cFilter Public sRole As String = "" Public sType As String = "" Public sSubject As String = "" Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type = sType _ And oMessage.SenderRole = sRole _ And oMessage.Subject = sSubject End Function End Class '// '/////////////////////////////////////////////// #End Region
A demo application
The demo application included in the zip file is a simple Windows Forms application containing a number of components that communicate with each other via the MessageBus:
- The main control form provides buttons for opening the other form types
- A mouse tracker form, that monitors mouse movements over the form and sends mouse movement messages on the bus
- A clock object that sends a time message whenever the time ticks past a tenth of a second, a second, a minute or an hour.
- A mouse follower form, that monitors mouse movement messages from the bus and positions a red box on the form at the position indicated by the message. This form also receives clock events from the bus and displays the time, as sent out by the clock object.
- A message sender form, which can generate bus messages of different types at a frequency set by the user
- A message receiver form, that lists messages received, optionally filtered on attributes set by the user
The user can open as many sender forms, receiver forms and mouse follower forms as they wish, and can set the message types to be sent and received. Each of the forms operates independently of the others.
http://en.wikipedia.org/wiki/Evolutionary_origin_of_religions
Evolutionary origin of religions
Nonhuman religious behaviour
Humanity’s closest living relatives are common chimpanzees and bonobos. These primates share a common ancestor with humans that lived between four and six million years ago. It is for this reason that chimpanzees and bonobos are viewed as the best available surrogate for this common ancestor. Barbara King argues that while non-human primates are not religious, they do exhibit some traits that would have been necessary for the evolution of religion. These traits include high intelligence, a capacity for symbolic communication, a sense of social norms, realization of "self" and a concept of continuity. There is inconclusive evidence that Homo neanderthalensis may have buried their dead, which would be evidence of the use of ritual. The use of burial rituals is evidence of religious activity, but there is no other evidence that religion existed in human culture before humans reached behavioral modernity.
Marc Bekoff, Professor Emeritus of Ecology and Evolutionary Biology at the University of Colorado, Boulder, argues that many species grieve death and loss.
Setting the stage for human religion
Increased brain size
In this set of theories, the religious mind is one consequence of a brain that is large enough to formulate religious and philosophical ideas. During human evolution, the hominid brain tripled in size, peaking 500,000 years ago. Much of the brain's expansion took place in the neocortex. This part of the brain is involved in processing higher order cognitive functions that are connected with human religiosity. The neocortex is associated with self-consciousness, language and emotion. According to Dunbar's theory, the relative neocortex size of any species correlates with the level of social complexity of the particular species. The neocortex size correlates with a number of social variables that include social group size and complexity of mating behaviors. In chimpanzees the neocortex occupies 50% of the brain, whereas in modern humans it occupies 80% of the brain.
Robin Dunbar argues that the critical event in the evolution of the neocortex took place at the speciation of archaic homo sapiens about 500,000 years ago. His study indicates that only after the speciation event is the neocortex large enough to process complex social phenomena such as language and religion. The study is based on a regression analysis of neocortex size plotted against a number of social behaviors of living and extinct hominids.
Stephen Jay Gould suggests that religion may have grown out of evolutionary changes which favored larger brains as a means of cementing group coherence among savannah hunters, once that larger brain enabled reflection on the inevitability of personal mortality.
Lewis Wolpert argues that causal beliefs that emerged from tool use played a major role in the evolution of belief. The manufacture of complex tools requires creating a mental image of an object which does not exist naturally before actually making the artifact. Furthermore, one must understand how the tool would be used, that requires an understanding of causality. Accordingly, the level of sophistication of stone tools is a useful indicator of causal beliefs. Wolpert contends use of tools composed of more than one component, such as hand axes, represents an ability to understand cause and effect. However, recent studies of other primates indicate that causality may not be a uniquely human trait. For example, chimpanzees have been known to escape from pens closed with multiple latches, which was previously thought could only have been figured out by humans who understood causality. Chimpanzees are also known to mourn the dead, and notice things that have only aesthetic value, like sunsets, both of which may be considered to be components of religion or spirituality. The difference between the comprehension of causality by humans and chimpanzees is one of degree. The degree of comprehension in an animal depends upon the size of the prefrontal cortex: the greater the size of the prefrontal cortex the deeper the comprehension.
Development of language
Religion requires a system of symbolic communication, such as language, to be transmitted from one individual to another. Philip Lieberman states "human religious thought and moral sense clearly rest on a cognitive-linguistic base". From this premise science writer Nicholas Wade states:
- "Like most behaviors that are found in societies throughout the world, religion must have been present in the ancestral human population before the dispersal from Africa 50,000 years ago. Although religious rituals usually involve dance and music, they are also very verbal, since the sacred truths have to be stated. If so, religion, at least in its modern form, cannot pre-date the emergence of language. It has been argued earlier that language attained its modern state shortly before the exodus from Africa. If religion had to await the evolution of modern, articulate language, then it too would have emerged shortly before 50,000 years ago."
Another view distinguishes individual religious belief from collective religious belief. While the former does not require prior development of language, the latter does. The individual human brain has to explain a phenomenon in order to comprehend and relate to it. This activity predates by far the emergence of language and may have caused it. The theory is that belief in the supernatural emerges from hypotheses arbitrarily assumed by individuals to explain natural phenomena that cannot be explained otherwise. The resulting need to share individual hypotheses with others leads eventually to collective religious belief. A socially accepted hypothesis becomes dogma, backed by social sanction.
Morality and group living
Frans de Waal and Barbara King both view human morality as having grown out of primate sociality. Though morality awareness may be a unique human trait, many social animals, such as primates, dolphins and whales, have been known to exhibit pre-moral sentiments. According to Michael Shermer, the following characteristics are shared by humans and other social animals, particularly the great apes:
- "attachment and bonding, cooperation and mutual aid, sympathy and empathy, direct and indirect reciprocity, altruism and reciprocal altruism, conflict resolution and peacemaking, deception and deception detection, community concern and caring about what others think about you, and awareness of and response to the social rules of the group".
De Waal contends that all social animals have had to restrain or alter their behavior for group living to be worthwhile. Pre-moral sentiments evolved in primate societies as a method of restraining individual selfishness and building more cooperative groups. For any social species, the benefits of being part of an altruistic group should outweigh the benefits of individualism. For example, lack of group cohesion could make individuals more vulnerable to attack from outsiders. Being part of a group may also improve the chances of finding food. This is evident among animals that hunt in packs to take down large or dangerous prey.
All social animals have hierarchical societies in which each member knows its own place. Social order is maintained by certain rules of expected behavior and dominant group members enforce order through punishment. However, higher order primates also have a sense of reciprocity and fairness. Chimpanzees remember who did them favors and who did them wrong. For example, chimpanzees are more likely to share food with individuals who have previously groomed them.
Chimpanzees live in fission-fusion groups that average 50 individuals. It is likely that early ancestors of humans lived in groups of similar size. Based on the size of extant hunter-gatherer societies, recent Paleolithic hominids lived in bands of a few hundred individuals. As community size increased over the course of human evolution, greater enforcement to achieve group cohesion would have been required. Morality may have evolved in these bands of 100 to 200 people as a means of social control, conflict resolution and group solidarity. According to Dr. de Waal, human morality has two extra levels of sophistication that are not found in primate societies. Humans enforce their society’s moral codes much more rigorously with rewards, punishments and reputation building. Humans also apply a degree of judgment and reason not otherwise seen in the animal kingdom.
Psychologist Matt J. Rossano argues that religion emerged after morality and built upon morality by expanding the social scrutiny of individual behavior to include supernatural agents. By including ever-watchful ancestors, spirits and gods in the social realm, humans discovered an effective strategy for restraining selfishness and building more cooperative groups. The adaptive value of religion would have enhanced group survival. Rossano is referring here to collective religious belief and the social sanction that institutionalized morality. According to Rossano's teaching, individual religious belief is thus initially epistemological, not ethical, in nature.
Evolutionary psychology of religion
There is general agreement among cognitive scientists that religion is an outgrowth of brain architecture that evolved early in human history. However, there is disagreement on the exact mechanisms that drove the evolution of the religious mind. The two main schools of thought hold that either religion evolved due to natural selection and has selective advantage, or that religion is an evolutionary byproduct of other mental adaptations. Stephen Jay Gould, for example, believed that religion was an exaptation or a spandrel, in other words that religion evolved as byproduct of psychological mechanisms that evolved for other reasons.
Such mechanisms may include the ability to infer the presence of organisms that might do harm (agent detection), the ability to come up with causal narratives for natural events (etiology), and the ability to recognize that other people have minds of their own with their own beliefs, desires and intentions (theory of mind). These three adaptations (among others) allow human beings to imagine purposeful agents behind many observations that could not readily be explained otherwise, e.g. thunder, lightning, movement of planets, complexity of life, etc. The emergence of collective religious belief identified the agents as deities that standardized the explanation.
Some scholars have suggested that religion is genetically "hardwired" into the human condition. One controversial hypothesis, the God gene hypothesis, states that some variants of a specific gene, the VMAT2 gene, predispose to spirituality.
Another view is based on the concept of the triune brain: the reptilian brain, the limbic system, and the neocortex, proposed by Paul D. MacLean. Collective religious belief draws upon the emotions of love, fear, and gregariousness and is deeply embedded in the limbic system through sociobiological conditioning and social sanction. Individual religious belief utilizes reason based in the neocortex and often varies from collective religion. The limbic system is much older in evolutionary terms than the neocortex and is therefore stronger than it, much as the reptilian brain is stronger than both the limbic system and the neocortex. Reason is pre-empted by emotional drives. The religious feeling in a congregation is emotionally different from individual spirituality even though the congregation is composed of individuals. Belonging to a collective religion is culturally more important than individual spirituality though the two often go hand in hand. This is one of the reasons why religious debates are likely to be inconclusive.
Yet another view is that the behaviour of people who participate in a religion makes them feel better and this improves their fitness, so that there is a genetic selection in favor of people who are willing to believe in religion. Specifically, rituals, beliefs, and the social contact typical of religious groups may serve to calm the mind (for example by reducing ambiguity and the uncertainty due to complexity) and allow it to function better when under stress. This would allow religion to be used as a powerful survival mechanism, particularly in facilitating the evolution of hierarchies of warriors, which if true, may be why many modern religions tend to promote fertility and kinship.
Still another view is that human religion was a product of an increase in dopaminergic functions in the human brain and a general intellectual expansion beginning around 80 kya. Dopamine promotes an emphasis on distant space and time, which is critical for the establishment of religious experience. While the earliest shamanic cave paintings date back around 40 kya, the use of ochre for rock art predates this and there is clear evidence for abstract thinking along the coast of South Africa by 80 kya.
Prehistoric evidence of religion
When humans first became religious remains unknown, but there is credible evidence of religious behavior from the Middle Paleolithic era (300–500 thousand years ago) and possibly earlier.
The earliest evidence of religious thought is based on the ritual treatment of the dead. Most animals display only a casual interest in the dead of their own species. Ritual burial thus represents a significant change in human behavior. Ritual burials represent an awareness of life and death and a possible belief in the afterlife. Philip Lieberman states "burials with grave goods clearly signify religious practices and concern for the dead that transcends daily life."
The earliest evidence for treatment of the dead comes from Atapuerca in Spain. At this location the bones of 30 individuals believed to be Homo heidelbergensis have been found in a pit. Neanderthals are also contenders for the first hominids to intentionally bury the dead. They may have placed corpses into shallow graves along with stone tools and animal bones. The presence of these grave goods may indicate an emotional connection with the deceased and possibly a belief in the afterlife. Neanderthal burial sites include Shanidar in Iraq and Krapina in Croatia and Kebara Cave in Israel.
The earliest known burial of modern humans is from a cave in Israel located at Qafzeh. Human remains have been dated to 100,000 years ago. Human skeletons were found stained with red ochre. A variety of grave goods were found at the burial site. The mandible of a wild boar was found placed in the arms of one of the skeletons. Philip Lieberman states:
- "Burial rituals incorporating grave goods may have been invented by the anatomically modern hominids who emigrated from Africa to the Middle East roughly 100,000 years ago".
Matt Rossano suggests that the period between 80,000 and 60,000 years ago, when humans retreated from the Levant back to Africa, was a crucial period in the evolution of religion.
The use of symbolism
The use of symbolism in religion is a universal established phenomenon. Archeologist Steven Mithen contends that it is common for religious practices to involve the creation of images and symbols to represent supernatural beings and ideas. Because supernatural beings violate the principles of the natural world, there will always be difficulty in communicating and sharing supernatural concepts with others. This problem can be overcome by anchoring these supernatural beings in material form through representational art. When translated into material form, supernatural concepts become easier to communicate and understand. Due to the association of art and religion, evidence of symbolism in the fossil record is indicative of a mind capable of religious thoughts. Art and symbolism demonstrates a capacity for abstract thought and imagination necessary to construct religious ideas. Wentzel van Huyssteen states that the translation of the non-visible through symbolism enabled early human ancestors to hold beliefs in abstract terms.
Some of the earliest evidence of symbolic behavior is associated with Middle Stone Age sites in Africa. From at least 100,000 years ago, there is evidence of the use of pigments such as red ochre. Pigments are of little practical use to hunter gatherers, thus evidence of their use is interpreted as symbolic or for ritual purposes. Among extant hunter gatherer populations around the world, red ochre is still used extensively for ritual purposes. It has been argued that it is universal among human cultures for the color red to represent blood, sex, life and death.
The use of red ochre as a proxy for symbolism is often criticized as being too indirect. Some scientists, such as Richard Klein and Steven Mithen, only recognize unambiguous forms of art as representative of abstract ideas. Upper paleolithic cave art provides some of the most unambiguous evidence of religious thought from the paleolithic. Cave paintings at Chauvet depict creatures that are half human and half animal.
Origins of organized religion
Organized religion traces its roots to the neolithic revolution that began 11,000 years ago in the Near East but may have occurred independently in several other locations around the world. The invention of agriculture transformed many human societies from a hunter-gatherer lifestyle to a sedentary lifestyle. The consequences of the neolithic revolution included a population explosion and an acceleration in the pace of technological development. The transition from foraging bands to states and empires precipitated more specialized and developed forms of religion that reflected the new social and political environment. While bands and small tribes possess supernatural beliefs, these beliefs do not serve to justify a central authority, justify transfer of wealth or maintain peace between unrelated individuals. Organized religion emerged as a means of providing social and economic stability through the following ways:
- Justifying the central authority, which in turn possessed the right to collect taxes in return for providing social and security services.
- Bands and tribes consist of small number of related individuals. However, states and nations are composed of many thousands of unrelated individuals. Jared Diamond argues that organized religion served to provide a bond between unrelated individuals who would otherwise be more prone to enmity. In his book Guns, Germs, and Steel he argues that the leading cause of death among hunter-gatherer societies is murder.
- Religions that revolved around moralizing gods may have facilitated the rise of large, cooperative groups of unrelated individuals.
The states born out of the Neolithic revolution, such as those of Ancient Egypt and Mesopotamia, were theocracies with chiefs, kings and emperors playing dual roles of political and spiritual leaders. Anthropologists have found that virtually all state societies and chiefdoms around the world justify political power through divine authority. This suggests that political authority co-opts collective religious belief to bolster itself.
Invention of writing
Following the neolithic revolution, the pace of technological development (cultural evolution) intensified due to the invention of writing 5000 years ago. Symbols that became words later on made effective communication of ideas possible. Printing invented only over a thousand years ago increased the speed of communication exponentially and became the main spring of cultural evolution. Writing is thought to have been first invented in either Sumeria or Ancient Egypt and was initially used for accounting. Soon after, writing was used to record myth. The first religious texts mark the beginning of religious history. The Pyramid Texts from ancient Egypt are one of the oldest known religious texts in the world, dating to between 2400–2300 BCE. Writing played a major role in sustaining and spreading organized religion. In pre-literate societies, religious ideas were based on an oral tradition, the contents of which were articulated by shamans and remained limited to the collective memories of the society's inhabitants. With the advent of writing, information that was not easy to remember could easily be stored in sacred texts that were maintained by a select group (clergy). Humans could store and process large amounts of information with writing that otherwise would have been forgotten. Writing therefore enabled religions to develop coherent and comprehensive doctrinal systems that remained independent of time and place. Writing also brought a measure of objectivity to human knowledge. Formulation of thoughts in words and the requirement for validation made mutual exchange of ideas and the sifting of generally acceptable from not acceptable ideas possible. The generally acceptable ideas became objective knowledge reflecting the continuously evolving framework of human awareness of reality that Karl Popper calls 'verisimilitude' – a stage on the human journey to truth.
- Gods and Gorillas
- King, Barbara (2007). Evolving God: A Provocative View on the Origins of Religion. Doubleday Publishing. ISBN 0-385-52155-3.
- Excerpted from Evolving God by Barbara J. King
- Palmer, Douglas, Simon Lamb, Guerrero Angeles. Gavira, and Peter Frances. Prehistoric Life: the Definitive Visual History of Life on Earth. New York, N.Y.: DK Pub., 2009.
- Ehrlich, Paul (2000). Human Natures: Genes, Cultures, and the Human Prospect. Washington, D.C.: Island Press. pp. page 214. ISBN 1-55963-779-X. "Religious ideas can theoretically be traced to the evolution of brains large enough to make possible the kind of abstract thought necessary to formulate religious and philosophical ideas"
- Dunbar, Robin (2003). THE SOCIAL BRAIN: Mind, Language, and Society in Evolutionary Perspective (– Scholar search).
- Stephen Jay Gould; Paul McGarr; Steven Peter Russell Rose (2007). "Challenges to Neo-Darwinism and Their Meaning for a Revised View of Human Consciousness". The richness of life: the essential Stephen Jay Gould. W. W. Norton & Company. pp. 232–233. ISBN 978-0-393-06498-8.
- Lewis Wolpert (2006). Six impossible things before breakfast, The evolutionary origins of belief. New York: Norton. ISBN 0-393-06449-2. "with regard to hafted tools, One would have to understand that the two pieces serve different purposes, and imagine how the tool could be used,"
- Wolpert, Lewis (2006). Six impossible things before breakfast, The evolutionary origins of belief. New York: Norton. p. page 82. ISBN 0-393-06449-2. "Belief in cause and effect has had the most enormous effect on human evolution, both physical and cultural. Tool use, with language, has transformed human evolution and let to what we now think of as belief"
- Lieberman (1991). Uniquely Human. Cambridge, Mass.: Harvard University Press. ISBN 0-674-92183-6.
- *"Wade, Nicholas – Before The Dawn, Discovering the lost history of our ancestors. Penguin Books, London, 2006. p. 8 p. 165" ISBN 1-59420-079-3
- Shermer, Michael (2004). "Why are we moral:The evolutionary origins of morality". The Science of Good and Evil. New York: Times Books. ISBN 0-8050-7520-8.
- Videos of chimpanzee food sharing
- Rossano, Matt (2007). Supernaturalizing Social Life: Religion and the Evolution of Human Cooperation.
- Nicholas Wade. Scientist Finds the Beginnings of Morality in Primate Behavior. New York Times. March 20, 2007.
- Matthew Rutherford. The Evolution of Morality. University of Glasgow. 2007. Retrieved June 6, 2008
- Evolutionary Religious Studies (ERS): A Beginner’s Guide
- A scientific exploration of how we have come to believe in God
- Toward an evolutionary psychology of religion and personality
- The evolutionary psychology of religion Steven Pinker
- Atran, S; Norenzayan, A (2004). "Religion's Evolutionary Landscape". Behavioral and Brain Sciences 27 (6): 713–30; discussion 730–70. PMID 16035401.
- Kluger, Jeffrey; Jeff Chu, Broward Liston, Maggie Sieger, Daniel Williams (2004-10-25). "Is God in our genes?". TIME. Time Inc. Retrieved 2007-04-08.
- Lionel Tiger and Michael McGuire (2010). God's Brain. Prometheus Books. ISBN 978-1-61614-164-6. see pages 202-204
- Previc, F.H. (2009). The dopaminergic mind in human evolution and history. New York: Cambridge University Press.
- Previc, F.H. (2011). Dopamine, altered consciousness, and distant space with special reference to shamanic ecstasy. In E. Cardona & M. Winkelman (eds.), Altering consciousness: Multidisciplinary perspectives (Vol. 1), pp. 43-60. Santa Barbara, CA: ABC-CLIO, LLC.
- Previc, F. H. (2006). The role of the extrapersonal brain systems in religious activity. Consciousness and Cognition, 15, 500-539.
- Elephants may pay homage to the dead
- Greenspan, Stanley (2006). How Symbols, Language, and Intelligence Evolved from Early Primates to Modern Human. Cambridge, MA: Da Capo Press. ISBN 0-306-81449-8.
- "The Neanderthal dead:exploring mortuary variability in Middle Palaeolithic Eurasia".
- Evolving in their graves: early burials hold clues to human origins – research of burial rituals of Neanderthals
- "BBC article on the Neanderthals". "Neanderthals buried their dead, and one burial at Shanidar in Iraq was accompanied by grave goods in the form of plants. All of the plants are used in recent times for medicinal purposes, and it seems likely that the Neanderthals also used them in this way and buried them with their dead for the same reason. Grave goods are an archaeological marker of belief in an afterlife, so Neanderthals may well have had some form of religious belief."
- Lieberman (1991). Uniquely Human. p. 163.
- Rossano, Matt (2009). "The African Interregnum: The “Where,” “When,” and “Why” of the Evolution of Religion". The Frontiers Collection. p. 127. doi:10.1007/978-3-642-00128-4_9. ISBN 978-3-642-00127-7.
- Symbolism and the Supernatural
- "Human Uniqueness and Symbolization". "This 'coding of the non-visible' through abstract, symbolic thought, enabled also our early human ancestors to argue and hold beliefs in abstract terms. In fact, the concept of God itself follows from the ability to abstract and conceive of 'person'"
- Rossano, Matt (2007). The Religious Mind and the Evolution of Religion.
- Diamond, Jared (1997). "Chapter 14, From Egalitarianism to Kleptocracy". Guns Germs and Steel. New York, NY: Norton. p. 277. ISBN 0-393-03891-2.
- Norenzayan, A., & Shariff, A. F. (2008). The origin and evolution of religious prosociality. Science, 322, 58–62
- Budge, Wallis (1997). An Introduction to Ancient Egyptian Literature. Mineola, N.Y.: Dover Publications. pp. page 9. ISBN 0-486-29502-8.
- Allen, James (2007). The Ancient Egyptian Pyramid Texts. Atlanta, Ga.: Scholars Press. ISBN 1-58983-182-9.
- The beginning of religion at the beginning of the Neolithic
- Pyysiäinen, Ilkka (2004). "Holy Book:The Invention of writing and religious cognition". Magic, Miracles, and Religion: A Scientist's Perspective. Walnut Creek, CA: AltMira Press. ISBN 0-7591-0663-0.
- Objective Knowledge: An Evolutionary Approach, 1972, Rev. ed., 1979, ISBN 0-19-875024-2
- Robert Wright´s "The Evolution of God"
- Evolutionary Religious Studies at Binghamton University, An introduction to the study of religion from an evolutionary perspective.
- A 1998 speech at Biota 2 by Douglas Adams on the Origin of God
- Wilhelm Schmidt and the origin of religion – an opposing viewpoint
|
2026-02-01T20:56:45.618487
|
189,055
| 3.990512
|
http://physics.aps.org/story/v22/st2
|
Sound waves in a solid with wavelengths not much longer than the distance between atoms can potentially probe material properties almost on atomic scales. But detecting such vibrations is no easy feat. Reporting in the 4 and 11 July issues of Physical Review Letters, two research teams offer new ways to detect these vibrations, paving the way for studies of nanostructures, thin films, and interfaces between materials with high resolution in both space and time.
Jiggling of the atoms in a crystal gives rise to vibrational waves known as phonons, which can transport sound, heat, and other forms of energy through a solid. Measurements of phonon properties provide information on crystal structure and the forces among atoms. Phonons are often detected through their interactions with light, but when phonon wavelengths are similar to interatomic distances, much smaller than visible light wavelengths, such methods become ineffective. Although researchers believe they have produced short-wavelength phonons in the past, they haven’t been able to detect them directly or control them very precisely.
To create short-wavelength phonons, Mariano Trigo of the University of Michigan in Ann Arbor and his colleagues began with a technique previously used by others. They shot 50-femtosecond laser pulses at a nanostructure of alternating, several-atom-thick layers of gallium-indium arsenide and aluminum-indium arsenide. This so-called superlattice sat on top of a crystal of indium phosphide, the “substrate.” Because the two superlattice materials absorb light energy with different efficiencies, the laser pulses could excite terahertz frequency phonons of a special kind: the two several-atom-thick layers within each pair moved alternately toward and then away from each other.
But the team was then able to take the next step: as the phonons propagated into the substrate, the researchers detected them there using x rays. In the substrate, the superlattice phonons transformed into phonons with the same terahertz frequency and with a wavelength approximately equal to the width of the superlattice's paired layers. Such phonons cannot be excited directly in the substrate by visible light.
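As a rough sense of scale, the relation f = v/λ shows why layer pairs only a few nanometres thick correspond to terahertz frequencies. The numbers in the sketch below are round illustrative assumptions, not values from the paper:

```python
# Rough estimate of an acoustic phonon frequency from its wavelength: f = v / lambda.
# Both numbers are illustrative assumptions, not values reported by the researchers.

sound_speed = 5000.0        # m/s, a typical longitudinal sound speed in a III-V semiconductor (assumed)
superlattice_period = 5e-9  # m, an assumed few-nanometre layer-pair thickness

frequency_hz = sound_speed / superlattice_period
print(f"Estimated phonon frequency: {frequency_hz / 1e12:.1f} THz")  # about 1 THz
```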
Passing through the substrate, these phonons briefly distorted its atomic structure. The researchers flashed the substrate with 100-picosecond x-ray bursts generated by the Advanced Photon Source at Argonne National Laboratory in Illinois. The x-ray flashes were not brief enough to give true instantaneous snapshots of the substrate’s atomic structure. But by timing the x-ray bursts relative to the laser pulses, the researchers could detect the progression of changes in the x-ray diffraction pattern during the passage of the phonons through the substrate. These x-ray data agreed with their predictions from computer simulations.
In their proposal of a different technique, Evan Reed of Lawrence Livermore National Laboratory in California and his colleagues used a computer simulation including millions of atoms. They modeled a brief, intense laser pulse sending a shock wave through a crystal of gallium nitride connected to a crystal of aluminum nitride. The leading edge of the shock created terahertz phonons that crossed into the aluminum nitride.
Both gallium nitride and aluminum nitride are piezoelectric, meaning they generate electric currents in response to squeezing or stretching of the crystalline lattice. Phonons traveling through each crystal generated no net current because simultaneous expansion and contraction of the lattice canceled. But as the phonons crossed the boundary between the two materials, the difference between their piezoelectric responses led to the appearance of currents. A sensor a few millimeters away could then detect the fields produced by the currents, the researchers showed.
Because currents appear only at the interface between the two materials, measurements of the transient fields allow the atomic motion at that boundary to be calculated with very fine time resolution, Reed explains. High-frequency phonons often show up in simulations of shock waves in materials but haven’t yet been observed, he adds, so such experiments “could verify that these oscillations really do happen.”
Richard Averitt of Boston University notes that experiments along the lines proposed by Reed and his colleagues could be quite difficult but says the technique could allow novel investigations, for example of shock physics at or near the atomic scale. The work by Trigo’s group, meanwhile, holds promise for further experiments that would take advantage of the new high-intensity femtosecond x-ray facilities to study phenomena such as nanoscale heat transport. “It will be exciting to see how the experiments in these areas play out over the next several years,” Averitt says.
David Lindley is a freelance science writer in Alexandria, Virginia.
|
2026-01-21T03:55:13.483154
|
533,016
| 3.639553
|
http://www.dailyenergyreport.com/solid-waste-to-energy-is-it-rd-or-market-ready/
|
Waste-to-Energy is a multifaceted concept; it means different things to different people, is often underestimated in complexity, and is questionable in terms of profitability and carbon neutrality. Waste can be solid or liquid; gaseous waste products are referred to as emissions. Energy can be a stream of electrons injected into the grid as electricity, or combustible fuel commodities such as ethanol or synthetic fuels. The emissions from converting solid waste into energy, and the restrictions placed on them, have a significant impact on the way energy is generated. This discussion will be restricted to “organic” solid wastes.
The conversion of organic solid waste into energy is not a new technology. “In 1885, the U.S. Army built the nation’s first garbage incinerator on Governor’s Island in New York City harbor.”1
The solid waste-to-energy industry begins with disposable goods from every sector of the economy. Each sector generates different kinds of waste, each varying in composition (see the following solid waste chart2 and “What is Municipal Solid Waste (MSW)?”).3 The makeup of the waste is further influenced by lot, location and time of year. The only norm in this business is that there is no norm.
Traditionally, energy was derived from waste by incineration. Heat generated from the combustion process was harvested and converted into electricity. The heat boils water that in turn powers steam generators to produce electric energy.
Today, organic wastes can be converted into fuel by two different strategies- thermal and non-thermal.
Thermal technologies include gasification, thermal de-polymerization, pyrolysis, and plasma arc gasification.
One technology that is pushing its way towards commercialization combines pyrolysis with Fischer-Tropsch (“FT”) synthesis. The Pyrolytic/FT pathway begins with the pretreatment (sorting, crushing, and drying) of the solid waste feedstock, which is then subjected to slow pyrolysis at a high temperature of 900°C (a gasification process carried out in the absence of oxygen) to generate syngas (CO and H2), which is subsequently cleaned and refined into liquid fuel by an FT process. This description is much simplified and excludes the complexity of the actual process and a number of the process steps, variables and parameters. FT is a sophisticated catalytic technology patented by Franz Fischer and Hans Tropsch, working at the Kaiser Wilhelm Institute in Germany in the 1920s.
Non-thermal technologies, or biological processes used to generate fuels, include anaerobic digestion, fermentation, and mechanical biological treatment. As with the Pyrolytic/FT pathway, any generalized description may not be representative of other non-thermal conversion processes. Anaerobic digestion uses microorganisms to break down biodegradable material in the absence of oxygen. It begins with a pretreatment to optimize the amount of digestible material and the moisture content, and to remove harmful contaminants from the organic feedstock. The feedstock is then digested by bacterial hydrolysis, which breaks down insoluble organic polymers such as carbohydrates into soluble derivatives. “The carbohydrates are then made available to acidogenic bacteria that convert the sugars and amino acids into carbon dioxide, hydrogen, ammonia, and organic acids. The organic acids are subsequently converted by acetogenic bacteria into acetic acid, along with additional ammonia, hydrogen, and carbon dioxide. Finally, methanogens convert these products to methane (natural gas), carbon dioxide and trace amounts of contaminant gases such as hydrogen sulfide.” The methane can be burned to produce both heat and electricity, or used as a fuel after scrubbing to remove the sulfides.4
The one commonality between incineration, FT and biological conversion of waste to energy is the use of solid waste as a feedstock. All wastes are not created equal and therefore cannot be uniformly applied to generate energy, be it electricity or fuel. This is the primary risk in the energy-from-waste industry. The type and kind of energy generated depend on what actually constitutes the solid waste. Sorted material with all plastics, glass and metal removed and no construction debris poses fewer issues. If construction debris is included, it may contain pressure-treated lumber, which can create aldehydes such as formaldehyde in the flue gas, along with heavy metals such as cadmium or chromium. Plastics can produce more carbon monoxide, but can also increase tar formation, any of which can inhibit the performance of, or poison, the FT catalyst. The composition of the syngas is also critical to the effectiveness and efficiency of the process. In the aforementioned biological process, the feedstock must contain mostly digestible or fermentable material and the right moisture content, and must be free of contaminants.
Outlining the right feedstock composition for each of these processes is much simpler than obtaining it. From the complex and unpredictable nature of solid waste, and the testing and sampling protocols, to determining the syngas or biogas composition from each feedstock and tying that to the quality and cost of the product (electricity or fuel), it is far from certain what you will get, even under controlled conditions. At the end of the day, the estimated cost to build, operate and maintain a waste-to-energy processing plant is highly uncertain.
Finally, the energy balance of a waste-to-energy plant is of utmost importance. The sample calculation given below is a gross first-order approximation of the amount of energy consumed to produce a given amount of energy. The values used here are estimates based on a theoretical Pyrolytic/FT plant that converts municipal solid waste into fuel-grade diesel; these values were averages obtained from field research. Parameters and assumptions are given in the notes below the calculations.
Significantly more sophisticated calculations can be found in Stevens Institute of Technology’s5 “BASF Catalyst and Golden BioMass Fuels Corporation report on their investigation of energy balance, in broad outline, for the production of a high-quality synthetic diesel from residual crop biomass via a Fischer-Tropsch route. Their calculations took into consideration:
- harvesting surplus biomass (such as crop residue);
- locally pyrolyzing the biomass into pyrolysis oil, char, and un-condensable gas;
- transporting the PO to a remote central processing facility;
- converting the PO at this facility by auto thermal reforming into synthesis gas (CO and H2); and
- Fischer–Tropsch (FT) synthesis of the syngas into diesel fuel.”
The crude analysis shows that a hypothetical 50 tpd conversion plant producing fuel-grade diesel at a 20% overall yield, from raw incoming feedstock to fuel-grade diesel, generates over $1.1 million of gross revenue per year. A formal pro forma capturing all costs and other revenue can then determine the IRR (Internal Rate of Return) and bankability of such a project.
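For readers curious what such a first-order estimate looks like, the sketch below sets up the same kind of arithmetic. It is not the author's calculation: every parameter (throughput, yield, operating days, density, selling price) is an illustrative assumption, and the article's own figure of over $1.1 million rests on its own, unstated parameter set.

```python
# First-order gross-revenue estimate for a hypothetical waste-to-fuel plant.
# All parameters are illustrative assumptions, not figures from the article.

feed_tons_per_day = 50.0       # raw municipal solid waste received (short tons/day, assumed)
overall_yield = 0.20           # mass fraction of feedstock ending up as fuel-grade diesel (assumed)
operating_days = 330           # on-line days per year (assumed)
diesel_density_kg_per_l = 0.84
kg_per_short_ton = 907.2
price_per_litre = 0.70         # USD, assumed wholesale diesel price

diesel_kg_per_year = feed_tons_per_day * kg_per_short_ton * overall_yield * operating_days
diesel_litres_per_year = diesel_kg_per_year / diesel_density_kg_per_l
gross_revenue = diesel_litres_per_year * price_per_litre

# Note: with these assumed inputs the result differs from the article's ~$1.1M/yr figure,
# which was based on its own (unreproduced) assumptions; the point is the shape of the arithmetic.
print(f"Diesel output: {diesel_litres_per_year:,.0f} L/yr")
print(f"Gross revenue: ${gross_revenue:,.0f}/yr")
```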
In closing, there is no question that converting waste to energy is a necessary, sustainable and renewable enterprise, if for nothing more than to remove recyclables and to allow landfills to remain open longer due to lower volumes of waste. However, it is not a business to be taken lightly. Processes are new, advanced and in many cases proprietary. Translating patents to operations is far from trivial. The chemistry, thermodynamics and equipment design are complex. Operating systems may be beyond R&D and the pilot stage, but still questionable as cost-effective commercial plants. Instant mavens who see gold in that garbage, beware. Better to use the knowledge of those who have scar tissue to show in this business.
- H. Lanier Hickman, Jr., “A Brief History of Solid Waste Management During the Last 50 Years.” MSW Management September/October 2001), http://www.forestermedia.net/MSW/Articles/A_Brief_History_of_Solid_Waste_Management_During_t_4230.aspx
- Environmental Strategies for Cities, MIT, Solid wastes: http://web.mit.edu/urbanupgrading/urbanenvironment/sectors/solid-waste-sources.html
- BarryOnEnergy, What is Municipal Solid Waste (MSW)?: http://tinyurl.com/barry-stevens451
- Wikipedia, Anaerobic Digestion: http://en.wikipedia.org/wiki/Anaerobic_digestion
- Green Car Conference, Stevens Institute of Technology, Material and Energy balance Spreadsheet, http://www.greencarcongress.com/2011/05/manganaro-20110516.html
The opinions expressed in this article are solely those of the author Dr. Barry Stevens, an accomplished business developer and entrepreneur in technology-driven enterprises. He is the founder of TBD America Inc., a global technology business development group. In this role, he is responsible for leading the development of emerging and mature technology driven enterprises in the shale gas, natural gas, renewable energy and sustainability industries. To learn more about TBD America, please visit: http://tbdamericainc.com/
|
2026-01-26T15:08:30.124788
|
831,643
| 3.601581
|
http://www.newscientist.com/article/mg21028185.700-autopiloted-glider-knows-where-to-fly-for-a-free-ride.html
|
HAWKS and albatrosses soar for hours or even days without having to land. Soon robotic gliders could go one better, soaring on winds and thermals indefinitely. Cheap remote sensing for search and rescue would be possible with this technology, or it could be used to draw up detailed maps of a battlefield.
Glider pilots are old hands at using rising columns of heated air to gain altitude. In 2005 researchers at NASA's Dryden Flight Research Center in Edwards, California, flew a glider fitted with a custom autopilot unit 60 minutes longer than normal, just by catching and riding thermals. And in 2009 Dan Edwards, who now works at the US Naval Research Laboratory in Washington DC, kept a glider soaring autonomously for 5.3 hours this way.
Both projects relied on the glider to sense when it was in a thermal and then react to stay in the updraft. But thermals can be capricious, and tend to die out at night, making flights that last several days impossible, says Salah Sukkarieh of the Australian Centre for Field Robotics in Sydney. He is designing an autopilot system that maps and plans a glider's route so it can use a technique known as dynamic soaring when thermals are scarce. The glider first flies in a high-speed air current to gain momentum, then it turns into a region of slower winds, where the newly gained energy can be converted to lift. By cycling back and forth this way, the glider can gain either speed or altitude.
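To get a feel for why crossing between layers of different wind speed pays off, consider a glider that flies into a layer where the headwind is stronger: relative to the surrounding air it is suddenly moving faster, and that extra kinetic energy can be traded for altitude. The sketch below is a heavily simplified, textbook-style illustration that ignores drag and the geometry of the turn; all of the numbers are assumptions.

```python
# Simplified energy bookkeeping for one wind-shear crossing in dynamic soaring.
# Ignores drag and turn geometry; every value is an illustrative assumption.

mass = 5.0        # kg, small robotic glider (assumed)
airspeed = 20.0   # m/s before the crossing (assumed)
wind_step = 5.0   # m/s increase in headwind across the shear layer (assumed)

# Kinetic energy measured relative to the surrounding air, before and after the crossing.
ke_before = 0.5 * mass * airspeed ** 2
ke_after = 0.5 * mass * (airspeed + wind_step) ** 2
gain_joules = ke_after - ke_before

# The same energy expressed as the altitude it could (ideally) be traded for.
g = 9.81
height_gain_m = gain_joules / (mass * g)
print(f"Energy gained: {gain_joules:.0f} J, roughly {height_gain_m:.0f} m of altitude")
```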
"Theoretically you can stay aloft indefinitely, just by hopping around and catching the winds," says Sukkarieh, who presented his research at a robotics conference in Shanghai, China, last month.
Inspired by albatrosses and frigate birds, the operators of radio-controlled gliders have used dynamic soaring to reach speeds of more than 600 kilometres per hour by flying between two regions of differing wind speeds.
To plan a path for dynamic soaring you need a detailed map of the different winds around the glider. So Sukkarieh is working on ways to accurately measure and predict these winds. He recently tested his autopilot on a real glider, which made detailed wind-speed estimates as it flew.
The system has on-board sensors, including an accelerometer and altimeter, which measure changes in the aircraft's velocity and altitude to work out how the winds will affect the glider. From its built-in knowledge of how wind currents move, the system was able to work out the location, speed, and direction of nearby winds to create a local wind map.
By mapping wind and thermal energy sources this way and using a path-planning program, the glider autopilot should be able to calculate the most energy-efficient routes between any two points. The system would be able to plot a path up to a few kilometres away when the wind is calm but only over a few metres when turbulent, as the winds change so quickly, says Sukkarieh.
He says that the amount of energy available to a glider is usually enough to keep it aloft for as long as it can survive the structural wear and tear. He plans to test the mapping and route-planning systems more extensively in simulations, to be followed by actual soaring experiments.
"I think we have some examples from nature that mean this should be possible," says Edwards, who is not involved in Sukkarieh's research. "We're just taking our first baby steps into doing it autonomously."
Make like a hawk
Hawks and vultures are masters of spiralling upwards in rising thermals. But flying around in search of a free lift is not terribly efficient so Salah Sukkarieh of the Australian Centre for Field Robotics in Sydney thinks these birds have learned to recognise visual cues for thermals, such as towering cumulus clouds surrounded by blue sky. He's working on software that would allow a robotic glider to recognise useful cloud formations. By looking for wispy, or "smeared" clouds, the glider can find the horizontal winds that are good for dynamic soaring. At the same time, radar could measure the movement of airborne dust particles, giving an indication of wind speed and direction.
|
2026-01-31T04:34:14.105302
|
1,160,333
| 3.570796
|
http://www.entcolumbia.org/stape.html
|
Otosclerosis is a disease of ear bone degeneration that most commonly develops during the teen or early adult years. In otosclerosis, the consistency of the sound-conducting bones of the ear changes from hard, mineralized bone to spongy, immature bone tissue. This can result in a buildup of inappropriate bone around the stapes footplate (a bone in the middle ear). This buildup of bone causes the stapes to become fixed and prevents it from vibrating normally. The lack of vibration prevents sound from being conducted to the inner ear, leading to a conductive hearing loss.
Stapedectomy is a surgical treatment for otosclerosis. In stapedectomy, the immobilized stapes is removed, and a tiny platinum or stainless steel prosthesis is inserted in the middle ear to replace it. The artificial prosthesis is less than 1/8 of an inch long. There are variations in stapedectomy depending on the extent of the disease. More extensive damage may require removal of the entire stapes footplate, while a small focus of disease allows for removal of less tissue. In patients with very extensive thickened tissue covering the oval window (obliterative otosclerosis), stapedectomy can not be performed. In such cases the stapes suprastructure is removed and the surgeon uses a small drill to thin out the oval window. An opening is made in the footplate, and the prosthesis is then positioned.
This surgery is extremely effective, and usually restores normal hearing in patients with conductive hearing loss. Patients can usually return to work in about a week.
Stapedectomy is usually performed on an outpatient basis. Either local or general anesthesia may be used, depending on the comfort of the surgeon and the patient. The use of local anesthesia allows the patient to remain awake and report any vertigo or nausea, which may indicate impending damage to the inner ear. Many surgeons, however, consider general anesthesia safer. A small tissue graft, usually taken from a vein in the patient’s hand, is used to seal the oval window after the prosthesis is inserted.
Risks of stapedectomy include imbalance, tinnitus, changes in taste, dry mouth, perforation of the tympanic membrane, injury to facial nerves, and cochlear deafness.
In some cases, surgery is not possible and a hearing aid may be used instead. People who should not have stapedectomy include those who experience frequent changes in barometric pressure (pilots and divers), elderly patients with a baseline imbalance, people whose vocations demand excellent balance, and anyone with known Meniere’s disease (stapedectomy often causes permanent profound hearing loss in patients with Meniere’s disease). If a person has a perforated tympanic membrane due to middle ear infection, the infection must be cleared and the membrane healed before stapedectomy. Patients with sensorineural hearing loss or mixed hearing loss may not improve after stapedectomy, and would not be likely to benefit from surgery.
Description, diagrams, and animated views of stapedectomy surgery may be found online at The Ear Surgery Information Center.
|
2026-02-05T05:58:11.414352
|
1,040,328
| 3.70929
|
http://graphics.stanford.edu/courses/cs178/applets/convolution.html
|
Applet: Katie Dektar
Text: Marc Levoy
Technical assistance: Andrew Adams
Convolution is an operation on two functions f and g, which produces a third
function that can be interpreted as a modified ("filtered") version of f. In
this interpretation we call g the filter. If f is defined on a
spatial variable like x rather than a time variable like t, we call the
operation spatial convolution. Convolution lies at the heart of any
physical device or computational procedure that performs smoothing or
sharpening. Applied to two dimensional functions like images, it's also useful
for edge finding, feature detection, motion detection, image matching, and
countless other tasks. Formally, for functions f(x) and g(x)
of a continuous variable x, convolution is defined as:
(f * g)(x) = ∫ f(k) · g(x − k) dk
where * means convolution and · means ordinary multiplication. For functions of a discrete variable x, i.e. arrays of numbers, the definition is:
(f * g)[x] = Σ_k f[k] · g[x − k]
Finally, for functions of two variables x and y (for example images), these definitions become:
(f * g)(x, y) = ∫∫ f(k, l) · g(x − k, y − l) dk dl
(f * g)[x, y] = Σ_k Σ_l f[k, l] · g[x − k, y − l]
In digital photography, the image produced by the lens is a continuous function f(x,y). Placing an antialiasing filter in front of the sensor convolves this image with a smoothing filter g(x,y). This is the third equation above. Once the image has been recorded by a sensor and stored in a file, loading the file into Photoshop and sharpening it using a filter g[x,y] is the fourth equation.
Despite its simple definition, convolution is a difficult concept to gain an intuition for, and the effect obtained by applying a particular filter to a particular function is not always obvious. In this applet, we explore convolution of continuous 1D functions (first equation) and discrete 2D functions (fourth equation).
On the left side of the applet is a 1D function ("signal"). This is f. You can draw on the function to change it, but leave it alone for now. Beneath this is a menu of 1D filters. This is g. If you select "custom" you can also draw on the filter function, but leave that game for later. At the bottom is the result of convolving f by g. Click on a few of the filters. Notice that "big rect" blurs f more than "rect", but it leaves kinks here and there. Notice also that "gaussian" blurs less than "big rect" but doesn't leave kinks.
Both functions (f and g) are drawn as if they were functions of a continuous variable x, so it would appear that this visualization is showing convolution of continuous functions (first equation above). In practice the two functions are sampled finely and represented using 1D arrays. These numbers are connected using lines when they are drawn, giving the appearance of continuous functions. The convolution actually being performed in the applet's script is of two discrete functions (second equation above).
Whether we treat the convolution as continuous or discrete, its interpretation is the same: for each position x in the output function, we shift the filter function g left or right until it is centered at that position, we flip it left-to-right, we multiply every point on f by the corresponding point on our shifted g, and we add (or integrate) these products together. The left-to-right flipping is because, for obscure reasons, the equation for convolution is defined as g[x-k], not g[x+k] (using the 2nd equation as an example).
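If it helps to see this procedure spelled out, here is a minimal sketch of the discrete 1D case (the second equation) in Python. It is not the applet's own script; the signal and filter are made-up examples, and the rect filter is scaled so its values sum to 1, the normalization discussed a couple of paragraphs below.

```python
def convolve1d(f, g):
    """Discrete 1D convolution: out[x] = sum over k of f[k] * g[x - k].

    A plain implementation of the shift, flip, multiply, and sum procedure
    described in the text; not the applet's own script.
    """
    n, m = len(f), len(g)
    out = [0.0] * (n + m - 1)
    for x in range(len(out)):
        for k in range(n):
            j = x - k          # index into g; the "x - k" is the left-to-right flip
            if 0 <= j < m:
                out[x] += f[k] * g[j]
    return out

# Example: smoothing a step signal with a small rectangular ("rect") filter.
signal = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
rect = [1, 1, 1]
rect = [v / sum(rect) for v in rect]   # normalize so the output keeps the signal's scale
print(convolve1d(signal, rect))
```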
If this procedure is a bit hard to wrap your head around, here's an equivalent way to describe it that may be easier to visualize: at each position x in the output function, we place a copy of the filter g, centered left-to-right around that position, flipped left-to-right, and scaled up or down according to the value of the signal f at that position. After laying down these copies, if we add them all together at each x, we get the right answer!
To see this alternative way of understanding convolution in action, click on "animate", then "big rect". The animation starts with the original signal f, then places copies of the filter g at positions along f, stretching them vertically according to the height of f at that position, then adds these copies together to make the thick output curve. Although the animation only shows a couple of dozen copies of the filter, in reality there would need to be one copy for every position x. In addition, for this procedure to work the sum of copies must be divided by the area under the filter function, a process called normalization. Otherwise, the output would be higher or lower than the input, rather than simply being smoothed or sharpened. For all the filters except "custom", normalization is performed for you just before drawing the thick output curve. For the "custom" filter, see below.
Once you understand how this works, try the "sharpen" or "shift" filters. The sharpen filter replaces each value of f with a weighted sum of its immediate neighbors, but subtracting off the values of neighbors a bit further away. The effect of these subtractions is to exaggerate features in the original signal. The "Sharpen" filter in Photoshop does this; so do certain layers of neurons in your retina. The shift filter replaces each value of f with a value of f taken from a neighbor some distance to the right. (Yes, to the right, even though the spike in the filter is on the left side. Remember that convolution flips the filter function left-to-right before applying it.)
Finally, click on "custom" and try drawing your own filter. If the area under your filter is more or less than 1.0, the output function will jump up or down, respectively. To avoid this, click on "normalize". This will scale your filter up or down until its area is exactly 1.0. By the way, if you animate the application of your custom filter, the scaled copies will only touch the corresponding point on the original function if your custom filter reached y=1.0 at its x=0 position. Regardless, if your filter is normalized, the output function will be of the right height.
On the right side of the applet we extend these ideas to two-dimensional discrete functions, in particular ordinary photographic images. The original 2D signal is at top, the 2D filter is in the middle, depicted as an array of numbers, and the output is at the bottom. Click on the different filter functions and observe the result. The only difference between "sharpen" and "edges" is a change of the middle filter value from 9.00 to 8.00. However, this change is crucial, as you can see. In particular, the sum of all non-zero filter values in "edges" is zero. Therefore, for positions in the original signal that are smooth (like the background), the output of the convolution is zero (i.e. black). The filter "hand shake" approximates what happens in a long-exposure photograph, in this case if the camera's aim wanders from upper-left to lower-right during the exposure.
Finally, click on "identity", which sets the middle filter tap (as positions in a filter are sometimes called) to 1.0 and the rest to 0.0. It should come as no surprise that this merely makes a copy of the original signal. Now click on "custom", then click on individual taps and enter new values for them. As you enter each value, the convolution is recomputed. Try creating a smoothing filter, or a sharpening filter. Or starting with "identity", change the middle tap to 0.5, or 2.0. Does the image scale down or up in intensity? The applet is clipping outputs at 0.0 (black) and 255.0 (white), so if you try to scale intensities up, the image will simply saturate - like a camera if your exposure were too long. Try putting 1.0's in the upper-left corner and the lower right corner, setting everything else to 0.0. Do you get a double image? As with the custom 1D filter, if your filter values don't sum to 1.0, you might need to press "normalize". Unless you're doing edge finding, in which case they should sum to 0.0.
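For readers who want to experiment with the 3×3 kernels outside the applet, the sketch below is a bare-bones 2D version. It is not the applet's code; the test image is made up, and the "sharpen" and "edges" kernels simply follow the pattern described above (a centre tap of 9 or 8 surrounded by -1s), which may differ in detail from the applet's exact values.

```python
def convolve2d(image, kernel):
    """Discrete 2D convolution with zero padding, clipped to [0, 255] like the applet.

    A minimal illustration of the fourth equation; not the applet's own code.
    """
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y - (i - cy), x - (j - cx)   # the flip is folded into the index
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx] * kernel[i][j]
            out[y][x] = min(255.0, max(0.0, acc))          # clip outputs, as the applet does
    return out

# "sharpen" (taps sum to 1) and "edges" (taps sum to 0) kernels as described in the text.
sharpen = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]
edges   = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]

# Tiny 5x5 test "image": one bright pixel on a dark background.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 100
print(convolve2d(img, sharpen)[2][2])   # the centre pixel is boosted (clipped here at 255)
```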
|
2026-02-03T07:34:55.283366
|
55,457
| 3.638267
|
http://harunyahya.com/en/books/4160/The_Collapse_Of_The_Theory_Of_Evolution_In_50_Themes/chapter/4837/Item_01_10
|
1- The Theory of Evolution Regards Chance as a Creative Deity
The theory of evolution claims that unconscious, unreasoning, inanimate atoms such as phosphorus and carbon assembled themselves together by chance. As a result of such natural phenomena as lightning, volcanic eruptions, ultraviolet rays and radiation, these atoms organized themselves in such a flawless way as to give rise to proteins, cells—and thereafter, fish, rabbits, lions, birds, human beings and all manner of life forms.
That is the basic claim made by the theory of evolution, which regards chance as a creative deity. However, belief in any such claim is a violation of reason, logic and science.
One of evolutionists’ greatest errors is to think that life could have formed spontaneously in the environment they refer to as “the primeval Earth.”
2- Natural Selection Cannot Account for the Complex Structures in Living Things
The theory of evolution maintains that those living organisms that best adapt to their environment have more opportunities to survive and multiply, and therefore, they can pass on their advantageous characteristics to subsequent generations, and species evolve by way of this “mechanism.”
But the fact is that the mechanism in question—known as natural selection—cannot cause living things to evolve, nor endow them with any new features. It can only reinforce existing characteristics belonging to a particular species.
In any given region, for example, those rabbits able to run fastest will survive, while others die. After a few generations, all the rabbits in this region will consist of fast-running individuals. However, these rabbits can never evolve into another species—greyhounds or foxes, for instance.
The youngster being attacked by the cheetah will in all likelihood be unable to escape. Because the cheetah is more agile, powerful and experienced than he is. Evolutionists want to have people believe that this phenomenon, familiar to everyone, is an “evolutionary mechanism.” Yet it is clear that no matter how much time goes by, this youngster will never turn into any other living organism.
3- Peppered Moths Are No Evidence for Evolution through Natural Selection
Of all the supposed “proofs” of the theory of evolution, the most frequently repeated concerns changes in a species of moth in 19th-century Britain. It is claimed that due to air pollution during the Industrial Revolution, tree bark was darkened—for which reason dark-colored moths were better camouflaged from predatory birds, and thus their numbers increased.
But this is not evolution, because no new species of moth emerged. All that happened was that the ratio of the two already existing types in an already existing species changed. In addition, it has since emerged that the account on which this claim was based was untrue. The well-known photos showing moths clinging to the bark of trees were found to be fabrications. Contrary to what has been claimed, no instance of so-called “industrial melanism”—the darkening of color due to industrial pollution—has ever taken place.
4- Just as an Earthquake Cannot Improve a City, Neither Are Mutations Advantageous to Develop Living Things
Mutations are caused by random changes in the DNA in which all the information concerning the human body’s characteristics is encoded. Mutations occur due to outside agents such as radiation or chemicals. Evolutionists maintain that such random genetic changes can cause living things to evolve. The fact is, though, that mutations are always harmful to living things, do not develop them, and can never endow them with any new functional features (such as wings or lungs, for instance). Mutations either kill or deform the afflicted organism. To claim that mutations improve a species and endow it with new advantages is like claiming that an earthquake can make a city more advanced and modern, or that striking a computer with a hammer will result in a more advanced model. Indeed, no mutation has ever been observed to increase—much less improve—genetic information.
5- Life Comes From Life
The erroneous theory known as “spontaneous generation,” which had been around since at least the Middle Ages, maintained that inanimate substances could by chance assemble themselves to produce a living being. The idea that insects formed from food wastes, or mice from wheat, was widespread up until the 18th century. Even in the 19th century, when Darwin wrote his book The Origin of Species, the scientific world still widely believed that bacteria could arise from inanimate matter.
In fact, however, only five years after Darwin published his book, Louis Pasteur announced, after long studies and experiments, results that disproved spontaneous generation, a cornerstone of Darwin's theory. In his triumphal lecture at the Sorbonne in 1864, Pasteur said: "Never will the doctrine of spontaneous generation recover from the mortal blow struck by this simple experiment." (Sidney Fox, Klaus Dose, Molecular Evolution and The Origin of Life, New York: Marcel Dekker, 1977, p. 2)
His findings revealed, once again, that life did not emerge spontaneously on Earth, but that it began with a miraculous creation.
6- No Transitional Forms Have Ever Been Found in the Fossil Record
The theory of evolution claims that the transition from one species to another takes place from the primitive (simple) to the more complex—progressively, and in stages. According to this claim, bizarre, monstrous creatures known as “transitional forms” must have existed during this progress from one species to another. For example, there must have existed half-fish and half-amphibian creatures that, despite still having fish characteristics, had also acquired some amphibious ones, as well as half-human, half-ape creatures, and half-reptile, half-bird life forms.
If any such transitional species had really existed, then their remains should be encountered in the fossil record. But in over a century, there is still not the slightest trace of such intermediate forms that paleontologists have searched for with such great eagerness.
7- Living Groups Emerged Abruptly on Earth and at the Same Time
Almost all the basic living categories known today emerged suddenly and at the same time, during the Cambrian Period, 530 to 520 million years ago. Living organisms with totally different bodily structures—sponges, mollusks, worms, Echinodermata, arthropods and vertebrates—all appeared suddenly, simultaneously, with no life forms remotely resembling them in any earlier geological period. This fact alone completely undermines evolutionists’ claims that living things evolved from a single common ancestor gradually, and over a very long period of time.
The fact that the Earth was suddenly filled with a great many species, all possessing radically different physical structures and exceedingly complex organs demonstrates that these were, of course, created. Since evolutionists deny creation and the existence of God, they cannot definitely explain this miraculous phenomenon.
Just as silica, the raw material of glass, can not gradually transform itself into a crystal goblet, or the pieces of plastic and metal and glass cannot come together and assemble themselves into a camera, neither can living beings come into being from non-living substances—no matter how much time we allow for this to happen.
8- Species Living Today Have Undergone No Changes over Hundreds of Millions of Years
Had evolution actually taken place, living things would have had to emerge on Earth as a result of small, gradual changes—and to have continued changing over the course of time. Yet the fossil record demonstrates the exact opposite! Different classes of living creatures emerged suddenly, with no ancestors even remotely resembling them, and remained in a stable state, undergoing no changes at all, often for hundreds of millions of years.
9- Fish that Ruined Evolutionists’ Dreams: The Coelacanth
Evolutionists used to depict the coelacanth, a fish known only from fossils dating back 400 million years, as very powerful evidence of a transitional form between fish and amphibians. Since it was assumed that this species had become extinct 70 million years ago, evolutionists engaged in all kinds of speculation regarding the fossils. On 22 December 1938, however, a living coelacanth was caught in the deep waters of the Indian Ocean. More than 200 other living specimens have been caught in the years that followed.
All the speculation regarding these fish had been unfounded. Contrary to what evolutionists claimed, the coelacanth was not a vertebrate with half-fish, half-amphibian characteristics preparing to emerge onto dry land. It was in fact a bottom-dwelling fish that almost never rose above a depth of 180 meters (590 feet). Moreover, there were no anatomical differences between the living coelacanth and the 400-million-year-old fossil specimens. This creature had never “evolved” at all.
10- Birds' Wings Cannot Be the Work of Chance
Evolutionists maintain that birds evolved from reptiles—though this is impossible, and a bird’s wing alone is sufficient to prove this. In order for evolution of the kind claimed to have taken place, a reptile’s forearms would have to have changed into functional wings as the result of mutations taking place in its genes—and quickly! And this is not feasible. First of all, this transitional life form would be unable to fly with only half-developed wings. It would also be deprived of its forearms. That would mean it was essentially deformed and therefore—according to the theory of evolution—would be eliminated.
In order for any bird to fly, its wings had to be fully formed in every detail. The wings should be soundly attached to the chest cavity. The bird would need to have a light skeletal structure allowing it to take off, maintain its balance in the air and move in all directions. Its wing and tail feathers would have to be light, flexible and in aerodynamic proportion to one another. In short, everything would have to operate with a flawless coordination in order to make flight possible. How could this inerrant structure in birds’ bodies have resulted from a succession of random mutations? That question has no answer.
|
2026-01-19T04:19:44.672282
|
1,033,797
| 3.544199
|
http://webstylemag.com/2010/09
|
If you wanted to preserve important bits of our civilization for future centuries, you could do worse than a bundle of paper sealed in resin. It’s remarkably cheap and effective; you can make one over a weekend. In this article we’ll build a ½ scale model of a time capsule that contains the complete Linux 0.1 source code, plus sundry articles and Internet ephemera.
A time capsule must perform three basic functions:
- Encode information with sufficient density & durability.
- Protect the information from physical damage, moisture, heat & cold, etc.
- Be findable.
So while the Rosetta Stone performed (2) fairly well, it was pretty lucky to be found at all. Also, its data density is terrible: about 1 bit per cubic centimeter. A book in a library fulfills (1), but requires the library around it to provide (2) and (3).
The internet, contrary to popular belief, is not very good at preserving information on a long time scale. It ultimately depends on digital media that break down rapidly. Early Unix source code, one of the most important sequences of bits ever written, didn’t last more than a couple of decades. It had to be reconstructed from printouts.
The makeup of our capsule is simple: cellulose, carbon, polymers, and distributed information. You print a bundle of paper, place it inside a box, stick a label on it, then drown it in translucent epoxy resin. Alongside whatever it is you are preserving, you include the locations of other capsules.
1. Density & Durability
The humble piece of paper has come a long way in the last few decades. Acid-free paper 1 is the norm. It has archival properties comparable to cotton rag or parchment, and can easily survive for 300 to 500 years. Black & white laser toner is carbon powder and resin, fused to the surface at a few hundred degrees Fahrenheit. Carbon, being an atomic element, never fades. All in all it’s a cheap & cheerful way to preserve data for a very long time.
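To put rough numbers on that density claim, here is a quick back-of-envelope sketch. The characters per page and sheet thickness are assumptions chosen only for illustration; your printer and paper will differ.

```python
# Rough data-density estimate for a laser-printed paper bundle.
# Every number here is an assumption for illustration only.

chars_per_sheet = 4 * 3000         # 4-up printing, ~3000 characters per logical page (assumed)
bits_per_char = 8                  # treat each character as one byte of recoverable text
sheet_area_cm2 = 21.59 * 27.94     # US letter sheet
sheet_thickness_cm = 0.01          # ~0.1 mm per sheet (assumed)

bits_per_sheet = chars_per_sheet * bits_per_char
volume_per_sheet_cm3 = sheet_area_cm2 * sheet_thickness_cm
density = bits_per_sheet / volume_per_sheet_cm3
print(f"~{density:,.0f} bits per cubic centimetre")   # versus ~1 bit/cm^3 for the Rosetta Stone
```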
2. Protection from the elements
You need an airtight seal that is itself fairly rugged. Epoxy resin is the hard plastic you often see protecting the surface of bars and restaurant tables. It’s the closest thing you can get to man-made amber. Our scale-model capsule will be encased in a shell of resin about 1 centimeter thick. We’re not shooting for 2,000 years exposed in the desert, just 50 to 500 years in the ground.
3. Be findable
The biggest design problem of traditional time capsules is that people forget where the damned things are buried. There seem to be two contradictory thoughts going on at once: that the best way to preserve information is inside a buried box, but that the best way to preserve information about the box is somewhere else.
Inside each time capsule will be a list of other known capsules. That, I hope, will make the difference between a node in a network and a forgotten box of junk. Dozens or hundreds of people could build full-scale capsules like this and share location data with each other. This prototype and its twin are the only two of their kind so far, so they only link to each other. The larger the network, the greater the chances of recovery.
- 70 sheets of high-quality, acid-free printer paper
- Laser printer
- 250 pages of data you want to preserve
- Scissors or paper cutter
- Masking tape
- ¼” thick balsa wood planks
- Illustration board (thick paperboard)
- Razor, ruler
- Wood glue or white glue
- 500ml clear epoxy resin
- Disposable cups and stir sticks
- Gloves, goggles, and mask
Gather whatever data you want to preserve. It can be books, songs, computer programs, your Facebook page, diary, recipes, anything. I would focus on things that are likely to disappear. The future will probably most appreciate a description of boring, everyday life in Right Now, A.D.
Laser-print your data “4up” and single-sided. You should experiment with your printer’s capabilities, but I’ve found that 10pt Helvetica printed 4up is the smallest mine can go and still be legible. Don’t print double-sided because the toner might stick to itself if it ever gets too hot.
Cut the sheets into quarter-pages, collate them, then tightly wrap the bundle in a couple of layers of paper, like a Christmas present.
The box is mainly for appearance’s sake, and to protect the paper from light. You could probably sink your bundle (wrapped in a few layers of paper and plastic) directly into the resin and it would work fine.
I made mine from illustration board. Cut two 12 x 15cm pieces for the top and bottom. Then cut the side walls 3cm high and slightly shorter than the 12 or 15 cm, to account for the thickness of the neighboring wall.
Glue it up! It doesn’t have to be pretty or perfect as long as it fits as tightly as possible around the paper bundle. Let it sit for an hour to dry.
Place the bundle inside a ziplock bag, squeeze all the air out, and seal it. Put that inside the box and glue the lid shut. Paint it if you want, then glue a label to the front so people know what it is.
You build the mould the same way as the box, just 2cm larger in each dimension. I built mine out of balsa wood. If you have an aluminium or plastic tray of the right dimensions you can use that instead.
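If you want to estimate the resin pour before you buy, a first-order figure is simply the mould's interior volume minus the box's volume. The sketch below assumes the simple rectangular geometry used here and ignores the mould walls, spillage, and any resin that seeps into the box, so treat the result as approximate and buy a little extra.

```python
# Estimate how much epoxy resin a rectangular capsule needs:
# resin volume = mould interior volume - inner box volume.
# Dimensions follow the half-scale build described above; adjust for your own box.

box_l, box_w, box_h = 15.0, 12.0, 3.0   # cm, inner box
shell = 1.0                              # cm of resin on every side

mould_l = box_l + 2 * shell
mould_w = box_w + 2 * shell
mould_h = box_h + 2 * shell
resin_cm3 = mould_l * mould_w * mould_h - box_l * box_w * box_h
print(f"~{resin_cm3:.0f} ml of mixed resin")   # 1 cm^3 = 1 ml; here roughly 650 ml
```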
Needless to say, do all epoxy work in a well-ventilated area and follow the safety instructions. Epoxy resin sounds tricky but it’s pretty easy to handle with practice. There are many types of resin of varying properties. You want “encapsulating epoxy resin” or “clear casting resin”, which is often used to seal electronic components and art projects. The strongest resins take 48 hours to harden completely, but last much longer than fast-cure resins.
Mix & pour about 3/4 cup (140ml) of resin in the bottom of the mould and let it cure for about an hour. This forms the back of the shell. (By the way, don’t buy the stuff in the picture. Buy low-odor 1:1 mix resin, which is much easier to work with.)
Center the box inside the mould. Mix & pour the rest of the resin on top of the box, and let it flow into the sides. Loosely cover and let cure for 24 to 48 hours. Your inner box will probably not be water-tight, so expect some bubbles to stream out of it as the epoxy seeps in. (To avoid this you could use an airtight tin, though there is a chance it will float!)
Place in ground, let stand 300 years.
This is a scale model to demonstrate the process. Real capsules will contain at least 500 full-sized sheets of paper. The magic of the square-cube law makes it more cost-effective as you scale up. Casting large volumes of epoxy is a bit tricky, so start small and ask your friendly neighborhood supplier for advice.
Three Reams (1,500 sheets): This is probably the most manageable size for a weekend project. You could use ready-made “archival” boxes for the inner box and one of those plastic file-folder boxes for the outer mould. Artist-grade resin starts to get pretty expensive at these volumes, but you can use amber encapsulating resin instead, about $80 for two gallons. AeroMarine sells in bulk and will send you free samples.
Carton (5,000 sheets): If you want something with more volume and durability, you can use a concrete flower pot whose inner dimensions are about 2 cm larger than the outer dimensions of the paper carton. Tightly seal the paper with several layers of thick plastic, and pour the resin as before.
Oil Drum (28,000 sheets): If you have good concrete molding skills and access to bituminous resins (instead of epoxy resin) for the water seal, you could build a time capsule around a 55-gallon oil drum with very good capacity. It’s probably wise to invest in higher-quality printing at this scale, so you can fit more than 4 pages of data per sheet.
Monument (3,500,000 sheets): A typical twenty-foot cargo container can hold over 700 cartons of paper. Constructing this monument requires a proper concrete foundation, a steel-reinforced concrete shell, and serious seals against moisture. The curation and printing alone are jobs of unusual size, but doable.
A cargo container sells for about USD$3,000. A contractor friend of mine estimated the concrete construction work would cost about USD$15,000. The printing, curation of the data, and mosaic work would be the most expensive items. But I believe the total cost would be under USD$100,000. That’s less than many corporate sculptures, and a lot more useful.
I think it would be beautiful to put giant concrete archives in public parks around the world. A mosaic on the top surface would describe what it is and what it’s for. Sink them about 2 meters down so they stick out a bit. They would form large benches for people to sit and play on, trace out the mosaic with their fingers, and perhaps be reminded of time.
If you want to build a time capsule yourself, send me an email! Let’s get this thing started.
“Acid-free” is a bit misleading. All wood-pulp paper contains acid that will yellow and destroy the fibers over time. “Acid-free” paper is given an extra wash, then impregnated with alkalies (baking soda, more or less) to improve whiteness and neutralize remaining acids. The percentage varies from 2% by weight up to 4 or 5% for “archival” quality paper.
|
2026-02-03T05:12:56.232788
|
275,019
| 3.954566
|
http://www.mathreference.com/ca-int,aaf.html
|
2 - 2cos(u)cos(v) - 2sin(u)sin(v)
Draw a segment from the origin to p. This is the segment that defines the angle u. Draw another segment from q, perpendicular to this segment. Let these perpendicular segments meet at the point s. Now spq forms another right triangle, with pq as hypotenuse. The altitude of this triangle has length sin(w), while the base is 1-cos(w). Apply the pythagorean theorem again to get the square of the hypotenuse.
2 - 2cos(w)
Set this equal to the earlier expression to obtain the angle subtraction formula:
cos(v-u) = cos(u)cos(v) + sin(u)sin(v)
This is great, but u and v are rather constrained. Let v stray past 90°. Its cosine becomes negative, but cos(u)-cos(v) is still correct for the length of the base of the first triangle. As v-u exceeds 90°, s slides through the origin, and winds up behind the origin. The cosine of w goes negative, yet 1-cos(w) is still the length of the base of the second right triangle. Eventually q is lower than p. The first right triangle points down, rather than up. Now sin(v)-sin(u) is the opposite of the length of the altitude, but the length is squared in the pythagorean formula, so this doesn't matter. When v passes 180°, its sine is negative, yet sin(v)-sin(u) still gives the length of the altitude of the first triangle, at least in absolute value. Our formula holds for any u between 0° and 90°, and any v between u and u+180°.
When v goes beyond u+180°, reflect the picture through the line x = y. This reproduces the earlier case, where our formula holds. The reflection swaps sine and cosine for u and v, which changes the right side of the equation not at all. It also replaces w with 360°-w, which changes its cosine not at all. Thus the formula holds for all v between u and u+360°.
If u is an angle in the second quadrant, subtract 90° from u and v, leaving w unchanged. Now u is in the first quadrant and the formula works. Our rotation moved sine to cosine and cosine to -sine, for both u and v. This changes the formula not at all. Perform a similar rotation when u is in the third or fourth quadrant. Therefore the angle subtraction formula works for all angles u and v.
Replace v with -v to get the angle addition formula:
cos(u+v) = cos(u)cos(v) - sin(u)sin(v)
In the above formula, hold u fixed and let v be a variable. Take the derivative with respect to v. This gives the angle addition for sines.
sin(u+v) = cos(u)sin(v) + sin(u)cos(v)
Replace v with -v to get the angle subtraction formula for sines.
sin(u-v) = sin(u)cos(v) - cos(u)sin(v)
Dividing the sine addition formula by the cosine addition formula, then dividing the numerator and denominator by cos(u)cos(v), gives the angle addition formula for tangents:
tan(u+v) = (tan(u) + tan(v)) / (1 - tan(u)tan(v))
Replacing v with -v gives the corresponding subtraction formula:
tan(u-v) = (tan(u) - tan(v)) / (1 + tan(u)tan(v))
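As a quick numerical sanity check, the identities derived above can be verified at a handful of arbitrary angles:

```python
import math

def check(lhs, rhs, label):
    # The two sides should agree to within floating-point rounding error.
    assert abs(lhs - rhs) < 1e-9, label

for u in (0.3, 1.1, 2.5, 4.0):        # radians, arbitrary test angles
    for v in (0.2, 0.9, 3.3, 5.6):
        check(math.cos(v - u), math.cos(u) * math.cos(v) + math.sin(u) * math.sin(v), "cos(v-u)")
        check(math.cos(u + v), math.cos(u) * math.cos(v) - math.sin(u) * math.sin(v), "cos(u+v)")
        check(math.sin(u + v), math.sin(u) * math.cos(v) + math.cos(u) * math.sin(v), "sin(u+v)")
        check(math.tan(u + v), (math.tan(u) + math.tan(v)) / (1 - math.tan(u) * math.tan(v)), "tan(u+v)")
print("all identities hold numerically")
```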
|
2026-01-22T10:07:21.170277
|
564,978
| 3.769044
|
http://indianapublicmedia.org/news/iu-researchs-yorks-eternal-flame-49697/
|
Scientists have typically viewed natural gas as a non-renewable energy source. But that might not always be the case, according to new research from Indiana University scientists.
Behind a waterfall in New York state, there’s a so-called “eternal flame” that burns naturally and does not go out. It stays lit because of gas that seeps out of shale deposits deep underground and provides the flame constant fuel.
Typically, scientists have assumed temperatures deep in the earth were so hot that they were breaking the shale rock and releasing the gas.
But Indiana University geological scientist Arndt Schimmelmann says that’s not true in this case.
“This flame and these seepages have occurred for millions of years in those areas and we know that the source rock, about 400 meters deep, is not very warm,” Schimmelmann says. “It should not even be able to produce much gas at this temperature, yet the gas is coming and it’s not being depleted. So our hypothesis is that a different mechanism is responsible for continuous gas generation at depth.”
Schimmelmann says he doesn’t know what that mechanism is, but more research will hopefully identify the cause.
Still, Indiana Geological Survey scientist Maria Mastalerz says the discovery likely won’t lead to more extraction of natural gas as an energy source because the geological features of places with this phenomenon are very different from the locations where companies are drilling for gas. Instead, she says the implications are much more important for climate change.
“Our role is to identify the places that these emissions occur and get some data on the level of emissions and the level of contribution to total methane emissions globally,” Mastalerz says.
Mastalerz says methane is a strong greenhouse gas, so understanding how much methane is released naturally into the atmosphere is key to understanding the future of climate change.
|
2026-01-27T02:37:15.591753
|
582,970
| 3.71321
|
http://theconversation.com/welcome-to-the-plastisphere-ocean-going-microbes-on-vessels-of-plastic-16102
|
The amount of plastic debris accumulating in the open ocean has doubled in 40 years. It has been a topic of increasing public concern and scientific interest since it was first reported in the 1970s.
It conjures up images of islands of garbage, the proverbial “Great Pacific Garbage Patch” twice the size of France, but for those who have sailed through it, the reality is much less sensational. Despite popular conception, large pieces or visible accumulations are rare. Instead, nets towed at the sea surface typically catch confetti-sized plastic pieces called microplastics.
These small pieces are post-consumer waste (stuff you and I have thrown out), and the majority appears to be floating polyethylene and polypropylene, unlike for example the plastic used for bottled water (polyethylene terephthalate, or PET) that sinks to the ocean floor.
A large fraction of this plastic is produced for single-use packaging and eludes recycling efforts. A piece of low-density plastic dropped into coastal waters on the east coast of the United States can be carried by surface currents to the centre of the North Atlantic Subtropical Gyre, also known as the Sargasso Sea, in about six weeks. Here the plastic is trapped by currents, with little chance of escape. Sadly, plastic debris has been found in all five major ocean gyres, and in the Southern Ocean, in what is usually considered “pristine” open water.
In 2010 Science Magazine published the surprising finding that the overall amount of plastic in the North Atlantic Ocean is remarkably stable. Where is all the plastic going? Other research has revealed that it is broken down by ultraviolet light and physical abrasion. As biologists we took a closer look at what lives on plastic marine debris in the ocean to see what role sea organisms might play in the fate of plastic marine debris.
Our study recently published in Environmental Science and Technology provides compelling evidence that part of the answer rests with the ocean’s “hidden majority” – the smallest oceanic lifeforms comprised of marine microbes: bacteria and other single-cell organisms. Ours is the first description of microbial life associated with plastic marine debris from the open ocean.
Using a combination of high-powered microscopy and state-of-the-art DNA sequencing, we found a diverse array of microscopic organisms living on plastic marine debris, and they are distinct from the “natural” community in the surrounding waters or on floating seaweeds. We are describing the “natural history” of an unnatural substrate. There are thousands of different micro-organisms on a piece of plastic half the size of a fingernail, some of which appear to live specifically on the plastic.
We refer to this community as the “Plastisphere”. Like the biosphere (the thin film of life around the surface of planet Earth), the Plastisphere represents a little world of life that exists on the surface of plastic particles. This environment comes complete with predators and prey, organisms that photosynthesise to produce energy from light, (similar to plants on land), and even parasites and potentially disease-causing organisms harmful to invertebrates, fish and humans.
Perhaps our most compelling observation was the discovery of what we call “pit-formers”: conspicuous cells that appeared to be embedded in pits on the plastic surface – somewhat like eggs in an egg carton. We hypothesise that these curious residents play some part in breaking down the plastics. That’s not to say that the pit-formers are necessarily biodegrading plastic down to its constituent water and carbon. This claim would require further laboratory testing. However, any breakdown of plastics into progressively smaller bits is noteworthy because it might explain why the amount of plastic doesn’t seem to be increasing over the long term.
This process also has potentially serious consequences. It is sobering to realise that the estimated global annual plastic production of 245m tonnes represents 35kg of plastic per person per year for each of the seven billion people on the planet. This is, approximately, the total combined weight of the entire human race.
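A quick back-of-the-envelope check of the per-person figure (a minimal sketch in Python; the production and population numbers are the ones quoted above):

# Rough check of the per-person plastic figure quoted above.
annual_production_tonnes = 245e6   # estimated global annual plastic production (from the text)
population = 7e9                   # approximate world population (from the text)
per_person_kg = annual_production_tonnes * 1000 / population
print(round(per_person_kg))        # 35 kg of plastic per person per year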
A considerable amount will end up in the ocean, where it will gradually degrade into smaller pieces. At the base of marine food webs is plankton, including zooplankton, small creatures that generally filter tiny food particles from the water. As plastic fragments shrink in size, the chance of plankton eating them increases. If plastic and toxic chemicals associated with some plastics enter the food chain at the bottom, they may pass up through the food chain and ultimately could accumulate in the fish we eat for dinner - and that’s something worth researching.
|
2026-01-27T08:40:55.612129
|
717,372
| 3.883094
|
http://www.reference.com/browse/Unami+Delaware
|
The Delaware languages, also known as the Lenape languages, are Munsee and Unami, two closely related languages of the Eastern Algonquian subgroup of the Algonquian language family. Munsee and Unami were spoken aboriginally by the Lenape people in the vicinity of the modern New York City area in the United States, including western Long Island, Manhattan Island, Staten Island, as well as adjacent areas on the mainland: southeastern New York State, eastern Pennsylvania, New Jersey, and coastal Delaware.
It is estimated that as late as the seventeenth century there were approximately forty Delaware local village bands with populations of possibly a few hundred persons per group. Estimates for the early contact period vary considerably, with a range of 8 000 - 12 000 given. Other estimates for approximately 1600 AD suggest 6 500 Unami and 4 500 Munsee, with data lacking for Long Island Munsee. These groups were never united politically or linguistically, and the names Delaware, Munsee, and Unami postdate the period of consolidation of these local groups. The earliest use of the term Munsee was recorded in 1727, and Unami in 1757.
The intensity of contact with European settlers resulted in the gradual displacement of Delaware peoples from their aboriginal homeland, in a series of complex population movements involving displacement and consolidation of small local groups, extending over a period of more than two hundred years. The currently used names were gradually applied to the larger groups resulting from this process. The ultimate result was the displacement of virtually all Delaware-speaking peoples from their homeland to Oklahoma, Kansas, Wisconsin, upstate New York, and Canada.
Two distinct Unami-speaking groups emerged in Oklahoma in the late nineteenth century, the Registered (Cherokee) Delaware in Washington, Nowata, and Craig Counties, and the Absentee Delaware of Caddo County. Until recently there were a small number of Unami speakers in Oklahoma, but the language is now extinct there. Some language revitalization work is underway by the Delaware Tribe of Indians.
Equally affected by consolidation and dispersal, Munsee groups moved to several locations in southern Ontario as early as the late eighteenth century, to Moraviantown, Munceytown, and Six Nations. Several different patterns of migration led to groups of Munsee speakers moving to Stockbridge, Wisconsin; Cattaraugus, New York; and Kansas. Today Munsee survives only at Moraviantown, where there are no more than one or two fluent speakers.
Munsee and Unami are assigned to the Algonquian language family, and are analysed as members of Eastern Algonquian, a subgroup of Algonquian.
The languages of the Algonquian family constitute a group of historically related languages descended from a common source language, Proto-Algonquian. The Algonquian languages are spoken across Canada from the Rocky Mountains to the Atlantic coast; on the American Plains; south of the Great Lakes; and on the Atlantic coast. Many of the Algonquian languages are now extinct.
The Eastern Algonquian languages were spoken on the Atlantic coast from the Canadian Maritime provinces to North Carolina. Many of the languages are now extinct, and some are known only from very fragmentary records. Eastern Algonquian is considered a genetic subgroup within the Algonquian family, that is, the Eastern Algonquian languages share a sufficient number of common innovations to suggest that they descend from a common intermediate source, Proto-Eastern Algonquian. The latter proto-language is hypothesized to descend from Proto-Algonquian.
The linguistic closeness of Munsee and Unami entails that they share an immediate common ancestor which may be called Common Delaware; the two languages have diverged in distinct ways from Common Delaware.
As well, in some classifications of Eastern Algonquian languages the Delaware languages are grouped with Mahican as Delawaran, reflecting the relatively high degree of similarity among the three. Nonetheless Unami and Munsee are more closely related to each other than to Mahican. Some historical evidence suggests commonalities between Mahican and Munsee.
The line of historical descent is therefore Proto-Algonquian > Proto-Eastern Algonquian > Delawarean > Common Delaware + Mahican, with Common Delaware splitting into Munsee and Unami.
Munsee and Unami are linguistically very similar. Despite their relative closeness the two are sufficiently distinguished by features of syntax, phonology, and vocabulary that they are not mutually intelligible and by normal linguistic criteria are treated as separate languages.
Munsee Delaware was spoken in the central and lower Hudson River Valley, western Long Island, the upper Delaware River Valley, and the northern third of New Jersey. While dialect variation in Munsee was likely, there is no information about possible dialectal subgroupings.
Three dialects of Unami are distinguished: Northern Unami, Southern Unami, and Unalachtigo.
Northern Unami, now extinct, is recorded in large amounts of materials collected by Moravian missionaries but is not reflected in the speech of any modern groups. The Northern Unami groups were south of the Munsee groups, with the southern boundary of the Northern Unami area being at Tohickon Creek on the west bank of the Delaware River and between Burlington and Trenton on the east bank.
The poorly known Unalachtigo dialect is described as having been spoken in the area between Northern and Southern Unami, with only a small amount of evidence from one group.
Southern Unami, to the south of the Northern Unami-Unalachtigo area, was reflected in the Unami Delaware spoken by Delawares in Oklahoma, but is now extinct.
The Unamis residing in Oklahoma are sometimes referred to as Oklahoma Delaware, while the Munsees in Ontario are sometimes referred to as Ontario Delaware or Canadian Delaware.
Munsee-speaking residents of Moraviantown use the English term Munsee to refer to residents of Munceytown, approximately 50 kilometres to the east and refer to themselves in English as Delaware, and in Munsee as /lənáːpe:w/ ‘Delaware person, Indian.’ Oklahoma Delawares refer to Ontario Delaware as /mwə́nsi/ or /mɔ́nsi/, a term that is also used for people of Munsee ancestry in their own communities.
Some Delawares at Moraviantown also use the term Christian Indian as a preferred self-designation in English. There is an equivalent Munsee term ké·ntə̆we·s ‘one who prays, Moravian convert.’
Munsee speakers refer to Oklahoma Delawares as Unami in English or /wə̆ná·mi·w/ in Munsee. The Oklahoma Delawares refer to themselves in English as Delaware and in Unami as /ləná·p·e/.
The name "Lenape" that is sometimes used in English for Delaware properly only refers to Unami. However, Kraft uses Lenape as a cover term to refer to all Delaware-speaking groups.
Munsee speakers refer to their language as /hə̀lə̆ni·xsəwá·kan/, literally 'speaking the Delaware language.'
The first recorded mention of Delaware Pidgin dates from 1628, while the final recorded mention is from 1785. Delaware Pidgin is attested in word lists, liturgical material, and later word lists taken from earlier sources.
Pidgin Delaware was used by both Munsee and Unami Delawares in interactions with speakers of Dutch, Swedish, and English. Some non-Delaware users of the pidgin likely were under the impression that they were speaking Delaware.
There is no evidence to support the hypothesis that Pidgin Delaware predated the arrival of Europeans.
Delaware Pidgin is characterized by its extreme simplification of the complex grammatical features of Delaware nouns and verbs. Delaware Pidgin features include: (a) elimination of the distinction between singular and plural forms normally marked on nouns with a plural suffix; (b) simplification of the complex system of person marking, with no indication of grammatical gender or plurality, and concomitant use of separate pronouns to indicate grammatical person; (c) elimination of reference to plural pronominal categories of person; (d) elimination of negative suffixes on verbs, with negation marked solely by independent particles.
Delaware Pidgin appears to show no grammatical influence at all from Dutch or other European languages, contrary to the general patterns occurring in pidgin languages, according to which a European contributing language will constitute a significant component of the pidgin. Comments by an early observer suggest that Delaware speakers deliberately simplified their language to facilitate communication with the small numbers of Dutch settlers and traders they encountered in the 1620s.
Delaware Pidgin also appears to be somewhat unusual among pidgin languages in that almost all its vocabulary appears to come from the language spoken by the Delaware users of the Pidgin, with virtually none coming from European users. The relatively few Pidgin Delaware words that are not from Unami likely were borrowings mediated through Unami or Munsee or other languages.
Pidgin Delaware is only one of a number of pidgin languages that arose on the Atlantic coast due to contact between speakers of Algonquian languages and Europeans. Although records are fragmentary, it is clear that many Indians used varieties of pidginized English, and there are also recorded fragments of a pidgin Massachusett, an Eastern Algonquian language spoken to the north of Delaware territory in what is now Boston and adjacent areas. It is likely that, as with Pidgin Delaware, Europeans who learned other local pidgins were under the impression that they were using the actual indigenous language.
This section focuses upon presenting general information about Munsee and Unami sounds and phonology, with detailed discussion reserved for entries for each language.
Munsee and Unami have the same basic inventories of consonants, as in the following chart.
In addition, Unami is analysed as having contrastive long voiceless stops: p·, t·, č·, k·; and long voiceless fricatives: s·, š·, and x·. The raised dot /·/ is used to indicate length of a preceding consonant or vowel. A full analysis and description of the status of the long consonants is not available, and more than one analysis of Delaware consonants has been proposed. Some analyses only recognize long stops and fricatives as predictable, i.e. as arising by rule. The contrastive long consonants are described as having low functional yield, that is, they differentiate relatively few pairs of words, but nonetheless do occur in contrasting environments. Both languages have rules that lengthen consonants in certain environments.
Several additional consonants occur in Munsee loan words: /f/ in e.g. nə̀fó·ti ‘I vote’; /r/ in ntáyrəm.
A number of alternate analyses of Munsee and Unami vowels have been proposed. In one, the two languages are analysed as having the same basic vowel system, consisting of four long vowels /i· o· e· a·/, and two short vowels /ə a/. This vowel system is equivalent to the vowel system reconstructed for Proto-Eastern-Algonquian. Alternative analyses reflect several differences between the two languages. In this analysis Munsee is analysed as having contrasting length in all positions, with the exception of /ə/. In cells with two vowels, the first is long.
|High||i·, i||o·, o|
|Mid||e·, e||ə|
|Low||a·, a|
Similarly, Unami vowels have also been analysed as organized into contrasting long-short pairs. One asymmetry is that high short /u/ is paired with long /o·/, and the pairing of long and short /ə/ is noteworthy. In cells with two vowels, the first is long.
|High||i·, i||o·, u|
|Mid||e·, e||ə·, ə||ɔ·, ɔ|
|Low||a·, a|
Both Munsee and Unami have loan words from European languages, reflecting early patterns of contact between Delaware speakers and European traders and settlers. The first Europeans to have sustained contact with the Delaware were Dutch explorers and traders, and loan words from Dutch are particularly common. Dutch is the primary source of loan words in Munsee and Unami.
Because many of the early encounters between Delaware speakers and Dutch explorers and settlers occurred in Munsee territory, Dutch loanwords are particularly common in Munsee, although there are also a number in Unami as well.
Many Delaware borrowings from Dutch are nouns that name items of material culture that were presumably salient or novel for Delaware speakers, as is reflected in the following borrowed words.
|Munsee||Unami||Gloss||Dutch source|
|hé·mpət||hémpəs||shirt||hemd ‘shirt, shift’|
|mó·kəl||mɔ́·k·əl||(ironwood) maul (Munsee); maul, sledgehammer (Unami)||moker ‘sledge, large hammer’|
More recent borrowings tend to be from English, such as the following Munsee loan words: ahtamó·mpi·l ‘automobile’; kátəl ‘cutter’; nfó·təw ‘s/he votes.’
There is one known Swedish loan word in Unami: típa·s ‘chicken,’ from Swedish tippa, a call to chickens.
Europeans writing down Delaware words and sentences have tended to use adaptations of European alphabets and associated conventions. The quality of such renditions has varied widely, as Europeans attempted to record sounds and sound combinations they were not familiar with.
Practical orthographies for both Munsee and Unami have been created in the context of various language preservation and documentation projects. A recent bilingual dictionary of Munsee uses a practical orthography derived from a linguistic transcription system for Munsee. The same system is also used in a recent word book produced locally at Moraviantown.
The online Unami Lenape Talking Dictionary uses a practical system distinct from that for Munsee. However, other practically oriented Unami materials use a writing system with conventional phonetic symbols.
|Linguistic||Practical||Gloss|
|kwə́t·i||kwëti||one|
|ní·š·a||nìshi||two|
|kwə́t·a·š||kwëtash||six|
|ní·š·a·š||nishash||seven|
|wčé·t||wchèt||sinew, muscle|
|tá·x·an||taxàn||piece of firewood|
|tə́me||tëme||coyote, wolf|
|ɔ́·k||òk||and|
The table below presents a sample of Munsee words, written first in a linguistically oriented transcription, followed by the same words written in a practical system. The linguistic system uses a raised dot (·) to indicate vowel length. Although stress is mostly predictable, the linguistic system uses the acute accent to indicate predictable main stress. As well, predictable voiceless or murmured /ă/ is indicated with the breve accent (˘). Similarly, the breve accent is used to indicate an ultra-short /ə/ that typically occurs before a single voiced consonant followed by a vowel. The practical system indicates vowel length by doubling the vowel letter, and maintains the linguistic system’s practices for marking stress and voiceless or ultra-short vowels. The practical system uses orthographic ⟨sh⟩ for /š/ and ⟨ch⟩ for /č/.
[Sample Munsee word table garbled in extraction; recoverable entries include nə̆wánsi·n ‘I forgot it’ and wə́sksəw ‘he is young,’ together with the glosses ‘log, timber,’ ‘his older brother,’ ‘I am named so and so,’ ‘he smokes,’ ‘it is ripe,’ and ‘it is dry,’ and the form máske·kw.]
|
2026-01-29T07:35:14.118945
|
939,393
| 3.60631
|
http://docs.oracle.com/cd/E19205-01/819-5257/blaqi/index.html
|
A breakpoint is a location where an action occurs, at which point the program stops executing. The following are event specifications for breakpoint events.
in function: The function has been entered, and the first line is about to be executed. The first executable code after the prolog is used as the actual breakpoint location. This may be a line where a local variable is being initialized. In the case of C++ constructors, execution stops after all base class constructors have executed. If the -instr modifier is used (see -instr), it is the first instruction of the function about to be executed. The function specification can take a formal parameter signature to help with overloaded function names or template instance specification. For example:
stop in mumble(int, float, struct Node *)
Do not confuse in function with the -in function modifier.
at [filename:]line_number: The designated line is about to be executed. If you specify filename, then the designated line in the specified file is about to be executed. The file name can be the name of a source file or an object file. Although quotation marks are not required, they may be necessary if the file name contains special characters. If the designated line is in template code, a breakpoint is placed on all instances of that template.
infunction function: Equivalent to in function for all overloaded functions named function or all template instantiations thereof.
inmember function (also inmethod function): Equivalent to in function for the member function named function in every class.
inclass classname [-recurse | -norecurse]: Equivalent to in function for all member functions that are members of classname, but not any of the bases of classname. -norecurse is the default. If -recurse is specified, the base classes are included.
inobject object-expression [-recurse | -norecurse]: A member function has been called on the specific object at the address denoted by object-expression. stop inobject ox is roughly equivalent to the following, but unlike inclass, bases of the dynamic type of ox are included. -recurse is the default. If -norecurse is specified, the base classes are not included.
stop inclass dynamic_type(ox) -if this==ox
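As a hedged illustration of the scope differences described above (Shape, draw, and sp are hypothetical names, not taken from the manual), the following commands contrast the event specifications:
stop inmember draw            (stops in the member function named draw in every class)
stop inclass Shape            (stops in member functions declared in Shape itself; -norecurse is the default)
stop inclass Shape -recurse   (also includes member functions inherited from Shape's base classes)
stop inobject sp              (stops in member functions called on the specific object sp; -recurse is the default)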
|
2026-02-01T21:15:47.366718
|
403,890
| 3.593185
|
http://www.wisegeek.com/what-are-image-processing-algorithms.htm
|
Image processing algorithms harness computer hardware and software to give far greater control over image manipulation than was ever possible with analog image processing. They are written in several languages and make use of different algorithms according to their use and purpose. Image processing covers more than just the processing of images taken with a digital camera, so algorithms are developed for processing magnetic resonance imaging (MRI) and computed tomography (CT) scans, satellite imagery, microscopy and forensic analysis, robotics and more. Algorithms for image processing fall into several categories, such as filtering, convolutions, morphological operations and edge detection. These functions have expanded image processing tremendously since the 1980s, as computer hardware has proliferated and become affordable for the average business or household.
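To make the filtering and convolution categories concrete, here is a minimal sketch in Python with NumPy (the kernel and the tiny synthetic image are toy examples; a production pipeline would rely on an optimized library such as SciPy or OpenCV):

# Minimal 2-D convolution used as an edge-detecting filter (toy example).
# (Correlation form: the kernel is not flipped, as is common in image processing.)
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over a 2-D grayscale image ('valid' region only)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])   # horizontal-gradient (edge) kernel

image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # a vertical edge between a dark and a bright half
edges = convolve2d(image, sobel_x)
print(edges)                       # large values mark where the edge sits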
In personal and professional digital camera operation, sophisticated algorithms make up for what the captured image lacks by means of interpolation of color. This is done by examining adjacent pixels, and pixels farther away in the image, to keep false coloration, known as color aliasing, from appearing and degrading the fidelity of the photographed scene. Digital processing of the photograph allows for the reduction of noise and signal distortions in digital images, and the algorithms can process two-dimensional, three-dimensional and four-dimensional images into formats that can be easily stored and manipulated.
Optical character recognition algorithms are used by surveillance teams and law enforcement personnel to read license plates from closed-circuit camera systems or road-mounted cameras. These algorithms must be intricate enough to make adjustments for the speed of the vehicle being chased, weather conditions and angles of view so that the license plate characters remain easily readable. Image processing algorithms also are used, together with neural networks and wavelets, in the optical character recognition algorithms behind handwriting recognition software. These recognition algorithms interpret handwritten notes, diagrams, photographs and equations and process them into contextual translations for storage and transmission between various hardware devices.
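As a sketch of how optical character recognition is typically invoked from code (this assumes the open-source Tesseract engine and its Python wrapper pytesseract are installed; plate.png is a placeholder file name, not a real dataset):

# Minimal OCR sketch using the Tesseract engine via pytesseract (assumed installed).
from PIL import Image
import pytesseract

image = Image.open("plate.png")              # placeholder: a cropped region such as a license plate
text = pytesseract.image_to_string(image)    # runs Tesseract and returns the recognized text
print(text.strip())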
In medicine, image processing algorithms have continued to be fine-tuned and expanded, using both linear and curve algorithms together with distance transformation formulations to achieve greater detail, along with geometric corrections that provide faithful scan images from positron emission tomography and MRIs. In forensics and microscopy, simple and complex deconvolution algorithms have enabled microscopists to reduce blurring and achieve faithful image resolution. In digital mammography, several image processing algorithms are used in combination to provide a clear picture of each lesion, its edges and density, and to more clearly define any tumors evident. These medical applications continue to be developed and are delivering ever-truer images for the diagnostic and prognostic information the medical community needs.
|
2026-01-24T10:46:37.793663
|
447,843
| 4.005704
|
http://www.nbcnews.com/id/6940729/ns/technology_and_science-science/
|
Add ants to the list of animals that can fly. Worker ants, the wingless kind.
Scientists call it gliding, or directed aerial descent. But just as one might say that flying squirrels fly, so does a type of ant called Cephalotes atratus. These ants live in rain forest treetops, and their newly discovered ability is a lifesaver.
Stephen Yanoviak of the University of Texas Medical Branch and University of Florida made the discovery by accident about two years ago while collecting mosquitoes for an unrelated project in the rain forest canopy near Iquitos, Peru.
The finding was announced Wednesday.
"When I brushed some of the ants off of the tree trunk, I noticed that they did not fall straight to the ground," Yanoviak told LiveScience. "Instead, they made a J-shaped cascade leading back to the tree trunk."
Yanoviak immediately suspected that his observation was something "new and exciting," but figured someone must have scooped him years ago. However, a quick read of past research revealed that his observation was novel.
So paint them
Yanoviak started marking the ants with paint to follow their amazing journeys up and down the trees. He discussed the findings with Michael Kaspari of the Smithsonian Tropical Research Institute in Panama and the University of Oklahoma. A third colleague, Robert Dudley, with the University of California and also the Smithsonian, was brought in to create high-speed videos of the gliding wonders, among other things.
The team found that the ants’ downward journey comes in three phases: a 2- to 3-yard freefall and attempt to slow down, followed by a rapid midair turn back toward the tree trunk, and finished off with a steep but directed glide to the tree trunk.
The remarkably adapted ants are the first animals found to consistently glide backwards, other than microbes, some of which spend their entire lives gliding in directions hard to call backwards or forwards.
Yanoviak and his colleagues discovered that the gliding ants are able to return to their home tree trunk 85 percent of the time.
Once they make contact again with the trunk, the ants either cling to it with their sticky toes (called "tarsi" in ants) or fall a few more yards before gaining a foothold — at which point they begin their march back up the tree, often returning to the exact point from which they dropped, and typically within 10 minutes of their initial fall. Experiments done with blinded ants found that they rely on their vision to detect the tree trunk and guide their descent.
Smaller ants fell shorter distances. The scientists also found that ants called Pseudomyrmecinae were able to glide, but other arboreal ants they tested could not.
These results are published in Thursday's issue of the journal Nature.
Look, ma, no parachute!
The falling ants’ first phase is called uncontrolled parachuting because they splay their legs in all directions in an effort to slow their fall by increasing drag. However, parachuting animals technically lack control over their trajectory.
Gliding or directed descent is initiated in phases two and three when the ants turn around and gain control over their flight path.
Their typical falling speed is 8 mph (4.3 meters per second), a fast clip for a creature less than a half-inch (1 centimeter) long. Sometimes, the tiny creatures bounce off the tree trunk the first time they hit it. When that happens, they're able to recover control rapidly and glide right back to the tree, Yanoviak said.
For arboreal animals, the ability to glide or fly or even parachute can be a life-or-death matter. Ants are frequently buffeted about by the wind or nearby mammals and birds, which can knock them off a branch or leaf to start tumbling down to a risky place — the forest understory, comprising the shrubs and trees that grow between the rain forest canopy and the ground cover. Moreover, some ants will voluntarily drop off tree trunks when approached by a foreign object.
In any case, the ability to self-rescue comes in handy. The understory and forest floor are full of hazards, not to mention terrain that is tough for tiny navigators, Kaspari said.
"An ant falling to the forest floor enters a dark world of mold and decomposition, of predators and scavengers, where the return trip is through a convoluted jungle of dead, accumulated leaves," Kaspari said. "Gliding is definitely the way to go, and we won’t be surprised if we find more examples of this behavior among wingless canopy insects."
Gliding is thought to be an important stage in the evolution of flight, scientists say.
A fall of 30 yards is a huge distance for a canopy ant — 3,000 times the animal’s body length. For a human, this would be equivalent to being tossed 3.5 miles and then having to walk back home (although humans have different biomechanics and energy reserves that ants lack).
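A quick check of that scaling comparison (a minimal sketch; the 1.8 m human height is an assumed round value, and the ant body length is simply what the article's own figures imply):

# Rough scaling check for the 3,000-body-length fall described above.
fall_m = 30 * 0.9144                     # 30 yards in metres
ant_length_mm = fall_m / 3000 * 1000     # implied ant body length
human_height_m = 1.8                     # assumed adult human height
human_fall_miles = human_height_m * 3000 / 1609.34
print(round(ant_length_mm, 1), "mm ant body length")    # about 9 mm, under half an inch
print(round(human_fall_miles, 1), "miles for a human")  # about 3.4 miles, close to the quoted 3.5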
Ants often rely on chemical trails to find their way back to the nest. If they land in the understory and cannot find a trail or some other cue to get home, they are lost forever. Like many animals, ants are dependent on the work and contributions of the entire group, so the loss of any individual ant that falls and never returns is costly.
For this reason, evolution has favored traits like sticky toes and the ability for directed aerial descent to prevent the loss of workers, Yanoviak said.
The ants glide backwards because their hindlegs are longer than their forelegs. It is probably easier for them to get a quick grip on the tree with their hindlegs, as if using a fishing gaff or grappling hook, Yanoviak said.
It could also be that the shape of an ant's body only permits directional control in the air when facing backwards. However, Yanoviak said he recently discovered a type of ant called Camponotus that glides to the tree head-first. "The story will undoubtedly get more interesting the more we work on it," he said.
Other arboreal creatures that can glide include lizards, frogs and snakes. Still no word on whether pigs can fly.
|
2026-01-25T04:01:37.302724
|
1,060,273
| 4.197219
|
http://www.learnnc.org/lp/editions/few/679
|
What research and best practice show about teaching grammar and spelling.
The teaching of formal grammar has a negligible or, because it usually displaces some instruction and practice in actual composition, even a harmful effect on the improvement of writing.
— Braddock, Lloyd-Jones, and Schoer, 1963, quoted in Hillocks, 1986, p. 133
Fifty years of research into grammar instruction confirms what many teachers have long suspected: when it comes to improving writing, traditional grammar instruction simply does not work. In fact, the most unequivocal conclusion reached by George Hillocks in his 1986 meta-analysis of twenty-five years of writing research was that traditional grammar instruction was the most ineffective method of improving writing.
Many teachers, though, worry that throwing out all instruction in grammar and conventions will produce a generation of students who are unable to write an intelligible sentence. So what’s a teacher to do? Rather than eliminating instruction in conventions, the Features of Effective Writing model puts conventions in their proper place in the writing process — at the end, where they can be considered only after students have revised their writing for the other four features, as they prepare to publish their work.
What are conventions?
Conventions are the surface features of writing — mechanics, usage, and sentence formation. Conventions are a courtesy to the reader, making writing easier to read by putting it in a form that the reader expects and is comfortable with.
Mechanics are the conventions of print that do not exist in oral language, including spelling, punctuation, capitalization, and paragraphs. Because they do not exist in oral language, students have to consciously learn how mechanics function in written language.
For example, while speakers do not have to be conscious of the spellings of words, writers not only have to use standard spelling for each word but may even have to use different spellings for words that sound the same but have different meanings. The same holds true for punctuation: speakers do not have to think consciously about intonation and pauses, but writers have to decide where to use a period instead of a comma and how to indicate that they are quoting someone’s exact words.
Usage refers to conventions of both written and spoken language that include word order, verb tense, and subject-verb agreement. Usage may be easier than mechanics to teach because children enter school with a basic knowledge of how to use language to communicate. As children are learning to use oral language, they experiment with usage and learn by practice what is expected and appropriate.
However, the oral language that many children use at home is often very different from formal “school” language. In addition, children who speak a language other than English at home may use different grammatical rules, word order, and verb conjugations. Although it may be easier to teach “correct” usage when a child’s oral language at home is already very similar to school language, children from all oral language backgrounds benefit from learning about how language is used in different situations.
Sentence formation refers to the structure of sentences, the way that phrases and clauses are used to form simple and complex sentences. In oral language, words and sentences cannot be changed once they have been spoken. But the physical nature of writing allows writers to craft their sentences, combining and rearranging related ideas into a single, more compact sentence. As students become more adept at expressing their ideas in written language, their sentences become longer and more complex.
Conventions in the writing process: last, not first
Teaching conventions in isolation is ineffective at best, because students need opportunities to apply their knowledge of conventions to their writing. Even daily oral language activities are a waste of time for students without procedural knowledge of how and when to use conventions in writing. Consequently, the most effective way to teach conventions is to integrate instruction directly into the writing process.
Attention to conventions too early in the writing process, however, can interfere with students’ development of automaticity. Writers need the ability to automatically juggle the many physical and cognitive aspects of writing — letter formation, spelling, word order, grammar, vocabulary, and ideas — without consciously thinking about them. The only way to develop this automaticity in writing is to practice, practice, practice. For many students, however, most daily writing is limited to filling in the blanks on worksheets.
The first step to improving automaticity, then, is to provide daily opportunities to write for extended periods of time. Initially, this writing should be single-draft writing only, using phonic spelling, with no physical editing of their writing by either the teacher or the student. Only when students grow more automatic in their writing should teachers introduce conventions into the writing process.
Students’ motivation to write also suffers when teachers focus on conventions first and ideas last. Many students have little self-confidence when they write because teachers and parents have been too quick to point out their errors instead of praising their ideas first. This problem can be solved by having students share first drafts in a positive, conversational atmosphere that focuses only on the content of their writing, with no correction of errors (Cunningham, Hall, and Cunningham, 2003).
The proper place for teaching conventions, then, is at the end of the writing process, during the editing phase, when students are preparing their writing for publication. When students know that their work will be published for a specific audience, they are more motivated to learn the conventions that will make their writing readable and to edit for those conventions.
Conventions in the primary grades (K-2)
Because primary students should be concentrating first on developing fluency in written language, their first draft writing should not be corrected for usage, spelling, or punctuation. Instead, primary students should begin to develop an ear for their writing by publishing their writing orally. Many teachers have a five-minute “sharing time” or “author’s chair” time every day at the end of their writing workshop time, when four to five students have a chance to read their drafts aloud to the rest of the class.
Once students have learned to produce fluent single draft writing, usually by the middle of second grade, they can begin to add very simple editing rules. Ask questions such as “Does each sentence start with a capital letter?” and “Does each sentence make sense?” (Cunningham, Hall, & Cunningham, 2003). Primary students can also learn strategies for proofreading their drafts, such as the “Mumbling Together” DPI writing strategy lesson. Daily practice with oral language can also help.
Because spelling, punctuation, and capitalization are easier for young children to physically see and correct in their writing, those are the first conventions students should learn to edit in their writing.
Spelling. For beginning writers, correct spelling is less important than having opportunities to apply their emerging knowledge of the alphabetic principle to their own writing. Phonic spelling (also called invented spelling) allows beginning writers to apply their developing knowledge of phonics to sound out the spelling of words as they write. However, because over fifty percent of the words students encounter are high-frequency sight words that are rarely spelled phonetically (such as “they”), beginning writers also need to learn strategies for spelling these words. Word walls provide students with a tool for learning the correct spellings of high-frequency words and applying them in their daily writing (Cunningham & Hall, 2000).
Punctuation and capitalization. Primary students can begin to learn the basic functions of punctuation marks and capitalization during shared reading and writing lessons. During the second or third read-aloud of a book, teachers can point out different punctuation marks and talk about why the author used them. Teachers can model the use of punctuation marks during shared writing activities, and then encourage students to use punctuation marks in their own writing. One of the first editing rules that students can learn is to end each sentence with a period.
Usage and sentence formation
While memorizing definitions of parts of speech in isolation is not effective, students do need to know how to talk about the words they encounter when they read and write. Teachers can talk about why an author uses particular adjectives or verbs in their writing. The “Be the Sentence” lesson from DPI Writing Strategies helps students experience physically how parts of speech and punctuation marks fit together to make different kinds of simple sentences.
Conventions in the elementary grades (3-5)
As upper elementary students become more adept at juggling the various aspects of writing, they can begin to focus more of their attention on conventions. At the same time, upper elementary students are beginning to branch out into writing in different content-area subjects and need to learn how conventions vary for different writing genres.
Although conventions are an important feature of effective writing, many students never move beyond surface-level editing to actually revising the content of their writing. This is why it is especially important to emphasize to upper elementary students that editing should be reserved for the end of the writing process, only after they have revised their work for the other four features.
Upper elementary students can learn proofreading symbols and act as editors for their peers. Have students skip lines in their early drafts to provide room for revision comments and editing marks. Because these students are growing more conscious of the opinions of others, providing opportunities to write for audiences other than their teachers and classmates can also help them become aware of the importance of editing their writing before they publish.
Spelling. As students begin to encounter more difficult words, usually around second grade, they can no longer rely exclusively on the “sound it out” strategy to spell unfamiliar words. This is the point at which many students are first diagnosed with reading or writing disabilities. Many of these students can be “cured” of their disabilities through an understanding of the nature of the English language and a repertoire of spelling strategies.
Unlike phonetically regular languages such as Spanish, English includes many words whose spellings are determined by morphology; that is, their spelling is driven by meaning rather than by pronunciation. (This morphological basis for spelling allows English speakers in North and South Carolina to spell “Beaufort” the same way, even though they pronounce it differently.) This means that students can use words they know to figure out the spelling of unknown words. For example, a student who can’t decide whether to spell the word “medicine” with a “c” or an “s” can think of related words, such as “medic” and “medical”, that use a “c”. Another example is using “bombardment” to identify the silent “b” in “bomb.” Besides using familiar words, students can also use “Making Words” and word sorting activities to help them learn English spelling patterns. Students can also learn to use prefixes and suffixes from words they know to help them spell unfamiliar words (Cunningham, 2000).
Word walls are also an effective strategy for teaching upper elementary students to spell high-frequency words. For older students, homonyms, “spelling demons,” and other frequently misspelled words can be added to the word wall. Lists of words for different units of study can also be posted on separate bulletin boards to help students correctly spell key vocabulary words. In addition, students who move to different classrooms during the day can use individual word wall folders with high frequency words (Cunningham & Hall, 2000).
Finally, upper elementary students can also use phonic spelling as a placeholder when they are unsure about correct spellings in their early drafts, with the understanding that they will identify misspelled words and correct them during the editing stage.
Upper elementary students should start editing their writing using simple editing rules such as subject-verb agreement, verb tense consistency, and pronoun usage. As students increase the range of genres they write, they can also learn that different genres tend to use different verb tenses: past tense for narratives and recounts of science experiments; present tense for informational reports, instructions, recipes, and explanations; and future tense for plans and proposals.
For older students, problems with punctuation, sentence fragments, and run-on sentences are usually related to difficulties producing more complex sentences.
Problems with sentence fragments usually mean that students do not know how to combine simple sentences into more complex sentences that use subordinate clauses. Sentence combining lessons can show students alternative ways to combine simple sentences into more complex sentences, using the correct punctuation.
Run-on sentences also provide a good opportunity to teach students parts of speech, such as nouns, verbs, and coordinate conjunctions that can help them divide run-on sentences into self-sufficient complete sentences. In addition, many students have problems with run-on sentences because they want to show that two sentences are related; teaching students to use a semicolon to link two closely related sentences can solve this problem. Other punctuation marks can be introduced, as well, to show the relationships between clauses in complex sentences.
Conventions in middle and high school
By the time students enter middle school, they should have developed control of the basic conventions of written language, as well as the vocabulary to be able to talk about how those conventions are used in their writing. So what’s left to learn about conventions in middle and high school?
Middle and high school students first need to consistently edit their own work for appropriate conventions. They are then ready to explore how conventions are used in specific contexts and genres to achieve a particular effect with an audience. Rather than editing conventions only at the word and sentence level, students can begin to understand how conventions contribute to the reader’s understanding of the text as a whole. At the same time, they can study how professional writers defy these conventions to achieve certain effects.
By middle school, students should have control of conventions such as spelling, punctuation, and paragraphing. Spelling should be more a matter of acquiring specialized content-area vocabulary than learning new spelling strategies. Students should have a repertoire of spelling strategies to help them identify potentially misspelled words in their writing. They also should know how to use tools such as dictionaries and spell-checkers to check for the correct spelling.
Students should now learn how to use conventions that are specific to different genres, such as conventions for friendly letters and business letters, capitalizing lines in poetry, headings and subheadings in informational reports, and conventions for bibliographic citations.
By sixth grade, students should have mastered basic knowledge of usage, such as word order, subject-verb agreement, verb tenses, and correct use of modifiers. In middle school, they can begin to use nominative, objective, and possessive pronouns appropriately and to check that pronouns match their antecedents. They can extend their knowledge of appropriate usage to different dialects, comparing usage in informal, ethnic, and regional dialects to standard English usage. Students can also compare usage in oral and written language by comparing quoted speech in literature to language used by the narrator or by translating written language into oral speech. Once students reach high school, they are ready to explore usage in different contexts and genres.
By middle school, students are ready to experiment with using varying sentence lengths to achieve specific effects on an audience. They are also ready to use (and punctuate) dependent and independent clauses by combining simple sentences into more complex sentences.
High school students can further refine their writing by learning to structure their sentences and paragraphs to achieve specific effects in their writing. Students can use parallel structures within their sentences to make them easier to read. Students can also structure their sentences and paragraphs to emphasize the new information they provide about their topic. Passive voice, for example, can be used to emphasize the object of an action rather than the actor. (Had the preceding sentence been written as “Students can emphasize the object of an action by using passive voice,” the term passive voice would have been diminished in importance.)
High school students can also use sentence-combining activities to practice embedding information within subordinate clauses. In addition, they can use techniques specific to informational writing, such as nominalization, which converts actions into objects or processes in order to pack more information into a sentence. For example, the five-word sentence “The group mobilized its forces” can be converted into the six-word phrase “The mobilization of the group’s forces,” which can then be used as the subject or object of a larger sentence.
Guiding questions for conventions in elementary grades
- Are your sentences complete?
- Do you have any sentence fragments that need to be completed?
- Do you have run-on sentences?
- Does your piece demonstrate standard usage?
- Is there subject-verb agreement?
- Is there consistency in verb tense?
- Are pronouns used correctly?
- Are all your words used correctly?
- Are punctuation, capitalization, spelling, and paragraphs used correctly in your piece?
- Does your punctuation make your piece hard to read?
- Have you used capital letters for the first word in a sentence and proper nouns?
- Have you spelled most common words correctly?
- Do misspelled words in your piece make it hard to read?
- Have you used paragraphs appropriately?
Cunningham, Patricia M. (2000). Phonics they use: Words for reading and writing (3rd ed.). New York: Longman.
Cunningham, Patricia, and Hall, Dorothy. (2002). Month-by-Month Phonics for First Grade. Greensboro, NC: Carson-Dellosa.
Cunningham, Patricia, Hall, Dorothy, and Cunningham, James. (2003). “Writing the Four Blocks Way.” Presentation at International Reading Association Annual Conference, Orlando, FL.
Hillocks, George. (1986). Research in Teaching Composition. Urbana, IL: ERIC Clearinghouse on Reading and Communication Skills and National Conference on Research in English.
Strong, William. (2000). Coaching Writing. Guide To Grammar and Writing.
- Next: Further reading
|
2026-02-03T14:49:50.637892
|
1,008,241
| 4.154587
|
http://kiddyhouse.com/Themes/frogs/frteach.html
|
TEACHERS : FROG LESSON PLANS AND ACTIVITIES
Grow a Frog : Teacher's Manual
Printable pdf copy
How Far Can a Frog Jump?
A math activity for 4th grade. Introducing the metric system.
Five Little Frogs
Grade Level/Subjects: K through 2 Math and Music Lesson Plan. You can use this lesson plan together with the
5-Little Speckled frog worksheets
Frog Metamorphosis: A Change For the Better
To introduce students to the concept of metamorphosis as practiced by frogs
Frogs: A Thematic Unit Plan Submitted by: Lisa Turturice
For Grade 2. To help children broaden their concepts of living things as they learn more about the metamorphosis and development of frogs.
To help children develop an understanding of the basic needs of animals through the study and care of tadpoles as they develop into adult frogs.
Frog and Toad Unit
Small Pond Teacher's Notes
Lesson plan based on Australian stamps : Pond Theme. Can be
adapted for use in any classroom. In pdf printable format.
Frogs: Fact and Folklore by Audrey Carangelo
Grade level : 6-8. Students will understand the following:
1. The importance of frogs in their local ecosystem.
2. Why a frog is uniquely suited to its habitat.
Frog finds his family Webquest
With this WebQuest, students will be able to learn some facts about frogs without actually going to a pond.
For grades K
For grades K-2. Make a tadpole puppet that transforms into
a frog. Pdf format file.
The Metamorphosis of Frogs by Maria Ragucci
Grade Level: 3, 4. Objectives: Predicting: make statements about what the frogs will look like next time the class observes them.
Frog ID: My Frog is Missing
Level of Activities:
2nd - 4th grade. Lesson plan and student activities.
Circle Fractions Frog Craft: Shapes, Scissor Skills, Colors, Fractions, and Counting
This craft can be used to teach a variety of skills. You don't have to use all the thoughts for lessons... just choose the ones you're working on with your child
Survival of the Mutant Toad by Retha M. Edens
Grade Level: 3, 4, 5, 6. To learn about the importance of camouflage. To increase students' knowledge about toads, habitats, predators, and prey.
It's a Frog's Life Created by:
Learn about the life cycle of a frog through information provided on
the Internet, literature, and observations of live animals.
The Frog WebQuest by Emily Campbell
Understanding of frogs and some of their life processes
Frogs - Science Web Quest Created by: Beth Zemke
For 5th and 6th graders. Students will develop an appreciation for the role frogs and other amphibians play in their ecosystems and learn about the anatomy of the frog.
Frog Finds His Family.
Designed by Shannon Schwartz
A WebQuest for Kindergarten Science
Frog and Toad WebQuest
Allyson Maiolo (Lunde)
A WebQuest for 2nd Grade Science and Literacy.
How to dissect a frog video
Watch this science video on how to dissect a frog.
Frog Unit Study
A very good site for teachers who are doing a unit on frogs.
Science Unit: The fate of frogs
Students examine the important role frogs play in their terrestrial ecosystems and why they are considered “environmental indicators.” They investigate factors threatening frog populations and develop an action plan to increase the diversity of frog species in their local area.
Knowledge, understandings. For Year 3-6
FROG PRINTABLES AND OTHER TEACHING AIDS
Posters of Life Cycle of A Frog
Print it out to display in the class.
Power Point Presentation of the life cycle of a frog
Click to download it..
|
2026-02-02T20:07:38.170612
|
261,960
| 3.697637
|
http://nature.nps.gov/geology/parks/noat/
|
As one of North America's largest mountain-ringed river basins with an intact, unaltered ecosystem, the Noatak River environs feature some of the Arctic's finest arrays of plants and animals. The river offers equally superlative wilderness float-trip opportunities, from deep in the Brooks Range to the tidewater of the Chukchi Sea. Noatak National Preserve lies almost completely enclosed by the Baird and De Long mountains of the Brooks Range. In this transition zone, the northern coniferous forest thins out and gradually gives way to the tundra that stretches northward to the Beaufort Sea. The Noatak basin is internationally recognized as a Biosphere Reserve. Under this United Nations scientific program, the area's ecological and genetic components are monitored to establish baseline data for measuring changes in other ecosystems worldwide. Information can also be gathered here on sustainable uses of natural resources by humans, as exemplified by the Inupiat and other native peoples who have lived off the land of northwest Alaska for many thousands of years. The Noatak River is classified as a National Wild and Scenic River from its headwaters to the Kelly River.
The basic geological framework of the northwest region was set by the late Paleozoic era and included the Brooks Range geosyncline (a broad sedimentary trough), the Arctic Foothills, and the Arctic Coastal Plain. During the Triassic period (Mesozoic era), the site of the present Brooks Range was stabilized, and limestone and chert were formed. The process of mountain-building began during the mid Jurassic period. By the Cretaceous period the Brooks Range dominated the landscape, and volcanic activity from the Jurassic period continued in an area south of the range.
The sedimentary rocks of the Brooks Range and the DeLong Mountains were intensely folded and faulted during the late Cretaceous period. It was during this time that the existing east-west fault trends within the area were established. A resurgent strong uplift during the early Tertiary period (Cenozoic era) was responsible for the present configuration of the Brooks Range. Volcanic activity produced intrusions and debris throughout the region during the Tertiary and Quaternary periods.
Bedrock geology of the DeLong Mountains includes faulted and folded sheets of sedimentary clastic rocks with intrusions of igneous rock. Shale, chert, and limestone of Paleozoic and Mesozoic eras are dominant. Graywacke and mafic rock of the Jurassic and Cretaceous periods are also found.
The lowland area of the Noatak drainage is underlain primarily by siltstone, sandstone, and limestone of the mid to late Paleozoic era. Also in evidence are graywacke, chert, and igneous rock of Mesozoic origin.
The Baird Mountains south of the lowland are composed of strongly folded sedimentary rocks with granitic intrusions. Known bedrock consists primarily of Paleozoic or older, highly metamorphosed rocks.
Permafrost plays an important role in the geologic processes and topographic development of the preserve. The Noatak drainage and adjacent lowland areas are underlain by discontinuous permafrost, and areas in the Baird and DeLong mountains are underlain by continuous permafrost. Permafrost can reach depths of 2,000 feet, but is generally between 15 and 260 feet in the Noatak area. Continental ice sheets did not cover all of northwest Alaska during the Pleistocene, although glaciers did cover most upland areas. The last retreat of the glaciers, about 4,500 years ago, established the present sea level and the extensively glacially carved landscape that is in evidence today. This landscape is characterized by deep, U-shaped valleys, rocky peaks, and braided streams. A portion of the Noatak valley lowland was glaciated during Wisconsin time and today is typified by such glacial features as kames, kettles, moraines, and alluvial till.
The three major soil types within the preserve are the upland or mountain-slope soils of the lithosol type, tundra soils, and soils associated with the Noatak drainage and lowlands. Lithosol soils on the higher slopes of the De Long and Baird mountains are limited and consist mostly of imperfectly weathered rock fragments and barren rock. The soil is without zonation and consists of a thin layer of highly gravelly and stony loam. Where this soil accumulates in protected pockets on mountain slopes, it supports mosses, lichens, and some dwarf shrubs. Below the upland soils, on more gently rolling terrain, the tundra soils predominate. These are dark, humus-rich, nonacid soils. Texture in the tundra soils varies from highly gravelly to sandy. The floodplains of the Noatak and its tributaries are characterized by silty and sandy sediments and gravel. These soils occur in association with the greatest proportions of organic material along the lower reaches of the Noatak. A fibrous peat extends to the permafrost layer in many areas.
Soil erosion along the Noatak riverbanks is considered severe. This occurs during spring breakup when high volumes and velocities of water scour the riverbanks and carry sediment downstream. In places where waters contact ground ice in adjacent riverbanks, thermal erosion can occur. As the ice melts, banks are undercut and sediments are swept downstream. Additional erosion can occur during high precipitation and storm periods in summer.
A general park map is available on the park's map webpage. For information about topographic maps, geologic maps, and geologic data sets, please see the geologic maps page.
A geology photo album has not been prepared for this park. For information on other photo collections featuring National Park geology, please see the Image Sources page.
Currently, we do not have a listing for a park-specific geoscience book. The park's geology may be described in regional or state geology texts.
Lillie, Robert J., 2005. Parks and Plates: The Geology of Our National Parks, Monuments & Seashores. W.W. Norton and Company. 9" x 10.75", paperback, 550 pages, full color throughout.
The spectacular geology in our national parks provides the answers to many questions about the Earth. The answers can be appreciated through plate tectonics, an exciting way to understand the ongoing natural processes that sculpt our landscape. Parks and Plates is a visual and scientific voyage of discovery!
Ordering from your National Park Cooperative Associations' bookstores helps to support programs in the parks. Please visit the bookstore locator for park books and much more.
For information about permits that are required for conducting geologic research activities in National Parks, see the Permits Information page.
The NPS maintains a searchable database of research needs that have been identified by parks.
A bibliography of geologic references is being prepared for each park through the Geologic Resources Evaluation Program (GRE). Please see the GRE website for more information and contacts.
NPS Geology and Soils Partners
Association of American State Geologists
Geological Society of America
Natural Resource Conservation Service - Soils
U.S. Geological Survey
Currently, we do not have a listing for any park-specific geology education programs or activities. For resources and information on teaching geology using National Park examples, see the Students & Teachers pages.
| 2026-01-22T05:26:29.843147 | 1,114,206 | 3.515473 | http://www.acton.org/pub/religion-liberty/volume-11-number-6/different-kind-enlightenment |
It is now common to argue that the roots of many of the features of modern culture—secularism, utilitarianism, and materialism, to name a few—are found in the ideas of the Enlightenment, a European-wide, eighteenth-century movement described by Immanuel Kant as “man's release from his self-incurred tutelage.”
Kant suggested that the Enlightenment freed man from his inability to use innate understanding without guidance from another person. More broadly, the Enlightenment as it unfolded in certain parts of Europe stressed above all the autonomy of reason as the key tool through which human thought and action might be explored. The term Enlightenment has become most closely associated with France, where thinkers such as Voltaire argued for the primacy of reason with no less a purpose in mind than to “regenerate” humankind, to elevate mankind over the individual, to emphasize the superiority of what Jean-Jacques Rousseau called “the greatest happiness of all” over individual concerns.
The individual who generated ideas thought in an enlightened way. The enlightened person accepted an idea based on personal reflection rather than on the authority of another. The enlightened person had freedom of will and the freedom to debate ideas in the public square. Theoretically, there was ample room in enlightened France for philosophical disagreement, although toleration—particularly for a positive role for religion in society—was sometimes elusive. The perceived lofty and admirable nature of these freedoms carried an implicit challenge to received understanding about the importance of faith and religious truth in culture. We know that the rational scrutiny of religion—Christianity in particular—is a hallmark of the French Enlightenment's legacy.
The purpose of this essay is not to enter into debate about the French Enlightenment but to introduce readers to the increasingly prevalent notion in contemporary historical circles that the Enlightenment, best understood, encompassed a variety of intellectual movements, the focus of which was not necessarily the apotheosis of reason. To reply today to the deceptively simple question, What was the Enlightenment?, one must look at intellectual developments in Germany, America, England, Scotland, Scandinavia, and Russia.
Specifically, American readers might focus on the Scottish Enlightenment, the leading thinkers of which include well-known figures such as Adam Smith and David Hume. Their books, along with those of lesser-known but equally important thinkers such as Francis Hutcheson, Adam Ferguson, William Robertson, Hugh Blair, John Witherspoon, and Thomas Reid, were found in the libraries of the Founding Fathers and in the drawing rooms of the merchants and professionals of Philadelphia, Charleston, and New York. Above all, the Scots concerned themselves with exploring human nature in the fullest sense. Readers will be familiar with the contributions of Adam Smith to political economy and David Hume to philosophy. What may be less familiar is the fact that Smith, Hume, and their colleagues among the Scottish literati rooted their investigations into human nature in a profound appreciation of the roles of morality and ethics, aesthetics, social theory, law, historiography, religion, and language in human thought and action. To highlight some of the unique features of Scottish Enlightenment thought, we will look at the Scots' treatment of morality and ethics, first; then, law and jurisprudence; and, finally, religion.
The Moral Sense and Ethics
In their search for what Hume called the “ultimate original qualities” of human nature, the Scots relied first on moral philosophy. If the rational philosopher, the philosophe, was the standard bearer of the French Enlightenment, the moral philosopher filled the same role for the Scots. Their discussions about the nature of morality and moral knowledge fell between two extremes: The first suggested that moral laws could be identified only through revelation from God; the second suggested that morality was the product of the innermost workings of human nature. While Enlightenment literati addressed the nature of morality differently, particularly regarding the religious dimensions of moral decision making, they shared a common purpose: to identify a moral order in behavior and human identity and to ask if this order sprang from external influences or from a natural sense within the human mind and heart. In the Scots' view, adherence to this moral order was required of all members of society and underscored the fundamental structure of civilized Enlightenment society.
The Scots placed their discussions about moral philosophy within the framework of an intellectual legacy inherited from seventeenth-century European discussions about morality and natural law. This discussion was transported into Scottish intellectual life during the early decades of the eighteenth century by academics trained at European universities; the most notable among them was Gershom Carmichael, instructor in philosophy at the University of Glasgow from 1694, and, from 1727, its first Professor of Moral Philosophy. Carmichael was deeply influenced by the Dutch philosopher Hugo Grotius and the German philosopher Samuel Pufendorf, who addressed matters of ethics as part of a wider effort to define moral standards affecting all manner of social interaction. Essentially, Grotius and Pufendorf argued that man was, by nature, a social animal and that the social world was defined by a complex network of authority and mutual obligation to one's fellow citizens. Ethics and morality were the arbiters and mainstays of good and responsible citizenship.
Francis Hutcheson, an Irish-born clergyman who became the preeminent moral theorist of his generation and the philosophical forefather of Hume and Smith, succeeded Carmichael as Professor of Moral Philosophy. Hume sought Hutcheson's advice while drafting his Treatise of Human Nature. For Smith, Hutcheson was a “never to be forgotten” professor from his undergraduate days. Hutcheson's preeminent contribution was to shift the direction in which Scottish moral philosophy evolved by developing a theory of the moral sense, a God-given faculty that permitted human beings to distinguish between good and evil, or between morally correct and incorrect behavior. Hutcheson believed that humans have a distinct “perception of moral excellence” that cannot be clouded or influenced by human will.
This absence of the will was a crucial feature of Hutcheson's understanding of the moral sense, which permitted him to reply to philosophers such as Thomas Hobbes or Bernard Mandeville, who argued that, ultimately, all human action is driven by self-interest or selfishness. If it was impossible to switch off the moral sense in daily life, and if the moral sense informed human action and underscored steady behavior, the likelihood of acting from selfish motivations was diminished.
As concerned as he was to present his arguments about the moral sense in a philosophically coherent manner, Hutcheson also concentrated on the practical application of moral philosophy to daily life. Hutcheson was a practical moralist who followed in the tradition of the Roman philosopher Cicero, encouraging his students and readers to exercise their moral abilities through the pursuit of an active life. In so doing, Hutcheson believed, they contributed to the promotion of virtue in society. This, in turn, underscored the moral and social order. Here, too, Hutcheson's perspective was widely adopted by the Enlightenment literati.
As a Presbyterian minister, Hutcheson eagerly combined his advocacy of virtue with a firm belief in Christian principles. He believed that divine grace and fostering the happiness of others lay at the heart of moral goodness. He also accepted the notion that life should be seen as a progress toward virtue and that individuals are capable of self-improvement. The best means for achieving progress consisted in following the disciplines of duty, faith, and virtue, incorporated with the lessons of human experience. For Hutcheson, moral philosophy was ultimately useful because it served to better not only the individual but also the quality of public life.
As Gertrude Himmelfarb recently noted, neither Hutcheson nor his followers denied the powers of reason; the Scots were not “irrationalists.” In varying degrees, they assigned reason a secondary role in contributing toward the cultivation of virtue and moral knowledge. This was true even for Hume, noted for his unsentimental views on human nature, who, following Hutcheson, wrote that human beings had “an instinct” stemming from a “moral taste” or “benevolence” that guided the course of virtuous action. Hume wrote, “There is some benevolence, however small, infused into our bosom; some spark of friendship for human kind; some particle of the dove kneaded into our frame, along with the elements of the wolf and serpent.” Smith built on Hutcheson's legacy by developing as part of his moral theory a theory of sympathy, through which people appreciate the nature and consequences of their actions and moderate or regulate their behavior accordingly. By exercising all of one's moral faculties, the Scots concluded, the ideal character of an enlightened person might be found. Smith describes such a person in The Theory of Moral Sentiments: “The perfectly virtuous man desires not only to be loved, but to be lovely…, not only praise, but praiseworthiness.… To feel much for others and little for ourselves,… to restrain our selfish and to indulge our benevolent affections, constitutes the perfection of human nature.”
Law and Jurisprudence
Before the eighteenth century, Scotland had in place a long-established legal system built on the code of civil law that was, and remains, autonomous and distinct from the English legal system. Members of the legal community, with their colleagues in the universities and in the Church of Scotland, played a crucial role in fostering intellectual exchange during the Enlightenment. The jurist Henry Home, Lord Kames, earned great distinction as a patron of a number of the Enlightenment literati while making his own contribution to legal scholarship by publishing his Historical Law Tracts. These tracts advanced understanding about the legal needs of an enlightened commercial society in a style accessible to “every person who has an appetite for knowledge.” Kames argued that legal principles had to “connect with manners and politics” in society to show how legal institutions affected all citizens, safeguarded property rights, and reflected the moral priorities of the community.
For the Enlightenment literati, generally, treatment of legal theory rested primarily in studying jurisprudence, which they defined as the theory of rules through which civil governments were directed. The first duty of any government was to “maintain justice; to prevent the members of society from encroaching on one another's property, or seizing what is not their own,” Smith notes. “The design here is to give each one the secure and peaceable possession of his own property,” he continues. “When this end, which we may call the internal peace, is secured, the government will next be desirous of promoting the opulence of the state,” to include trade, commerce, manufactures and agriculture. Smith developed his explanation of the relationships between justice, property, and civil authority in his Lectures on Jurisprudence, which laid partial foundations for further discussion of property and political economy in The Wealth of Nations.
A lesser-known member of the literati, John Erskine of Carnock, provided another avenue for debate about the role of law in society. Erskine devised a framework for a type of Scottish natural law, which focused on orderliness in the world (with orderliness meaning, essentially, lawful behavior), that helped human beings deal with changing fortunes. Erskine suggested that all action in the world occurs under the law of nature promulgated by God. Within the law of nature, distinctions are made between intelligent beings and all other creatures. Intelligent beings may exercise free will to reject the laws of nature; other creatures obey because they have no other option. Erskine placed God at the center of his writings, stressing the need for people to learn God's law either by reading Scriptures or by adhering to the “law written in our hearts,” conscience, the moral sense, or the impulse that tells one if behavior is just.
By contrast, Hume believed that justice was essentially a human invention designed to impose restraints, when needed, to achieve order and harmony in society. Justice was to be contrasted with sentiments or moral impulses because Hume did not believe that humans have a natural feeling of justice. For example, there is no law of human nature, in Hume's view, that makes one respect another person's property. A system of justice is required for the sake of public utility; indeed, Hume wrote, “public utility is the sole origin of justice.” The connections that Hume and his colleagues made between law, justice, property, and government were precise and deliberate, and it is noteworthy that in many of his writings, Hume discusses justice exclusively as it relates to property. Unlike many of his counterparts in the French Enlightenment, Hume did not extend the role of justice to include questions of equality or human rights.
Religion
Erskine's reliance on divine authority to underpin his concept of the law reflected a strong tendency among many of the Scottish literati to keep a central place for the Divine in their social and philosophical works. No disparity between the Scots and the French could have been greater than that over religion. Not only were there differences in the relationship of church to state in each country, but there were also distinctly different cultural legacies inherited by Enlightenment thinkers from Roman Catholicism in France and Calvinism or Presbyterianism in Scotland.
Among the Scots, a broad range of positions on religion existed. William Robertson, Hugh Blair, Thomas Reid, and Adam Ferguson were active ministers of the Church of Scotland. If not an outright atheist, Hume was deeply skeptical about religion. Smith was raised in the Calvinist tradition but may have ended his life as a deist. Aside from personal professions of faith or formal ecclesiastical training, the Scottish literati's interest in anthropology and culture fostered study of Islam, Judaism, Hinduism, and religions of the Far East.
Robertson and Blair were members of a group of ministers, the Moderate Clergy of the Church of Scotland, who were particularly friendly to the moral theories of Hutcheson, Smith, and, to a large extent, Hume. The Moderates emphasized the benefits of Enlightenment commercial society, yet it fell to senior ministers such as Robertson and Blair to emphasize Christianity's role in the eighteenth-century commercial world order. In so doing, they safeguarded the church's position as a moral bulwark against self-interest or avarice. To accomplish this, the Moderates developed a kind of Christian Stoicism—drawing on many of the writings of Greek and Roman Stoic philosophers whom they admired—to reconcile matters of faith and secular ethics. Christian Stoicism was a sometimes complex construct that blended virtue in private and public life with Christian morality. This, in turn, permitted them to argue for a necessary connection between faith and social ethics in Enlightenment society. There were philosophical imperfections within Christian Stoicism, yet these imperfections did not diminish the success of Christian Stoicism in conveying moral messages. Rather, they reflected the traditionally problematic relationship between faith and reason in history. To the criticism of some of their contemporaries, the Moderates did not focus on areas of philosophical incompatibility; rather, they stressed areas of compatibility between faith and ethics to promote what they believed to be the public good. In a sense, the Moderates undertook a process of what Himmelfarb has called a “socializing of religion” rather than a rationalizing of it.
Although several among the Scottish literati were friendly with leading luminaries of the French Enlightenment, the Enlightenment movement took different forms across Europe in substance, emphasis, and social temper. Disagreements among French, Scottish, and English counterparts did not prevent a great degree of intellectual exchange among the key thinkers. Unlike those of the French, the Scots' Enlightenment messages were characterized by an egalitarian flavor. In theory, moral improvement, the possibility of achieving virtue, and the enjoyment of material progress were accessible to all members of society, albeit in varying degrees. Furthermore, in maintaining a place for matters of faith in society, the Scots left a space in their philosophical theories for the enlightened person who is “truly human simply by virtue of being born in the image of God.”
| 2026-02-04T11:21:27.530552 | 641,675 | 3.53833 | http://www.calvin.edu/academic/engl/writing/good-writing.html |
Characteristics of Good Writing
The following is a guide to successful writing both in the English department and in other departments at Calvin College. The first portion of the document describes the characteristics of good writing, while the second portion addresses moving from the fundamentals of writing to writing for specific academic disciplines.
Defining Good Writing
Defining good writing is almost as difficult as defining pornography. One is tempted to paraphrase Justice Stewart and say, "I can't define it, but I know it when I read it." Far be it from us to press the parallels between good writing and pornography too far; we note only that, as with pornography, so with good writing: "community standards" differ from community to community and from discipline to discipline, even to the point of flat contradiction between what various disciplines consider good or merely competent writing. Generally speaking, we can agree on the following.
The basic qualities of good writing
Most academics will probably agree on the fundamental qualities of good writing. We may broadly agree that basic errors of grammar and mechanics must be avoided. We may part ways, however, on whether a particular usage is incorrect. For example, broadcast news agencies may allow split infinitives, even though literary critics may not tolerate them. In short, the well-written report or essay will be free of grammatical and mechanical errors; it will conform to the conventions of standard academic English; it will avoid traces of inappropriate dialect or colloquialisms; and it will be sensitive to the level of formality called for by an assignment.
Good writing conveys a clear sense of purpose
Most faculty will similarly agree that good writing conveys a clear sense of the writer's purpose. It is insightful and illuminating, and communicates content that is unified and significant. We are concerned here with what might be called the intellectual impact of the writing; it is theoretically possible (though admittedly unlikely) for writing to avoid the errors of grammar and mechanics mentioned above and still be poorly written. The rare student might write in a way that is both conceptually pointless and grammatically perfect.
The writer's strategy is fundamental
The writer's strategy goes beyond grammar and conception but is just as fundamental to the paper's success. Most faculty will agree that a paper's structure and development (the way its conception is advanced from assertion through argumentation and details to conclusion) are critical to its success. Good writing at this level often depends upon the writer's willingness to outline, to cut and paste, to discard. In principle, students should complete these activities well before they begin a final draft, but even good students are often loath to carry them out.
Good writing shows effective style
Good writing must also show an effective style. Here we recognize, however, an element of subjectivity in evaluation, as well as a difference in the styles commended by various disciplines. Although many faculty may have difficulty characterizing the style of a specific piece of writing as appropriate or inappropriate, they will generally agree that an effective style conveys ideas and information precisely, concisely, and in a manner appropriate to the context of a particular paper or report. An effectively styled essay generates interest and even emphasis through its choice of diction; it demonstrates the ability to use punctuation rhetorically, for effect as well as clarity.
From the fundamentals of writing to the academic disciplines
English 101 introduces students to the qualities of effective writing, as outlined above. But when students move into various academic disciplines, they often find that what a professor means by effective organizational strategy or appropriate style differs from what they learned in English 101. For example, a business student might be surprised to learn that she is expected to begin the opening paragraph of a case study with a precise and succinct statement of the bottom line, and that supporting detail (which her English teacher suggested was crucial) may even be relegated to an appendix. Students of the natural sciences may discover that a given organizational plan (abstract, introduction, methods and materials, results, discussion) is preferred by a journal, even though the organizational strategies they learned for freshman English papers were virtually limitless. To take a final example, English 101 teaches students to favor the active voice over the passive; this stylistic preference serves students of the humanities well enough, but the chemistry student who prefers the active voice in his lab report may be asked to revise.
Different writing styles are demanded by various academic courses and disciplines. Moreover, it is very difficult to predict the career options of most of our students, not to mention those of traditional liberal arts students. It is also true that the notion of "lifework" is becoming obsolete in a society where workers change careers with increasing frequency. Given these facts, we owe it to our students to prepare them to write competently in as many contexts as possible. The Writing Program Committee suggests that it be the goal of writing instruction at Calvin College to develop students who are capable of writing effectively in various academic and work-related contexts. It would be irresponsible of this committee to propose general characteristics of good writing in a manner that ignored discipline-specific differences in the particular definitions of those characteristics. We are consistent with Calvin's historically liberal arts approach to scholarship if we emphasize breadth of preparation in an area as general as writing.
If we define our notions of "competence" and "incompetence" broadly, with a view to the various disciplines, the competent writer will effectively fulfill the stylistic expectations of more than one discipline (for example, business and the humanities). A less competent writer may be only marginally effective in only one discipline, and the incompetent writer will be incapable of writing effectively according to the conventions of any discipline. We know that under these guidelines true excellence in student writing will be not only hard to define but also quite rare. Our emphasis on developing a student writer's ability to cross disciplinary lines suggests that excellence lies in the ability to appropriate the expectations of one academic style and apply them effectively in a different discipline.
| 2026-01-28T04:22:47.550107 | 77,866 | 3.53737 | http://clio.missouristate.edu/bwalker/tallhisban.html |
Tall Hisban is a farming town in the Madaba Plains in central Jordan, a highland plateau bordering on steppe lands. Located in the wheat basket of the Middle East, the Tell, one of many in the area, is across the Dead Sea from Jerusalem. Tell Hisban is the oldest continually excavated site in Jordan and has been the subject of excavation since the 1960s. It is one of the largest archaeological projects in Jordan.
The Tell is filled with underground tunnels, which once served several purposes. Many could be used as living quarters, providing people with protection and a way to evade tax collectors. They could also be used to hold animals, others could be blocked off and used for storage, and still more led to underground cisterns where water could be stored.
Tell Hisban has a rich history. Stone tools and flint have been found dating back to the Paleolithic Period; an early Bronze Age cemetery has been found as well. During the Iron Age (1400-500 B.C.), a dry moat was built. Based on the few pottery sherds found, it dates to about 1000 B.C. There are references in the Books of Numbers, Deuteronomy, and the Song of Solomon to a city on a high hill, to a well-fortified city, and to the pools of Heshbon. These references may suggest Hisban. The pools of Heshbon could easily be the cisterns within Hisban; the “well-fortified city” may have been destroyed completely, or it may have meant an established site and not a citadel, or Heshbon may not be Hisban at all. Heshbon was the capital of the Amorites until 1400 B.C., when it was conquered by the Israelites [Numbers 21:21-31]. From 1400-931 B.C. the city was occupied by the tribe of Reuben.
During the Hellenistic Period (332-164 B.C.) the site was called Esbus. It was during this time that the citadel walls were built, along with four corner towers. It became a temple and administrative site, and the caves beneath it were used for occupation. During the Maccabean Period (164-63 B.C.) Hisban was a Jewish settlement and part of a separate political area. From 63 B.C. to 330 A.D. Hisban was controlled by the Romans, and a Roman coin with a mold of the city and the name reveals that it was still referred to as Esbus. Evidence points to the possibility of a temple during this period; certainly there was a plaza with a ceremonial center at the top of the Tell, but whether this center was a temple or a water shrine remains uncertain. One important piece of pottery from this period is a sherd from the base of a Roman pot. While just a sherd, this piece is important because of the place stamp written in Aramaic that it contains. During the 4th century A.D. it was a large town and reached its maximum population. Herod “the Great” fortified Esbus. The Esbus-Jerusalem road and Bilanova, another road, passed by the city, adding to its appeal.
During the Byzantine Period (330-640 A.D.) Hisban was at its most prosperous. The remains of a church date to this period, and it appears to have been a large basilica used in pilgrimages. The water cisterns were still being used to quench the thirst of its large population, which was now approximately 3,000 households not counting pilgrims, and the people were growing food intensively. The city was now an Episcopal seat and the capital of the provincial district. During the 630s and through the 640s Hisban underwent peaceful conquest by the Muslims, and Christians remained in the site for quite some time afterwards. During the Umayyad period (636-713 A.D.) the capital of the area was Damascus. A military dynasty was present at Hisban and a split in religion occurred. There still remained a large Christian population, evidence of which is largely found in the northern part of the site. Evidence of an earthquake in the seventh century can be seen in the fire damage sustained and the abundance of broken pottery. In the Abbasid Period (750-1258 A.D.) the site was abandoned after the 9th or 10th century A.D. but was reoccupied in the 14th century. The 14th century brought forth the creation of the barracks.
In the Mamluk Period large amounts of sugar went through Hisban and some may have been produced there. This sugar could be used like cash and was exported from Hisban as far as Europe. The site became an administrative center for fifty years (1308-1356) until a prominent amir moved the capital to Amman. The Bedouin people living in Hisban enjoyed playing politics, and when Sultan al-Nasir Mohammad was removed from his throne, the Bedouin gave him refuge and helped him to regain power. This is probably how Hisban became the capital of the Balqa’ district. In return for their political support, the Sultan rewarded the people of Hisban by investing financially in the town. The bathhouse in the Citadel bears witness to the special services enjoyed by the soldiers stationed there: located inside the Governor’s residential complex, the hammam provided bathing services to officials that in other Mamluk garrisons were available only in the towns outside the garrison walls. Buildings at Hisban were barrel-vaulted in this period, but no mortar was used. Between 1517 and 1918 Bedouins camped at the site and used the storeroom as a cemetery. Most of the pottery excavated comes from this period, including the glazed relief wares and much of the Handmade Geometric Painted wares.
| 2026-01-19T12:04:54.886842 |