PHOTO: A CT scan shows the abdomen of a patient with lymphoma. Doctors are reviewing how often such scans should be used during diagnosis, treatment and follow-up of patients with cancer. Credit: University of Iowa.
Writing in the March 11 issue of the New England Journal of Medicine, University of Iowa Health Care radiologist Malik Juweid, M.D., said that cancer physicians and radiologists should do more to identify and eliminate medical imaging that is neither cost-effective nor beneficial to patients.
“We want to ensure that imaging used to diagnose and treat cancer is provided in a way that maximizes the benefit to the patient and minimizes both the cost and the potential risks posed by exposure to medical radiation,” said Juweid, who is a professor of radiology with the UI Roy J. and Lucille A. Carver College of Medicine.
To illustrate the point, Juweid and Julie Vose, M.D., a colleague at the University of Nebraska, use their letter to the journal to make the case for eliminating post-treatment imaging for patients with nonbulky early-stage Hodgkin’s lymphoma.
This cancer, which often occurs in younger individuals, has a recurrence rate of less than 10 percent following the best standard of care. In addition, about 80 percent of recurrences of this form of cancer are discovered by the physician or by the patients themselves, without the aid of imaging.
This means that repeated post-therapy imaging provides “earlier” disease detection in only about 2 percent of treated patients, and there is no proven benefit from this earlier detection.
However, patients currently might receive as many as five to 10 CT scans and/or several PET/CT scans during the three to five years following treatment. This could amount to an accumulated radiation dose of over 50 millisieverts, which is the equivalent of 2,500 chest X-rays.
Juweid and Vose conclude that for this particular cancer, routine post-treatment imaging is not beneficial and should be reserved only for situations where it is needed to guide further treatment.
“This is one example of imaging being over-used without clear benefit to the patients,” Juweid said. “We believe that identifying and eliminating this type of unnecessary imaging will help us ensure patient safety and provide the most cost-effective, beneficial treatment.”
Juweid added that he is planning to work with professional organizations, such as the American Society of Clinical Oncology, and National Cancer Institute cooperative groups to find the best way to apply his findings.
STORY SOURCE: University of Iowa Health Care Media Relations, 200 Hawkins Drive, Room W319 GH, Iowa City, Iowa 52242-1009
MEDIA CONTACT: Jennifer Brown, 319-356-7124, email@example.com
English, Reading and Spelling Tutor
Difficulties with basic reading, spelling and other language skills are the main reason students have learning difficulties at school. We provide struggling students of all ages with Specialist English Tutoring at our Adelaide office, and we also provide live online tutoring over the internet anywhere in Australia or the world.
15 Minute Consultation
Before you consider working with us, we invite you to discuss your concerns about your child with a Specialist Teacher, either by phone or online.
Why Newspapers and Magazines are Written with the Reading Age of a 12-Year-Old
- According to recent Australian Bureau of Statistics and OECD figures, about half the adults in Australia and other English-speaking nations are poor readers, and a further third can only just cope with the reading demands of their daily lives. That leaves only one-sixth of the population rated as good or very good readers.
- These statistics mean that if your child is only an average reader, then he/she needs help.
- What’s more, poor reading skills impact directly on a child’s ability to learn in other areas, especially from grade 5 onwards where reading is used to learn.
- As a result, many children are considered to be just average students by their teachers, when really they have the potential to be doing much better at school.
Phonics is the Solution to Basic Reading and Spelling Problems
Research in all the major English-speaking nations (such as the USA, UK and Australia) consistently shows that teaching Phonics to beginning or remedial readers is the most successful way of ensuring a child becomes a good reader and speller. So convincing is this research that teaching Phonics is now government policy in many school districts. (Phonics involves learning the rules that relate the letters of English to its sounds. There are about 100 main rules, and another 100 minor rules to learn. The main alternative to Phonics teaches memorisation of whole words rather than decoding of the individual letters within words.)
Unfortunately, Phonics was out of favour in schools for many years, so many teachers were never taught it, either at school or at university, and consequently have great difficulty teaching it to their students. Furthermore, most educational publishing houses are not yet publishing high-quality Phonics materials, so teachers and parents are forced to rely on sub-standard and confusing materials.
High Performance Learning has its own Phonics Program, originally written in 1975 and used very successfully up to the present day. It has been much enhanced in recent years by the inclusion of our BetterThanaBook Multi-Media Font, which enables students to see the Phonic Rules through coloured letters and to hear the individual sounds of letters by clicking on them. This addition has made learning Phonics easier and faster for our students. Teaching children to spell in colour using our Phonic Colour Code has also significantly decreased the time needed to teach students how to spell.
Do you ever get to the end of a page and can’t remember anything that you read? This is a very common problem because inefficient word decoding skills chew up a lot of your brain power leaving little or none to devote to comprehension.
One of the reasons that Phonics is such a successful way to learn to read is that it is much more efficient to work words out as you go than it is to try to dredge them up out of your memory. Phonics leaves a lot more brain power for comprehension and learning when you read.
Reading without comprehension is not really reading at all, because comprehension is the whole point of reading. Reading with poor or inaccurate comprehension is largely a waste of time.
Reading Comprehension Skills Can Be Taught
Mastering just a few simple skills can dramatically improve someone’s ability to comprehend. Unfortunately, most teachers expect children to work out for themselves how to comprehend, simply by doing lots of comprehension exercises.
At High Performance Learning, we teach appropriate comprehension skills to students at all levels from Kindergarten, right through to University. Teaching comprehension skills is integrated into all our Reading Programs.
The Importance of Reading to Life-Long Learning
At university, many students can’t keep up with the large quantity of reading their lecturers expect of them, and they wonder why the lecturers don’t just tell them what they need to know during the lectures. The reason is that one of the essential aims of tertiary education is to train students to learn independently of their lecturers, so they can continue to update their knowledge after they leave university.
After all, would you seek assistance from a lawyer, doctor or engineer who had not kept up to date with changes over the last 10 years?
Making Sure Your Child is A Good Reader and Speller is the Most Valuable Gift You Could Ever Give
- Most reading and spelling problems result from a failure to master some key learning or language skills during the early years of education.
- Don’t wait for these problems to fix themselves.
- Remember that at school your child is just one in a large group. Furthermore, the teacher may not have the specialist knowledge required to deal with such problems.
- Our Specialist Individual Tuition is the quickest and easiest way to ensure that your child becomes a proficient reader and speller.
If In Doubt – Find Out
If you are not certain that your child is reading at his or her full potential, we provide Diagnostic Educational Assessments which analyse your child’s style of thinking and learning so we can determine the nature and extent of any Reading, Spelling, Speech, Listening and Maths problems. More importantly, we have Programs that fix any of the Language, Learning or Maths problems we find.
Our Specialist Reading Tutors work individually with students and families to overcome basic reading problems at our office (in Adelaide, South Australia) or in your home using the internet. Our Specialist Reading Teachers adapt our reading programs to suit individual needs. Students also get online access to resources and games which speed up the learning process.
Spelling problems are usually a sign of underlying weaknesses in reading. Before planning a program for a person with spelling problems, the Specialist Spelling Tutor checks skills in all language areas (speech, listening, reading and writing). Even though our Specialist Reading and Spelling Teachers are based in Adelaide, we are able to provide detailed Diagnostic Assessments online for people living anywhere in the world.
This is “Gas Pressure”, section 10.2 from the book Principles of General Chemistry (v. 1.0).
This book is licensed under a Creative Commons by-nc-sa 3.0 license. See the license for more details, but that basically means you can share this book as long as you credit the author (but see below), don't make money from it, and do make it available to everyone else under the same terms.
This content was accessible as of December 29, 2012, and it was downloaded then by Andy Schmitz in an effort to preserve the availability of this book.
Normally, the author and publisher would be credited here. However, the publisher has asked for the customary Creative Commons attribution to the original publisher, authors, title, and book URI to be removed. Additionally, per the publisher's request, their name has been removed in some passages. More information is available on this project's attribution page.
At the macroscopic level, a complete physical description of a sample of a gas requires four quantities: temperature (expressed in kelvins), volume (expressed in liters), amount (expressed in moles), and pressure (in atmospheres). As we explain in this section and Section 10.3 "Relationships among Pressure, Temperature, Volume, and Amount", these variables are not independent. If we know the values of any three of these quantities, we can calculate the fourth and thereby obtain a full physical description of the gas. Temperature, volume, and amount have been discussed in previous chapters. We now discuss pressure and its units of measurement.
Any object, whether it is your computer, a person, or a sample of gas, exerts a force on any surface with which it comes in contact. The air in a balloon, for example, exerts a force against the interior surface of the balloon, and a liquid injected into a mold exerts a force against the interior surface of the mold, just as a chair exerts a force against the floor because of its mass and the effects of gravity. If the air in a balloon is heated, the increased kinetic energy of the gas eventually causes the balloon to burst because of the increased pressure (P) of the gas, the force (F) per unit area (A) of surface:
Equation 10.1: P = F/A
Pressure is dependent on both the force exerted and the size of the area to which the force is applied. We know from Equation 10.1 that applying the same force to a smaller area produces a higher pressure. When we use a hose to wash a car, for example, we can increase the pressure of the water by reducing the size of the opening of the hose with a thumb.
The units of pressure are derived from the units used to measure force and area. In the English system, the units of force are pounds and the units of area are square inches, so we often see pressure expressed in pounds per square inch (lb/in², or psi), particularly among engineers. For scientific measurements, however, the SI units for force are preferred. The SI unit for pressure, derived from the SI units for force (newtons) and area (square meters), is the newton per square meter (N/m²), which is called the pascal (Pa), after the French mathematician Blaise Pascal (1623–1662):
Equation 10.2: 1 Pa = 1 N/m²
To convert from pounds per square inch to pascals, multiply the number of psi by 6894.757 (1 psi = 6894.757 Pa).
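As a sanity check, the psi-to-pascal factor can be derived from the definitions of the pound-force and the inch (a minimal Python sketch; the two constants are standard values, not from this text):

```python
# Derive the psi-to-pascal conversion factor from first principles.
# 1 pound-force = 4.448222 N; 1 inch = 0.0254 m (standard values).
LBF_IN_NEWTONS = 4.448222
INCH_IN_METERS = 0.0254

# Pressure is force per area, so the factor is newtons per square meter.
pa_per_psi = LBF_IN_NEWTONS / INCH_IN_METERS**2
print(round(pa_per_psi, 2))  # 6894.76
```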
In addition to his talents in mathematics (he helped found modern probability theory), Pascal did research in physics and was an author and a religious philosopher as well. His accomplishments include invention of the first syringe and the first digital calculator and development of the principle of hydraulic pressure transmission now used in brake systems and hydraulic lifts.
Assuming a paperback book has a mass of 2.00 kg, a length of 27.0 cm, a width of 21.0 cm, and a thickness of 4.5 cm, what pressure does it exert on a surface if it is (a) lying flat or (b) standing on its end?
Given: mass and dimensions of object
Asked for: pressure
A Calculate the force exerted by the book and then compute the area that is in contact with a surface.
B Substitute these two values into Equation 10.1 to find the pressure exerted on the surface in each orientation.
The force exerted by the book does not depend on its orientation. Recall from Chapter 5 "Energy Changes in Chemical Reactions" that the force exerted by an object is F = ma, where m is its mass and a is its acceleration. In Earth’s gravitational field, the acceleration is due to gravity (9.8067 m/s² at Earth’s surface). In SI units, the force exerted by the book is therefore F = ma = (2.00 kg)(9.8067 m/s²) = 19.6 (kg·m)/s² = 19.6 N
A We calculated the force as 19.6 N. When the book is lying flat, the area is (0.270 m)(0.210 m) = 0.0567 m². B The pressure exerted by the text lying flat is thus P = F/A = 19.6 N/0.0567 m² = 346 Pa.
A If the book is standing on its end, the force remains the same, but the area decreases: (21.0 cm)(4.5 cm) = (0.210 m)(0.045 m) = 9.5 × 10⁻³ m²
B The pressure exerted by the book in this position is thus P = F/A = 19.6 N/9.5 × 10⁻³ m² = 2.1 × 10³ Pa.
Thus the pressure exerted by the book varies by a factor of about six depending on its orientation, although the force exerted by the book does not vary.
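The arithmetic in this example can be reproduced in a few lines (a sketch using the numbers given above):

```python
# Pressure exerted by the 2.00 kg book (Equation 10.1: P = F/A).
g = 9.8067                 # m/s^2, acceleration due to gravity
force = 2.00 * g           # about 19.6 N, the same in every orientation

area_flat = 0.270 * 0.210  # m^2, book lying flat
area_edge = 0.210 * 0.045  # m^2, book standing on its end

p_flat = force / area_flat
p_edge = force / area_edge
print(round(p_flat))              # 346 Pa
print(round(p_edge / p_flat, 1))  # 6.0, the factor-of-six difference
```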
What pressure does a 60.0 kg student exert on the floor?
Just as we exert pressure on a surface because of gravity, so does our atmosphere. We live at the bottom of an ocean of gases that becomes progressively less dense with increasing altitude. Approximately 99% of the mass of the atmosphere lies within 30 km of Earth’s surface, and half of it is within the first 5.5 km (Figure 10.3 "Atmospheric Pressure"). Every point on Earth’s surface experiences a net pressure called atmospheric pressure. The pressure exerted by the atmosphere is considerable: a 1.0 m² column, measured from sea level to the top of the atmosphere, has a mass of about 10,000 kg, which gives a pressure of about 100 kPa: P = F/A = (1.0 × 10⁴ kg)(9.8067 m/s²)/(1.0 m²) = 9.8 × 10⁴ Pa ≈ 100 kPa
Figure 10.3 Atmospheric Pressure
Each square meter of Earth’s surface supports a column of air that is more than 200 km high and has a mass of about 10,000 kg, resulting in a pressure at the surface of 1.01 × 10⁵ N/m². This corresponds to a pressure of 101 kPa = 760 mmHg = 1 atm.
In English units, this is about 14 lb/in², but we are so accustomed to living under this pressure that we never notice it. Instead, what we notice are changes in the pressure, such as when our ears pop in fast elevators in skyscrapers or in airplanes during rapid changes in altitude. We make use of atmospheric pressure in many ways. We can use a drinking straw because sucking on it removes air and thereby reduces the pressure inside the straw. The atmospheric pressure pushing down on the liquid in the glass then forces the liquid up the straw.
Atmospheric pressure can be measured using a barometer, a device invented in 1643 by one of Galileo’s students, Evangelista Torricelli (1608–1647). A barometer may be constructed from a long glass tube that is closed at one end. It is filled with mercury and placed upside down in a dish of mercury without allowing any air to enter the tube. Some of the mercury will run out of the tube, but a relatively tall column remains inside (Figure 10.4 "A Mercury Barometer"). Why doesn’t all the mercury run out? Gravity is certainly exerting a downward force on the mercury in the tube, but it is opposed by the pressure of the atmosphere pushing down on the surface of the mercury in the dish, which has the net effect of pushing the mercury up into the tube. Because there is no air above the mercury inside the tube in a properly filled barometer (it contains a vacuum), there is no pressure pushing down on the column. Thus the mercury runs out of the tube until the pressure exerted by the mercury column itself exactly balances the pressure of the atmosphere. Under normal weather conditions at sea level, the two forces are balanced when the top of the mercury column is approximately 760 mm above the level of the mercury in the dish, as shown in Figure 10.4 "A Mercury Barometer". This value varies with meteorological conditions and altitude. In Denver, Colorado, for example, at an elevation of about 1 mile, or 1609 m (5280 ft), the height of the mercury column is 630 mm rather than 760 mm.
Figure 10.4 A Mercury Barometer
The pressure exerted by the atmosphere on the surface of the pool of mercury supports a column of mercury in the tube that is about 760 mm tall. Because the boiling point of mercury is quite high (356.73°C), there is very little mercury vapor in the space above the mercury column.
Mercury barometers have been used to measure atmospheric pressure for so long that they have their own unit of pressure: the millimeter of mercury (mmHg), often called the torr, after Torricelli. Standard atmospheric pressure is the atmospheric pressure required to support a column of mercury exactly 760 mm tall; this pressure is also referred to as 1 atmosphere (atm). These units are related to the pascal:
Equation 10.4: 1 atm = 760 mmHg = 760 torr = 1.01325 × 10⁵ Pa = 101.325 kPa
Thus a pressure of 1 atm equals 760 mmHg exactly and is approximately equal to 100 kPa.
One of the authors visited Rocky Mountain National Park several years ago. After departing from an airport at sea level in the eastern United States, he arrived in Denver (altitude 5280 ft), rented a car, and drove to the top of the highway outside Estes Park (elevation 14,000 ft). He noticed that even slight exertion was very difficult at this altitude, where the atmospheric pressure is only 454 mmHg. Convert this pressure to (a) atmospheres and (b) kilopascals.
Given: pressure in millimeters of mercury
Asked for: pressure in atmospheres and kilopascals
Use the conversion factors in Equation 10.4 to convert from millimeters of mercury to atmospheres and kilopascals.
From Equation 10.4, we have 1 atm = 760 mmHg = 101.325 kPa. The pressure at 14,000 ft in atm is thus 454 mmHg × (1 atm/760 mmHg) = 0.597 atm.
The pressure in kPa is given by 0.597 atm × (101.325 kPa/1 atm) = 60.5 kPa.
Mt. Everest, at 29,028 ft above sea level, is the world’s tallest mountain. The normal atmospheric pressure at this altitude is about 0.308 atm. Convert this pressure to (a) millimeters of mercury and (b) kilopascals.
Answer: a. 234 mmHg; b. 31.2 kPa
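Both conversions follow directly from Equation 10.4; a short Python sketch reproduces the worked example and the exercise answers:

```python
# Pressure-unit conversions based on 1 atm = 760 mmHg = 101.325 kPa.
MMHG_PER_ATM = 760
KPA_PER_ATM = 101.325

def mmhg_to_atm(p_mmhg):
    return p_mmhg / MMHG_PER_ATM

def atm_to_kpa(p_atm):
    return p_atm * KPA_PER_ATM

# Worked example: 454 mmHg outside Estes Park
print(round(mmhg_to_atm(454), 3))              # 0.597 atm
print(round(atm_to_kpa(mmhg_to_atm(454)), 1))  # 60.5 kPa

# Exercise: 0.308 atm on Mt. Everest
print(round(0.308 * MMHG_PER_ATM))             # 234 mmHg
print(round(atm_to_kpa(0.308), 1))             # 31.2 kPa
```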
Barometers measure atmospheric pressure, but manometers measure the pressures of samples of gases contained in an apparatus. The key feature of a manometer is a U-shaped tube containing mercury (or occasionally another nonvolatile liquid). A closed-end manometer is shown schematically in part (a) in Figure 10.5 "The Two Types of Manometer". When the bulb contains no gas (i.e., when its interior is a near vacuum), the heights of the two columns of mercury are the same, because the space above the mercury on the left is also a near vacuum (it contains only traces of mercury vapor). If a gas is released into the bulb on the right, it will exert a pressure on the mercury in the right column, and the two columns of mercury will no longer be the same height. The difference between the heights of the two columns is equal to the pressure of the gas.
Figure 10.5 The Two Types of Manometer
(a) In a closed-end manometer, the space above the mercury column on the left (the reference arm) is essentially a vacuum (P ≈ 0), and the difference in the heights of the two columns gives the pressure of the gas contained in the bulb directly. (b) In an open-end manometer, the left (reference) arm is open to the atmosphere (P ≈ 1 atm), and the difference in the heights of the two columns gives the difference between atmospheric pressure and the pressure of the gas in the bulb.
If the tube is open to the atmosphere instead of closed, as in the open-end manometer shown in part (b) in Figure 10.5 "The Two Types of Manometer", then the two columns of mercury have the same height only if the gas in the bulb has a pressure equal to the atmospheric pressure. If the gas in the bulb has a higher pressure, the mercury in the open tube will be forced up by the gas pushing down on the mercury in the other arm of the U-shaped tube. The pressure of the gas in the bulb is therefore the sum of the atmospheric pressure (measured with a barometer) and the difference in the heights of the two columns. If the gas in the bulb has a pressure less than that of the atmosphere, then the height of the mercury will be greater in the arm attached to the bulb. In this case, the pressure of the gas in the bulb is the atmospheric pressure minus the difference in the heights of the two columns.
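The open-end manometer rule described above, gas pressure equals atmospheric pressure plus or minus the column-height difference, can be sketched as follows (the readings in the usage lines are hypothetical):

```python
# Open-end manometer: the gas pressure is atmospheric pressure plus the
# difference in column heights, with the sign convention that delta_h is
# positive when mercury stands higher in the open (reference) arm.
def gas_pressure_mmhg(p_atm_mmhg, delta_h_mm):
    """Return the bulb pressure in mmHg.

    delta_h_mm > 0: gas pushes mercury up the open arm (above atmospheric).
    delta_h_mm < 0: mercury is higher in the bulb arm (below atmospheric).
    """
    return p_atm_mmhg + delta_h_mm

print(gas_pressure_mmhg(760, 120))  # 880 mmHg, gas above atmospheric
print(gas_pressure_mmhg(760, -85))  # 675 mmHg, gas below atmospheric
```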
Suppose you want to construct a closed-end manometer to measure gas pressures in the range 0.000–0.200 atm. Because of the toxicity of mercury, you decide to use water rather than mercury. How tall a column of water do you need? (At 25°C, the density of water is 0.9970 g/cm³; the density of mercury is 13.53 g/cm³.)
Given: pressure range and densities of water and mercury
Asked for: column height
A Calculate the height of a column of mercury corresponding to 0.200 atm in millimeters of mercury. This is the height needed for a mercury-filled column.
B From the given densities, use a proportion to compute the height needed for a water-filled column.
A In millimeters of mercury, a gas pressure of 0.200 atm is 0.200 atm × (760 mmHg/1 atm) = 152 mmHg.
Using a mercury manometer, you would need a mercury column at least 152 mm high.
B Because water is less dense than mercury, you need a taller column of water to achieve the same pressure as a given column of mercury. The height needed for a water-filled column corresponding to a pressure of 0.200 atm is proportional to the ratio of the density of mercury to the density of water: h = 152 mm × (13.53 g/cm³ ÷ 0.9970 g/cm³) = 2.06 × 10³ mm = 2.06 m.
This answer makes sense: it takes a taller column of a less dense liquid to achieve the same pressure.
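The density-ratio argument can be checked numerically (a sketch using the densities given in the example):

```python
# Column height scales inversely with liquid density (P = rho * g * h),
# so h_water = h_mercury * (rho_mercury / rho_water).
RHO_HG = 13.53    # g/cm^3, mercury at 25 degrees C
RHO_H2O = 0.9970  # g/cm^3, water at 25 degrees C

h_hg_mm = 0.200 * 760                        # 152 mm of mercury for 0.200 atm
h_h2o_m = h_hg_mm * RHO_HG / RHO_H2O / 1000  # convert mm to m
print(round(h_h2o_m, 2))  # 2.06 m
```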
Suppose you want to design a barometer to measure atmospheric pressure in an environment that is always hotter than 30°C. To avoid using mercury, you decide to use gallium, which melts at 29.76°C; the density of liquid gallium at 25°C is 6.114 g/cm³. How tall a column of gallium do you need if P = 1.00 atm?
Answer: 1.68 m
The answer to Example 4 also tells us the maximum depth of a farmer’s well if a simple suction pump will be used to get the water out. If a column of water 2.06 m high corresponds to 0.200 atm, then 1.00 atm corresponds to a column height of 2.06 m × (1.00 atm/0.200 atm) = 10.3 m.
A suction pump is just a more sophisticated version of a straw: it creates a vacuum above a liquid and relies on atmospheric pressure to force the liquid up a tube. If 1 atm pressure corresponds to a 10.3 m (33.8 ft) column of water, then it is physically impossible for atmospheric pressure to raise the water in a well higher than this. Until electric pumps were invented to push water mechanically from greater depths, this factor greatly limited where people could live because obtaining water from wells deeper than about 33 ft was difficult.
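The 10.3 m limit also falls straight out of balancing atmospheric pressure against the weight of the water column; a short sketch (taking the density of water as 1.00 × 10³ kg/m³, an assumed round value):

```python
# Maximum height a suction pump can raise water: the column whose
# hydrostatic pressure (rho * g * h) equals 1 atm, so h = P / (rho * g).
P_ATM = 1.01325e5   # Pa, standard atmospheric pressure
RHO_WATER = 1.00e3  # kg/m^3, approximate density of water
G = 9.8067          # m/s^2, acceleration due to gravity

h_max = P_ATM / (RHO_WATER * G)
print(round(h_max, 1))  # 10.3 m, about 33.8 ft
```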
Four quantities must be known for a complete physical description of a sample of a gas: temperature, volume, amount, and pressure. Pressure is force per unit area of surface; the SI unit for pressure is the pascal (Pa), defined as 1 newton per square meter (N/m²). The pressure exerted by an object is proportional to the force it exerts and inversely proportional to the area on which the force is exerted. The pressure exerted by Earth’s atmosphere, called atmospheric pressure, is about 101 kPa or 14.7 lb/in² at sea level. Atmospheric pressure can be measured with a barometer, a closed, inverted tube filled with mercury. The height of the mercury column is proportional to atmospheric pressure, which is often reported in units of millimeters of mercury (mmHg), also called torr. Standard atmospheric pressure, the pressure required to support a column of mercury 760 mm tall, is yet another unit of pressure: 1 atmosphere (atm). A manometer is an apparatus used to measure the pressure of a sample of a gas.
Definition of pressure
What four quantities must be known to completely describe a sample of a gas? What units are commonly used for each quantity?
If the applied force is constant, how does the pressure exerted by an object change as the area on which the force is exerted decreases? In the real world, how does this relationship apply to the ease of driving a small nail versus a large nail?
As the force on a fixed area increases, does the pressure increase or decrease? With this in mind, would you expect a heavy person to need smaller or larger snowshoes than a lighter person? Explain.
What do we mean by atmospheric pressure? Is the atmospheric pressure at the summit of Mt. Rainier greater than or less than the pressure in Miami, Florida? Why?
Which has the highest atmospheric pressure—a cave in the Himalayas, a mine in South Africa, or a beach house in Florida? Which has the lowest?
Mars has an average atmospheric pressure of 0.007 atm. Would it be easier or harder to drink liquid from a straw on Mars than on Earth? Explain your answer.
Is the pressure exerted by a 1.0 kg mass on a 2.0 m² area greater than or less than the pressure exerted by a 1.0 kg mass on a 1.0 m² area? What is the difference, if any, between the pressure of the atmosphere exerted on a 1.0 m² piston and a 2.0 m² piston?
If you used water in a barometer instead of mercury, what would be the major difference in the instrument?
Because pressure is defined as the force per unit area (P = F/A), increasing the force on a given area increases the pressure. A heavy person requires larger snowshoes than a lighter person. Spreading the force exerted on the heavier person by gravity (that is, their weight) over a larger area decreases the pressure exerted per unit of area, such as a square inch, and makes them less likely to sink into the snow.
Calculate the pressure in atmospheres and kilopascals exerted by a fish tank that is 2.0 ft long, 1.0 ft wide, and 2.5 ft high and contains 25.0 gal of water in a room that is at 20°C; the tank itself weighs 15 lb (the density of water is 1.00 g/cm³ at 20°C). If the tank were 1 ft long, 1 ft wide, and 5 ft high, would it exert the same pressure? Explain your answer.
Calculate the pressure in pascals and in atmospheres exerted by a carton of milk that weighs 1.5 kg and has a base of 7.0 cm × 7.0 cm. If the carton were lying on its side (height = 25 cm), would it exert more or less pressure? Explain your reasoning.
If atmospheric pressure at sea level is 1.0 × 10⁵ Pa, what is the mass of air in kilograms above a 1.0 cm² area of your skin as you lie on the beach? If atmospheric pressure is 8.2 × 10⁴ Pa on a mountaintop, what is the mass of air in kilograms above a 4.0 cm² patch of skin?
Complete the following table:
The SI unit of pressure is the pascal, which is equal to 1 N/m². Show how the product of the mass of an object and the acceleration due to gravity results in a force that, when exerted on a given area, leads to a pressure in the correct SI units. What mass in kilograms applied to a 1.0 cm² area is required to produce a pressure of
If you constructed a manometer to measure gas pressures over the range 0.60–1.40 atm using the liquids given in the following table, how tall a column would you need for each liquid? The density of mercury is 13.5 g/cm³. Based on your results, explain why mercury is still used in barometers, despite its toxicity.
|Liquid|Density (20°C)|Column Height (m)|
5.4 kPa or 5.3 × 10⁻² atm; 11 kPa or 1.1 × 10⁻¹ atm; the same force acting on a smaller area results in a greater pressure.
We are living in a silent epidemic, and that epidemic is burnout.
When you are burnt out, you feel that you have nothing left to give. You feel like there is a void of energy, of caring, and of motivation. This void of coping and emotional resources can leave you feeling completely depleted.
Burnout is a result of doing too much for too long.
Burnout is a phenomenon commonly seen in high achievers and people pleasers. It results from going above and beyond in your responsibilities, both at work and at home, leading to persistent emotional exhaustion that can worsen your mood, impair your functioning, and diminish your overall sense of wellbeing.
Burnout can manifest in symptoms of anxiety and depression resulting from unrelenting work-related stress. In general, stress involves an overwhelming number of pressures that demand too many mental, physical, and emotional resources. The key difference between stress and burnout is the ability to rebound emotionally after a period of time away from the factors that contribute to the stress.
Class 12 Political Science Deleted Syllabus
CBSE Deleted Syllabus for Class 12 Political Science: Staying updated with the syllabus helps in planning your studies accordingly. Political Science is a concept-based subject in which you can score well if your basics are clear, and the subject becomes clearer each time you revise it.
Political Science is a core subject of the Humanities stream. It is a purely theoretical subject that requires a great deal of learning, so a student who concentrates and studies with determination can achieve high scores in it.
CBSE Deleted Syllabus Class 12 Political Science for Term 1 & 2
Due to the Covid-19 situation, schools have not been able to function normally. Students are under increasing stress, and it is very difficult to cover the entire syllabus in online classes, so CBSE has decided to reduce the syllabus of every class by 30%. For this academic year, questions for the board examinations will be prepared from the reduced syllabus.
CBSE Political Science Deleted Syllabus for Class 12 2022
The topics which have been deleted are mentioned below. Do check out the table for more information.
|Chapter No||Chapter Name||Deleted Topics|
|Book I: Contemporary World Politics|
|6||Security in the Contemporary World||· Entire Chapter|
|7||Environment and Natural Resources||· Entire Chapter|
|Book II: Politics in India since Independence|
|· changing nature of India’s economic development Planning Commission
· Five-year Plans
|11||India’s Foreign Policy||India’s Relations with its Neighbours: Pakistan, Bangladesh, Nepal, Sri Lanka, and Myanmar.|
|14||Social and New Social Movements in India||· Entire Chapter|
|15||Regional Aspirations||· Entire Chapter|
CBSE Political Science Complete Syllabus for Class 12
Contemporary World Politics
|1||Cold War Era and Non–aligned Movement||6|
|2||The End of Bipolarity||8|
|3||New Centres of Power||8|
|4||South Asia and the Contemporary World||6|
|5||United Nations and its Organizations||6|
Politics in India since Independence
|7||Challenges of Nation-Building||08|
|9||India’s Foreign Policy||04|
|10||Parties and the Party Systems in India||08|
|12||Indian Politics: Trends and Developments||08|
Contemporary World Politics
|1||Cold War and Non-aligned Movement
The emergence of two power blocs/Bipolarity, Non-aligned Movement (NAM).
|2||The End of Bipolarity
The disintegration of Soviet Union, Unipolar World, Middle East Crisis – Afghanistan, Gulf War, Democratic Politics and Democratization – CIS and the 21st Century (Arab Spring).
|3||New Centres of Power
Organizations: European Union, ASEAN, SAARC, BRICS. Nations: Russia, China, Israel, India.
|4||South Asia and the Contemporary World
Conflicts and efforts for Peace and Democratization in South Asia: Pakistan, Nepal, Bangladesh, Sri Lanka, Maldives.
|5||United Nations and its Organizations
Principal Organs, Key Agencies: UNESCO, UNICEF, WHO, ILO, Security Council and the Need for its Expansion.
Globalization: Meaning, Manifestations, and Debates.
Politics in India Since Independence
|7||Challenges of Nation-Building
• Nation and Nation Building
• Sardar Patel and Integration of States
• Legacy of Partition: Challenge of Refugee, Resettlement, Kashmir Issue, Nehru’s Approach to Nation – Building
• Political Conflicts over Language and Linguistic Organization of States.
• National Development Council, NITI Aayog.
|9||India’s Foreign Policy
• Principles of Foreign Policy, India’s Changing Relations with Other Nations: US, Russia, China, Israel;
|10||Parties and the Party Systems in India
• Congress System
• Bi-party System
• Multi-party Coalition System.
• Jai Prakash Narayan and Total Revolution
• Ram Manohar Lohiya and Socialism
• Pandit Deendayal Upadhyaya and Integral Humanism
• National Emergency
• Democratic Upsurges– Participation of the Adults, Backwards, and Youth.
|12||Indian Politics: Trends and Developments
The era of Coalitions: National Front, United Front, United Progressive Alliance [UPA] – I & II, National Democratic Alliance [NDA] – I, II, III & IV, Issues of Development and Governance.
CBSE Class 12 Deleted Syllabus 2021-22
CBSE Class 11 Deleted Syllabus 2021-22
CBSE Class 10 Deleted Syllabus 2021-22
CBSE Class 9 Deleted Syllabus 2021-22
CBSE New Syllabus For Class 12 & 11 2021-2022
|CBSE Class 12 New Syllabus 2021-22||CBSE Class 11 New Syllabus 2021-22|
|Class 12 Maths||Class 11 Maths|
|Class 12 Physics||Class 11 Physics|
|Class 12 Chemistry||Class 11 Chemistry|
|Class 12 Biology||Class 11 Biology|
|Class 12 Economics||Class 11 Economics|
|Class 12 Accountancy||Class 11 Accountancy|
|Class 12 History||Class 11 History|
|Class 12 Geography||Class 11 Geography|
|Class 12 Political Science||Class 11 Political Science|
|Class 12 English||Class 11 English|
CBSE New Syllabus Class 9 & 10 For 2021-2022
|CBSE Class 10 New Syllabus 2021-22||CBSE Class 9 New Syllabus 2021-22|
|Class 10 Maths||Class 9 Maths|
|Class 10 Science||Class 9 Science|
|Class 10 Social Science||Class 9 Social Science|
|Class 10 English||Class 9 English|
CBSE Class 12 Political Science Deleted Syllabus 2022 for Term 1 & 2: FAQs
Ques 1. What is the syllabus of Political Science Class 12?
Ans. The syllabus is similar to the previous year except for the topics and chapters mentioned in the above table.
Ques 2. Which chapters are deleted in 12th Political Science?
Ans. Four of the chapters have been completely removed from the syllabus by CBSE. They are
- Regional Aspirations
- Social and New Social Movements in India
- Environment and Natural Resources
- Security in the Contemporary World
Ques 3. How many chapters are there in political science class 12?
Ans. There are 18 chapters in Political Science for Class 12.
Ques 4. How many chapters are there in political science class 11?
Ans. There are a total of 19 chapters in Political Science for Class 11.
|
1. Brigade 2506, the paramilitary group that led the Bay of Pigs Invasion, took its name from the serial number of one of its members.
Early in 1960, President Dwight D. Eisenhower authorized the CIA to recruit Cuban exiles living in Miami and train them for an invasion of Cuba. The group that became known as Brigade 2506 was initially 28 members, including 10 former Cuban military officers recruited by Dr. Manuel Artime, head of the Movimiento de Recuperación Revolucionaria (MRR). After training in secret camps in the Florida Everglades as early as March 1960, the growing brigade moved its base to the Sierra Madre in Guatemala, which boasted a similar climate to Cuba and a friendly government. That September, a brigade member named Carlos Rodriguez Santana was killed in a training accident, and his comrades chose to name the brigade after his serial number: 2506.
2. Part of the invasion plan was an elaborate ruse involving a fake defection to the United States by Cuban pilots—which backfired.
On April 15, 1961, eight B-26 bombers took off from Nicaragua and bombed Cuban military aircraft on the ground, hoping to wipe out Castro’s air force before the planned invasion at Playa Girón. Later that day, two other bombers landed in Miami and Key West, Florida, where their pilots claimed to be Cuban defectors who had participated in the air raids. This drama was supposed to ensure that the attacks appeared to be the work of Cubans only, lending credibility to the U.S. government’s denial of involvement. But reporters noticed the planes’ guns looked as though they had not been fired, and the planes themselves were of a type not typically used in Cuba. The political fallout from this initial bombing raid—which in fact left much of Castro’s air force intact—led President John F. Kennedy to cancel a second planned air strike that might have completed the job.
3. In a bombing raid over Cuba on April 19, 1961, two B-26B bombers were shot down and four Americans—officers in the Alabama Air National Guard—were killed.
Officially, no Americans were supposed to be involved in the failed Bay of Pigs Invasion. Unofficially, a top-secret squadron of pilots flew a last-ditch mission authorized by Kennedy on the morning of April 19, to help defend the overwhelmed invaders at Girón. Due to a misunderstanding over time zones, the bombers arrived an hour before planned escort cover arrived from a U.S. Navy aircraft carrier, and were shot down by the Cubans. For years, the CIA refused to admit the involvement of these U.S. servicemen in the invasion, even though Castro’s government announced it had the body of an American pilot on the day it shot his plane down. After preserving the remains of the pilot, Captain Thomas Willard Ray, for years, Cuba returned his body to his family in 1979. For its part, the CIA waited until the 1990s—and the declassification of many Bay of Pigs-related documents—to admit Ray’s link to the agency and award him its highest honor, the Intelligence Star.
4. After being publicly interrogated and branded as “yellow worms,” the surviving members of Brigade 2506 were finally released in December 1962, after 20 months in captivity.
During the months after the failed invasion at Playa Girón, Cuba and the United States began negotiating for the release of hundreds of surviving brigadistas, then being held by Castro’s government. In May 1961, Castro proposed exchanging the POWs for 500 large tractors; he later upped his request to $28 million in U.S. dollars. Finally, in December 1962, Castro and the American lawyer James B. Donovan agreed to exchange the 1,113 prisoners for $53 million in food and medicine, to be raised through private donations and corporate sponsorships. (At the time, Donovan was fresh off negotiating the complicated exchange of captured American pilot Francis Gary Powers for the Soviet spy Rudolf Abel, events that were dramatized in Steven Spielberg’s acclaimed 2015 film “Bridge of Spies,” which starred Tom Hanks as Donovan.) On December 28, President Kennedy received the brigade’s flag in an emotional “welcome back” ceremony at the Orange Bowl in Miami, promising that it “will be returned to this brigade in a free Havana.”
5. Revolutionary leader Che Guevara actually thanked President Kennedy and the United States for the Bay of Pigs invasion.
In August 1961, representatives of all American nations convened at Punta del Este in Uruguay for the Inter-American Economic and Social Council. At a cocktail party, the Cuban revolutionary leader Ernesto “Che” Guevara spoke with Richard Goodwin, then an adviser and speechwriter for President Kennedy. As Goodwin recorded in a secret White House memo declassified in the 1990s, the conversation ranged from the possibility of a “modus vivendi,” or interim settlement, between Cuba and the United States, to the U.S. naval base at Guantanamo Bay and the problems facing Castro’s revolutionary government. Near the end of the conversation, Goodwin wrote, Che “went on to say that he wanted to thank us very much for the invasion—that it had been a great political victory for them—enabled them to consolidate—and transformed them from an aggrieved little country to an equal.”
|
Scientific name: Asplenium ruta-muraria
With club-shaped leaflets on its fronds, wall-rue is easy to spot as it grows out of crevices in walls. Plant it in your garden rockery to provide cover for insects.
Statistics: Height: up to 20cm
When to see: January to December
About: Wall-rue is a small fern that can be found growing on limestone rocks and in crevices in old walls throughout town and country. It is often found close to other common species of rocks and walls, such as maidenhair spleenwort and hart's-tongue fern.
Wall-rue, like other ferns, reproduces using spores, which ripen from June to October.
|
Integrated fish cum poultry farming
- Much attention is being given to the development of poultry farming in India, and with improved scientific management practices, poultry has now become a popular rural enterprise in different states of the country.
- Apart from eggs and chicken, poultry also yields manure, which has high fertilizer value.
- The production of poultry droppings in India is estimated to be about 1,300 thousand tonnes, containing about 390 metric tonnes of protein.
- Utilization of this huge resource as manure in aquaculture will afford better nutrient conversion than its use in agriculture.
Stocking Density of Fish
- The application of poultry manure to the pond provides a nutrient base for a dense bloom of phytoplankton, particularly nanoplankton, which supports intense zooplankton development.
- The zooplankton has an additional food source in the form of bacteria, which thrive on the organic fraction of the added poultry dung. This indicates the need for stocking phytoplanktophagous and zooplanktophagous fishes in the pond.
- In addition to phytoplankton and zooplankton, there is a high production of detritus at the pond bottom, which provides the substrate for colonization of micro-organisms and other benthic fauna especially the chironomid larvae.
- Another addition will be macro-vegetation feeder grass carp, which, in the absence of macrophytes, can be fed on green cattle fodder grown on the pond embankments.
- The semi digested excreta of this fish forms the food of bottom feeders.
- For exploitation of the above food resources, polyculture of three Indian major carps and three exotic carps is taken up in fish cum poultry ponds.
- The pond is stocked after the pond water gets properly detoxified.
- The stocking rates vary from 8000 - 8500 fingerlings/ha and a species ratio of 40 % surface feeders, 20 % of column feeders, 30 % bottom feeders and 10-20 % weedy feeders are preferred for high fish yields.
- Mixed culture of only Indian major carps can be taken up with a species ratio of 40 % surface, 30 % column and 30 % bottom feeders.
- In the northern and north-western states of India, the ponds should be stocked in March and harvested in October-November, because the severe winter affects the growth of the fish.
- In the southern, coastal and north-eastern states of India, where the winter season is mild, the ponds should be stocked between June and September and harvested after rearing the fish for 12 months.
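The stocking arithmetic above can be made concrete with a short sketch. The density (8000/ha) and the 40/20/30/10 species-ratio split are the figures quoted in the text; the helper function itself is our illustration, not part of any published method:

```python
# Hypothetical helper: split a total stocking number across feeding niches.
def stocking_plan(total_fingerlings, ratios):
    """ratios maps feeding niche -> fraction of the stock; fractions must sum to 1."""
    assert abs(sum(ratios.values()) - 1.0) < 1e-9, "ratios must sum to 100%"
    return {niche: round(total_fingerlings * frac) for niche, frac in ratios.items()}

# One hectare at the lower quoted density of 8000 fingerlings/ha,
# using the six-species ratio from the text:
plan = stocking_plan(8000, {"surface": 0.40, "column": 0.20,
                            "bottom": 0.30, "weedy": 0.10})
```

This yields 3200 surface feeders, 1600 column feeders, 2400 bottom feeders and 800 weedy feeders for the pond.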
Use of poultry litter as manure
- The fully built up deep litter removed from the poultry farm is added to fish pond as manure.
Two methods are adopted in recycling the poultry manure for fish farming.
1. The poultry droppings from the poultry farms are collected, stored in suitable places, and applied to the ponds in regular instalments.
- The manure is applied to the pond at the rate of 50 kg/ha/day every morning after sunrise.
- Application of litter is deferred on days when an algal bloom appears in the pond. This method of manurial application is controlled.
2. Constructing the poultry house so that it partially covers the fish tank, directly recycling the droppings for fish culture.
- Direct recycling of excess manure, however, causes decomposition and depletion of oxygen, leading to fish mortality. It has been estimated that one ton of deep-litter fertilizer is produced by 30-40 birds in a year.
- As such, 500 birds with a total live weight of 450 kg may produce about 25 kg of wet manure per day, which is adequate for one hectare of water area under polyculture.
- The fully built-up deep litter contains 3% nitrogen, 2% phosphate and 2% potash. Built-up deep litter is also available from large poultry farms.
- The farmers who do not have the facilities for keeping poultry birds can purchase poultry litter and apply it in their farms.
- Aquatic weeds are provided for the grass carp.
- Periodical netting is done to check the growth of the fish. If algal blooms appear, they should be controlled.
- Fish health should be checked regularly and diseased fish treated.
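The manure figures above (500 birds of 450 kg total live weight yielding roughly 25 kg of wet manure per day, enough for one hectare) can be turned into a quick sizing check. The linear per-bird scaling below is our assumption, not a figure from the text:

```python
# Assumed linear scaling from the text's figures: 500 birds ~ 25 kg wet manure/day.
MANURE_KG_PER_BIRD_PER_DAY = 25.0 / 500   # ~0.05 kg/bird/day (assumption)
POND_NEED_KG_PER_HA_PER_DAY = 25.0        # adequate for 1 ha under polyculture

def daily_manure_kg(birds):
    """Estimated wet manure output of the flock per day."""
    return birds * MANURE_KG_PER_BIRD_PER_DAY

def hectares_supported(birds):
    """Pond area that the flock's daily output can fertilize."""
    return daily_manure_kg(birds) / POND_NEED_KG_PER_HA_PER_DAY
```

Under this assumption a 1000-bird flock would support about 2 ha of pond; a farmer with less pond area would need to store or sell the surplus litter rather than recycle it directly, given the oxygen-depletion risk noted above.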
Poultry husbandry practices:
- The egg and chicken production in poultry raising depends upon multifarious factors such as breed, variety and strain of birds, good housing arrangements, balanced feeding, and proper health care.
a. Housing of birds
- In integrated fish-cum-poultry farming the birds are kept under intensive system. The birds are confined to the house entirely.
- The intensive system is further divided into two types: the cage system and the deep litter system.
- The deep litter system is preferred over the cage system due to higher manurial values of the built up deep litter.
- In the deep litter system, 250 birds are kept and the floor is covered with litter. Dry organic material such as chopped straw, dry leaves, hay, groundnut shells, broken maize stalks, or sawdust is used to cover the floor to a depth of about 6 inches.
- The birds are then kept over this litter, and a space of about 0.3-0.4 square meters per bird is provided.
- The litter is regularly stirred for aeration, and lime is used to keep it dry and hygienic.
- In about two months' time it becomes deep litter, and in about 10 months' time it becomes fully built-up litter. This can be used as fertilizer in the fish pond.
- Fowls proven for their ability to produce more and larger eggs, as in the case of layers, or to gain body weight rapidly, as in the case of broilers, are selected along with the fish.
- The poultry birds under deep litter system should be fed regularly with balanced feed according to their age.
- Grower mash is provided to the birds during the age of 9-20 weeks at a rate of 50-70 gm/bird/day, whereas layer mash is provided to the birds above 20 weeks at a rate of 80-120 gm/bird/day.
- The feed is provided to the birds in feed hoppers to avoid wastage and to keep the house in hygienic condition.
b. Egg laying
- Each pen of laying birds is provided with nest boxes for laying eggs.
- Empty kerosene tins make excellent nest boxes.
- One nest should be provided for 5-6 birds.
- Egg production commences at the age of weeks and then gradually decline.
- The birds are usually kept as layers up to the age of 18 months. Each bird lays about 200 eggs/yr.
- Some fish attain marketable size within a few months.
- Keeping in view the size of the fish, prevailing rate and demand of the fish in the local markets, partial harvesting of table size fish is done.
- After harvesting partially, the pond should be restocked with the same species and the same number of fingerlings depending upon the availability of the fish seed.
- Final harvesting is done after 12 months of rearing. Fish yield ranging from 3500-4000 Kg/ha/yr and 2000-2600 Kg/ha/yr are generally obtained with 6 species and 3 species stocking respectively.
- Eggs are collected daily in the morning and evening. Every bird lays about 200 eggs/year.
- The birds are sold after 18 months of rearing as the egg laying capacity of these birds decreases after that period.
- Pigs can be used along with fish and poultry in integrated culture in a two-tier system. Chick droppings form direct food source for the pigs, which finally fertilise the fish pond.
- Depending on the size of the fish ponds and their manure requirements, such a system can either be built on the bund dividing two fish ponds or on the dry-side of the bund.
- The upper panel is occupied by chicks and the lower by pigs.
Submitted by naipagropediaraichur on Fri, 21/12/2012 - 11:30
|
While today may be just another ordinary day, some people associate March 15 with misfortune, menace and superstition. Today is the Ides of March.
Many years ago, Roman leader Julius Caesar was warned by an all-knowing, albeit anonymous, soothsayer. The wise man warned him he was in imminent danger and to “Beware the Ides of March,” a day in the Roman calendar that marked the middle of the month. But Caesar did not heed the advice and was murdered by a group of conspirators on March 15, 44 B.C. The “liberators,” made up of friends and foes alike, brutally stabbed Julius Caesar more than 20 times on the Ides of March. Caesar bled to death in the Roman Senate.
Whether you believe March 15th is filled with doom and gloom or is just like any other day of the year, don’t let history repeat itself. Note to self - if a wise soothsayer happens to warn you of impending danger, stay home!
How to Celebrate the Ides of March
William Shakespeare used the infamous line in one of his famous plays, Julius Caesar.
What other world-wide events have happened on March 15th?
Besides Eva Longoria, Jimmy Swaggart, Robert Nye, Mike Love, Fabio and Will.i.am, who else was born on March 15th?
Some people celebrate March 15th with a Toga Party. Learn how to make a Toga out of sheets.
Whether you are superstitious or not, take a look at some of the more common superstitions.
Watch the new flick, The Ides of March, written by and starring the handsome hunk, George Clooney.
March 15 is also Everything You Think is Wrong Day. Just sayin'...
|
|State of the Environment Tasmania||Home|
|Cultural Heritage||Index of chapters|
Definitions of cultural heritage are highly varied. Defining heritage can be the product of a single person or a group of people; it can be personal or social. Regardless, a fundamental question remains whether heritage is property ('things'), or a social, intellectual, and spiritual inheritance. It is our contention that human actions, our ideas, customs and knowledge are the most important aspects of heritage. Cultural resource managers seek to understand and conserve these aspects through work on landscapes, places, structures, artefacts, and archives, and through work with individuals and the community (Davison 2000; Aplin 2002).
UNESCO (the United Nations Educational Scientific and Cultural Organisation) defines heritage as 'the product and witness of the different traditions and of the spiritual achievements of the past and . . . thus an essential element in the personality of peoples' (Davison 1991). A simpler definition is that heritage is what we value from the past. These definitions reflect what we value or reject in our present surroundings, and anticipate for the future (Davison 1991).
These definitions imply difficult questions about the purposes of heritage protection. Why save old buildings or fossil landscapes, for example? A continuing trend nationally is to answer this question in terms of economic benefit through tourism activities. This answer suggests that the main value of heritage is its capacity to generate employment and income (Davison 2000). Heritage is an important economic asset, but it is also clearly much more.
The complexity of these issues requires a wide definition of heritage, one that acknowledges that, at any given time, some meanings of heritage are likely to be more or less important to different groups of people. Community must produce heritage, and community must make decisions about heritage.
Cultural heritage management in Tasmania is concerned with what has been and what will be retained from the past, and how it will be used in the present and the future. The most fundamental aim is to understand society and its culture, and to use that knowledge to shape Tasmania's present and future.
Tasmania has inherited the cultural heritage of both Aborigines and our developing multicultural society. It is expressed through surviving heritage landscapes, places and features, objects, archival material, memory, and the social and contemporary significance they each have.
Tasmanian heritage is traditionally divided into two categories: Aboriginal and historic (including maritime). It continues to be difficult to communicate the diversity and complexity of 35,000 years of ongoing Aboriginal material cultural practices past and present. Equally, the mere two hundred years of non-Aboriginal occupation of Tasmania belies the richness and complexity of the historic and maritime cultural material left behind as our heritage.
Much of Tasmania's historic heritage has been, and continues to be, imported from other parts of the world. But Tasmania is a distinct melting pot. Examples of heritage with similar cultural, social and economic origins can be found elsewhere in the world, but here the Tasmanian environment and social make-up have shaped it in a unique way. Tasmanian heritage needs to be seen both in the context of this State and in a global context.
Cultural heritage can also include the intangible records of our past such as memories, stories and songs, ways of life, customs, attitudes and interactions between individuals and communities, and even the words only used in old crafts and trades. Sometimes these intangibles can be collected as oral histories and stored on tapes or videos or, as with crafts, trades, dances and customs, they can be passed on in a living, viable form to the next generation.
So rich and complex is this suite of heritage that in number and variety it defies easy description, cataloguing, and understanding. Each expression of heritage interrelates with other similar and dissimilar expressions in complex systems, which makes it difficult to discern change over time. The first SoE Report determined that any attempt to provide an intelligible all-inclusive description of this heritage and its components is virtually impossible. The report concluded that there was a danger in artificially imposing guidelines and systems of classification designed to protect individual aspects of heritage because it could ultimately divorce this heritage from the wider context that provides its meaning. Considering these issues, the report indicated that the extent and condition of much of Tasmania's cultural heritage was largely unknown, and this continues to be the case.
This chapter employs a new strategic and systematic process for the integrated identification and assessment of cultural heritage protection priorities. The State Government and the University of Tasmania are currently developing this process in response to the issues raised and recommendations presented in the previous SoE Report. A peer reviewed Australian Research Council Grant has been awarded to the project, providing further support for the on-going development of the process.
The process involves procedures adapted from the framework originally created to look objectively at natural heritage values on private land drawn up as part of the Regional Forest Agreement. It relies upon identifying groups of heritage items thematically, and grouping them geographically or typologically, to assist in determining the comparative conservation needs of this heritage. It involves a simple integrated process for the systematic collection of data to drive a comparative assessment of the condition of and pressure on Tasmania's cultural heritage. The indicators employed for assessing condition and pressure are based on the categories of threatened species set out in the Australian Government's Environment Protection and Biodiversity Conservation Act 1999 (section 179).
The system builds on the Burra Charter of the Australian component of the International Council on Monuments and Sites (ICOMOS). This Charter provides guidance for the conservation and management of places of cultural significance (cultural heritage places), and recognises the need to involve people in the decision-making process, particularly those that have strong associations with a place.
The process applied here is designed to look at the myriad characteristics that go to make up individual heritage items, not just heritage places, and rank them according to their condition and the pressures to which they are currently subjected, or are likely to be subjected in the short, medium, or long-term future.
The first stage of the process entails 'mapping' significant themes in Tasmania's history, and identifying and assessing the condition and pressure of surviving expressions of these themes, such as:
The social and contemporary significance of each of these expressions or 'cultural heritage categories' is also determined and the need for a response is ranked. The condition of each heritage category is scored according to category-specific criteria. To assess pressure, a score for both threat and rarity is given, because either can be an indication that management action is required. The assessment system is designed to be applied flexibly, either across category levels, or for items within each category (for example, separate archival records such as administrative correspondence, minutes, and personal accounts).
The second stage supplements our understanding of the condition of surviving heritage, through investigation of the capability of agencies and individuals to respond to emerging heritage issues.
This process identifies the areas and the heritage items likely to be most suitable for conservation management, and to progressively update priorities based on progress in identifying and securing appropriate areas and other heritage items for protection and active management.
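Purely as an illustration of how the condition and pressure scores described above might be combined into a response-priority ranking, consider the sketch below. The score ranges, the additive weighting, and the max-of-threat-and-rarity rule are our assumptions, not the actual SoE assessment methodology:

```python
# Hypothetical scoring sketch: condition, threat and rarity each scored
# 0 (good / low) to 5 (poor / high). Pressure takes the worse of threat and
# rarity, since the text notes either alone can signal the need for action.
def response_priority(condition, threat, rarity):
    pressure = max(threat, rarity)
    return condition + pressure  # higher total = more urgent response

# Invented example items, not real assessments:
items = {"convict-era structures": (4, 3, 5), "archival records": (2, 1, 2)}
ranked = sorted(items, key=lambda name: response_priority(*items[name]),
                reverse=True)
# ranked lists the category needing attention first
```

A real implementation would score items against the category-specific criteria the text mentions, but even this toy version shows how a comparative assessment can drive management priorities.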
To date, not all aspects of the strategic and systematic process have been developed. It must be considered as a work in progress. In particular, the development of themes and how they are ranked, and how different categories of heritage are linked, remain unresolved. A description of the principles and development requirements for this work-in-progress are available: Understanding and Defining Heritage Assets.
This chapter applies the above strategic and systematic process in an assessment of the condition and pressures of the surviving cultural heritage associated with the Macquarie Harbour Penal Station, and provides a more general report on the various initiatives, programs and legislation that have been developed since the previous SoE Report (1997). The cultural heritage categories have been reported on through separate issue reports. A separate case study is also provided that demonstrates the integrated process of the scoring system for all of the cultural heritage categories in relation to the Macquarie Harbour Penal Station.
The richness of Tasmania's cultural heritage has been spectacularly reconfirmed by popular and specialist observations during the last five years. The volume and quality of the sites, structures, and the ideas, knowledge and customs of the peoples of Tasmania, from the original people who lived on the land over the past thirty millennia to the more recent history of the last two centuries, continue to benefit the Tasmanian community and visitors to our State. The concern raised in the previous SoE Report still remains: this heritage is under stress because of inattention, a poor information base, and bad practices. Increased protection of historic heritage values has been gained through the Historic Cultural Heritage Act 1995 and the formation of the Tasmanian Heritage Council and the Department of Tourism, Parks, Heritage and Arts.
Many people and organisations have assisted greatly in compiling the State of the Environment Report. For this chapter, the Commission would like to acknowledge the kind assistance of the following:
Robyn Eastley, Caroline Evans, Brendan Lennard, Hamish Maxwell-Stewart, Sean McPhail, Brett Noble, Shane Roberts, Jim Russell, Stephen Waight, Fiona Wells, Elspeth Wishart.
Contact the Commission on: email: firstname.lastname@example.org Phone: (03) 6233 2795 (within Australia) Fax: (03) 6233 5400 (within Australia) Or mail to: RPDC, GPO Box 1691, Hobart, TAS, 7001, Australia
Last Modified: 14 Dec 2006
|
Enveloping rage: Georgia starts a postal war with Abkhazia
An 8th century AD map of the Abkhazian kingdom, a picture of Mery Avidzba, the first Abkhazian female war pilot, the 15th anniversary of the republic's independence: no matter what postage stamp you look at, you discover new facts about this small Caucasus republic.
Although the stamps still cannot be used outside Abkhazia due to the postal blockade, Eduard Pilia, one of the authors of the Abkhazian stamp project, says collectors from Russia and other countries already consider them rare and valuable.
“Unfortunately, you can’t buy them in Russia yet, but we give these stamps free to many collectors who travel to Abkhazia to get them”, Pilia says.
Georgia says Abkhazian stamps are illegitimate.
Formerly part of Soviet Georgia, Abkhazia declared independence from Tbilisi in the early 1990s. A violent conflict followed, claiming thousands of lives. Russian peacekeepers stopped the bloodshed, but the conflict simmered on for 15 years.
In August 2008 Russia recognized Abkhazia’s and South Ossetia’s independence, starting direct cooperation with both former Georgian breakaway territories.
Georgia maintains the blockade of Abkhazia. Sanctions include economic, military and diplomatic measures, since Tbilisi considers the republic a part of its territory.
Georgian senior officials say that since the Universal Postal Union (UPU), a specialized agency of the United Nations, also considers Abkhazia a part of Georgia, only the Georgian ministry of economic development has the right to issue postage stamps.
Abkhazian authorities are hoping that sooner or later Russia, Georgia and the UPU will recognize the legitimacy of the new postage stamps.
|
What Is A Portion Of A Document In Which You Set Certain Page Formatting Options?
A section is a portion of a document in which you set certain page formatting options, according to Microsoft. It is like a room in a house: you can set different formatting options for each room.
To create a section break in Word, you can use the Breaks option under the Page Layout tab. You can choose from different types of section breaks such as Next Page, Continuous, Even Page, and Odd Page depending on your needs.
You can also use section breaks to change the layout or formatting in one section of your document. When you insert a section break, choose the type of break that fits the changes you want to make. For example, the Next Page command inserts a section break and starts the new section on the next page.
|
Foreign policy making in the Middle East
It is frequently claimed that foreign policy making in Middle East states is either the idiosyncratic product of personalistic dictators or the irrational outcome of domestic instability (Aarts, 911—25). In fact, it can only be adequately understood by analysis of the multiple factors common to all states, namely: (1) foreign policy determinants (interests, challenges) to which decision-makers respond when they shape policies; and (2) foreign policy structures and processes which factor the ‘inputs’ made by various actors into a policy addressing these determinants.
Foreign policy determinants
In any states system state elites seek to defend the autonomy and security of the regime and state in the three separate arenas or levels in which they must operate, although which level dominates attention in a given time and country may vary considerably.
The regional level geopolitics: In a states system like the Middle East, where regional militarization has greatly increased external threats, these often take first place on states’ foreign policy agendas. While, generally speaking, external threat tends to precipitate a search for countervailing power or protective alliances (or, these lacking, attempts to appease the threatening state), it is a state’s geopolitical position that specifically defines the threats and opportunities it faces. It constitutes a state’s neighborhood where border conflicts and irredentism are concentrated and buffer zones or spheres of influence sought. Position determines natural rivals: thus, Egypt and Iraq, stronger river valley civilizations, are historical competitors for influence in the weaker, fragmented Mashreq; Iran and Iraq are natural rivals for influence in the Gulf (Berberoglu). A state’s power position in the regional system, shaped by its resources, size of territory and population and the strategic importance or vulnerability of its location, shapes its ambitions: hence small states (Jordan, Gulf States) are more likely to seek the protection of greater powers and larger ones to establish spheres of regional influence (e.g. Syria in the Levant, Saudi Arabia in the GCC). (Brand)
The international level dependency: The impact of the core great powers and the international political economy constitutes a dilemma for regional states. The core is both the indispensable source of many crucial resources and of constraints on the autonomy of regional states. The constraining impact of the core ranges from the threat of active military intervention or economic sanctions to the leverage derived from the dependency of regional states, maximized where there is high need and a lack of alternatives for the client state. In extreme cases, foreign policy may be chiefly designed to access economic resources by appeasing donors and investors. Vulnerability to core demands, such as structural adjustment, can inflame domestic opposition. However, shared security and economic interests between the core powers and status quo elites may make such costs seem worth incurring.
The domestic level identity: In most Middle Eastern states identity is complex, with sub- and supra-state identities contesting exclusive loyalty to the state. Where sub-state identities are strong, they may produce irredentist pressures on decision-makers. Where suprastate Arab and/or Islamic identities are strong, regime legitimacy may be contingent on adherence to Arab-Islamic norms in foreign policy. This may mean contesting the penetration of the region by the core powers and it may de-legitimize relations with certain states: thus, while some Arab states have been pushed by economic dependency or security considerations to establish relations with Israel, these remain largely illegitimate at the societal level. (Abdel-Fadil, 119—34)
The impact of identity is not, of course, uniform. Where there are high levels of public mobilization and low levels of state consolidation, elites are more vulnerable to Pan-Arab or Pan-Islamic opinion in foreign policy making. Because supra-state identity is often an instrument of opposition forces or of subversion by rival states, status quo elites have an incentive to create state-centric identities compatible with sovereignty and to pursue the higher levels of state formation that enhance their autonomy from such pressures. However, where revisionist social forces dominate states, they may foster and use supra-state identities in the service of their foreign policy.
In the past, the United States has chalked up a series of noteworthy successes in the Middle East, from the Camp David accords and the formation of a multinational force to drive Hussein from Kuwait, to the Madrid peace conference. But these successes have hinged upon America’s role as an honest broker, intent on upholding international law and reconciling its policies with concern for human rights and justice. When the United States fails to act as a good faith partner, all the resources that it can bring to bear have little discernible impact upon the status quo.
However, core—periphery relations merely set the outside parameters within which Middle East regional politics are conducted. Moreover, far from being static, they are constantly contested and periodically stimulate anti-imperialist movements which, if they take state power, attempt to restructure these relations. Whether nationalist states can do this, however, depends on systemic structures. When there is a hegemonic power (UK, USA) able to ‘lay down the law’ on behalf of the world capitalist system (in the Middle East ensuring its access to cheap energy), and especially if the regional system is simultaneously divided (the usual condition), it is easy for external powers to exploit local rivalries to sustain their penetration of the region. Conversely, when the core was split, as under Cold War bi-polarity, nationalist states were able to exploit superpower rivalry to win protection, aid and arms from the number two state, the Union of Soviet Socialist Republics (USSR), enabling them to pursue nationalist foreign policies, and to dilute economic dependency. Moreover, as Thompson (151—67) has shown, the Middle East is a partial exception to Galtung’s feudal model in that, while fragmented economically and politically, it enjoys trans-state cultural unity which nationalist states have exploited to mobilize regional solidarity against the core. Thus, the conjuncture of the Cold War and the spread of Pan Arabism allowed Nasser’s Egypt to sufficiently roll back imperialist influence to establish a relatively autonomous regional system. Additionally, in the rise of the Organization of Petroleum Exporting Countries (OPEC), south—south solidarity produced exceptional financial power that, while failing ultimately to raise the region from the economic periphery, arguably transformed the position of the swing oil producer, Saudi Arabia, from dependence into asymmetric interdependence. 
However, favorable conditions for regional autonomy have, particularly since the end of the oil boom and Cold War, been largely reversed. The West’s restored ability to intervene militarily and impose economic sanctions and loan conditionality has revived key features of the age of imperialism at the expense of regional autonomy (Aarts, 1—12). No analysis of the international politics of the region can be convincing that does not take account of the profound impact of the ongoing struggle for regional autonomy from external control.
Aarts, Paul, ‘The New Oil Order: built on sand?’ Arab Studies Quarterly, 16:2, 1—12, 1994.
Aarts, Paul, ‘The Middle East: A region without regionalism or the end of exceptionalism?’ Third World Quarterly, 20:5, 911—25, 1999.
Abdel-Fadil, Mahmoud, ‘Macroeconomic tendencies and policy options in the Arab region’, in Laura Guazzone, ed., The Middle East in Global Change, London, Macmillan Press, 119—34, 1997.
Berberoglu, Berch, Power and Stability in the Middle East, London, Zed Books, 1989.
Brand, Laurie, Jordan’s Inter-Arab Relations: The Political Economy of Alliance Making, New York, Columbia University Press, 1995.
Thompson, William R., ‘The Arab sub-system and the feudal pattern of interaction: 1965’, Journal of Peace Research, 7, 151—67, 1970.
Dessouki, Ali ad-Din Hillal, ‘The new Arab political order: implications for the eighties’, in Malcolm Kerr and El Sayed Yassin, eds, Rich and Poor States in the Middle East, Boulder, CO, Westview Press, 319—47, 1982.
Fahmy, Ismail, Negotiating for Peace in the Middle East, Baltimore, Johns Hopkins University Press, 1983.
Smith, Pamela Ann, ‘The exile bourgeoisie of Palestine’, Middle East Report, 16:5, no. 142, September—October, 23—7, 1986.
Taylor, Alan, The Arab Balance of Power, Syracuse, NY, Syracuse University Press, 1982.
Telhami, Shibley, Power and Leadership in International Bargaining: The Path to the Camp David Accords, New York, Columbia University Press, 1990.
|
Until Ubuntu 13.04, Ubuntu recommended all users use the 32-bit edition of Ubuntu on its download page. However, this recommendation has been removed for a reason — users of modern PCs are better off with the 64-bit edition.
While Microsoft has been installing the 64-bit edition of Windows on modern PCs by default for years, Ubuntu has been slower to recommend the use of its 64-bit edition — but that has changed.
32-bit vs. 64-bit: What’s the Difference?
We covered the difference between 32-bit and 64-bit computing when we looked at the difference between the 32-bit and 64-bit editions of Windows 7.
In a nutshell, all modern Intel and AMD processors are 64-bit processors. 64-bit processors can run 64-bit software, which allows them to use larger amounts of RAM without any workarounds, allocate more RAM to individual programs (particularly important for games and other demanding applications), and employ more advanced low-level security features.
However, 64-bit processors are backwards-compatible and can run 32-bit software. This means that you can install a 32-bit operating system on a 64-bit computer. While 64-bit operating systems were getting their kinks worked out, 32-bit operating systems were recommended.
Note that you can still run 32-bit software on a 64-bit operating system, so you should be able to run the same programs, even if you opt for a 64-bit operating system. In fact, the majority of programs installed on 64-bit editions of Windows are 32-bit programs. On Linux, the majority of programs will be in 64-bit form, as Linux distributions can recompile the open-source software for 64-bit CPUs.
Past 64-bit Problems
Like Windows, which had teething problems with 64-bit consumer operating systems back in the “Windows XP 64-bit Edition” days, Ubuntu and other desktop Linux systems have experienced a variety of problems with the 64-bit edition of their software.
- Flash (and other browser plugin) Compatibility: Adobe’s Flash plug-in was once only available in 32-bit form, while a 64-bit browser came with the 64-bit edition of Ubuntu. This meant that users had to install a separate 32-bit browser or use nspluginwrapper, a hacky solution that allowed 32-bit plugins to run in 64-bit browsers. Eventually, Adobe released a preview version of its 64-bit Flash plugin, but even this plugin had some issues. At this point, a stable version of Flash for 64-bit systems is available, so browser plugins should work fine on both 32-bit and 64-bit operating systems.
- Software Compatibility: 32-bit applications can run on 64-bit operating systems, but they need the appropriate 32-bit libraries to function. A “pure” 64-bit edition of Linux wouldn’t be able to run 32-bit applications because it doesn’t have the appropriate libraries. At this point, the 32-bit compatibility libraries have been fairly well tested and can be quickly installed from the package manager — they can even be automatically installed when you try to install a package that requires them.
- Bugs: Fewer users used the 64-bit editions of Ubuntu, so they weren’t as well-tested and bugs occasionally cropped up — particularly with the 32-bit compatibility libraries. However, many more people now use the 64-bit edition of Ubuntu, so bugs are fixed much more quickly.
- Installation Problems: One of the main reasons Ubuntu recommended new users download the 32-bit edition was that it was guaranteed to install on their systems, whether they had 32-bit or 64-bit processors. If Ubuntu recommended the 64-bit edition, users with old computers might try to install it and fail to do so. However, 64-bit systems have become more and more common — unless you use a very old computer, your computer probably has a 64-bit processor.
Luckily, Linux uses primarily open-source drivers, so you shouldn’t need old hardware drivers that are only available in 32-bit form.
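On Debian-based systems like Ubuntu, the multiarch mechanism behind those 32-bit compatibility libraries can also be invoked by hand. A minimal sketch (the `libc6:i386` package here is only an illustrative example; in practice the package manager pulls in whatever 32-bit libraries a 32-bit package declares as dependencies):

```shell
# Enable the 32-bit (i386) package architecture alongside the native amd64 one,
# refresh the package lists, then install a 32-bit library as an example.
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386
```

Packages suffixed with `:i386` are the 32-bit builds; once the architecture is enabled, they install side by side with their 64-bit counterparts.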
Why You Should Probably Use the 64-bit Edition
At this point, the kinks have been worked out — Flash works, it’s easy to install 32-bit software, bugs aren’t common, and you probably have a 64-bit CPU. If you’re on the fence, it’s time to take the plunge and switch to the 64-bit edition of Ubuntu.
- Performance: Phoronix has taken a look at the performance difference between the 32-bit and 64-bit editions of Ubuntu 13.04. They found that the 64-bit edition of Ubuntu had superior performance in real-world benchmarks.
- UEFI Compatibility: The 32-bit edition of Ubuntu doesn’t work with the UEFI firmware found on recent computers that come with Windows 8, so you’ll need to install the 64-bit edition of Ubuntu on them.
- Memory and Security Features: The same memory and security factors we mentioned for Windows 7 also apply to Linux. If you want your system to have the ability to assign more memory to individual processes and use the latest low-level security features, you’ll need the 64-bit edition of Ubuntu.
The main problems with 64-bit editions of Linux have been solved, so it’s a good time to switch to the 64-bit version.
When You Should Use the 32-bit Edition
If you still have a 32-bit processor, you’ll want to use the 32-bit edition. You may also want to use the 32-bit edition if you have proprietary hardware drivers that are only available in 32-bit form, but this is very unlikely to happen on Linux — it should primarily apply to Windows users.
To test whether your Ubuntu computer has a 32-bit or 64-bit CPU, run the lscpu command in a terminal. A 64-bit CPU will be able to run in both 32-bit and 64-bit modes, while a 32-bit CPU will only be able to run in 32-bit mode.
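For instance, the following quick checks (any modern Ubuntu installation has these tools) report both what the operating system is running as and what the CPU is capable of:

```shell
# Word size of the running OS: prints 32 or 64.
getconf LONG_BIT

# Machine architecture of the kernel: x86_64 indicates a 64-bit OS.
uname -m

# CPU capability as reported by lscpu: a 64-bit-capable CPU shows
# "CPU op-mode(s): 32-bit, 64-bit".
lscpu | grep 'op-mode'
```

Note the distinction: `getconf LONG_BIT` reports the word size of the installed OS, so it prints 32 on a 32-bit Ubuntu even when the CPU itself is 64-bit capable, while `lscpu` reports what the hardware can do.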
Have you found any issues with the 64-bit edition of Ubuntu, or have you been using it for a long time without any problems? Leave a reply and share any experience you have!
|
What the Goddess Athena Can Teach Us About Mentoring
Posted on August 25, 2017
In Homer’s Odyssey, Athena, the Goddess of Wisdom, disguises herself as an old friend of Odysseus named ‘Mentor’ so that she can impart wisdom and courage to the young prince Telemachus (Odysseus’ son). Athena leads Telemachus to the inner knowledge that dwells within him, and gives Odysseus a vision of something larger than himself, something that he must strive to do for those in his sphere of influence (mythicjourneys.com).
This mythical tale from ancient Greece is an archetypal story about mentoring: a powerful tool for enhancing personal and professional growth, increasingly used in business today.
Mentoring is a developmental partnership between two people, normally working in a similar field or sharing similar experiences, and is based on mutual respect, empathy, and trust (MentorSet.com). In the same way that Athena guides Telemachus and Odysseus, the mentor’s role is to support the mentee by listening, sharing their knowledge and experience, providing different perspectives and offering feedback to help the mentee progress in their career. As a result, the mentee gains new skills, knowledge, connections, ideas and approaches to their professional journey, receiving insights from someone who’s “been there”.
Forbes published an article at the beginning of this year stating that:
“Having a great mentor is a key factor to improving employee engagement among millennials. Millennials planning to stay with their employer for more than five years are twice as likely to have a mentor (68%) than not (32%).”
(forbes.com Make 2017 The Year To Get Serious About Mentoring).
And in 2016 mentorcloud.com published the following: “Millennials place training, mentoring, and flexible work arrangements as priorities above financial benefits… mentoring is well and above their preferred way of learning…75% see mentorship as crucial to their success” (See Millennial Employee Lifecycle: Part One on their website).
Today ‘mentor’ has evolved to mean trusted advisor, friend, teacher and wise person, and mentors have an important role to play in organisations across all levels. There are many examples of mentoring relationships among business giants: Steve Jobs mentored Mark Zuckerberg; Oprah Winfrey was mentored by the celebrated author and poet, the late Maya Angelou; and fashion designer Christian Dior mentored fellow haute couture designer Yves St. Laurent (see The Chronicle of Evidence Based Mentoring).
At Redwood and Co. we consider mentoring as both a powerful approach to developing leadership capabilities, as well as a way to encourage collaborative, creative and authentic learning between peers in and across teams, leading to an increase in engagement and success that benefits the whole landscape of an organisation.
For more information on #LearningCreativelyCrafted and our bespoke creative learning experiences visit www.redwoodandco.com
|
Ambystoma californiense is isolated from Ambystoma tigrinum, with which it was once considered conspecific. It is endemic to California, and is found in the Central Valley and adjacent foothills and coastal grassland (Petranka 1998, Loredo et al. 1996).
Ambystoma californiense prefers a Mediterranean climate of cool wet winters and hot dry summers. These salamanders inhabit annual grasslands and open woodlands of foothills and valleys. Ground squirrel burrows are necessary for the survival of Ambystoma californiense (Petranka 1998, Loredo et al. 1996).
male SVL 80-108 mm
female SVL 79-118 mm
Ambystoma californiense has a broad rounded snout with small eyes. It is a lustrous black, marked with rounded or irregular yellow spots. The belly is grayish and may bear a few small, dull yellow spots. These salamanders have 12 costal grooves on their sides (Petranka 1998).
- Development - Life Cycle
Ambystoma californiense breeds from late winter into early spring in large temporary ponds. They are explosive breeders, meaning they emerge, breed quickly, and then return to their burrows; they may breed two or three times a year this way. Juveniles migrate from these ponds to underground burrows in the spring during the rains, and are especially vulnerable to dehydration and heat stress during their overland movement (Petranka 1998, Loredo et al. 1996, Holland et al. 1990). They are rarely seen, due to nocturnal breeding migrations and life in underground burrows (Loredo et al. 1996). Females attach one egg at a time to twigs, grass stems, vegetation, or detritus. These eggs are covered by a vitelline membrane and three jelly coats. They are distinguished by a pale yellow-brown coloring and are about 2 mm in diameter (Petranka 1998). Eggs hatch 2-4 weeks after deposition (Petranka 1998, Barry and Shaffer 1994). Larvae are yellowish gray and similar to adults, except for large dorsal fins extending onto the back and large feathery gills (Petranka 1998).
- Key Reproductive Features
- gonochoric/gonochoristic/dioecious (sexes separate)
Ambystoma californiense uses the burrows of Spermophilus spp. (ground squirrels) and other rodents near its natal pond. Adults are able to move at a rate of 50.8 meters per hour, while juveniles can only move at about 30.9 meters per hour. Adults use the same migratory pattern year after year (Petranka 1998, Loredo et al. 1996).
- Key Behaviors
Ambystoma californiense larvae eat aquatic invertebrates (Petranka 1998, Barry and Shaffer 1994). Adults are known to eat earthworms. They feed with a three-part gape cycle, tongue-extension cycle, and anterior head-body movement common to ambystomatids (Beneski et al. 1995).
Special Concern species in California (Holland et al. 1990)
Category I species on Federal Endangered Species List (Loredo et al. 1996)
Habitat loss is a big problem for Ambystoma californiense: urban development and agriculture are eliminating its natural habitat, and it is preyed upon by introduced species of fish and bullfrogs (Loredo et al. 1996). Ambystoma californiense has toxic skin secretions (Loredo et al. 1996), probably as a defense mechanism against the rodents whose burrows it shares. Ground squirrel populations are controlled throughout much of California (Petranka 1998), which is another way in which individuals are losing their homes. For this reason, how and where ground squirrels are controlled should be taken into consideration.
Jerry Redding II (author), Michigan State University, James Harding (editor), Michigan State University.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
- bilateral symmetry
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
Found in coastal areas between 30 and 40 degrees latitude, in areas with a Mediterranean climate. Vegetation is dominated by stands of dense, spiny shrubs with tough (hard or waxy) evergreen leaves. May be maintained by periodic fire. In South America it includes the scrub ecotone between forest and paramo.
animals which must use heat acquired from the environment and behavioral adaptations to regulate body temperature
A large change in the shape or structure of an animal that happens as the animal grows. In insects, "incomplete metamorphosis" is when young animals are similar to adults and change gradually into the adult form, and "complete metamorphosis" is when there is a profound change between larval and adult forms. Butterflies have complete metamorphosis, grasshoppers have incomplete metamorphosis.
having the capacity to move from one place to another.
- native range
the area in which the animal is naturally found, the region in which it is endemic.
- tropical savanna and grassland
A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome.
- temperate grassland
A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.
Barry, S., H. Shaffer. 1994. The status of the California tiger salamander (Ambystoma californiense) at Lagunita: a 50 year update. Journal of Herpetology, 28(2): 159-164.
Beneski jr, J., J. Larsen jr, B. Miller. 1995. Variation in the feeding kinematics of mole salamanders (Ambystomatidae: Ambystoma). Canadian Journal of Zoology, 73(2): 353-366.
Holland, D., M. Hayes, E. McMillan. 1990. Late summer movement and mass mortality in the California tiger salamander (Ambystoma californiense). Southwestern Naturalist, 35(2): 217-220.
Loredo, I., D. Van Vuren, M. Morrison. 1996. Habitat use and migration behavior of the California tiger salamander. Journal of Herpetology, 30(2): 282-285.
Petranka, J. 1998. Salamanders of the United States and Canada. Washington D.C.: Smithsonian Institution Press.
|
The planet is heating up. In fact, the last 7 years have been the warmest on record, as part of a long-term trend towards global warming. Scientists predict that the current record for hottest year will continue to be broken as the years go on.
While Earth Day is a great time to respect the planet we call home, highlight sustainability, and recognise the companies striving for carbon-neutral – perhaps it’s time that Earth Day is more than just one day on the corporate social media calendar.
Let’s take a look at why Earth Day became the phenomenon it is today. When Earth Day first began in 1970, it was a way of ensuring that governments took climate action seriously: there was no Clean Air Act, no EPA, and one American senator had had enough. From one necessity to another, Earth Day is now a way to remind businesses, both big and small, of their responsibility to the planet – how seriously they take that responsibility is up for debate.
No one needs to be reminded of just how serious climate change is, with conversations around global warming at an all-time high. Higher temperatures affect air quality and crop production and increase the spread of infectious diseases. There is also a massively increased risk and intensity of natural disasters – remember the wildfires in 2021? Not only that, but cities around the world flooded to extremes last year, there were 12 cyclones and heat-waves across the planet, and people in Florida are running out of drinking water.
The time to step in as a planet and intervene has never been more urgent.
The good news is that there are people, politicians and businesses doing their bit to combat climate change. Earth Day is an opportunity for everyone around the world to join them, learning more about the serious dangers we face as a planet and how we can work to beat them on collective and individual levels. Organisations and individuals all around the world can get involved in Earth Day by holding events, joining or creating cleanups, planting trees, and a range of other options. Check out their website for more information on how you could be a part of the action.
For our team at MODMO, Earth Day is a reminder of the reason we do what we do. We want to be part of a more sustainable future by providing a viable two-wheel electric alternative to fossil-fueled cars for city commuters and other urban riders. This year, we went beyond and introduced Earth Week (which is still ongoing) to our audience, encouraging them to get out on their bikes more, and hopefully in their cars a little less. But this is just a small part of our commitment to sustainability.
We also aim to be as sustainable as possible in our production. We are working towards our sustainability goals, making significant changes in how our team operates all around the world, and how we get our products from A to B.
Earth Day may be over as far as the calendar is concerned, but to us, recognising our responsibility to our planet is an ongoing mission. So whether it’s an Earth Week, Month or Year, let’s try and make sure it’s not just one day.
|
Authors: Miller PE, McKinnon RA, Krebs-Smith SM, Subar AF, Chriqui J, Kahle L, Reedy J
Title: Sugar-sweetened beverage consumption in the U.S.: novel assessment methodology.
Journal: Am J Prev Med 45(4):416-21
Date: 2013 Oct
Abstract: BACKGROUND: Sugar-sweetened beverage (SSB) consumption has been linked with poor diet quality, weight gain, and increased risk for obesity, diabetes, and cardiovascular disease. Previous studies have been hampered by inconsistent definitions and a failure to capture all types of SSBs. PURPOSE: To comprehensively examine total SSB consumption in the U.S. using an all-encompassing definition that includes beverages calorically sweetened after purchase in addition to presweetened beverages. METHODS: Data from the 2005-2008 National Health and Nutrition Examination Survey (N=17,078) were analyzed in September 2012 and used to estimate calories (kilocalories) of added sugars from SSBs and to identify top sources of SSBs. RESULTS: On average, Americans aged ≥2 years consumed 171 kcal (8% of total kcal) per day from added sugars in SSBs; the top sources were soda, fruit drinks, tea, coffee, energy/sports drinks, and flavored milks. Male adolescents (aged 12-19 years) had the highest mean intakes (293 kcal/day; 12% of total kcal). CONCLUSIONS: Americans consume more calories from added sugars in beverages than previously reported. The methodology presented in this paper allows for more-comprehensive estimates than those previously used regarding the extent to which SSBs provide calories from added sugars.
Last Modified: 03 Sep 2013
|
We all see the world differently. But for people with face-blindness, they see it differently with an added major difference — they can't recognize who you are.
Her whole life, Lee McClain knew something was off. It wasn't that she wasn't smart — she was. She had earned a Ph.D., authored several books, and is a tenured professor. And it wasn't that she wasn't social — because she was. She had friends, and had, for all intents and purposes, found her place in the world. But going into a social situation like a cocktail party was difficult. Remembering people was difficult. Not the people themselves, but their faces.
"It's kind of like if you are looking at a bunch of leaves on a tree, you could study them, but you'd have to work really hard to tell them apart," said McClain. That is how she sees faces. "That part of my brain that most people use to recognize faces instantly doesn't work as well for me."
Making it Work
McClain has congenital prosopagnosia, a brain disorder that makes her "face blind." She can see faces, but when she sees that face again she can't recognize it. Unlike some others, her disorder isn't so severe that she can't recognize herself in the mirror or in a picture. She can recognize her daughter and close friends. And as a professor at Seton Hill University in Greensburg, she can recognize her students.
She works hard at it, giving them assigned seats, actively studying their pictures, paying careful attention to their voices and the way that they dress.
And she goes out of her way to be overly friendly to everyone she sees at her job or in her community — operating under the assumption that she already knows them.
It wasn't until she was in her late 40's that she was diagnosed.
Online research led her to prosopagnosia, a disorder first diagnosed in the 1940s. In Greek, prosopo means 'face' and agnosia means 'without knowledge.' But while the disorder was first found all those years ago, it wasn't until the 1990s, with the advent of advanced MRI scanning and an increased understanding of how the brain works, that researchers have been able to make headway into understanding this little-known disorder, which affects around 2 percent of the population.
Marlene Behrmann, one of the world's leading researchers on the condition, is a psychology and cognitive neuroscience professor at Carnegie Mellon University. Her specialty is in the ways that the brain makes sense of what the eyes see.
"It seems fairly natural to them that this is how the world really is, and only when they're older and they realize that other people can do face recognition so effortlessly and so naturally does it dawn on them that there's something different about their own skills and their own abilities," she said.
In the last decade, there have been numerous scientific peer-reviewed papers on the underlying mechanisms that give rise to face recognition. In Behrmann's research on people born with the disorder but with otherwise normal vision, she found that the part of the brain that recognizes faces lit up normally when they saw a face, but that the connections between that region and other parts of the brain were weak.
Sometimes prosopagnosia doesn't manifest itself until later in life.
When Shawn Muzzey was 18, he was in a car accident that caused numerous injuries. He suffered a severe head injury and several subsequent strokes. Now 37, he has made strides in his recovery, but he has severe face-blindness.
"It's something that is hard to explain. Because when I look at somebody, if I look at you, I feel like if I walked down the street, I would know it's you. But whenever that happens, I realize that that's not the case. One way of explaining it is kind of like holding the picture in front of you and then taking it away. I have no recall of it. I can't take a mental pic of a person's face and store it in my head whenever I see them and match it up with a name," said Muzzey.
Muzzey's disorder is so severe that he can't even recognize his parents. He uses cues like their voices or remembering if they are wearing an unusual item of clothing.
It impacts his entire life.
There is some debate in the literature on the circuits in the brain that cause prosopagnosia. And there are variations in face-blindness. Those with autism sometimes have difficulty telling faces apart. People with some disorders lack emotional face recognition. Those with Alzheimer's also have a variation of face-blindness. But this type is unique, and a bit befuddling to researchers.
Marlene Behrmann said that the research so far is focused on what causes prosopagnosia and understanding it better, with the expectation that if they have a firm understanding of the problem and its neural basis, they might be able to devise better treatment strategies.
|
"Asking for help during what is considered one of the most joyous periods in a parent’s life can be daunting."
Education is key to informing people about mental health. Taking steps toward improving students' well-being and helping them live their lives to the fullest, without the burden of depression, is essential.
All primary care doctors should be able to do this.
Screening can help patients
National Depression Screening Day was also created to address mental illness stigma in a productive way, Holmberg explained.
Depression is a disease, not a weakness, and suicide is its tragic consequence. By taking a few simple steps, primary care providers can better identify depression and ensure that patients receive needed treatment.
|
The arrival of the shiny, emerald green beetle, about 1/2 inch long and 1/8 inch wide, in the U.S. may be as serious a threat to white, green, and black ash trees as Dutch elm disease was to the American elm.
Ash trees are a common species; green and black ash grow in wet swampy areas and along streams and rivers, while white ash is common in drier, upland soils. Many species of wildlife, including some waterfowl and game birds, feed on ash seeds. Ash is used as a source for hardwood timber, firewood, and for the manufacturing of baseball bats and hockey sticks. The New York State Department of Agriculture and Markets estimates the total economic value of New York's white ash to be $1.9 billion.
Although the emerald ash borer has not yet been found in the forests of New York, infestations in Pennsylvania, Ontario and Quebec have the potential of slowly spreading to our area through natural range expansion. Emerald ash borer could arrive here more quickly on transported firewood or lumber.
Emerald ash borer larvae hatch and burrow deep into the sapwood of healthy ash trees, where they feed aggressively until autumn, destroying essential conductive tissue. The larvae overwinter in the tree, emerging as adults from May through July of the next year and leaving distinct, D-shaped emergence holes about 3 mm in diameter. Adults mate soon after emergence and lay eggs under the bark of nearby ash trees. Eggs hatch within seven to nine days in late spring or early summer, starting the destructive cycle once more.
The emerald ash borer was most likely introduced into North America via wood and packing materials imported from its native range in China and eastern Asia. Since its initial discovery, infestations have been found in Indiana, Illinois, Maryland, Michigan, Ohio, West Virginia, Wisconsin, Missouri and Pennsylvania, as well as Ontario and Quebec. To date, the invasive pest is thought to be responsible for the deaths of millions of ash trees according to the United States Department of Agriculture (USDA).
Area foresters, landowners and New York State Department of Environmental Conservation (NYSDEC) officials consider the threat from emerald ash borer to be real. Extensive surveying is taking place in high-risk areas, including the St. Lawrence River corridor. NYSDEC recently enacted regulations prohibiting the movement of firewood more than 50 miles from its source to prevent or slow the spread of destructive forest insects and pathogens. Campers are encouraged to “burn it (firewood) where you buy it.”
Universities and state and federal agencies, including the USDA Animal and Plant Health Inspection Service and the USDA Forest Service, are undertaking research to halt the spread or minimize the effects of emerald ash borer. Biologists are testing chemical control agents and searching the beetle's native range for biological controls such as parasites that prey on the beetle's larvae.
If You See Emerald Ash Borer
Be on the lookout for metallic green beetles. They may or may not be emerald ash borers; other insects similar in appearance to emerald ash borer may be found locally. These insects, along with pathogens and environmental stresses, may adversely affect the health of ash trees.
If you suspect an insect is an emerald ash borer, try confirming your suspicions using photos and information online at the websites of the New York State Department of Environmental Conservation (www.dec.ny.gov) or at www.EmeraldAshBorer.info. If you believe the insect you have seen may indeed be an emerald ash borer, report your sighting to Christopher Lajewski, The Nature Conservancy, 315-387-3600 x 22.
|
This post & selection were made by GWD photography teacher Eric Girouard.
Learning One Point Perspective
One of the first assignments in the class was to understand what a "one point perspective" is, then find and shoot one. In one point perspective, lines that are parallel in the scene appear to converge at a single vanishing point on the horizon line. See the railroad track image for a classic example.
At this point in the semester, students were mostly learning how to use the Adobe Lightroom image post-processing software and were experimenting with creative interpretations of their photographs, including conversion to black & white.
Enjoy the efforts of our second year students!
Author: Eric Girouard
Eric Girouard is a photography and design teacher in the Graphic & Web Design department, which he joined in 2001. He holds a BFA in Fine Art specializing in Drawing & Painting from Concordia University. His stock images were distributed worldwide by Corbis. Eric also worked at Trey Ratcliff’s “The Arcanum – Magical Academy of Artistic Mastery” and served as a photo contest judge for Viewbug.com.
View all posts by Eric Girouard
|
Bible Study Guide- Week 5
"WAR IN HEAVEN"
MEMORY VERSE: "Ye are of your father the devil, and the lusts of your father ye will do. He was a murderer from the beginning, and abode not in the truth, because there is no truth in him. When he speaketh a lie, he speaketh of his own: for he is a liar, and the father of it." John 8:44.
STUDY HELP: Patriarchs and Prophets, 33–43.
INTRODUCTION: "The question is asked, How is the existence of sin reconcilable with the government of a wise, merciful, and omnipotent God? Why was sin permitted to enter heaven? Why was it permitted to take up its abode on the earth to cause discord and suffering? It certainly was not God’s purpose that man should be sinful. He made Adam pure and noble, with no tendency to evil. He placed him in Eden, where he had every inducement to remain loyal and obedient. The law was placed around him as a safeguard. Evil originated with the rebellion of Lucifer. It was brought into heaven when he refused allegiance to God’s law. Satan was the first lawbreaker." Review and Herald, June 4, 1901.
"THE ANOINTED CHERUB THAT COVERETH"
1. By what name was Satan known when he was in heaven? Isaiah 14:12.
NOTE: "Lucifer, ‘son of the morning,’ was first of the covering cherubs, holy and undefiled. He stood in the presence of the great Creator, and the ceaseless beams of glory enshrouding the eternal God rested upon him." Patriarchs and Prophets, 35.
[It is clear from this verse that the reference is not to an earthly king of Babylon.]
2. What position did Lucifer hold in heaven? Ezekiel 28:14.
NOTE: "Lucifer was the covering cherub, the most exalted of the heavenly created beings; he stood nearest the throne of God, and was most closely connected and identified with the administration of God’s government, most richly endowed with the glory of His majesty and power." Signs of the Times, April 28, 1890.
[Verse 14 shows clearly that the primary reference is not to an earthly king of Tyre.]
3. When he was created, what was Lucifer like? Ezekiel 28:15.
NOTE: "The angels had been created full of goodness and love. They loved one another impartially and their God supremely, and they were prompted by this love to do His pleasure. The law of God was not a grievous yoke to them, but it was their delight to do His commandments, to hearken unto the voice of His word. But in this state of peace and purity, sin originated with him who had been perfect in all his ways." Signs of the Times, April 28, 1890.
"TILL INIQUITY WAS FOUND IN THEE"
4. What went wrong with Lucifer’s thinking and led him into sin? Ezekiel 28:17, first part.
NOTE: "The change from perfection of character to sin and defection did come even in heaven. Lucifer’s heart was lifted up because of his beauty, his wisdom was corrupted by reason of his brightness. Self-exaltation is the key to his rebellion, and it unlocks the modern theme of sanctification. Satan declared that he had no need of the restraints of law, that he was holy, sinless, and incapable of doing evil; and those who boast of holiness and a state of sinlessness, while transgressing the law of God, while willfully trampling under-foot the Sabbath of the Lord, are allied on the side of the first great rebel. If the sanctified, holy angels became unsanctified and unholy by disobedience to God’s law, and their place was no longer found in heaven, think you that men, redeemed by the blood of the Lamb, will be received into glory who break the precepts of that law which Christ came to magnify and make honourable by His death upon the cross? Adam and Eve were in possession of Eden, and they fell from their high and holy estate by transgression of God’s law, and forfeited their right to the tree of life and to the joys of Eden." Signs of the Times, April 28, 1890.
5. Because of his pride, whose position did Lucifer wish to seize? Isaiah 14:13, 14.
NOTE: Notice the self-centredness of these verses, the number of times Lucifer spoke of "I" and "my." "Pride in his own glory nourished the desire for supremacy. The high honors conferred upon Lucifer were not appreciated as the gift of God and called forth no gratitude to the Creator. He gloried in his brightness and exaltation, and aspired to be equal with God. He was beloved and reverenced by the heavenly host. Angels delighted to execute his commands, and he was clothed with wisdom and glory above them all. Yet the Son of God was the acknowledged Sovereign of heaven, one in power and authority with the Father. In all the councils of God, Christ was a participant, while Lucifer was not permitted thus to enter into the divine purposes. ‘Why,’ questioned this mighty angel, ‘should Christ have the supremacy? Why is He thus honoured above Lucifer?’" Great Controversy, 495.
"None are too high to fall. Sin originated with Satan, who was next to Christ. Lucifer became the destroyer of those whom heaven had committed to his guardianship. Satan has a church in our world today. In his church are all the disaffected ones and the disloyal. All who harbor pride, ambition, vain-glory, or selfishness, will be found wanting when weighed in the balance of the Lord." Australasian Union Conference Record, October 1, 1906.
6. What was the outcome of Lucifer’s ambition? Revelation 12:7.
NOTE: "Until this time all heaven had been in order, harmony, and perfect subjection to the government of God. It was the highest sin to rebel against His order and will. All heaven seemed in commotion. The angels were marshaled in companies, each division with a higher commanding angel at its head. Satan, ambitious to exalt himself, and unwilling to submit to the authority of Jesus, was insinuating against the government of God. Some of the angels sympathized with Satan in his rebellion, and others strongly contended for the honor and wisdom of God in giving authority to His Son. There was contention among the angels. Satan and his sympathizers were striving to reform the government of God. They wished to look into His unsearchable wisdom, and ascertain His purpose in exalting Jesus and endowing Him with such unlimited power and command. They rebelled against the authority of the Son. All the heavenly host were summoned to appear before the Father to have each case decided. It was there determined that Satan should be expelled from heaven, with all the angels who had joined him in the rebellion. Then there was war in heaven. Angels were engaged in the battle; Satan wished to conquer the Son of God and those who were submissive to His will." Early Writings, 145.
7. What was the outcome of the war in heaven? Revelation 12: 8, 9.
NOTE: "Even when it was decided that he could no longer remain in heaven, Infinite Wisdom did not destroy Satan. Since the service of love can alone be acceptable to God, the allegiance of His creatures must rest upon a conviction of His justice and benevolence. The inhabitants of heaven and of other worlds, being unprepared to comprehend the nature or consequences of sin, could not then have seen the justice and mercy of God in the destruction of Satan. Had he been immediately blotted from existence, they would have served God from fear rather than from love. The influence of the deceiver would not have been fully destroyed, nor would the spirit of rebellion have been utterly eradicated. Evil must be permitted to come to maturity. For the good of the entire universe through ceaseless ages, Satan must more fully develop his principles, that his charges against the divine government might be seen in their true light by all created beings, that the justice and mercy of God and the immutability of His law might forever be placed beyond all question." Great Controversy, 498, 499.
"Satan is a deceiver. When he sinned in heaven, even the loyal angels did not fully discern his character. This was why God did not at once destroy Satan. Had He done so, the holy angels would not have perceived the justice and love of God. A doubt of God’s goodness would have been as evil seed that would yield the bitter fruit of sin and woe. Therefore the author of evil was spared, fully to develop his character. Through long ages God has borne the anguish of beholding the work of evil, He has given the infinite Gift of Calvary, rather than leave any to be deceived by the misrepresentations of the wicked one; for the tares could not be plucked up without danger of uprooting the precious grain. And shall we not be as forbearing toward our fellow men as the Lord of heaven and earth is toward Satan?" Christ’s Object Lessons, 72.
"THE TRANSGRESSION OF THE LAW"
8. How does God’s Word define sin? 1 John 3:4.
NOTE: At a time when theologians are proposing multiple definitions of sin, like "inherited guilt," "a broken relationship," "missing the mark," "our natural spiritual condition, the condition of sinfulness," "being by nature spiritually bent," etc., it is important to know what God Himself defines as sin.
"Our only definition of sin is that given in the Word of God; it is ‘the transgression of the Law.’" Great Controversy, 493.
"The only definition given in God’s Word is: ‘Sin is the transgression of the Law,’ and the apostle Paul declares, ‘Where no law is, there is no transgression.’" Bible Echo, June 11, 1894.
9. Why do we choose to sin when tempted by the devil? James 1:14, 15.
NOTE: "Every man is tempted when he is drawn away of his own lusts and enticed. He is turned away from the course of virtue and real good by following his own inclinations. If [you] possessed moral integrity, the strongest temptations might be presented in vain. It is Satan’s act to tempt you, but your own act to yield. It is not in the power of all the host of Satan to force the tempted to transgress. There is no excuse for sin." Testimonies, vol. 4, 623.
10. What will cause our attempts to obey God’s Law to fail? Romans 14:23.
NOTE: " ‘Here are they that keep the commandments of God, and the faith of Jesus.’ In order to be prepared for the judgement, it is necessary that men should keep the law of God. That law will be the standard of character in the judgement. The apostle Paul declares: ‘As many as have sinned in the law shall be judged by the law, . . . in the day when God shall judge the secrets of men by Jesus Christ.’ And he says that ‘the doers of the law shall be justified.’ Romans 2:12–16. Faith is essential in order to the keeping of the law of God; for ‘without faith it is impossible to please Him.’ And ‘whatsoever is not of faith is sin.’ Hebrews 11:6; Romans 14:23." Great Controversy, 436.
"THAT HE MIGHT DESTROY THE DEVIL"
11. What was Christ’s purpose in coming to earth and taking our nature? Hebrews 2:14.
NOTE: "We need not place the obedience of Christ by itself as something for which He was particularly adapted, because of His divine nature; for He stood before God as man’s representative, and was tempted as man’s substitute and surety. If Christ had a special power which it is not the privilege of a man to have, Satan would have made capital of this matter. But the work of Christ was to take from Satan his control of man, and He could do this only in a straightforward way. He came as a man, to be tempted as a man, rendering the obedience of a man. Christ rendered obedience to God, and overcame as humanity must overcome. We are led to make wrong conclusions because of erroneous views of the nature of our Lord. To attribute to His nature a power that it is not possible for man to have in His conflicts with Satan, is to destroy the completeness of his humanity. The obedience of Christ to His Father was the same obedience that is required of man. Man cannot overcome Satan’s temptations except as divine power works through humanity. The Lord Jesus came to our world, not to reveal what God in His own divine person could do, but what He could do through humanity. Through faith man is to be a partaker of the divine nature, and to overcome every temptation wherewith he is beset." Signs of the Times, April 10, 1893.
12. When will the sins of the righteous be finally blotted out of the books of record? Revelation 22:11, 12. Compare Acts 3:19.
NOTE: "When the Third Angel’s Message closes, mercy no longer pleads for the guilty inhabitants of the earth. The people of God have accomplished their work. They have received ‘the latter rain,’ ‘the refreshing from the presence of the Lord,’ and they are prepared for the trying hour before them. Angels are hastening to and fro in heaven. An angel returning from the earth announces that his work is done; the final test has been brought upon the world, and all who have proved themselves loyal to the divine precepts have received ‘the seal of the living God.’ Then Jesus ceases His intercession in the sanctuary above. He lifts His hands and with a loud voice says, ‘It is done;’ and all the angelic host lay off their crowns as He makes the solemn announcement: ‘He that is unjust, let him be unjust still: and he which is filthy, let him be filthy still: and he that is righteous, let him be righteous still: and he that is holy, let him be holy still.’ Revelation 22:11. Every case has been decided for life or death. Christ has made the atonement for His people and blotted out their sins." Great Controversy, 613, 614.
13. How complete will the destruction of Satan and sin be? Malachi 4:1, Psalm 37:9, 10, Ezekiel 28:19.
NOTE: "Then the end will come. God will vindicate His law and deliver His people. Satan and all who have joined him in rebellion will be cut off. Sin and sinners will perish, root and branch (Malachi 4: 1), Satan the root, and his followers the branches. The word will be fulfilled to the prince of evil, ‘Because thou hast set thine heart as the heart of God; . . . I will destroy thee, O covering cherub, from the midst of the stones of fire . . . Thou shalt be a terror, and never shalt thou be any more.’ Then ‘the wicked shall not be: yea, thou shalt diligently consider his place, and it shall not be;’ ‘they shall be as though they had not been.’ Ezekiel 28:6–19; Psalm 37:10; Obadiah 16." Desire of Ages, 763.
April 1999 Table of Contents
|
Etched into the surviving art of the Moche, one of South America's most ancient and mysterious civilisations, is a fearsome creature dubbed the Decapitator. Also known as Ai Apaec, the octopus-type figure holds a knife in one hand and a severed head in the other in a graphic rendition of the human sacrifices the Moche practiced in northern Peru 1,500 years ago.
For archaeologists, the horror here is not in Moche iconography, which you see in pottery and mural fragments, but in the hundreds of thousands of trenches scarring the landscape: a warren of man-made pillage. Gangs of looters, known as huaqueros, are ransacking Peru's heritage to illegally sell artefacts to collectors and tourists.
"They come at night to explore the ruins and dig the holes," said Cuba Cruz de Metro, 58, a shopkeeper in the farming village of Galindo. "They don't know the history, they're just looking for bodies and for tombs. They're just looking for things to sell."
A looting epidemic in Peru and other Latin American countries, notably Guatemala, has sounded alarm bells about the region's vanishing heritage.
The issue is to come under renewed scrutiny in the run-up to July's 100th anniversary of the rediscovery of Machu Picchu, the Inca citadel in southern Peru, by US historian Hiram Bingham. He gave many artefacts to Yale University, prompting an acrimonious row with Peru's government, which ended only this year when both sides agreed to establish a joint exhibition centre.
A recent report, Saving our Vanishing Heritage, by the Global Heritage Fund in San Francisco, identified nearly 200 "at risk" sites in developing nations, with South and Central America prominent.
Mirador, the cradle of Mayan civilisation in Guatemala, was being devastated, it said. "The entire Peten region has been sacked in the past 20 years and every year hundreds of archaeological sites are being destroyed by organised looting crews seeking Maya antiquities for sale on the international market."
Northern Peru, home to the Moche civilisation which flourished from AD100-800, had been reduced to a "lunar landscape" by looter trenches across hundreds of miles. "An estimated 100,000 tombs – over half the country's known sites – have been looted," the report said.
The sight breaks the heart of archaeologists and historians piecing together the story of a society which built canals and monumental pyramid-type structures, called huacas, and made intricate ceramics and jewellery.
The Moche, who pre-dated the Incas by 1,000 years, also painted murals and friezes depicting warfare, ritual beheading, blood drinking and deities such as the Decapitator, who has bulging eyes and sharp teeth. Analysis of human remains confirmed that throat-cutting was all too real but, in the absence of written records, archaeology must shed light on what happened.
In villages such as Galindo that is becoming all but impossible. Crude tunnels and caves make Moche ruins resemble rabbit warrens. Deep gashes cut into walls expose the brickwork below. Millennia-old adobe bricks are torn from the ground and scattered as though in a builder's yard.
Most huaqueros are farmers supplementing meagre incomes. Montes de Oca, one of three police officers tasked with environmental protection in a region of a million people, said he was overwhelmed. "I've been doing this for 28 years. There are three of us and one truck. It's insufficient but we do everything possible."
Ten miles away Huaca del Sol, one of the largest pyramids in pre-Columbian America, is an eroded, plundered shell. Here the culprits were not impoverished farmers but Spanish colonial authorities who authorised companies to mine for treasure, said Ricardo Gamarra, director of a 20-year-old conservation project.
"They diverted the river to wash away two-thirds of the huaca and reveal its insides," he said. "They mined through the walls and caused it to collapse in various places. It's impossible to guess how much was taken because we don't know how much was there."
Donations from businesses and foundations have helped Gamarra's team protect what is left, drawing 120,000 visitors each year, but of 250 other sites in the region just five have been protected. "In the mountains it's the same. It is full with archaeological sites, almost all of them have been destroyed," said Gamarra.
There has been good news from Chotuna, also in northern Peru, where archaeologists found frescoes in a 1,100-year-old temple of the Lambayeque civilisation, which flourished around the same time as the Incas.
Jeff Morgan, executive director of the Global Heritage Fund, urged Peru to funnel tourists away from Machu Picchu, overrun by two million visitors a year, to lesser known sites which could then earn revenue to protect their heritage.
The government should resist the temptation to pocket the money. "One of the biggest problems is the disconnect between local communities and management of the sites. We think locals should get at least 30% of revenues." Only then, said Morgan, would cultural treasures from the Moche and other civilisations be saved.
|
Baby growth charts are used throughout the world and although there can be slight differences between each of them, their core information is very similar. For a long time growth charts have remained unchanged but in the last few years there have been improvements to their design and the information they contain.
The growth charts currently used in Australia have been, and continue to be, modelled on those of the United States Centers for Disease Control and Prevention (CDC), whose charts were formerly produced by the National Center for Health Statistics. These are called the CDC percentile charts. But in the last few years the World Health Organisation (WHO) has created separate growth charts which more accurately reflect a breastfed baby or child's growth pattern. These evolved because babies who are breastfed gain weight in a different way to babies who are formula fed.
Breastfed babies tend to gain a lot of weight in the first three months of their life and then plateau, or stabilise their weight gain, for a couple of months, whereas formula-fed babies tend to gain weight in a more consistent pattern. This difference was found to influence the way health professionals interpreted individual babies' growth. Instead of normalising the decrease in the pattern of weight gain of breastfed babies, this was seen as a reason to recommend complementary feeding. The WHO charts also apply to babies who are formula fed.
Historically, growth charts used to focus on under nutrition, but in the last few years they have been proven to be very effective in identifying children at risk of becoming overweight and/or obese. Growth charts can also be used with the Body mass index-for-age percentile chart and these are another way of keeping a check on a child’s healthy growth patterns.
Growth charts are also a good way for governments to assess the health and wellbeing of the population.
Every parent wants to know that their baby is growing well and thriving. Parents also want to know that they are doing everything they can to support their baby’s growth. Growth charts are a good way to check that these things are happening.
Growth charts are also about averages; average babies of the same gender and the same age, living in the same country having much the same lifestyles and nutrition. The purpose of baby growth charts is to provide an objective, accurate assessment of how each individual baby is growing according to their age and gender. Although we know that every baby and child is unique and special, they still need to grow at a steady and regular pace. After all, one of the main objectives of childhood is to progress from a state of small, utter dependency towards independence. Without growth this would be impossible.
Growth charts come in either pink for girls or blue for boys. When a baby is born prematurely, their specific gestational age and corrected age can be included when plotting their growth.
Growth charts are used by parents, doctors, paediatricians, child health nurses and health professionals. Each baby has their own chart and preferably, the same charts are used to track their growth from birth until the school aged years.
| Age range | Measurements | Colours |
|---|---|---|
| Birth to 36 months | Head circumference, length and weight | Pink for girls, blue for boys |
| 2 years to 20 years | Height-for-age, weight-for-age and body mass index-for-age percentiles | Pink for girls, blue for boys |
Growth charts work by looking at the percentage of babies/children who weigh or measure the same at the same age. Each graph is divided up into percentiles or percentages, including the 3rd, 10th, 25th, 50th, 75th, 90th and 97th. So if, for example, a baby is on the 10th percentile for their weight, then out of 100 babies 90 would be heavier; if they are on the 75th percentile, they are above the average weight for babies of the same age and gender.
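The percentile idea described above can be sketched in a few lines of Python. This is purely illustrative: the reference weights and the `percentile_rank` helper are hypothetical, not part of any official growth chart.

```python
# Illustrative only: interpret one baby's weight against a hypothetical
# reference sample of same-age, same-gender weights (in kilograms).
reference_weights = [5.2, 5.6, 5.9, 6.1, 6.3, 6.5, 6.7, 6.9, 7.2, 7.8]

def percentile_rank(weight, sample):
    """Percentage of babies in the sample who are lighter than this weight."""
    lighter = sum(1 for w in sample if w < weight)
    return 100 * lighter / len(sample)

rank = percentile_rank(6.2, reference_weights)
print(f"Heavier than {rank:.0f}% of the sample")  # prints "Heavier than 40% of the sample"
```

Real charts are built from large national reference populations and smoothed curves, but the interpretation is the same: a baby "on the 40th percentile" is heavier than about 40 of every 100 peers.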
Gestational age is an important factor when looking at growth charts. Obviously, premature babies are lighter and smaller than babies who are born at term (38-42 weeks). But babies who are heavier at birth sometimes don’t gain weight at the same rate as babies of a lighter or average weight. This is because the weight of a baby when they are born relates more to the conditions in the uterus than their inherited genetics.
The major factors which influence growth are whether a baby is a boy or a girl (boys tend to grow more quickly than girls), genetics, environment, overall health, nutrition and individual growth factors. Comparisons between babies are rarely useful, though when it comes to using growth charts we tend to relax this rule. Although every baby is a unique individual and will grow and develop at their own rate, it is reasonable to compare their growth rate with others of the same age and gender.
If you are taking your baby to a health care professional, then ask them how your baby is growing. Weight gain is not consistent in babies/children and changes from day to day and week to week. Remember, it is the pattern of growth over time which is important, not one or two weights or measures.
Generally, babies take around two weeks after birth to regain their birth weight. Sometimes this doesn’t happen, especially when babies have been born prematurely, have been sick or they have had problems feeding.
Look to see that your baby is gaining approximately:

| Birth–3 months | 3–6 months | 6–12 months |
|---|---|---|
| 150–200 grams per week | 100–150 grams per week | 70–90 grams per week |
When a baby is plotted on one of the lower percentiles at birth, especially the 10th or the 3rd, it is worthwhile to monitor their growth carefully. Dropping from one percentile to a lower one, or crossing two percentiles, is concerning and a sign that a baby’s growth may need further investigation.
Growth charts can also be used to predict a child’s adult height and weight, as long as the child keeps growing at a fairly constant rate. When there is a change in their rate of growth, with a sudden increase or decrease, then this warrants medical investigation.
If you are unsure about any aspect of your baby’s growth, check with your health care professional.
|
This crawdad looks like it is trying to block me from continuing my hike, but this is what they do when they feel threatened. I spotted it crossing the trail I was on, and when I got low to photograph it, the crawdad reared up, raising its claws threateningly. Most of the time they will also move backwards, flinging mud.
Crawdad Facts and Camera Settings
Crawdads live under rocks and debris on the muddy bottom of freshwater lakes and streams. They are active at night and crawl along the mud feeding on aquatic vegetation, worms, insects, mollusks, and decayed organic matter. Their pincerlike claws are used to crush and tear food into smaller pieces.
|
- self-monitored, and
- self-correcting thinking.
Consequently, it requires the affirmation and mastery of rigorous quality criteria. Critical thinking is the foundation of effective communication and problem-solving skills. You may watch this short video on YouTube to go deeper.
Definition of “Critical Thinking“
Critical thinking is that type of thinking (applicable to any subject, content, or problem) in which a person improves the quality of his or her thinking by committing to follow competently the structures inherent in thinking and to hold that thinking to intellectual standards.
Is there a simple approach?
No, unfortunately not. Developing this competence is a long-term learning process that requires several preconditions (and training).
Nevertheless, we want to give some simple guidelines on how to start:
- Recognize the problem
- Do research
- Determine data relevance
- Ask questions (even to yourself)
- Determine the best solution
- Present your solution (even to yourself)
- Analyse your decision
The training contribution
The training will start with personal experience. The trainers focus on the analysis of written texts and will discuss the participants’ different outcomes from the analysis process.
The second part of the hands-on training begins with the presentation of a versatile strategic concept for critical thinking.
Critical thinking is the ability to collect and analyse information in order to draw a conclusion. The ability to think critically is important in virtually every industry and applicable in a wide range of positions.
The training will be held in Wiener Neustadt, Austria, at the beginning of May. As a service, the EBI will publish the distance-learning material openly to the public as a multimedia-based, interactive learning unit.
|
A drying Great Salt Lake could mean increased dust, less snow, reduced lake access, elevated salinity, habitat loss, island bridges, more invasive plant growth and negative economic consequences to the state. Visit the links below to learn more about how the water, air and wildlife in and around the lake are being impacted.
The U.S. Geological Survey monitors lake levels, salinity and groundwater data. Check out their interactive site that provides real-time data and information related to the lake and its watershed.
For the unique Great Salt Lake ecosystem to thrive, salinity of the south arm should fall between 120 and 160 grams per liter (g/L). Ecosystems are impaired when salinity rises higher. In fall 2022, salinity reached 185 g/L.
Great Salt Lake is an avian oasis where 10 million migratory birds visit annually to rest, refuel and breed.
The Division of Wildlife Resources manages between 550 and 700 free-roaming bison on Antelope Island State Park.
Brine shrimp are crustaceans that inhabit saline waters around the world and are a valuable food source to migratory birds that congregate in and around Great Salt Lake. Without this food source, the birds’ long migrations wouldn’t be possible.
The extensive marshes, mudflats and meadows surrounding Great Salt Lake make up the highest concentration of vegetated wetlands in Utah and provide crucial stopover, wintering and nesting habitat for millions of shorebirds and waterfowl.
|
What Is a Spinal Cord Stimulator?
A spinal cord stimulator (SCS) is an electric nerve blocker implanted into the spine. It is designed to disrupt the signals your body uses to report pain.
How Does a Spinal Cord Stimulator Work?
The implanted device works by sending low levels of electricity into the spine. This disrupts the bioelectric signals your nerves use to pass information up the spinal column to reach the brain. Under ideal circumstances, this will reduce pain to a slight tingling sensation or nothing.
This operation is considered successful if patients achieve a 50 percent reduction in pain.
Why Should I Choose the Advanced Relief Institute for Spinal Cord Stimulation?
The Advanced Relief Institute has several pain specialists with experience implanting FDA-approved spinal stimulation devices. The skill and experience of your surgeon are of the utmost importance when considering a surgical procedure near the spinal cord.
The doctors at the Advanced Relief Institute are dedicated to reducing your pain and improving your quality of life. If you are struggling with chronic pain, contact our pain management specialists.
What Can Spinal Cord Stimulation Treat?
This spinal implant procedure is best for pain where the injury or defect causing your chronic pain cannot be identified or repaired with surgery. Spinal cord stimulation is often used after non-surgical nerve block injections have failed to provide adequate pain relief.
Do I Need Spinal Cord Stimulation?
If you have been struggling with chronic pain for three months or more and other treatments have proven ineffective, you may be a candidate for implantation of a spinal cord stimulator.
Before the operation, your pain specialist will conduct several tests to locate the source of your pain. This will help make the nerve blocker more effective.
Some medical insurance providers require a series of psychological tests before implanting the spinal cord stimulator. This is to rule out psychological conditions that may make the procedure less effective.
How Is Spinal Cord Stimulator Surgery Done?
There are two stages to implanting a spinal cord stimulator.
The first stage is effectively a test run. Your pain specialist will implant the electrodes under your skin against your spine. A belt will hold the battery for the device during the trial run.
Patients will test the SCS device for five to seven days to ensure the electrodes are well-placed and working properly.
Once your doctor confirms that electrical nerve disruption is reducing your pain, the final device, complete with a power source (similar to a pacemaker), will be implanted.
The operation is conducted under local anesthesia. Depending on where the implanted device will be attached to your spine, a single incision above your buttock or on your lower back will be used. The battery is usually placed above your buttocks or in your abdomen.
Is There a Recovery Period After Spinal Cord Stimulation?
Patients should refrain from exercise (except for brief walks) for one to two weeks. After two weeks, most patients can return to work and resume driving (with the nerve blocker turned off).
Most patients recover from the surgical implantation of the spinal stimulator within two to four weeks.
Looking for Relief From Chronic Pain?
If you have been struggling with chronic pain for three months or more, there is no need to continue to suffer. A spinal cord stimulator implant may help you live a life with less pain and more freedom.
Contact the Advanced Relief Institute today to learn more about your pain treatment options. You could be on your way to less pain in no time!
|
This month marks the 225th anniversary of William Carey’s now famous volume, An Enquiry into the Obligations of Christians to use Means for the Conversion of the Heathen. In 1785 Carey began pastoral work in England, and through developments in cartography, Carey learned of the earth’s geographic and ethnic landscape. His studies compelled a concern for the lost souls of humanity around the world—but Carey could find few sympathizers. Some of his fellow pastors, thinking God would not employ the efforts of men to spread the gospel message, squelched Carey’s pleas for international missions efforts.
Carey responded by pressing his ministerial associates to consider the blessings they enjoyed in Christ and the world’s need to be reconciled to the Savior. In an effort to persuade fellow pastors to form an alliance and begin sending missionaries to India, Carey wrote his Enquiry—and Carey was persuasive. On June 13, 1793 Carey set sail for India, having been commissioned by fellow pastors and churches to take the gospel to the heathen. Church historians today cite William Carey’s mission to India at the end of the 18th century as the fount of modern missions.
The current orphan crisis—and the need for the church’s response—requires brave voices like Carey’s in our day. Each year, the month of May is designated by the U.S. Department of Health and Human Services as National Foster Care Month. In her article, “Foster Children Need the Church,” Brittany Lind writes,
The need is enormous, but when you consider that there are roughly 348,067 evangelical churches in America, the 430,000 children-in-foster-care number doesn’t seem quite so daunting. Unfortunately, it’s not a problem that can be solved by simply doing the math and distributing children among churches. Many factors complicate the issue, but the numbers are still fascinating to consider.
Lind notes that the church can help in many ways (meals for a family, clothes, furniture, etc.), but ultimately these kids need homes and families.
Who within the church might be equipped to personally take orphans into their homes, giving children a nuclear family as well as connecting them with the gospel-life of a local fellowship? In the logic of William Carey, what means might God employ for such a task? Based upon analysis of statements about leadership in the Pastoral Epistles and 1 Peter, I suggest that pastors enjoy a unique position through which they might help the church to care for orphans, fulfilling James’s ideal of pure and undefiled religion (Jas 1:27). And as pastors exemplify hospitality to orphans, they will set a mark of faith for the church to imitate—thus multiplying the effect of their leadership (Heb. 13:7).
Local pastors as examples for the church
The pastoral qualifications in 1 Timothy 3 can be categorized in various ways, and I suggest three headings:
- Exemplary Christian moral integrity in spheres both proximal (“the husband of one wife,” 1 Tim. 3:2, 4–5) and public (1 Tim. 3:2, 7).
- The ability to teach Christian doctrine (1 Tim 3:2).
- Hospitality to the needy (1 Tim 3:2).
These headings provide an apt framework for the similar list Paul wrote in Titus 1:5-9. In light of the dark situation on Crete (Titus 1:12), it follows that those serving as pastors would need to set the pace for good doctrine and good deeds. And this is exactly what Paul called Titus to identify in potential elders, men that: showed Christian behavior in private (Titus 1:6-7) and public (Titus 1:7-8); were able to teach (Titus 1:9); and had a reputation for good works toward the needy (Titus 1:8).
Like Paul in the Pastorals, Peter recognized that the elders of the church must set the pace for maintaining Christian orthodoxy and orthopraxy in the face of opposition. At the conclusion of his letter, Peter directed his attention to the elders of the church, exhorting them to shepherd and oversee the flock among them by being examples (1 Pet. 5:3). Peter presents this exhortation as the antithesis of domineering leadership, which calls the congregation to act a certain way but does not model that behavior for them.
Peter’s logic, like Paul’s noted already, rests on the notion that the church at large required visual patterns of necessary Christian good works. If the church was to take up specific Christian activities to defend the Christian message before antagonists in the world, the pastors would have to demonstrate such behavior for the believers under their care.
Orphan care and pastoral leadership
In short, pastors are to model Christian integrity and wholeness—and this brings us to the argument of the Epistle of James. The religious and socio-economic matrix of James’s audience placed orphans and widows at a point of peril. If the church did not come to their aid, no one would.
But James described the church as likewise in a vulnerable position—in need of working out its faith. James pictured the desperate situation of orphans and widows as God’s supply for the congregation’s equally desperate need to practice its faith, to be mature and whole before God. James wrote, “religion that is pure and undefiled before God the Father is this: to visit orphans and widows in their affliction” (James 1:27).
The idea of “visit” in James 1:27 ranges with the proximal designation of the object in view. It could imply that the subject of the verb would (1) leave a location and travel to another location with a view to assisting someone at the point of destination and then returning to their original domain or, (2) more generally, as with an object such as orphans or widows—who may not have had a stable location where they might receive a visitor—“look after” (ἐπισκέπτομαι, episkeptomai; BDAG). In the context of James 1:27, the verbal idea of “visitation” pictures the subject of the verb personally attending to the needs of the object in an ongoing, proximal manner.
To whom might James’s audience look for examples of pure and undefiled religion? In light of the general logic of leadership in the Pastoral Epistles and 1 Peter, I offer that the church under James’s care would look to its pastors as models of how to help the vulnerable among them, like orphans and widows. It is noteworthy that in 1 Timothy 5:3-16, Paul lists qualifications for widows to receive corporate congregational support but no such list is supplied for orphans. Looking after parentless children—perhaps because of their limited life/work experience and relational contacts—objectifies pure faith.
William Carey was concerned for his fellow pastors to consider the means God might use to take the gospel to the lost. I suggest that the designated pastoral tasks of teaching, hospitality and family management specially equip pastors to do this by meeting the needs of orphans. This is doubly ironic. Pastors might be thought the last demographic in the church to take in orphans because we are already busy. While I do not advocate the position that God has uniformly called all pastors to take in orphans, since becoming a foster/adopt parent I have personally discovered a second irony: pastoral orphan care has propelled my pastoral ministry as no other educational or leadership endeavor has, in several ways:
First, I am personally taking the gospel to the lost; the children I have adopted hear it often and see it modeled all around them, and I have been able to share the gospel with many social workers and children’s services officials.
In addition, perhaps no other social issue is as pressing upon American evangelical pastors today as racial strife; the fact that my adopted daughters are of a different race has taught me countless lessons about race relations and given me opportunities to show in real time the gospel’s power to break racial divides.
Finally, pastoral orphan care has allowed me to exemplify the gospel for my congregation; every time I gather with the church I am able to have show and tell. And when it comes time to challenge the church to pray about engaging the needs of orphans, even becoming foster/adopt parents? They will have an example, yet in process, to follow (Heb. 13:7).
|
1845 – Government House, Sydney, Australia
The Governor of the colony of New South Wales was the British monarch’s representative, and it was considered fitting that a grand residence be built to reflect this viceregal appointment. The task of designing this residence fell to distinguished English architect Edward Blore, whose portfolio included various works at Windsor Castle, Hampton Court and Buckingham Palace, and many other grand country houses. Blore never visited the colony, and the final siting of the house on Bennelong Point was determined by Colonial Engineer Captain George Barney and Colonial Architect Mortimer Lewis. Lewis was given the responsibility of modifying the plans to suit the site and local conditions.
|
Garter Snake Breeding
Eastern garter snakes (Thamnophis sirtalis sirtalis) reach sexual maturity at 2 years old. Upon reaching adulthood, males are 18 inches or longer and females are usually 24 inches or more.
To breed these snakes, I cool them in a similar manner as the rest of my colubrids – in a basement with natural light cycles and a temperature ranging from 50 to 65 degrees, depending on the temperature outside. Although the basement is stable and not prone to daytime highs and lows, on occasion the outside temperature can drop low enough for extended periods of time to lower the inside temperature by as much as 5 degrees. For those who can control temperatures, I'd drop the temperature 10 degrees a week until the brumation temperature is reached. This will take about three to four weeks.
More morphs, like this "flame" phase, have appeared over the years and are now available on the market.
These live-bearing snakes are cooled from late November until mid-February. While the garters experience their winter slumber, I set up a new 29-gallon tank with natural substrate, live plants, rocks, logs, branches, a water basin, hiding areas and a basking light. The plan is for one pair of snakes to live together in their new terrarium.
Upon coming out of cooling, the male almost immediately goes into jerky motions and pursues the female. This is standard colubrid snake breeding behavior. Although most corn, king and milk snakes take a few weeks after coming out of cooling before the female ovulates and is receptive to breeding, eastern garters breed sooner. A few days after being warmed up, the snakes begin eating; however, the male’s appetite diminishes during the height of his breeding pursuits.
Their overhead light is on a timer, and each morning the snakes position themselves under it when it comes on. After warming up, the snakes explore their cage or hide. Keepers should look for the male to actively court the female. This usually means him following her around and climbing on top of her. I have seen as many as three copulations in one week, and this is enough to ensure a fertile breeding. Once the female is gravid, the male will lose interest. He can continue to be kept in the same cage and will not harm their offspring.
You can tell a female has become gravid when her skin stretches, spreading out her scales. A gravid female’s appetite will increase dramatically (I feed gravid females every other day) until a week or so before giving birth, when she’ll quit eating entirely. Females usually start eating again the day after having babies. It takes eight to 12 weeks from successful mating to the birth of offspring. This timeframe depends on the temperature; warmer temperatures allow for quicker gestation.
The first time I bred these snakes, I was surprised at how early in the year my female gave birth. On April 25, I found 18 babies in the cage. The following year, kept under the same conditions, 22 babies were born on April 14.
Baby easterns usually start out by eating either guppies or small earthworms. I started mine on earthworms for the sake of convenience. It’s important to point out that “red worms,” sold as fish bait and sometimes found in the garden, are actually manure worms. Not only do garters find them distasteful, but they often regurgitate the manure worms they do eat and refuse to eat for several days afterward. These worms are easily identified by the light-colored rings that encircle their bodies; earthworms do not have these rings.
Young snakes are temporarily set up individually in 16-ounce, clear plastic containers to better monitor their food intake and behavior. Given that they were born in the summer and my house didn’t have air conditioning, no extra heat was supplied to the snakes. But for those of you that don’t have this kind of environment, provide an ambient temperature of 75 degrees. Although not possible to offer one in the 16-ounce temporary containers, once you move the snakes to a larger container you will need to offer a basking spot of 85 degrees by using a heating pad under one end of the enclosure. Other housing options would be plastic shoeboxes with tiny air holes or small plastic Kritter Keepers.
For substrate, I use paper towel, and I provide a water dish large enough to soak in. I’ve used baby food jars as well as the plastic cups mealworms are often sold in. A small rock is used to hold down the paper towel and also gives the snakes a place to hide under. I feed the babies earthworms every two to three days. Once a week I sprinkle multi-vitamin/calcium powder on the end of a worm.
After 6 months of age, the snakes can be moved into 5- or 10-gallon aquariums or Kritter Keepers of a similar size. At this time the snake will be about a foot long (some snakes grow faster than others). The young snakes graduate from eating earthworms to store-bought night crawlers that are cut in half, eating half a night crawler every two to three days. I continue to use paper towel substrate and add a hide box and a branch to each snake’s enclosure. For the hide box, I use the plastic containers that frozen dinners are sold in. They measure 4 inches wide, 6 inches long and 1 inch tall. I cut a hole in the side of the container so that the snake may comfortably enter and exit the hide.
After reaching a year in age, snakes are moved to a 10-gallon tank. The heating pad is replaced with an overhead incandescent light. Food consists of an entire, full-sized night crawler every two to three days. I later convert them to feeding on scented, defrosted pinky mice by rubbing the defrosted pinky with a night crawler. Once they begin taking scented pinkies, they can often be converted to unscented pinks in just three feedings by placing less and less night crawler slime on the pinkies at each feeding. This varies from snake to snake, though. As they get larger, they move up to fuzzy mice and then regular mice. Adults are fed every five days. Most eastern garter snakes never get to a size big enough to eat adult mice, though adults readily take weaned mice. I always feed food items that are no larger than the same width as the diameter of the fattest part of the snake. I only serve one mouse per feeding. Once the snakes start eating mice, I no longer add vitamin and mineral supplements to their food.
Want to read the full story? Pick up the May 2010 issue of REPTILES, or subscribe to get 12 months of articles just like this.
|
Conflict sensitivity applies to all types of work, across humanitarian, development and peacebuilding sectors. Experience indicates that no intervention is neutral in a conflict context (Goldwyn, 2013). Nonetheless, agencies operating in various relevant sectors have failed to consistently apply a conflict sensitive approach to their interventions. This has been the case, for example, in security and justice sector reform (Goldwyn, 2013), economic recovery (International Alert, 2008), and natural resource and land management (Goddard & Lemke, 2013). This may be due to the assumption that interventions, which aim to address conflict dynamics and promote statebuilding and peacebuilding, are automatically conflict sensitive and pro-peace. This, however, cannot be assumed (see Challenges – faulty assumptions). Achieving intended impacts is particularly challenging in conflict-affected and fragile contexts – given the complexities of understanding such environments, difficulties with access, and rapidly shifting dynamics – let alone attributing causality to specific projects.
While each sector has distinct issues and questions to address in context analyses and in conducting conflict sensitivity, a consistent aspect of conflict sensitive approaches and peacebuilding found across sectors is the need to understand societal fault-lines and tensions (dividers) and to seek opportunities to build bridges (connectors) that contribute to strengthening social cohesion. In addition, Hoffman (2003) emphasises that while it is important to identify and analyse dynamics within different sectors, it is equally important to understand how various sectors inter-relate and the implications of such dynamic interaction. This will improve the ability to assess and evaluate the positive or negative impact of particular interventions and the cumulative and spill-over effects of projects.
For discussion on social cohesion and rebuilding relationships in conflict contexts, see GSDRC guides on state-society relations and citizenship in situations of conflict and fragility (intra-society relations and civic trust and citizenship); and conflict (social renewal).
For further discussion on aspects of statebuilding and peacebuilding, see GSDRC guide on statebuilding and peacebuilding in situations of conflict and fragility.
The various needs of conflict-affected and fragile states extend beyond the reach of any one project or programme. The range of interventions should be coordinated within a multi-tiered framework that is consistent in terms of analysis, aims and methods to achieve them (Izzi & Kurz, 2009). Efforts to infuse conflict sensitivity into strategic and policy frameworks have been growing over the last 10 years. Recognising that poverty and conflict are closely interrelated, the World Bank implemented a 4 year programme in the mid-2000s aimed at improving the conflict sensitivity of country poverty reduction strategy frameworks. Key aspects include the need for good contextual analysis and the flexibility to respond quickly to changing situations and to produce alternative options. The United Nations too began exploring how to integrate conflict sensitivity into UN planning and programming, including using formal planning processes such as UN Development Assistance Frameworks (UNDAFs) as a strategic entry point for conflict sensitivity in post-conflict settings.
There are now various overarching policy frameworks that address conflict sensitivity, such as the ‘New Deal for Engagement in Fragile States’, developed through the forum of the International Dialogue for Peacebuilding and Statebuilding. The New Deal aims to mitigate risks from providing aid in conflict-affected and fragile contexts and emphasises the need for periodic country-led fragility assessments. The OECD’s ‘Principles for Good International Engagement in Fragile States and Situations’ also include the importance of context analysis and do no harm.
Strategies for conflict sensitive interventions should build on and integrate with overarching policy guidelines along with strategic programming and policy frameworks across various sectors. Translating policy guidelines into national policies and strategies, and implementing related organisational changes within donor governments, remains a challenge. In addition, Manning and Trzeciak-Duval (2010) emphasise that a key gap in coverage of the OECD Principles concerns the role of the private sector and economic growth. They stress the importance of improved coherence across various policy domains.
Through an examination of four cases (Liberia, Sierra Leone, Burundi and Afghanistan), McCandless and Tschirgi (2010) suggest that four criteria, each with their own challenges, are required for strategic frameworks to contribute to peacebuilding: 1) context analysis and context/conflict sensitivity; 2) enhanced capacity of national actors; 3) coherence, coordination and integration among actors and activities; and 4) mutual accountability amongst actors.
Manning, R. & Trzeciak-Duval, A. (2010). Situations of fragility and conflict: aid policies and beyond. Conflict, Security and Development, 10(1), 103-131.
McCandless, E. & Tschirgi, N. (2010). Strategic Frameworks that Embrace Mutual Accountability for Peacebuilding: Emerging Lessons in PBC and non-PBC Countries. Journal of Peacebuilding and Development, 5(2), 20-46.
Cordaid. (2012). Integrating gender into the New Deal for Engagement in Fragile States. The Hague: Cordaid.
Journal of Peacebuilding and Development. (2010). Special Issue on Advancing Coherence and Integrating Peacebuilding in Strategic Policy Frameworks 5(2).
OECD. (2011). International engagement in fragile states: Can’t we do better? Paris: OECD.
World Bank. (2007). Toward a conflict-sensitive poverty reduction strategy. A retrospective analysis: 2nd edition. Washington, DC. World Bank.
United Nations. (2005). Lessons learned workshop: integrating conflict sensitivity into UN planning and programming. 23-24 May, Turin, Italy.
- Goldwyn, R. (2013). Making the case for conflict sensitivity in security and justice sector reform programming. Care International.
- International Alert. (2008). Building a peace economy in Northern Uganda: Conflict-sensitive approaches to recovery and growth (Investing in Peace, Issue No. 1). London: International Alert.
- Goddard, N. & Lempke, M. (2013). Do no harm in land tenure and property rights: Designing and implementing conflict sensitive land programs. Cambridge, MA: CDA.
- Hoffman, M. (2003). PCIA methodology: Evolving art form or practical dead end? In A. Austin, O. Wils, & M. Fischer (Eds.), Peace and conflict impact assessment: Critical views on theory and practice. Berlin: Berghof Research Centre for Constructive Conflict Management.
- Izzi, V. & Kurz, C. (2009). Potential and pitfalls of conflict-sensitive approaches to development in conflict zones: Reflections on the case of North Kivu. (Paper presented at ISA Annual Convention, New York, 15-18 February).
|
This paper MUST relate to a topic where there are competing views (two sides to an issue).
3. Do not cite Wikipedia or any “wiki” site as a source of information. Only use sources that provide the author’s name. For the purposes of this paper, “scholarly” sources are only journal articles. Please use sources that are called “Journal of ________” or “________ Journal” and are not newspapers (since some newspapers contain “Journal” in their title). You will need to use the library databases in order to write this paper – you won’t be able to find journals on standard websites. You may only use journal articles (at least 6) as sources of information for this paper – no websites, no textbooks, no encyclopedias, no blogs.
4. Use the headings listed below to organize your paper. You don’t need an abstract. Please name your paper “YourLastNamePaper”.
Headings that MUST be used to organize your paper:
Title – Should clearly state the 2 sides of the issue – ______ vs. ______
Introduction (briefly summarize the issue)
Position 1 – Within these “position” sections, you should begin the section with ONE sentence that summarizes the position. Then discuss each position using at least 3 journal articles within each position section (at least 6 total).
My position – Your first sentence should summarize your position, and you MUST pick one side to support. Summarize your position based on the information presented above.
Experimental Research Idea (just a primary heading above the 2 sections below)
Purpose of the study/Significance of results – Why conduct this study? What issue will this study resolve?
Procedure – You must propose an EXPERIMENT (make sure you know what this is). Be very specific within your description – be sure to define all of your variables (IV and DV) and explain how you would conduct the study. If your study does not utilize an experimental design, the highest grade you will earn in this section is 2 out of 4 points.
References – This should include at least 6 journal articles (the 3 from each “position” section) at a minimum. Do not use the textbook or any websites as sources of information.
5. DO NOT PLAGIARIZE!!! – if I find that you presented anyone else’s work as your own, the minimum consequence will be a 0 on the paper. If you take information directly off of a source and don’t cite that source, that is called plagiarism (look at the syllabus to see other issues related to academic dishonesty). All sources used while writing this paper must be cited both within the text and within a references section at the end of the paper.
|
Welcome to the August 19, 2011 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
IBM Pursues Chips That Behave Like Brains
Associated Press (08/18/11) Jordan Robertson
IBM researchers announced the development of two prototype chips that process data similarly to the way humans digest information. IBM says the chips represent a major breakthrough in a six-year-long project involving 100 researchers and about $41 million in funding from the U.S. Defense Advanced Research Projects Agency. The prototype chips are based on parallel processing, which is important for rendering graphics and analyzing large amounts of data. The chips' ability to adapt to types of information they were not specifically programmed to expect is a key feature, according to University of Wisconsin-Madison professor Giulio Tononi, who worked with IBM on the project. The new chips have parts that behave like digital neurons and synapses, and each core has computing, communication, and memory functions, according to IBM Research's project leader Dharmendra Modha. "The key, key, key difference really is the memory and the processor are very closely brought together," Modha says.
Latest in Web Tracking: Stealthy 'Supercookies'
Wall Street Journal (08/18/11) Julia Angwin
It is almost impossible for computer users to detect new, legal techniques employed by major Web sites that track people's online activities through the installation of files called supercookies. Researchers at Stanford University and the University of California, Berkeley say that supercookies can re-create users' profiles after they have erased regular cookies. Supercookies are stored in different places than regular cookies, such as inside the Web browser's cache of previously visited Web sites. The Berkeley researchers determined, for example, that Hulu.com was using supercookie methods to store tracking coding in files related to Adobe Systems' Flash player, which enables numerous online videos. They also discovered that Hulu's Web site featured code from Kissmetrics, a firm that analyzes Web site traffic data, and which was embedding supercookies within users' browser caches and into files associated with the latest iteration of HTML. Meanwhile, a Stanford researcher has identified a history-stealing tracking service on Flixster.com, which uses software to mine a person's Web browser history on their computer to try to ascertain the sites the person has visited. The company then uses that information for targeted advertising.
Romance vs. STEM
Inside Higher Ed (08/16/11) Josh Jaschik
State University of New York at Buffalo professor Lora Park recently completed a series of research projects, the results of which suggest that when college-age women think about romance, they become less interested in studying science, technology, engineering, and math (STEM) fields. However, college-age men can be interested in romance without any impact on their engagement in STEM. Further study of these results could help confront the gender gap in STEM fields, according to Park. In one experiment, participants were shown images related to love or images that related to intelligence goals. Women who were exposed to the romantic images had less positive feelings about STEM fields when surveyed later, while men who were exposed to the same images had the same feelings about STEM as before the test. "This is about the cumulative impact of romantic images and scripts for women's lives" that women are exposed to from very young ages, Park says. The key is to let women "think about their future possible self" not in ways that are dictated by "the script" they have picked up over the years, but by their potential, according to Park.
Teraflop Turf: Bringing Back India's Supercomputing
Economic Times of India (08/18/11) Hari Pulakkat
Bringing India's supercomputing efforts back to a vanguard position is the goal of an ambitious program recommended by the Scientific Advisory Committee to the Prime Minister. The Indian Institute of Science's N. Balakrishnan has called for a large effort involving the development of several supercomputers of varying speeds, and the Planning Commission has agreed to fund the initiative in principle. The project would primarily use commercially available chips but develop state-of-the-art technologies in many other fields. It also would entail a major engagement with the private sector. The project would network Indian scientists and engineers at an unprecedented scale, and make massive computing resources easily available to India's scientific community and the private sector upon completion of its initial phase. "The success of such a project would also depend on its commercial viability," says Vijay Bhatkar of the Center for the Development of Advanced Computing. "So they would need to be general purpose machines capable of solving industrial problems as well." Members of the Indian supercomputing community are calling for a judicious combination of imports and domestic development, while some scientists are attempting to get Indian researchers in the habit of using supercomputers around the world.
Staying in Shape: How the Internet Architecture Got Its Hourglass Shape and What That Means for Future Internet Architectures
Georgia Tech Research News (08/15/11) Abby Robinson
Georgia Tech researchers have developed EvoArch, a computer model that describes the evolution of the Internet's architecture, suggesting that similar protocols compete with each other, driving some of them to extinction. Understanding the evolutionary processes behind the Internet's architecture could help computer scientists design new protocols. However, EvoArch suggests that unless the new Internet avoids widespread competition, it will evolve an hourglass shape similar to today's Internet. The current Internet architecture consists of six layers. Layers at the top and bottom contain many items, while the middle layers do not. EvoArch showed that even if future Internet architectures are not designed in the shape of an hourglass, they will change into that shape over time. The model showed that the top layers are so specialized that they do not compete with each other and rarely go extinct. "To avoid the ossification effects we experience today in the network and transport layers of the Internet, architects of the future Internet need to increase the number of protocols in these middle layers, rather than just push these one- or two-protocol layers to a higher level in the architecture," says Georgia Tech professor Constantine Dovrolis.
One of the challenges of virtually reproducing the look of fabric is replicating the nuanced ways it reflects light, but Cornell University researchers have devised a technique that taps computerized tomography (CT) technology to analyze fabric structure at a high resolution and then ports the output into computer-generated imagery (CGI). The CT image is built out of X-ray images taken from many different angles. CT permits the recording of the three-dimensional structure of fibers, with all of their defects, and several small fabric scraps can then be stitched together into a full garment within a computer. Light reflection is more realistic for the CGI garment because the internal structure of each piece of fabric matches that of an actual scrap of cloth. The Cornell researchers demonstrated realistic renderings of felt, gabardine, silk, and velvet at the recent ACM SIGGRAPH 2011 conference. One researcher says the technology could be potentially applicable to online retailing.
PARC Hosts Summit on Content-Centric Nets
EE Times (08/12/11) Rick Merritt
Xerox PARC is planning an event focused on content-centric networking, a new approach to organizing Internet traffic that could provide greater security and faster connections to popular content, but will require new protocols and changes in hardware design. "We think it's definitely a concept that will change how people design high performance hardware," says PARC engineer Jim Thornton. PARC recently won a U.S. National Science Foundation grant to develop the concept, working with a handful of universities as part of the Named Data Network project. "The sense we have is this is doable, it won't kill us, and forwarding hardware has always stepped up to the challenges new application demands," Thornton says. In a separate European project, Alcatel-Lucent, Orange, and several French universities are working on similar ideas. The September PARC meeting aims to gather researchers to share their work on the software. Content-centric networks could enable the pervasive caching of popular Web content based on actual demand. It also could lead to new levels of security and privacy, as content packets could carry digital signatures that would authenticate authorized users and verify that no one has tampered with the data.
NAND Flash Can Verify a Device's Identity
IDG News Service (08/12/11) Stephen Lawson
Researchers at the University of California, San Diego (UCSD) and Cornell University have developed software to test variations in flash behavior that are unique to each chip, allowing a company to verify that a flash chip is authentic by comparing it to results from when it left the factory. The technology also could be used to prevent counterfeiting devices such as cell phones and tablets that use flash chips, or to allow governments to find bugged devices on spies, according to UCSD professor Steven Swanson. He says that testing flash silicon as a proxy for an entire device provides an authentication technique that does not require hardware changes. The only requirement is firmware and an infrastructure for testing devices at key points in the supply chain. The system uses physically unclonable functions, which are variations in manufacturing that are unique to each element of a flash chip. The technique has the advantage of using immutable characteristics of the chip, so it could be carried out and repeated at any stage when a supplier or manufacturer wanted to verify the hardware, notes analyst Roger Kay.
Robot 'Mission Impossible' Wins Video Prize
New Scientist (08/12/11) Melissae Fellet
Free University of Brussels researchers have developed Swarmanoid, a team of flying, rolling, and climbing robots that can work together to find and grab a book from a high shelf. The robot team includes flying eye-bots, rolling foot-bots, and hand-bots that can fire a grappling hook-like device up to the ceiling and climb the bookshelf. Footage of the team in action recently won the video competition at the Conference on Artificial Intelligence. The robotic team currently consists of 30 foot-bots, 10 eye-bots, and eight hand-bots. The eye-robots explore the rooms, searching for the target. After an eye-bot sees the target, it signals the foot-bots, which roll to the site, carrying the hand-bots. The hand-bots then launch the grappling hooks to the ceiling and climb the bookshelves. All of the bots have light-emitting diodes that flash different colors, enabling them to communicate with each other. Constant communication enables Swarmanoid to adjust its actions on the fly, compensating for broken bots by reassigning tasks throughout the team.
Study Predicts Ward Patients at Risk of Critical Event
Australian and New Zealand College of Anaesthetists (08/12/11) Meaghan Shaw
Australian and New Zealand College of Anaesthetists researchers recently completed a computer analysis of routine blood tests and found that they can predict the likelihood of a patient having cardiac arrest or dying up to 12 hours before it occurs. The researchers developed software that analyzed six million blood tests taken from about 500,000 patients over a five-year period to determine the risk of death, cardiac arrest, or intensive care admission. The program was able to predict, on average, a critical event 10.2 hours before it occurred in ward patients, and 11.9 hours in emergency department patients. The software sends a message to the patient's doctor if it predicts the patient is likely to die, have a cardiac arrest, or need intensive care within the next 24 hours. The researchers are planning to install the program at Austin Health hospital to continuously collect information from blood tests. "If it looks like this is potentially useful, then the next step is a randomized control trial with patients who are in the risk zone, and half of them will receive standard care and the other half will be linked in to the alerts and we're going to see if it makes a difference," says Austin Health professor Rinaldo Bellomo.
Researchers Fight Cholera With Computer Forecasting
OSU News (08/11/11) Pam Frost Gorder
Ohio State University researchers are working with the U.S. Centers for Disease Control and Prevention (CDC) to develop a computational model that could forecast where cholera outbreaks are likely to occur. The CDC wants to know if the disease is spreading through contaminated water or through human contact. The researchers hope to identify typical patterns of cholera outbreaks and the regions that are important to controlling the spread of the disease. They found that when a new strain of cholera invades a country, it typically starts with an initial wave in the fall and then returns in much larger outbreaks the following summer, a pattern that held true during a recent outbreak in Haiti. "There are lots of different factors to consider--environmental conditions affecting the ability of the cholera bacteria to persist in water bodies, variation in water quality and sanitation in different locales, infection-derived immunity, seasonal drivers such as rainfall," says Ohio State professor Joseph Tien. Modeling the Haiti outbreak is proving to be difficult because hospitals, the United Nations, and UNICEF are all providing data differently, forcing the researchers to develop algorithms that can fit all the diverse data together.
The Public, Playing a Molecule-Building Game, Outperforms Scientists
Chronicle of Higher Education (08/12/11) Rachel Wiseman
Researchers at Stanford and Carnegie Mellon universities are using EteRNA, a Web-based crowdsourcing game, to understand how RNA molecules fit together. The researchers, led by Stanford's Rhiju Das and Carnegie Mellon's Adrien Treuille, designed EteRNA so that the intellectual legwork behind RNA design could be completed by about 26,000 novice scientists. Players are given a puzzle design, such as an RNA molecule in the shape of a star or a cross, which they must fill in with components, representing nucleotides, to produce the most viable solution. The community of players then votes on the blueprint that they think is most likely to succeed. The researchers select the highest rated blueprints and synthesize them, reporting the results back to the crowd. EteRNA has produced results that are more effective than computer-generated arrangements. "EteRNA players are extremely good at designing RNAs, which is all the more surprising because the top algorithms published by scientists are not nearly so good," Treuille says. The researchers believe the program shows great promise for integrating machine learning, experimental data, and crowdsourcing to generate new ideas.
Science News (08/13/11) Vol. 180, No. 4, P. 26 Alexandra Witze
Two-dimensional graphene is a material that could yield novel electronics through its unusual physical properties. The bonds between the carbon atoms in graphene afford the material significant strength and flatness, while the electrons behave as if they have zero mass, unstoppable and moving at a constant velocity. Placing a graphene sheet on top of different substances yields different electronic effects, while bilayer graphene makes building new electronic devices possible because it creates a band gap that enables the electron flow to be controlled. Devices that exploit graphene's unique properties can now be designed due to experiments in which graphene was placed atop a boron nitride substrate, which allows the electrons to flow without interference. A new age of graphene electronics could be ushered in with IBM researchers' creation of the first integrated circuit fashioned completely from the material. The inexpensiveness of graphene could make graphene circuits popular for portable devices such as smartphones. Graphene also is highly moldable, making it applicable for touchscreens, solar cells, and biological sensors.
Abstract News © Copyright 2011 INFORMATION, INC.
|
Guide to clamps & vices
Different types of hand clamps & which ones suit what kind of project.
What you'll need
This activity has been provided by
Useful items from our shop
- Hobbyists Clamp
- F Clamps
- Clamp - Ratchet
- Clamps - Pack of 4 - 9cm
- Conker Clamp (Kids at Work)
- Vice - 50mm (Kids at Work)
Consider the environmental impact of preparing, carrying out & completing this activity. Could this impact be reduced? Specific considerations for this activity could include:
Health & Safety Considerations
Follow your usual operating procedures and carry out appropriate risk benefit assessments.
Some considerations particular to this activity include:
Spring clamps tend to be small and are useful when working with smaller, delicate workpieces such as wooden discs. They can be used to hold small objects onto a work surface but are also very useful for holding two pieces together that are being glued.
Spring clamps have two jaws, two handles and a sprung pivot. Despite their small size they can apply a lot of pressure. They also require quite a bit of pressure to open so younger children will need support to use these clamps.
Spring clamps have plastic or rubber pads to protect the workpiece.
This is a clamp with a ratchet mechanism and one-finger quick release.
The ratchet mechanism allows for a stronger grip than a spring clamp.
G-clamps are useful in woodwork projects. They have a fixed frame holding a threaded rod which attaches to a flat end/swivel shoe. The clamp is tightened by turning a T-handle attached to the rod.
The fixed frame means these clamps have a limited range (25-150mm) and a smaller jaw size compared to F-clamps.
Your choice of clamp will depend upon the size of your workpiece and workbench. The jaw size will be written in the product description.
The gripping surface of the frame and screw/swivel edge may cause indentations/marks in the wood you are working with. To avoid this use a scrap piece of wood as a protective layer between the clamp and the workpiece.
F-clamps work in a similar way to G-clamps except that the frame is not fixed. One arm of the frame slides along a bar to give a far greater clamping range (up to 600mm) without requiring a longer threaded rod. The jaw size will be written in the product description.
On tightening, the sliding arm is securely held in place, as it torques onto ridges running along the bar.
The F-clamp is also tightened with a T-handle and, like the G-clamp, you will need to protect your workpiece from indentations using scrap wood.
A variation of the F-clamp is the Rapid Bar Clamp. It differs in that you must depress a button to move the sliding arm, and tighten it by squeezing a handle.
This is a hand-held clamp suitable for small items like conkers and acorns. The clamp allows you to hold the item safely using one hand whilst using a softwood hand drill or palm drill to make a hole.
Once the item is secured, rest the clamp on the work surface - hold the clamp with one hand and drill vertically down with the other.
A vice or bench vice can be clamped or bolted to a workbench.
A vice allows workpieces to be quickly and easily clamped - in a vertical plane for the vice pictured - others may hold items horizontally. Some vices are quite small and portable.
Vices are ideal for tasks such as sawing, drilling and filing. The vice jaws can be adjusted to securely hold the object into place.
Disclaimer: Muddy Faces cannot take any responsibility for accidents or damage that occurs as a result of following this activity.You are responsible for making sure the activity is conducted safely.
|
A rack server, also referred to as a rack-mount server, is the standard-size server used for mounting inside a datacenter server cabinet or rack-frame infrastructure. A single rack server is 19″ wide by 1.75″ high. This is referred to as a 1U rack server, where "1U" is short for one rack unit. Rack servers are most commonly available in 1U, 2U and 4U sizes. A standard rack-mount server cabinet is 42U in height. The front panel of a rack server is mounted to the vertical cabinet rail posts using typical nuts and bolts.
Due to their low height and ability to be mounted on top of one another with minimal space in between, rack servers offer much higher datacenter density than tower servers, though less than blade servers. As with tower servers, rack servers are connected using traditional network cables and power cords.
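The rack-unit arithmetic above is easy to sanity-check. A minimal sketch in plain Python, using only the figures given in the text (the helper name is ours, not any standard API):

```python
# Rack-unit arithmetic: a rack unit (1U) is 1.75 inches tall, and a
# standard cabinet holds 42U, per the figures in the text above.
RACK_UNIT_INCHES = 1.75
CABINET_UNITS = 42

def servers_per_cabinet(server_units: int, cabinet_units: int = CABINET_UNITS) -> int:
    """How many servers of a given U-height fit in one cabinet."""
    return cabinet_units // server_units

print(servers_per_cabinet(1))            # 42 one-U servers
print(servers_per_cabinet(2))            # 21 two-U servers
print(servers_per_cabinet(4))            # 10 four-U servers (2U left over)
print(CABINET_UNITS * RACK_UNIT_INCHES)  # 73.5 inches of usable rail height
```

This is the density argument in miniature: halving server height doubles the count per cabinet, which is why 1U rack servers pack so much more tightly than towers.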
|
Apollo Program – The Greatest Journeys In History
The Apollo Program was an American program, run by NASA, with the goal of landing a man on the Moon, which it achieved with Apollo 11 on July 21st, 1969. There were a total of six successful Moon landings, resulting in 12 men walking on the Moon and over 380 kilograms of lunar samples being returned to Earth. The Apollo Program ended in 1972 after the final flight of Apollo 17.
What Spacecraft And Rockets Did The Apollo Program Use?!
The Apollo Command Module (CM) was quite different from the previous spacecraft used during Project Mercury and Project Gemini. It was larger so that it could carry 3 astronauts that were needed for the missions to the Moon; they also had room to move around.
The Apollo Moon missions also required a special spacecraft to land on the Moon, called the Lunar Module (LM). After successfully landing and returning the astronauts to the CM in orbit around the Moon, the LM was left behind to eventually crash onto the Moon’s surface.
Two different rockets were used during the Apollo Program, the smaller Saturn IB and the 3-stage, 36-story high, Saturn V which was powerful enough to reach the Moon!
Facts About The Apollo Spaceflights
Thanks to the successes of Project Mercury and Project Gemini, the Apollo flights could move ahead with all the technology and lessons needed for landing on the Moon. Unfortunately, the program didn’t start well: a tragic accident during training killed three astronauts when their Apollo 1 capsule caught fire.
The first manned mission beyond Earth’s orbit to the Moon was Apollo 8. It orbited the moon 10 times (allowing humans to see the far side of the Moon for the first time), before returning the astronauts safely to Earth. The big prize though, the first Moon landing, occurred on the 20th of July 1969 by the Apollo 11 mission. The crew of Apollo 11 was Neil Armstrong, Michael Collins and Buzz Aldrin. When Neil Armstrong became the first person to walk on the moon he famously said, "That's one small step for a man; one giant leap for mankind."
During the last three Apollo missions, the astronauts drove the Lunar Rover on the Moon to explore more of the surface. The lunar rovers fitted in a storage hatch in the LM and were left on the Moon when the astronauts returned to Earth.
One unsuccessful mission (Apollo 13) and five more successful Moon landings occurred between November 1969 and the final mission in December 1972.
Summary of the Apollo missions (no missions or flights were ever designated Apollo 2 or 3):
Apollo 1 – Did not fly as a fire during training destroyed the capsule and killed the three astronauts Gus Grissom, Ed White & Roger Chaffee.
Apollo 4 to 6 – Three unmanned flights to test the new enormous Saturn V rocket and Apollo spacecraft in space.
Apollo 7 – First manned flight of the Apollo CM and Saturn IB rocket in Earth orbit.
Apollo 8 – The first crewed flight of the Saturn V rocket and the first time humans orbited the Moon.
Apollo 9 – First flight of the Lunar landing Module in Earth orbit and tested life support equipment.
Apollo 10 – A full dress rehearsal for a lunar landing, flying the LM to within 47,000 feet (14.3 km) of the lunar surface.
Apollo 11 – The most famous voyage in history; the first manned landing on the Moon. Achieved the first moonwalk and collected 21.5 kgs of geological samples.
Apollo 12 – Second landing on the Moon near the Surveyor 3 probe.
Apollo 13 – An infamous mission due to a near-fatal failure of the Service Module which resulted in aborting the mission and returning to Earth.
Apollo 14 – Third successful landing commanded by Alan Shepard and spending over 9 hours walking on the moon!
Apollo 15 – First Apollo mission to use the lunar rover!
Apollo 16 – The 5th landing of the Apollo campaign, spending over 20 hours on moonwalks.
Apollo 17 – The first and only Saturn V night launch and the final manned Moon landing.
Apollo 18 to 20 – These planned missions were cancelled due to funding cuts.
Check out each mission’s cool insignia here!
What Next For NASA?
With the completion of the Apollo Program’s goal of landing a man on the Moon by the end of the 1960s (and winning the Space Race in the process), NASA turned its focus to the Skylab missions, learning to live in space, and developing the Space Shuttle.
|
Steel is primarily an alloy of iron and carbon, and it is known to have been in use since 1800 BC. The right concentration of carbon gives iron immense tensile strength at a very low cost, which is the main reason steel is one of the most widely used alloys in production today.
Steel manufacturing begins with the smelting of iron ore, followed by the removal of impurities like phosphorus, silica, and sulphur. In the smelted metal, the concentration of carbon is higher than the level that gives steel its unique properties, so steel manufacturers reprocess the molten metal to reduce the carbon content to the required amount.
It is also at this point that other elements can be added to the smelted compound of iron ore that creates distinct varieties of steel. Each one of the types of steel can be used for specific industry applications ranging from construction to structure reinforcement. In order to keep a track of the salient features of steel, manufacturers have created a nomenclature system that is both systematic and exhaustive.
Keeping track of the different kinds of stainless steel on the market can appear daunting at first, but once you understand the naming system and the variables that determine the properties of steel, it becomes much simpler.
In order to understand the designations and nomenclature system of stainless steel, we must first take a look at the different types of stainless steels that are available on the market. The classification of different steel is done based on the arrangement of its molecules, also referred to as its crystalline structure.
The three main types of stainless steel, classified by crystalline structure, are austenitic, ferritic, and martensitic.
Type 316 stainless steel is commonly used for engineering applications, especially in construction and fabrication, due to its corrosion-resistant properties. Because of its widespread use, Type 316 is also manufactured in a second grade, differentiated by the letter ‘L’ in its designation. The L denotes the low carbon content of the steel. 316L is best known among fabricators for being resistant to cracking after the welding process is completed. This makes 316L the preferred choice of fabricators who build metallic structures for industrial applications.
There are other grade denotations such as F, N, H, and several others besides L that are used by tweaking composition specifications of carbon, manganese, silicon, phosphorus, sulphur, chromium, molybdenum, nickel, etc. for desired properties.
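The suffix scheme described above can be sketched as a small lookup table. Note the caveats: only ‘L’ (low carbon) is defined in the text; the meanings shown for ‘H’, ‘N’, and ‘F’ are common industry conventions offered as illustrative assumptions, and the `describe_grade` helper is ours, not any standards-body API.

```python
# Illustrative mapping of stainless-steel grade suffixes. Only 'L'
# (low carbon) is defined in the article; the other entries are
# common industry meanings and should be treated as assumptions.
GRADE_SUFFIXES = {
    "L": "low carbon (improved weldability, e.g. 316L)",
    "H": "higher carbon (assumed: strength at high temperature)",
    "N": "nitrogen added (assumed: higher strength)",
    "F": "free-machining (assumed: sulphur added)",
}

def describe_grade(designation: str) -> str:
    """Split a designation like '316L' into base grade and suffix note."""
    base = designation.rstrip("".join(GRADE_SUFFIXES))  # strip trailing suffix letters
    suffix = designation[len(base):]
    note = GRADE_SUFFIXES.get(suffix, "standard composition")
    return f"{base}: {note}"

print(describe_grade("316L"))  # 316: low carbon (improved weldability, e.g. 316L)
print(describe_grade("316"))   # 316: standard composition
```

A table-driven lookup like this mirrors how the nomenclature itself works: a numeric base grade fixes the alloy family, and a trailing letter tweaks one compositional variable.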
The typical applications of steel include: Food preparation equipment, laboratory equipment, chemical containers for transport, springs, heat exchangers, screens for mining, coastal architectural panelling, railings, trim, boat fittings, quarrying and water filtration.
Steel is prone to cracking as it cools wherever welding is required. The high temperatures of the welding process induce what is called ‘hot brittleness’ in the steel as it cools. This makes structures built using high-carbon steel more susceptible to damage through the formation of cracks in the welded area of the metal.
The low carbon content of 316L provides an effective solution to this widespread engineering problem of 316 stainless steel. This little variation in the application can make a significant difference to your operational costs and quality assurance parameters as a business organisation.
|
The term “detox” has become very fashionable these days, and marketing specialists understand this all too well.
The term has even been used to sell vacations. So, what’s the deal with this alluring abbreviation that regularly makes the front page of women’s magazines?
The word “detox” is often used to sell products, even unsuitable ones. In reality, detoxifying the body means allowing it to restore its normal functions, so you need to start by removing fat, sugar, and alcohol from your diet. The abbreviation “detox” is popular shorthand for purifying your liver, kidneys and bowels by ridding yourself of pollutants, pesticides, and food additives. So, how can we detoxify? You should begin by eating foods that help the liver and kidneys in their work of eliminating undesired substances. Natural products, particularly fruits and vegetables, are good foods for this. Here are some examples.
- Cabbage is rich in antioxidants and low in calories, and it helps to cleanse the liver.
- Carrots are rich in antioxidants, carotenoid, and minerals, facilitating the removal of toxins.
- Celery is rich in water, vitamin C, and potassium, and it has great diuretic and depurative properties.
- Onions contain fiber, potassium, and sodium, and they help the liver to drain.
- Leeks are rich in water, fiber, and potassium.
- Lemons are rich in vitamin C, and they regulate the body’s acidity via their alkalinizing effect.
- Pineapple is rich in potassium, manganese and vitamin C, and it promotes good digestion.
- Kiwi is rich in citric acid, much like lemon, and it promotes the secretion of bile, enabling fats to be digested better.
- Bananas are rich in vitamin B6, potassium, manganese, and vitamin C, as well as being loaded with fiber. Fiber helps to avoid overeating, reduces cholesterol, and controls fat absorption. By eating bananas and drinking plenty of water, you’ll increase your fiber intake and quickly cleanse your colon.
- Avocado is very rich in antioxidants, making it a good, natural detoxifying fruit.
Herbal teas can also be your friends, because they allow you to satisfy your thirst while consuming herbs that help the body to detox, such as fennel, artichoke, black radishes, burdock, and so on.
The best time to detox is definitely during seasonal transitions. During this time, you should return to a natural diet and avoid processed foods and excessive fat and protein. Soothe your body and digestive system by avoiding allergenic foods, dairy products, gluten, and eggs. You should keep your body well hydrated, so think about drinking more water and alkaline drinks that are rich in minerals. Try to balance your diet, exercise well, and say yes to a healthy lifestyle.
Here’s a simple three-day detox program for you. (All products should be organic.)
Morning Tea: Half a cucumber, a slice of green apple, and a slice of lemon.
Breakfast: Smoothie made from 1 cup water, 2 bananas, 1 cup raspberries, 1 Tbsp almond butter, and 1 Tbsp lemon juice.
Lunch: Smoothie made from 1 cup water, 1 cucumber, 1 cup kale leaves, half an avocado, 4 celery stalks, and 1 cup pineapple.
Afternoon Snack: Smoothie made from 1 cup water, 1 ½ bananas, 1 Tbsp lemon juice, and 1 tsp cinnamon.
Dinner: Smoothie made from 1 cup coconut water, 1 cup mango, 1 cup raspberries, 1 cup kale leaves, and ¼ tsp cayenne pepper.
Water for the entire day: 3 pints of water, 1 sliced lemon, 1 cinnamon stick.
Meditate at least once a day.
|
Just like every other organism on Earth, plants’ ultimate goal is to survive and reproduce. To achieve this, they have to make trade-offs over where and how to allocate their finite set of resources. Whether they put their resources and energy into growth, reproduction, or maintenance is all part of their so-called plant strategy. With a new framework, Ph.D. candidate Jianhong Zhou and her supervisors found that not all of the currently recognized strategies represent a plant strategy in real life.
“Plant strategies are important, for example, to predict how plants respond to projected changes in future climates,” says Zhou of the Institute of Environmental Sciences. “However, to date, it remained unknown whether the ways we used to describe plant strategies reflected these strategies in reality. That has consequences for the models we use to look at the functioning of ecosystems. For example, models to predict how well these ecosystems will function in a changing climate. Our novel framework allows improving these models and the projections they give.”
Relationships between different traits
One way to determine a plant’s strategy is to look at trait-trait relationships. Traits are attributes of an organism, such as weight or lifespan, and can be measured on individuals. The values of these traits can differ between individuals within a species, and they can also differ between species. When you examine how the values of two of these traits relate to each other, you get the trait-trait relationship.
To date, scientists have mostly looked at trait-trait relationships between different species. “When we find a strong trait-trait relationship between species, we commonly regard it as a plant strategy,” says Zhou. “For example, there is a positive relationship between the leaf lifespan and the leaf mass per area of plants. This indicates that plant species that produce leaves with a high mass per area generally also have leaves that last longer.”
According to Zhou, this interesting relationship has been interpreted as a plant strategy in which a species invests many resources in the development and growth of its leaves to make good use of them. “That way, the leaves will last long enough to do sufficient photosynthesis.”
Not always representing a strategy
In reality, however, it remains unclear whether a trait-trait relationship between species always represents a plant strategy. Zhou: “A strong trait-trait relationship between species could also be caused by common environmental drivers. The availability of water and nutrients, in particular, affects many traits. When, for example, the nitrogen content in the soil increases, it could affect leaf nitrogen content and specific leaf area independently, making both rise. But that does not necessarily mean that these two traits are physiologically or eco-evolutionarily related. That way, the traits would be correlated by coincidence, without representing a strategy.”
Zhou and her colleagues therefore came up with a new approach to differentiate between trait-trait relationships that represent plant strategies and relationships caused by “coincidence.” “We proposed a new framework,” she says. “If a trait-trait relationship really represents a plant strategy, we would expect to see the same trait-trait relationship within a species; so in that case, we look at how the trait values vary between different individuals of the same species. If a trait-trait relationship between species is caused by coincidental drivers, we would not see that same strong trait-trait relationship within species,” says Zhou. “This was the case for the former example of specific leaf area and leaf nitrogen.”
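The framework described here can be illustrated with a toy calculation (this is our own sketch, not code or data from the study; the trait names, species counts, and the "soil nitrogen" driver are all invented for illustration). The idea is to compare the correlation of two traits computed across species means with the correlation computed within species, after centering each species on its own mean:

```python
import numpy as np

def trait_correlations(t1, t2, species):
    """Pearson r between two traits, computed two ways:
    between species (on the species means) and within species
    (pooled across individuals after centering each species on
    its own mean)."""
    species = np.asarray(species)
    labels = np.unique(species)
    # Between-species: correlate the per-species trait means.
    m1 = np.array([t1[species == s].mean() for s in labels])
    m2 = np.array([t2[species == s].mean() for s in labels])
    r_between = np.corrcoef(m1, m2)[0, 1]
    # Within-species: remove each species' mean, then pool individuals.
    c1 = np.concatenate([t1[species == s] - t1[species == s].mean() for s in labels])
    c2 = np.concatenate([t2[species == s] - t2[species == s].mean() for s in labels])
    r_within = np.corrcoef(c1, c2)[0, 1]
    return r_between, r_within

# Toy data: a species-level "soil nitrogen" driver shifts both traits
# between species, while individuals within a species vary independently.
rng = np.random.default_rng(0)
n_species, n_ind = 20, 30
driver = rng.normal(size=n_species)
species = np.repeat(np.arange(n_species), n_ind)
sla = driver[species] + rng.normal(scale=0.5, size=species.size)
leaf_n = driver[species] + rng.normal(scale=0.5, size=species.size)

r_between, r_within = trait_correlations(sla, leaf_n, species)
# r_between comes out strongly positive while r_within stays near zero:
# under the framework, that pattern flags the between-species relationship
# as driven by a shared environmental factor rather than a strategy.
```

A trait pair reflecting a genuine strategy would, by contrast, show a strong correlation at both levels.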
More accurate predictions in times of climate change
“This knowledge is especially important for making models that predict how ecosystems and the processes in these ecosystems will change in future climates,” says Zhou. “Due to climate change, we may experience climate conditions that we have never seen before. This would inevitably affect the environmental drivers we talked about: two environmental drivers that are only linked through coincidence, but that independently affect a plant trait.
Under different circumstances, the way in which they affect the plant trait might change. The coincidental trait-trait relationships caused by these drivers would then break down. This means our models, involving these relationships, would no longer make accurate predictions. By determining which trait-trait relations these might be, our new method can tackle this problem.”
The research was published in New Phytologist.
Jianhong Zhou et al, Global analysis of trait–trait relationships within and between species, New Phytologist (2021). DOI: 10.1111/nph.17879
The strategy of plants: It's all about balancing traits (2021, December 28)
retrieved 28 December 2021
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
|
Above: generative cities and architecture by Aranda & Lasch
Futurist Chris Arkenberg outlines a possible scenario for urban planning and architecture:
As complex ecosystems, cities are confronting tremendous pressures to seek optimum efficiency with minimal impact in a resource-constrained world. While architecture, urban planning, and sustainability attempt to address the massive resource requirements and outflow of cities, there are signs that a deeper current of biology is working its way into the urban framework.
Innovations emerging across the disciplines of additive manufacturing, synthetic biology, swarm robotics, and architecture suggest a future scenario when buildings may be designed using libraries of biological templates and constructed with biosynthetic materials able to sense and adapt to their conditions. Construction itself may be handled by bacterial printers and swarms of mechanical assemblers.
Full Story: Fast Coexist: Cities Of The Future, Built By Drones, Bacteria, And 3-D Printers
This reminds me of the recent sci-fi short story “Crabapple” by Lavie Tidhar:
Neighborhoods sprouted around Central Station like weeds. On the outskirts of the old neighborhood, along the Kibbutz Galuyot Road and Siren Road and Sderot Menachem Begin, the old abandoned highways of Tel Aviv, they grew, ringing the immense structure of the spaceport rising high into the sky. Houses sprouted like trees, blooming, adaptoplant weeds feeding on rain and sun, and digging roots into the sandy ground, breaking ancient asphalt. Adaptoplant neighborhoods, seasonal, unstable, sprouting walls and doors and windows, half-open sewers hanging in the air, exposed bamboo pipes, apartments growing over and into each other, growing without order or sense, creating pavements suspended in midair, houses at crazy angles, shacks and huts with half-formed doors, windows like eyes–
In autumn the neighborhoods shed, doors drying, windows shrinking slowly, pipes drooping. Houses fell like leaves to the ground below and the road cleaning machines murmured happily, eating up the shrunken leaves of former residencies. Above ground the tenants of those seasonal buoyant suburbs stepped cautiously, testing the ground with each step taken, to see if it would hold, migrating nervously across the skyline to other, fresher spurts of growth, new adaptoplant blooming delicately, windows opening like fruit–
For more of Arkenberg check out our interview with him. Want to learn to think like he does? Here’s his guest post listing his favorite books on systems thinking.
And for more big, mad ideas about architecture and cities check out:
The Fab Tree Hab
Conway’s Game of Life generates a city
Aranda & Lasch’s generative architecture
|
Mission Viejo psychiatrist Martin Jensen says everyone has some degree of brain chemical differences.
"Sometimes they're assets; other times they're drawbacks," he said, but if they're causing problems concerning the quality of your life, "it's worth treating."
Jensen believes the standard psychiatric method of diagnosis and treatment works for most people, "but it's a little limited in certain complex situations."
Sometimes Valium is an appropriate treatment for people experiencing panic attacks, for example, he said, "but other times it's masking an underlying problem that can be corrected more efficiently and safely." The underlying problem, he said, can be deficiencies, instabilities or excesses of certain brain chemicals or electricity.
Instead of just making a diagnosis based on a patient's symptoms and then prescribing one medication, Jensen said, he usually prescribes a sequence of medications to further define the underlying brain chemistry and correct it.
The medication trials usually can be done at home, he said, and most chemical imbalances will be figured out within two office visits over a six-week period.
Jensen said a patient's reactions to the different medications clue him to what brain transmitter system is disturbed, and it can then be treated with a safe, low-dose medication. He said that "when a proper match occurs between the medication and a person's chemical imbalance, they know it" usually within 24 to 48 hours.
Other psychiatrists, however, say that it typically takes three or four weeks before the effects of many medications become apparent.
Gordon Globus, a Newport Beach psychiatrist, said not enough is known about brain chemistry or the effects of medications on brain chemistry to diagnose a neuro-chemical defect on the basis of medication response alone.
Globus, a professor of psychiatry and philosophy at UC Irvine, said that treating brain chemical imbalances has become increasingly important as newer and better drugs have become available during the past 15 years, and that such treatments "are a very important point of view.
"But, in my opinion, it can be carried too far and the psychological aspects are neglected and they're very important to treat also. I think the interaction between medication and psychotherapy is more powerful than either treatment alone."
Jensen, who is writing a manual on his treatment technique for doctors and patients, said he respects the standard approach to psychiatry.
"I just want to add on to the current system a concept that has helped certain people that were not responsive" to standard treatment, he said.
|
Motions of the Sun Lab
The NAAP Motions of the Sun Lab reviews some of the material from the Basic Coordinates and Seasons Lab and The Rotating Sky Lab and adds information to put all the pieces together for a more complete description of the motions of the sun. Computations of meridional altitude and stellar visibility are also introduced.
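The meridional-altitude computation the lab introduces reduces to a one-line formula. The sketch below is ours, not part of the NAAP materials (the function name is illustrative), and assumes the simple case where the Sun culminates on the observer's equator-facing side of the zenith:

```python
def meridian_altitude(latitude_deg, declination_deg):
    """Altitude (degrees above the horizon) of the Sun as it crosses
    the observer's meridian, using the standard relation
    alt = 90 - |latitude - declination|.  A negative result means
    the Sun never rises at that latitude on that day."""
    return 90.0 - abs(latitude_deg - declination_deg)

# Summer solstice (declination ~ +23.44 deg) seen from 40 deg N:
alt = meridian_altitude(40.0, 23.44)  # roughly 73.4 degrees
```

At the equinoxes (declination 0°) the same formula gives a meridian altitude equal to 90° minus the observer's latitude.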
First time users of NAAP materials should read the NAAP Labs – General Overview page.
Details and resources for this lab – including demonstration guides, in-class worksheets, and technical documents – can be found on the instructor's page. Some resources are not available for all modules.
Paths of the Sun (pdf)
|
Compare and Contrast Thesis
Imagine this scenario:
Your professor gives you a compare and contrast thesis assignment on a particularly difficult topic, and you feel like the planet doesn’t love you. Let’s admit it: the thesis is one of the major jobs that most students struggle with. Lucky for you, we are here to show you how it should be done properly.
Forget about asking for help from a relative, a close friend, or even a specialist. When guided properly, you’re sure to wow your professor with your excellent writing skills! Ready? We bet you are, so let’s give it a bang!
First things first…
What is a compare and contrast thesis?
As the phrase itself suggests, a compare and contrast thesis allows you to compare two people, ideas, or things with the aim of coming to a conclusion. Here, you will be able to develop either an evaluative or an explanatory thesis statement. What’s great about compare and contrast is that it doesn’t always necessitate citing sources. Instead, you will back up your claim with some reasoning, examples, or sometimes qualified sources that strengthen the claim.
We can say a compare and contrast thesis is good if it…
- Demonstrates how a particular thing differs from the other
- Explains a stance and backs it up with different facts
- Shows a unique way of understanding, doing, and seeing a certain issue
- Clears up all forms of misinterpretation
- Voices the unknown
Writing a compare and contrast thesis
Now that you’re quite acquainted with compare and contrast, the next step is the writing process. What are the points you should use and avoid?
Choose your subject wisely
When selecting a topic, pick one that drives your interest most. This way, you won’t have difficulty constructing the whole essay, since you will be well acquainted with the subject. Additionally, the subject you select for the thesis should include two objects: perhaps two procedures, two people, or something else.
Research and organize the information
Determine the differences and similarities between the two items and arrange them by relevance and priority with regard to your frame of reference.
Choice of words
Usually in a compare and contrast thesis, the language begins with a conditional word such as “whereas” or “although.” Then, state the result of the comparison.
Present your compare and contrast essay
It is very important to present the matter in specific and comprehensive terms. When writing an introduction, use a hook line to attract a wider audience. State all the things you are comparing and contrasting at the end of your introduction.
More often than not, readers expect to find the thesis in the last sentence of the intro. It should offer an arguable and reasonable revelation.
Pay attention to the essay’s structure
The essay’s main body will include the comparisons and contrasts that support the thesis. Only use a few relevant points for comparing and contrasting.
Remember that the conclusion in a compare and contrast thesis is a bit different compared to other theses. Here, you should reiterate the comparisons and contrasts in the form of a summary, plus the thesis statement you intend to prove.
|
Ducks and gulls are the natural hosts of influenza A virus. Ragnhild Tønnessen's PhD research project has characterised influenza A viruses in gulls and ducks in Norway. Her discoveries may lead to a better understanding of the epidemiology and host adaptation of influenza A virus.
Wild birds, particularly ducks and gulls, are the natural hosts for influenza A viruses which can cause disease in animals and humans. Influenza A viruses can be divided into subtypes, of which the majority have been found in wild birds. Most subtypes of influenza A virus cause subclinical infections in wild birds. Infections in domestic chickens most commonly result in mild disease. In rare cases, if introduced from wild birds to poultry, some viruses of the H5 and H7 subtypes mutate and become highly pathogenic. One example of this is the highly pathogenic H5N1 virus in Southeast Asia known to cause “bird flu”.
Due to the outbreak of highly pathogenic avian influenza virus subtype H5N1 in Southeast Asia, a programme to monitor influenza viruses in wild birds in Norway was initiated in 2005. A large number of samples, gathered by hunters from ducks and gulls, were analysed at the Norwegian Veterinary Institute. Samples collected from Rogaland County in the South-West of Norway during the hunting seasons (August-December) of 2005-2007 and 2009-2010 were studied. The results showed that low pathogenic avian influenza viruses were present in 15.5% of the samples, and that the virus occurrence was higher in dabbling ducks than in gulls. The virus prevalence was lowest in December. Many different subtypes of the influenza A virus were detected, but not the highly pathogenic H5N1 virus.
The complete genetic material from a total of five influenza viruses from mallard and common gull were sequenced and characterized. The results showed that the genes of the Norwegian viruses resembled the genes found in influenza viruses from other wild birds in Europe.
Due to limited overlap between the routes used by migratory birds in Eurasia and America, influenza viruses with different genetic material have developed between these two continents. However, in some areas, it has been observed that genes can be exchanged between influenza viruses from Eurasia and America. Tønnessen studied the role that gulls play in the transfer of virus genes between these two continents. Genes from American avian influenza viruses were not detected in the European gull viruses studied. However, within avian influenza viruses from Eurasia, she found that virus genes were exchanged between influenza viruses typically found in gulls and ducks, respectively.
During the breeding seasons of 2008 and 2009, Tønnessen studied the occurrence of influenza virus in the black-legged kittiwake (Rissa tridactyla) at Hornøya in Finnmark in Northern Norway. Low amounts of influenza virus were detected in 5-15% of the samples from adult kittiwakes, and she discovered that more than 70% of the adult birds had developed antibodies against influenza A virus. The majority of the kittiwakes had antibodies against an influenza virus subtype typically found in gulls, namely H16.
Ducks can become infected with influenza virus through consumption of surface water contaminated with faeces shed by virus infected birds. Most subtypes of influenza virus from ducks can retain their infectivity in water over long periods of time. Experiments performed by Tønnessen showed that influenza virus subtypes primarily found in gulls (i.e. H13 and H16) can also remain infectious in water for several months under different salinity and temperature conditions.
To assess if a typical influenza virus subtype from gull can infect chickens, Tønnessen inoculated chickens with an H16N3 virus obtained from herring gull. Influenza virus was detected in the oropharynx of 2 of the 19 virus inoculated chickens, and specific antibodies against H16 were found in the same two chickens. The chickens did not become ill and the virus did not infect the contact chickens. These results suggest that H16N3 virus from gull can cause a limited infection in chickens.
In order to find out why influenza viruses of the H13 and H16 subtypes primarily infect gulls, Tønnessen examined whether the internal proteins of these viruses have particular signatures (amino acid composition) possibly related to host adaptation. Several signatures which can be related to host adaptation were detected, but their importance needs to be further evaluated in experimental studies.
Cand.med.vet. Ragnhild Tønnessen defended her doctoral research on 27th August 2013 at the Norwegian School of Veterinary Science with a thesis entitled "Epidemiology and Host Adaptation of Influenza A Viruses in Gulls”.
The research was conducted at the Department of Food Safety and Infection Biology at the Norwegian School of Veterinary Science and at Section for Virology at the Norwegian Veterinary Institute.
|
An ultrasound uses sound waves to create images of the inside of the body. A venous duplex ultrasound is an ultrasound that looks at the flow of blood through the veins in the arms or legs.
Reasons for Test
The test may be used for the following reasons:
To investigate the cause of the following symptoms in an arm or leg:
- Increased warmth
- Bulging veins
To diagnose the following:
- A blood clot, like deep vein thrombosis (DVT)
- Poor vein function
What to Expect
Prior to Test
No special preparation is needed for this test.
Description of Test
You will be asked to lie on a table. Gel will be placed on the skin of your arm or leg, over the veins being tested.
The ultrasound machine has a hand-held instrument called a transducer, which looks like a microphone or wand. The transducer is pressed against your skin where the gel was applied and sends sound waves into your body. The waves bounce off structures in the body and echo back to the transducer. The echoes are converted to images that are shown on a screen. The doctor examines the images on the screen and may take photographs of them as well.
The technologist may push the probe firmly or softly against your skin in order to better see the vein and to see if it collapses under pressure.
You can get dressed and go home.
How Long Will It Take?
The length of the test varies, depending on your situation. In most cases, it will take between 15 and 45 minutes.
Will It Hurt?
In general, this test is not painful. You may feel some mild discomfort as pressure is applied to your arm or leg.
A radiologist, cardiologist, or vascular surgeon will read the images. The test results will be sent to your doctor. Your doctor will notify you of the results and provide you with recommendations.
Call Your Doctor
After the test, call your doctor if any of the following occur:
- Your symptoms continue or worsen
- You develop any new symptoms
If you think you have an emergency, call for medical help right away.
- Reviewer: Michael J. Fucci, DO, FACC
- Review Date: 03/2016 -
- Update Date: 05/07/2014 -
|
Fracking has pitted local communities against the energy industry for years, and two cases being heard in a New York appeals court Thursday advance the argument over whether state law overrides local authority when it comes to oil and gas development.
According to the Associated Press, over 50 New York municipalities have banned gas drilling in recent years, and more than 100 have placed moratoriums on drilling activities. State Supreme Court judges have upheld bans that were challenged in Dryden and Middlefield, two rural towns in central New York. Appeals to those decisions will be argued Thursday, and a decision is expected in around six weeks.
Residents throughout New York are concerned that fracking damages the local environment and oppose the state lifting its 5-year-old ban on gas drilling that uses high-volume hydraulic fracturing. Hydraulic fracturing takes place when a pressurized fluid spreads fractures through a layer of rock. The process can occur naturally, but when it is induced intentionally, it is known as fracking. Since the late 1990s, fracking has spread as a means of releasing oil and gas for extraction.
Fracking accounts for much of the natural gas produced in the US, and may account for the majority going into the future. The practice has led to reduced energy costs over the past decade, making the incentive to continue the practice quite obvious. Nevertheless, the practice comes with clear environmental risks.
The highly-pressurized water injected into deep rock deposits is laden with chemicals, and communities across the country have dreaded the impact of contaminated drinking water. People fear chemicals rising up to the surface and any resulting reduction in air quality.
Heavy equipment has to be moved to begin the practice, and massive trucks carrying water and sand regularly commute to fracking sites. Such movement can congest small towns that are not accustomed to such traffic and tear up local roads. Local residents are then responsible for footing the repair bill.
Today’s cases present contrasting appeals. In Dryden, the town board voted unanimously to amend zoning law to ban gas drilling. Six weeks later, the town was sued by Anschutz Exploration, a Denver-based company with gas leases in the town. After the ban was upheld, another company, Norse Energy, stepped up to appeal. In Middlefield, a dairy farmer is appealing a ban that prevents her from reaping the economic benefits of drilling for gas on her farm. The two cases, taken together, complicate the imagery that fracking arguments pit small towns and individuals on one side and large energy companies on the other.
|
From the Name of a Place
In William Smith's Dictionary of Greek and Roman Mythology and Biography the name is spelled Maleates (Μαλεάτης), and its article entry goes on to tell us that it is
a surname of Apollo, derived from cape Malea, in the south of Laconia.
He had sanctuaries under this name at Sparta and on mount Cynortium.
This article from the Dictionary references a couple of passages mentioning the deity in Pausanias' Description of Greece. In 3.12.8, Pausanias does indeed render the name as Μαλεάτης, saying:
The Lakedaimonians have an altar of Apollon Akritas, and a sanctuary,
surnamed Gasepton, of Ge. Above it is set up Apollon Maleates.
The cape referred to in the Dictionary is called Cape Maleas on Wikipedia, which derives the name from Ακρωτήριον Μαλέας (Akrotêrion Maléas), "a peninsula and cape in the southeast of the Peloponnese in Greece." The Wikipedia article adds that:
It separates the Laconian Gulf in the west from the Aegean Sea in
the east. It is the second most southerly point of mainland Greece
(after Cape Matapan) and once featured one of the largest light-houses
in the Mediterranean. The seas around the cape are notoriously
treacherous and difficult to navigate, featuring variable weather and
occasionally very powerful storms.
In the Odyssey (Book 9), it is while he sails westwards trying to round this cape, close to the final leg of his voyage to return home from Troy, that Odysseus first gets blown off course to begin his years of marine misadventure in earnest.
The light-house mentioned in the Wikipedia article
According to the 10th chapter, "Unpublished Ephebic List in the Benakeion Museum of Kalamata", written (2009) by Andronike Makres in Greek History and Epigraphy: Essays in Honour of P.J. Rhodes:
Several dedications referring to the sanctuary of 'Apollo Maleatas and
Asklepios' have been found from at least as early as the third century
BC, which show that the sanctuary housed what was in fact a common
cult of Apollo and his son Asklepios and that Apollo received the
sacrificial offering first.
Or Rather From the Name of a Guy
An alternate etymology derives this epithet of Apollo from Malos, a maternal ancestor of Asklepios.
In the 2nd chapter, entitled "Gods of (Con)fusion: Athena Alea, Apollo Maleatas and Athena Aphaia", of Volume 64 of Classica et Mediaevalia, the Danish Journal of Philology and History (2013), regarding the Epidauros [Epidaurus] sanctuary of Apollon Maleates, Jeremy McInerney says on pp. 60-64:
The cult of the Maleatas sanctuary was... of great antiquity, and
continued through the Archaic and Classical periods. The precise
identity of the original recipient of the cult is, however, much
harder to establish with certainty. Towards the end of the 4th century
BC the Epidaurian poet Isyllos composed a paian to Asklepios, in which
he referred to the god who shared Asklepios' honours:
Malos was the first to build the altar of
And made the sanctuary splendid with sacrifices.
Isyll., Coll. Alex. pp. 132-3, 27-28, tr. Bremmer
He goes on to recount the genealogy of Asklepios, born to the
union of Apollo and Koronis, the granddaughter of Malos. Isyllos does
not offer any further information about the epiklesis Maleatas, but
the occurrence of Malos in the same line suggests that Isyllos derived
the god's name from the family of his worshippers...
The evidence prior to Isyllos' paian, however, suggests a more
complex history. When Asklepios was welcomed at Athens from Epidauros
c. 421 BC regulations regarding his cult stipulated separate offerings
to Maleatas, to Apollo and to Hermes (as well as certain other minor
deities). In the eyes of the Athenians, then, Maleatas and Apollo were
two distinct gods, Maleatas was not an epiklesis of Apollo, and
hence there was no need to postulate a human eponym to explain the
god's name. Yet, around the same time in Lakonia the Spartans were
making dedications both to Apollo and Maleatas separately and to
Apollo Maleatas as a single deity... Near Kosmas in the southern
Kynouria, games appear to have been held in his honour at a festival
called the Maleateia. Hence the relationship between Apollo and
Maleatas is complicated. At times they appear separate, and at other
times they appear to be a single, syncretized figure.
It may be worth exploring the geographical associations behind
the name in order to elucidate the god's identity. The name Maleatas
derives from Cape Malea, the eastern-most peninsula of the
Peloponnese. The original deity was simply the local version of the
youthful Archer god: Apollo of Cape Malea. At some point the local god
was distinct from Apollo... It is hard to say when the fusion of the
two took place, but the extension of the god's reach from the eastern
side of Lakonia to the eastern side of the Peloponnese fit well with
Sparta's assertion of a claim to Kynouria, the contested borderland
between Lakonia and Argos. Here Apollo Maleatas was pressed into
service... Apollo Maleatas represents, in this instance, less a fusion
of local and panhellenic cults so much as a pure expression of
At Epidauros, however, the relationship of Apollo and Maleatas
was quite different. When Apollo Maleatas makes his first appearance
at Epidauros, in the paian of Isyllos, the entire focus of the poem is
on creating a genealogy for Asklepios rooted in the soil of Epidauros:
[T]his is the tradition which reached the ears of our
forefathers, o Phoibos Apollo: it says that Zeus gave the Muse Erato
to Malos as his wife in holy matrimony. Now Phlegyas, whose native
city was Epidauros, married Malos' daughter whom Erato, her mother
bore, and Kleophema was her name. Phlegyas fathered Aigla, that was her
name; she was also called Koronis because of her beauty. Now Phoibos
of the golden bow saw her in Malos' house and put an end to the season
of her virginity.
Isyll., Coll. Alex. pp.132-136. 37-49 tr. Bremmer
From the union of Phoibos and Koronis was born Asklepios.
It is noticeable that Apollo is addressed as Phoibos, the god of
the paian. The name Maleatas drops out, and the only vestige of his
presence is in the references to a human progenitor of the divine
dynasty, Malos. Nor is any explicit connection made with the Lakonian
Apollo Maleatas. In fact, given Malos' importance to the human side of
the family tree of Asklepios, since he is the earliest male progenitor
of the female line, it seems that, for Isyllos, Apollo Maleatas
signified not the fusion of Apollo and Maleatas, nor specifically a
Lakonian god, but simply Apollo as honoured by the family of Malos.
... How in fact the Lakonian god Apollo Maleatas became associated
with the venerable cult on Mt Kynortion is unclear, although it is not
impossible that in identifying Apollo as Maleatas the Epidaurians were
making overtures to Sparta at the expense of their nearer neighbour, Argos.
There seems to be no especially noteworthy connection between Maleates/Maleatas and the Epidauros Theatre beyond the fact that the god's sanctuary appears to be the nearest major man-made structure to the theatre: less than half an hour away on foot, at least along the current modern roads, depending on which route one takes (according to Google Maps).
|
The first settlers in America found grapes growing wild, although the fruit tended to be small, thick-skinned, and seedy. Grape growing has improved greatly since those early days.
Now, with cultivation, growers can produce larger, sweeter, disease-resistant, seedless varieties of grapes.
- 1 Importance of Gathering Grape Growing Information
- 2 How to Grow Successful Grapevines
- 3 Essential Minerals and Types of Soils for Growing Grapes
- 4 How to Build a Trellis For Your Grapes
- 5 Growing Grapes for Wine
- 6 Grape Vine Maintenance
- 7 Dormant-Season Pruning
Importance of Gathering Grape Growing Information
Grapes have several uses in food, in wine, and in the processing of medicinal items. Like any endeavor that requires plenty of first-hand research and knowledge, gathering grape growing information will help every grower recognize the progress, or the distress, their vines are facing.
Take note that another vital piece of information is the type of climate to choose for grape growing. It is common knowledge that certain plants grow in certain climates. Therefore, it is important to determine whether the site has a climate suitable for grape growing.
A grape grower must also consider the importance and differences of each type of fertilizer to fit each type of grapevine. Although organic farming is widely promoted, it is still a must to know your options for achieving the quality of produce you want as a grower.
Knowing when to start growing grapes is another important piece of grape growing information. Is the overall temperature right to start planting, or are there artificial or technological methods available that make growing grapes viable at any time of the year?
Another must-know piece of grape growing information is the type of soil. Do grapevines need moist or dry land to flourish? One way to find out is to interview or contact people in the grape growing business. It is also important to know the stages of grapevine growth. The following is a list of those stages.
- Scale Crack – this is the first visible indication of growth. As the bud begins to swell, a small crack then appears between the outermost bud scales.
- Early Bud Swell – the light brown and fuzzy colored bud has swollen out of the hard outer bud.
- Late Bud Swell – As one or more bulges of pink and green leaf tissue become visible, a full swell occurs, and the bud has elongated to about 1.5 to 2 times its original size.
- Bud Burst – The leaves have separated at the tip at this stage, and the growing point is likely exposed. However, there are no leaves at a right angle to the growing stem.
- 1-3” Shoots – This is the final stage before the stems reach 4-8 inches in length and flower clusters become visible; it occurs when the stem is 4-6 cm long, with 3-4 leaves held at a right angle to the stem.
A considerable amount of time, money, effort, and energy sum up the total workload in growing grapes. The scale of the workforce will depend on how much you want to achieve and how far someone wants to go in the business.
The competition in the market is vast and stiff, and it is wise to know grape-growing information as much as possible before you start the journey.
How to Grow Successful Grapevines
Growing grapevines can be tricky if proper preparations are not followed or done. It is worth considering doing a lot of research and preparations before eventually committing to grapevine growing.
Unlike many other plants, grapevines can be time- and energy-consuming to grow; therefore, it is always worth making comprehensive preparations to produce good quality.
Training the side branches is an important process in growing grapevines.
The first two years must be spent instilling a durable framework in the vine. Tying the vine to wires and trellis points too tightly can hinder the expansion of branches. Train the side branches to a horizontal position, since this is a good way to encourage better fruit production.
It is recommended that a liquid feed is given once a month throughout the summer for the initial two years. This process will help in making a durable framework for good vine support.
When the vine produces fruit during the first two years, do not get carried away by the excitement and pick the first produce.
It is best to leave the fruit on the vine and carry out the thinning process: remove about two-thirds of the grapes in every bunch.
Your patience will be rewarded after a wait of three years. In the third year, you can harvest the fruit. At the start of the third year, though, remember to put more well-rotted farmyard manure around the base.
You should stop the liquid feed during the summer. Expect more fruit, and repeat the thinning process, this time taking a third away from each bunch. Give the fruit a rich supply of sunlight to achieve a better ripening outcome.
Growing grapevines is considerably time- and energy-consuming, but at the end of the day, isn't it better to grow your own produce? It is not just the money you save as grape prices keep climbing; more importantly, you also avoid the harmful ingredients that mass producers may use to maximize the harvest.
Essential Minerals and Types of Soils for Growing Grapes
Growing grapevines is meticulous work, since there are many considerations you must handle properly to achieve the highest quality grapes.
Metaphorically speaking, the soil is the soul of the life that will sprout in your vineyard.
There are viticulture considerations, like soil composition, involved in achieving excellent produce. The nutrients needed by the vine, as by every plant, come through the roots, and therefore the soil that supports the roots must be rich in minerals and nutrients.
The soil also influences the drainage the roots need to take up the maximum nutrients for the vine. Soil composition determines how much heat is retained and reflected up to the vine to meet the amount needed in the ripening of the grapes.
Thin topsoil and subsoil are the two layers that retain water abundantly, but drainage is also required to protect the roots from over-saturation.
If the soil is the soul, the minerals that make the soil rich serve as the heart. These minerals determine the characteristics of the produce altogether.
Here are some of the vital minerals in soils good for growing grapevines:
- Iron- An essential mineral for photosynthesis.
- Calcium- Mineral that helps neutralize the soil pH.
- Potassium- Mineral that improves the metabolism of the vine for a healthy crop the following year.
- Magnesium- A vital component of chlorophyll.
- Nitrogen- Assimilated mainly in the form of nitrates. This mineral improves and encourages the development of roots.
Obtaining the exact soil pH is not that critical. Bunch grapes perform well in soils with a pH level between 6.0 and 8.0.
If grapes are grown in fertile soil, no additional fertilizer is required. For maximum production, vine vigor needs to be kept under control.
The following are types of soil in growing grapevines:
Albariza – This type of soil is found in Spain, a composition formed from diatomaceous deposits.
Alluvial soil – The level of being fertile is excellent, and this soil comes from a river. This type normally has silt, gravel, and sand in its composition.
Basalt – A volcanic rock soil enriched with magnesium, iron, and calcium. It contains little or no quartz and a certain amount of potassium.
Boulbènes – This type of soil is common in Bordeaux, France. This type is easily compressed and fine, and siliceous in texture.
Calcareous soil – This alkaline soil consists of rich levels of calcium and magnesium carbonate. It is normally cool in temperature, retains water, and supplies drainage to avoid saturation. However, the naturally cool temperature slows the ripening process, and grapes from this type of soil tend to produce acidic wines.
Greywacke – This type of soil is formed from sedimentary deposits of mudstone and feldspar from rivers. This type of soil is found in South Africa, New Zealand, and Germany.
Chalk – Rich in porous soft limestone content. The roots of grapevines can easily penetrate this type of soil. Drainage is highly provided. This soil composition is highly recommended for grapes that have high acidity levels.
Dolomite – This type of soil is rich in magnesium and carbonate.
Soil composition is the foremost consideration when growing grapevines.
Here are some preparations that you can do for rich soil:
- Dig the soil as deep as you can to be able to get plenty of the soil’s natural compost accumulated over the years.
- If using farmyard manure, it is important to make sure it is fully rotted and dry. Remember that fresh manure can burn the roots of the vines.
- Leaf compost gathered from oak, lime, and ash trees is a good source of organic matter that will help your vines grow healthily.
- Do not use beech leaves as compost, because they are too acidic.
- Putting large stones or rocks around the base could help the soil avoid direct exposure to sunlight; take note that the roots need to be kept cool.
How to Build a Trellis For Your Grapes
Building a trellis for your grapevines is part of the rigorous process of growing grapes. Building what is right for your vines is not simply a matter of putting up materials: different trellis styles are matched to what the vines need.
Here are some tips and styles for building a trellis:
Simple two-wire system
Set the first wire about 5 ft high and the second wire 15 inches above it, with cordon-trained vines. Cordons are trained on one wire. When the shoots grow higher, they are attached to the upper wire, which supports them. To make sure the shoots don't break, it is recommended to tie them.
For a vineyard, two types of support posts are needed
The heavy posts mainly support the wires built into the trellis and are normally made from wood. The vines themselves are supported by lighter stakes made from wood or metal. Wooden posts must be treated with an anti-rot solution to avoid collapse.
Posts made from black locust are rot-resistant and can last longer than treated wood. Black locust is advisable since no chemical treatment is involved, keeping the soil free from chemical threats.
Wire-support posts should be 3 inches in diameter for rows up to 300 feet in length
Longer rows need posts with larger diameters, up to 6 inches, to support the added weight of the wire and the stronger pull on the posts. In windy sites, and during the wet season, it is better to set the support posts as close as 20 feet apart to keep the rows standing straight and not leaning. The wire used to support and hold grapevines usually runs from 9 to 12 gauge, with 9-gauge being the heaviest.
Tempered, high-tensile stainless steel wire is recommended because it is rust-resistant and stretches very little, unlike galvanized wire. If you use galvanized wire, you may need more posts.
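The spacing and sizing guidelines above reduce to simple arithmetic. Here is a minimal, hedged sketch (the `trellis_posts` helper is illustrative, not a standard tool), assuming posts set about 20 feet apart, 3-inch posts for rows up to 300 feet, and larger posts up to 6 inches beyond that:

```python
# Rough post-count and sizing helper for one trellis row, following the
# guidelines above. Numbers are illustrative planning figures only.

def trellis_posts(row_length_ft, spacing_ft=20):
    """Return (number of posts, suggested post diameter in inches)."""
    # Posts at both ends plus one every `spacing_ft` feet in between.
    n_posts = int(row_length_ft // spacing_ft) + 1
    if row_length_ft % spacing_ft:
        n_posts += 1  # extra post to close out a partial span
    # Rows up to 300 ft: 3 in posts; longer rows: up to 6 in posts.
    diameter_in = 3 if row_length_ft <= 300 else 6
    return n_posts, diameter_in

print(trellis_posts(300))  # a 300 ft row: 16 posts, 3 in diameter
```

A 400-foot row, by the same sketch, would call for 21 posts of the larger 6-inch diameter.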
Set the heavy wire-support posts up to 2 ft in depth in average soil
Posts must be at least 8 ft in length, leaving 6 ft above ground for the trellis. The setting differs with soil type: in soft, normally sandy soil, taller posts must be used and set deeper, or better yet, set in cement to ensure the posts are durable enough.
Posts are set less deeply in very rocky soil, normally about 1.5 feet deep for an 8 ft post. Setting posts in rocky ground is more difficult than in softer soil, although posts set in rocky soil sit firmer.
When proper support is given to the growing vines, a good outcome is very likely. It is vital to build the trellis properly to make sure the vines' need for strong, durable support is met.
Growing Grapes for Wine
Encourage plant and fruit vigor with a good fertilizer schedule. Feed new vines monthly from March through September. Apply 2 ounces per vine of 12-4-8 fertilizer at every feeding.
Follow a similar program the following year. During the second year of grape growing, use 5 ounces of fertilizer instead of two.
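The feeding schedule above works out to simple arithmetic. A hedged sketch (the `season_total_oz` helper is hypothetical), assuming one feeding per month from March through September inclusive:

```python
# Seasonal 12-4-8 fertilizer total per vine, per the schedule above:
# 2 oz per feeding in year one, 5 oz per feeding in year two.

OUNCES_PER_FEEDING = {1: 2, 2: 5}  # year of growth -> oz per feeding
FEEDINGS_PER_SEASON = 7            # March through September

def season_total_oz(year):
    """Total ounces of fertilizer one vine receives in a season."""
    return OUNCES_PER_FEEDING[year] * FEEDINGS_PER_SEASON

print(season_total_oz(1))  # 14 oz per vine in year one
print(season_total_oz(2))  # 35 oz per vine in year two
```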
Start by training each new vine up a 5-foot stake. Remove all growth except single shoots to grow each way along the wires.
Water frequently when the vines are initially planted. The limited root systems may require watering every two or three days. Soon, growing grapes will only need to be watered every 3 to 4 days.
Spread a mulch to stretch the time needed between watering and discourage weeds. Grapes are tolerant of drought and often develop the finest taste on limited water supply.
As harvest time approaches, water more frequently, up to four times a week, to prevent fruit from cracking.
Gauge the soil fertility needed by the vines’ growth rate. During the initial three years of development, healthy vines should grow four feet or more.
As the grapevines mature, there will be a reduction in vine growth as the plant invests its energy in producing grapes. If the vines grow more than two feet per season, no fertilization is required.
A vine that grows less than this amount may need extra feeding.
Grape Vine Maintenance
Grapevines will grow in abundance with care and attention. Summertime yields depend on winter pruning. Growers must reduce vines to just a few canes that bear flower and foliage buds for the new year. It is very important not to skip the pruning process, as it is a major step in growing grapes.
All grapes produce best when trained to a single upright trunk with four to six canes left to grow sideways. Bunch grapes need specific treatment before spring growth begins to maintain high yields.
With bunch grapes, vital new growth should replace the old fruiting cane each year. Trace back along the grapevine four to six healthy canes to near the main trunk. Shorten these to 8-12 buds.
Cut off all other growth, leaving a single short stem containing several buds near each main cane. These stubs will produce new canes for the next pruning time.
Ideally, plant vines during the dormant season, from December through February. Plant along an arbor or trellis, spacing bunch grapes eight or nine feet apart.
String wire between poles set about twenty feet apart. Run two wires at heights of 2 and a half and 5 feet.
- Renew the canes of bunch grapes each year.
- Save one new cane per trellis wire.
- Tip the ends of the canes back to 8-12 buds.
- Leave a small 2- to 3-bud spur branch at the base of each major cane.
After pruning, grape plants ooze sap from the cuts. This bleeding does not hurt the plant or affect future fruit production.
Some more tips for you
Bunch grapes are the first to bud, flower, and ripen. The fruit is sweet, thin-skinned, and ranges in colors from purple to light green. Gardeners can harvest clusters from June through July.
Plants are vigorous and easy to grow. Popular varieties are self-fertile where no pollinator is needed to produce fruit.
Plant grapes in any open spot where a trellis or arbor will fit. Use the lush growth to screen a less than perfect view and reap a double benefit of fresh summer fruit on the vine.
You can start container grape growing anytime during the year. Any loose, well-drained soil will do fine as special preparation of the site is usually not necessary.
|
After WWII, the Bretton Woods agreement, which included the United States and 43 other countries, called for international monetary and financial order. In addition, the International Bank for Reconstruction and Development (now known as the World Bank) and the International Monetary Fund were established, both with the explicit goal of providing loans and a variety of assistance programs aimed at stabilizing financial markets. History shows that European nations relied heavily on these resources to climb out of the destruction of war.
The United States also developed the Marshall Plan, with the goal of coordinating and integrating American and European economic activities. In exchange for massive loans and assistance, European nations were required to promote economic development, stabilize financial markets, and furnish the U.S. with needed products and services. Strict goals were established to ensure that European nations became fully independent of foreign assistance. Lastly, in a political effort to strengthen Europe, the U.S. and its allies formed NATO in 1949, with the goal of ensuring mutual defense and promoting democracy and peace throughout the world. Many of these institutions continue to exist and thrive today.
No nation can fully consider and implement political processes when its citizens lack the most basic necessities of food, shelter, utilities, and jobs. The financial strengthening of Iraq must serve as the second phase in the reconstruction of the nation.
To assure the people of Iraq that the U.S. will not cut and run, treaties with free nations must pledge mutual defense and support of this emerging nation. Financial incentives must be provided for the goods and services produced within Iraq and pledges of billions of dollars today and into the future must be secured, not only from the U.S. but from all the other nations of the world. A piecemeal approach will not work and existing organizations, such as the United Nations, are inappropriate forums as a result of their somewhat dubious membership requirements. The Secretary of State and other national leaders must bring together the leaders of Western nations that have a clear interest in the stabilization of this region to form a consensus on the future financial and political direction of assistance in Iraq.
The U.S. military and its allies have provided the groundwork for the next phase of the reconstruction process. The world has an opportunity to move Iraq forward with targeted financial assistance and political support. Or, the world has the opportunity to blame the U.S. military and leadership for a failed war in Iraq. The U.S. leadership must focus not on the question of when and how many troops should leave, but on how to develop a council of free nations that promote and stabilize the financial markets and political processes within Iraq.
|
Different types of aerial photographs are acquired from sensors mounted on airborne platforms. These photographs are classified into various types based on factors such as the orientation of the camera axis, the scale of the photograph, the lens system used, and the film and filter combinations used in the photography. Aerial photographs classified by these factors are described in this article.
Table of Contents
Types of Aerial Photographs Classified Based on Camera Axis :
Based on the orientation of the camera axis, photographs are classified into three types: 1. Vertical Photographs, 2. Low Oblique Photographs, and 3. High Oblique Photographs.
Vertical Photographs:
These are photographs acquired when the camera axis is vertical. Following are some important points about vertical photographs.
At a given height (flying height) these photographs cover relatively less amount of area compared to an oblique photograph.
Relief is not readily seen in a vertical photograph.
Scale is considered approximately uniform for flat terrains in case of vertical photographs.
A vertical photograph can be used as a substitute in the absence of a map.
However, capturing truly vertical photographs is very difficult due to aircraft turbulence and unstable conditions during photography. Hence, most photographs are acquired with the camera axis slightly tilted; these are called tilted photographs. The tilt should not be more than 3 degrees for a photograph to be considered vertical. In other words, tilted photographs with less than 3 degrees of tilt are also considered vertical photographs.
All other photographs, where the axis is tilted more, are called oblique photographs.
Low Oblique Photographs:
Photographs taken with the camera axis intentionally tilted are called oblique photographs. If the horizon is not visible in such photographs, they are categorized as low oblique photographs. The horizon is the line where the earth and sky appear to meet in a photograph.
High Oblique Photographs:
These are oblique photographs in which the horizon is visible. They are taken with a high amount of tilt and cover a larger area than vertical photographs. The scale is normally not uniform, even over plain terrain.
Types of Photographs Based on Scale:
Large Scale Photographs:
The area captured by a camera varies with the flying height of the airplane. If the flying height is low, the camera covers less area, but objects appear larger. Hence, the representative fraction (the ratio of photo distance to ground distance) also has a higher value in low-altitude photography. Such photographs are called large scale photographs.
Small Scale Photographs:
If the flying height is greater, a large area is covered in a single photograph, but the ratio of the size of an object in the photograph to its ground dimension becomes smaller. Such photographs are called small scale photographs.
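The scale relationship described above is commonly computed as camera focal length divided by flying height above terrain, which is equivalent to the photo-distance-to-ground-distance ratio. A small illustrative sketch (the `photo_scale` function name is an assumption, not a standard library call):

```python
# Photo scale of a vertical aerial photograph: scale = f / H, where f is
# the camera focal length and H the flying height above terrain.
# This returns the denominator N of the representative fraction 1:N.

def photo_scale(focal_length_m, flying_height_m):
    """Return N such that the photo scale is 1:N (units must match)."""
    return flying_height_m / focal_length_m

# Example: a 152 mm camera flown 1,520 m above ground.
print(f"1:{photo_scale(0.152, 1520):,.0f}")  # prints 1:10,000
```

Halving the flying height halves N, which is why low-altitude photography yields the larger scale described above.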
Types of Photographs Based on Lens System:
Based on the type of lens system and the number of lenses arranged in a camera, photography is classified into several types. A 2-lens system makes use of 2 cameras attached together. In the same fashion, a 3-lens system is used for capturing areas from one horizon to the other. This 3-lens system, also called the trimetrogon system, was used in World War II for mapping enemy territory. Multi-lens system is the term broadly used for photographic systems with more than 2 cameras.
Types of Photographs Based on Film and Filter Combinations:
Based on different types of films and filters used while capturing, photographs are classified as below.
Panchromatic Photograph – A panchromatic film is used in the camera. It captures all visible wavelengths of energy for photography. The resultant photograph is a gray scale (black and white) image. These photographs are used for conventional applications such as map study, reconnaissance etc.
Color Photograph – The film used in this camera system is capable of capturing and recording various visible bands separately. The photograph is a color image and is used for interpretation of various objects in the study area.
Infra-red Imagery – An infra-red film is placed in the camera. This film responds to and records only infra-red energy coming from objects. The images produced are normally gray scale and are used in studies such as vegetation and water body delineation.
Color-Infra red Imagery – The film used in this camera is capable of recording both visible and infra red energy. The photographs produced are used for vegetation studies, water body mapping, urban applications etc.
Thermal infra red imagery – Thermal infra red energy is captured for preparing photographs. The photographs are used in temperature studies.
Radar Imagery – Radar waves or microwaves are captured in this photography. These images normally contain a lot of noise and require radiometric correction. They are used in tecto-morphic studies, weather applications, etc.
Spectra-zonal imagery – The images are acquired in selected portions of electromagnetic spectrum.
The above are many types of aerial photographs classified based on various factors. These photographs differ in various properties such as spatial, spectral and radiometric resolutions and are used in different type of applications.
|
By IIT Kanpur (Centre ID: 8005)
Kanpur, Uttar Pradesh
No manager is mentioned.
VegetablesHinglish is a learning app for children to learn the names of vegetables in both Hindi and English and to identify each vegetable. Through this app, children learn to identify a vegetable by its spelling, match the spelling with the vegetable, and match the vegetable with its shape. An assessment record is also maintained, recording the number of mistakes made at every level.
Date uploaded : July 17, 2019
|
Darwinian evolution is a brilliant and beautiful scientific theory. Once it was a daring guess. Today it is basic to the credo that defines the modern worldview. Accepting the theory as settled truth—no more subject to debate than the earth being round or the sky blue or force being mass times acceleration—certifies that you are devoutly orthodox in your scientific views; which in turn is an essential first step towards being taken seriously in any part of modern intellectual life. But what if Darwin was wrong?
Like so many others, I grew up with Darwin’s theory, and had always believed it was true. I had heard doubts over the years from well-informed, sometimes brilliant people, but I had my hands full cultivating my garden, and it was easier to let biology take care of itself. But in recent years, reading and discussion have shut that road down for good.
This is sad. It is no victory of any sort for religion. It is a defeat for human ingenuity. It means one less beautiful idea in our world, and one more hugely difficult and important problem back on mankind’s to-do list. But we each need to make our peace with the facts, and not try to make life on earth simpler than it really is.
Charles Darwin explained monumental change by making one basic assumption—all life-forms descend from a common ancestor—and adding two simple processes anyone can understand: random, heritable variation and natural selection. Out of these simple ingredients, conceived to be operating blindly over hundreds of millions of years, he conjured up change that seems like the deliberate unfolding of a grand plan, designed and carried out with superhuman genius. Could nature really have pulled out of its hat the invention of life, of increasingly sophisticated life-forms and, ultimately, the unique-in-the-cosmos (so far as we know) human mind—given no strategy but trial and error? The mindless accumulation of small changes? It is an astounding idea. Yet Darwin’s brilliant and lovely theory explains how it could have happened.
Its beauty is important. Beauty is often a telltale sign of truth. Beauty is our guide to the intellectual universe—walking beside us through the uncharted wilderness, pointing us in the right direction, keeping us on track—most of the time.
Demolishing a Worldview
There’s no reason to doubt that Darwin successfully explained the small adjustments by which an organism adapts to local circumstances: changes to fur density or wing style or beak shape. Yet there are many reasons to doubt whether he can answer the hard questions and explain the big picture—not the fine-tuning of existing species but the emergence of new ones. The origin of species is exactly what Darwin cannot explain.
Stephen Meyer’s thoughtful and meticulous Darwin’s Doubt (2013) convinced me that Darwin has failed. He cannot answer the big question. Two other books are also essential: The Deniable Darwin and Other Essays (2009), by David Berlinski, and Debating Darwin’s Doubt (2015), an anthology edited by David Klinghoffer, which collects some of the arguments Meyer’s book stirred up. These three form a fateful battle group that most people would rather ignore. Bringing to bear the work of many dozen scientists over many decades, Meyer, who after a stint as a geophysicist in Dallas earned a Ph.D. in History and Philosophy of Science from Cambridge and now directs the Discovery Institute’s Center for Science and Culture, disassembles the theory of evolution piece by piece. Darwin’s Doubt is one of the most important books in a generation. Few open-minded people will finish it with their faith in Darwin intact.
Meyer doesn’t only demolish Darwin; he defends a replacement theory, intelligent design (I.D.). Although I can’t accept intelligent design as Meyer presents it, he does show that it is a plain case of the emperor’s new clothes: it says aloud what anyone who ponders biology must think, at some point, while sifting possible answers to hard questions. Intelligent design as Meyer explains it never uses religious arguments, draws religious conclusions, or refers to religion in any way. It does underline an obvious but important truth: Darwin’s mission was exactly to explain the flagrant appearance of design in nature.
The religion is all on the other side. Meyer and other proponents of I.D. are the dispassionate intellectuals making orderly scientific arguments. Some I.D.-haters have shown themselves willing to use any argument—fair or not, true or not, ad hominem or not—to keep this dangerous idea locked in a box forever. They remind us of the extent to which Darwinism is no longer just a scientific theory but the basis of a worldview, and an emergency replacement religion for the many troubled souls who need one.
As for Biblical religion, it forces its way into the discussion although Meyer didn’t invite it, and neither did Darwin. Some have always been bothered by the harm Darwin is said to have done religion. His theory has been thought by some naïfs (fundamentalists as well as intellectuals) to have shown or alleged that the Bible is wrong, and Judeo-Christian religion bunk. But this view assumes a childishly primitive reading of Scripture. Anyone can see that there are two different creation stories in Genesis, one based on seven days, the other on the Garden of Eden. When the Bible gives us two different versions of one story, it stands to reason that the facts on which they disagree are without basic religious significance. The facts on which they agree are the ones that matter: God created the universe, and put man there for a reason. Darwin has nothing to say on these or any other key religious issues.
Fundamentalists and intellectuals might go on arguing these things forever. But normal people will want to come to grips with Meyer and the downfall of a beautiful idea. I will mention several of his arguments, one of them in (just a bit of) detail. This is one of the most important intellectual issues of modern times, and every thinking person has the right and duty to judge for himself.
Looking for Evidence
Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.
Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.
But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.
The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)
Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.
This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.
The Advent of Molecular Biology
Darwin’s main problem, however, is molecular biology. There was no such thing in his own time. We now see from inside what he could only see from outside, as if he had developed a theory of mobile phone evolution without knowing that there were computers and software inside or what the digital revolution was all about. Under the circumstances, he did brilliantly.
Biology in his time was for naturalists, not laboratory scientists. Doctor Dolittle was a naturalist. (He is the hero of the wonderful children’s books by Hugh Lofting, now unfortunately nearing extinction.) The doctor loved animals and understood them, and had a sharp eye for all of nature not too different from Wordsworth’s or Goethe’s. But the character of the field has changed, and it’s not surprising that old theories don’t necessarily still work.
Darwin’s theory is simple to grasp; its simplicity is the heart of its brilliance and power. We all know that variation occurs naturally among individuals of the same type—white or black sheep, dove-gray versus off-white or pale beige pigeons, boring and sullen undergraduates versus charming, lissome ones. We all know that many variations have no effect on a creature’s prospects, but some do. A sheep born with extra-warm wool will presumably do better at surviving a rough Scottish winter than his normal-wooled friends. Such a sheep would be more likely than normal sheep to live long enough to mate, and pass on its superior trait to the next generation. Over millions of years, small good-for-survival variations accumulate, and eventually (says Darwin) you have a brand new species. The same mechanism naturally favors genes that are right for the local environment—warm wool in Scotland, light and comfortable wool for the tropics, other varieties for mountains and deserts. Thus one species (your standard sheep) might eventually become four specialized ones. And thus new species should develop from old in the upward-branching tree pattern Darwin described.
The advent of molecular biology made it possible to transform Darwinism into Neo-Darwinism. The new version explains (it doesn’t merely cite) natural variation, as the consequence of random change or mutation to the genetic information within cells that deal with reproduction. Those cells can pass genetic change onward to the next generation, thus changing—potentially—the future of the species and not just one individual’s career.
The engine that powers Neo-Darwinian evolution is pure chance and lots of time. By filling in the details of cellular life, molecular biology makes it possible to estimate the power of that simple mechanism. But what does generating new forms of life entail? Many biologists agree that generating a new shape of protein is the essence of it. Only if Neo-Darwinian evolution is creative enough to do that is it capable of creating new life-forms and pushing evolution forward.
Proteins are the special ops forces (or maybe the Marines) of living cells, except that they are common instead of rare; they do all the heavy lifting, all the tricky and critical assignments, in a dazzling range of roles. Proteins called enzymes catalyze all sorts of reactions and drive cellular metabolism. Other proteins (such as collagen) give cells shape and structure, like tent poles but in far more shapes. Nerve function, muscle function, and photosynthesis are all driven by proteins. And in doing these jobs and many others, the actual, 3-D shape of the protein molecule is important.
So, is the simple neo-Darwinian mechanism up to this task? Are random mutation plus natural selection sufficient to create new protein shapes?
How to make proteins is our first question. Proteins are chains: linear sequences of atom-groups, each bonded to the next. A protein molecule is based on a chain of amino acids; 150 elements is a “modest-sized” chain; the average is 250. Each link is chosen, ordinarily, from one of 20 amino acids. A chain of amino acids is a polypeptide—“peptide” being the type of chemical bond that joins one amino acid to the next. But this chain is only the starting point: chemical forces among the links make parts of the chain twist themselves into helices; others straighten out, and then, sometimes, jackknife repeatedly, like a carpenter’s rule, into flat sheets. Then the whole assemblage folds itself up like a complex sheet of origami paper. And the actual 3-D shape of the resulting molecule is (as I have said) important.
Imagine a 150-element protein as a chain of 150 beads, each bead chosen from 20 varieties. But: only certain chains will work. Only certain bead combinations will form themselves into stable, useful, well-shaped proteins.
So how hard is it to build a useful, well-shaped protein? Can you throw a bunch of amino acids together and assume that you will get something good? Or must you choose each element of the chain with painstaking care? It happens to be very hard to choose the right beads.
Inventing a new protein means inventing a new gene. (Enter, finally, genes, DNA etc., with suitable fanfare.) Genes spell out the links of a protein chain, amino acid by amino acid. Each gene is a segment of DNA, the world’s most admired macromolecule. DNA, of course, is the famous double helix or spiral staircase, where each step is a pair of nucleotides. As you read the nucleotides along one edge of the staircase (sitting on one step and bumping your way downwards to the next and the next), each group of three nucleotides along the way specifies an amino acid. Each three-nucleotide group is a codon, and the correspondence between codons and amino acids is the genetic code. (The four nucleotides in DNA are abbreviated T, A, C and G, and you can look up the code in a high school textbook: TTT and TTC stand for phenylalanine, TCT for serine, and so on.)
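For readers who want to see the lookup concretely, the codon-to-amino-acid correspondence is literally a table; the sketch below shows only a handful of the 64 codons, for illustration (the full table is in any genetics text).

```python
# A fragment of the standard genetic code (DNA coding strand).
# Only a few of the 64 codons are listed here, for illustration.
GENETIC_CODE = {
    "TTT": "Phe", "TTC": "Phe",    # phenylalanine
    "TCT": "Ser", "TCC": "Ser",    # serine
    "TTA": "Leu", "TTG": "Leu",    # leucine
    "ATG": "Met",                  # methionine (the usual start codon)
    "TAA": "Stop", "TAG": "Stop",  # stop signals
}

def translate(dna):
    """Read a coding-strand sequence three nucleotides at a time."""
    codons = (dna[i:i + 3] for i in range(0, len(dna) - 2, 3))
    return [GENETIC_CODE.get(c, "?") for c in codons]

print(translate("TTTTCTATG"))  # ['Phe', 'Ser', 'Met']
```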
Your task is to invent a new gene by mutation—by the accidental change of one codon to a different codon. You have two possible starting points for this attempt. You could mutate an existing gene, or mutate gibberish. You have a choice because DNA actually consists of valid genes separated by long sequences of nonsense. Most biologists think that the nonsense sequences are the main source of new genes. If you tinker with a valid gene, you will almost certainly make it worse—to the point where its protein misfires and endangers (or kills) its organism—long before you start making it better. The gibberish sequences, on the other hand, sit on the sidelines without making proteins, and you can mutate them, so far as we know, without endangering anything. The mutated sequence can then be passed on to the next generation, where it can be mutated again. Thus mutations can accumulate on the sidelines without affecting the organism. But if you mutate your way to an actual, valid new gene, your new gene can create a new protein and thereby, potentially, play a role in evolution.
Mutations themselves enter the picture when DNA splits in half down the center of the staircase, thereby allowing the enclosing cell to split in half, and the encompassing organism to grow. Each half-staircase summons a matching set of nucleotides from the surrounding chemical soup; two complete new DNA molecules emerge. A mistake in this elegant replication process—the wrong nucleotide answering the call, a nucleotide typo—yields a mutation, either to a valid blueprint or a stretch of gibberish.
Building a Better Protein
Now at last we are ready to take Darwin out for a test drive. Starting with 150 links of gibberish, what are the chances that we can mutate our way to a useful new shape of protein? We can ask basically the same question in a more manageable way: what are the chances that a random 150-link sequence will create such a protein? Nonsense sequences are essentially random. Mutations are random. Make random changes to a random sequence and you get another random sequence. So, close your eyes, make 150 random choices from your 20 bead boxes and string up your beads in the order in which you chose them. What are the odds that you will come up with a useful new protein?
It’s easy to see that the total number of possible sequences is immense. It’s easy to believe (although non-chemists must take their colleagues’ word for it) that the subset of useful sequences—sequences that create real, usable proteins—is, in comparison, tiny. But we must know how immense and how tiny.
The total count of possible 150-link chains, where each link is chosen separately from 20 amino acids, is 20^150. In other words, many. 20^150 roughly equals 10^195, and there are only 10^80 atoms in the universe.
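The arithmetic behind “immense” is easy to check; this sketch just redoes the exponent comparison quoted above.

```python
import math

chain_length = 150   # links in a "modest-sized" protein chain
choices = 20         # amino acids available at each link

total_chains = choices ** chain_length        # 20^150, computed exactly
print(round(math.log10(total_chains), 1))     # 195.2, i.e. roughly 10^195

atoms_in_universe = 10 ** 80                  # common order-of-magnitude estimate
print(total_chains > atoms_in_universe ** 2)  # True: 20^150 dwarfs even (10^80)^2
```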
What proportion of these many polypeptides are useful proteins? Douglas Axe did a series of experiments to estimate how many 150-long chains are capable of stable folds—of reaching the final step in the protein-creation process (the folding) and of holding their shapes long enough to be useful. (Axe is a distinguished biologist with five-star breeding: he was a graduate student at Caltech, then joined the Centre for Protein Engineering at Cambridge. The biologists whose work Meyer discusses are mainly first-rate Establishment scientists.) He estimated that, of all 150-link amino acid sequences, 1 in 10^74 will be capable of folding into a stable protein. To say that your chances are 1 in 10^74 is no different, in practice, from saying that they are zero. It’s not surprising that your chances of hitting a stable protein that performs some useful function, and might therefore play a part in evolution, are even smaller. Axe puts them at 1 in 10^77.
In other words: immense is so big, and tiny is so small, that neo-Darwinian evolution is—so far—a dead loss. Try to mutate your way from 150 links of gibberish to a working, useful protein and you are guaranteed to fail. Try it with ten mutations, a thousand, a million—you fail. The odds bury you. It can’t be done.
A Bad Bet
But neo-Darwinianism understands that mutations are rare, and successful ones even scarcer. To balance that out, there are many organisms and a staggering immensity of time. Your chances of winning might be infinitesimal. But if you play the game often enough, you win in the end, right? After all, it works for Powerball!
Do the numbers balance out? Is Neo-Darwinian evolution plausible after all? Axe reasoned as follows. Consider the whole history of living things—the entire group of every living organism ever. It is dominated numerically by bacteria. All other organisms, from tangerine trees to coral polyps, are only a footnote. Suppose, then, that every bacterium that has ever lived contributes one mutation before its demise to the history of life. This is a generous assumption; most bacteria pass on their genetic information unchanged, unmutated. Mutations are the exception. In any case, there have evidently been, in the whole history of life, around 10^40 bacteria—yielding around 10^40 mutations under Axe’s assumptions. That is a very large number of chances at any game. But given that the odds each time are 1 in 10^77 against, it is not large enough. The odds against blind Darwinian chance having turned up even one mutation with the potential to push evolution forward are 10^40 × (1/10^77)—10^40 tries, where your odds of success each time are 1 in 10^77—which equals 1 in 10^37. In practical terms, those odds are still zero. Zero odds of producing a single promising mutation in the whole history of life. Darwin loses.
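The bet can be totted up directly from the figures quoted above (10^40 trials, a 1-in-10^77 chance per trial); exact rational arithmetic avoids any floating-point fuzz.

```python
from fractions import Fraction

trials = 10 ** 40              # bacteria (hence mutations) in the history of life
p_hit = Fraction(1, 10 ** 77)  # Axe's estimated chance of success per trial

expected_hits = trials * p_hit  # 10^40 / 10^77, kept as an exact fraction
print(expected_hits == Fraction(1, 10 ** 37))  # True: odds of 1 in 10^37
```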
His idea is still perfectly reasonable in the abstract. But concretely, he is overwhelmed by numbers he couldn’t possibly have foreseen: the ridiculously large number of amino-acid chains relative to number of useful proteins. Those numbers transcend the details of any particular set of estimates. The obvious fact is that genes, in storing blueprints for the proteins that form the basis of cellular life, encode an awe-inspiring amount of information. You don’t turn up a useful protein merely by doodling on the back of an envelope, any more than you write a Mozart aria by assembling three sheets of staff paper and scattering notes around. Profound biochemical knowledge is somehow, in some sense, captured in every description of a working protein. Where on earth did it all come from?
Neo-Darwinianism says that nature simply rolls the dice, and if something useful emerges, great. Otherwise, try again. But useful sequences are so gigantically rare that this answer simply won’t work. Studies of the sort Meyer discusses show that Neo-Darwinism is the quintessence of a bad bet.
The Great Darwinian Paradox
There are many other problems besides proteins. One of the most basic, and the last I’ll mention here, calls into question the whole idea of gene mutations driving macro-evolution—the emergence of new forms of organism, versus mere variation on existing forms.
To help create a brand new form of organism, a mutation must affect a gene that does its job early and controls the expression of other genes that come into play later on as the organism grows. But mutations to these early-acting “strategic” genes, which create the big body-plan changes required by macro-evolution, seem to be invariably fatal. They kill off the organism long before it can reproduce. This is common sense. Severely deformed creatures don’t ever seem fated to lead the way to glorious new forms of life. Instead, they die young.
Evidently there are no examples in the literature of mutations that affect early development and the body plan as a whole and are not fatal. The geneticists Christiane Nüsslein-Volhard and Eric Wieschaus won the Nobel Prize in 1995 for the “Heidelberg screen,” an exhaustive investigation of every observable or inducible mutation of Drosophila melanogaster (the same patient, long-suffering fruit fly I meddled with relentlessly in an undergraduate genetics lab in the 1970s). “[W]e think we’ve hit all the genes required to specify the body plan of Drosophila,” said Wieschaus in answering a question after a talk. Not one, he continued, is “promising as raw materials for macroevolution”—because mutations in them all killed off the fly long before it could mate. If an exhaustive search rules out every last plausible gene as a candidate for large-scale Drosophila evolution, where does that leave Darwin? Wieschaus continues: “What are—or what would be—the right mutations for major evolutionary change? And we don’t know the answer to that.”
There is a general principle here, similar to the earlier principle that the number of useless polypeptides crushes the number of useful ones. The Georgia Tech geneticist John F. McDonald calls this one a “great Darwinian paradox.” Meyer explains: “genes that are obviously variable within natural populations seem to affect only minor aspects of form and function—while those genes that govern major changes, the very stuff of macroevolution, apparently do not vary or vary only to the detriment of the organism.” The philosopher of biology Paul Nelson summarizes the body-plan problem:
Research on animal development and macroevolution over the last thirty years—research done from within the neo-Darwinian framework—has shown that the neo-Darwinian explanation for the origin of new body plans is overwhelmingly likely to be false—and for reasons that Darwin himself would have understood.
Darwin would easily have understood that minor mutations are common but can’t create significant evolutionary change; major mutations are rare and fatal.
It can hardly be surprising that the revolution in biological knowledge over the last half-century should call for a new understanding of the origin of species.
Intelligent Design, as Meyer describes it, is a simple and direct response to a specific event, the Cambrian explosion. The theory suggests that an intelligent cause intervened to create this extraordinary outburst. By “intelligent” Meyer understands “conscious”; the theory suggests nothing more about the designer. But where is the evidence? To Meyer and other proponents, that is like asking—after you have come across a tree that is split vertically down the center and half burnt up—“but where is the evidence of a lightning strike?” The exceptional intricacy of living things, and their elaborate mechanisms for fitting precisely into their natural surroundings, seemed to cry out for an intelligent designer long before molecular biology and biochemistry. Darwin’s theory, after all, is an attempt to explain “design without a designer,” according to evolutionary biologist Francisco Ayala. An intelligent designer might seem more necessary than ever now that we understand so much cellular biology, and the impossibly long odds facing any attempt to design proteins by chance, or assemble the regulatory mechanisms that control the life cycle of a cell.
Meyer doesn’t reject Darwinian evolution. He only rejects it as a sufficient theory of life as we know it. He’s made a painstaking investigation of Darwin’s theory and has rejected it for many good reasons that he has carefully explained. He didn’t rush to embrace intelligent design. Just the opposite. But the explosion of detailed, precise information that was necessary to build the brand-new Cambrian organisms, and the fact that the information was encoded, represented symbolically, in DNA nucleotides, suggests to Meyer that an intelligent designer must have been responsible. “Our uniform experience of cause and effect shows that intelligent design is the only known cause of the origin of large amounts of functionally specified digital information,” he writes. (“Digital” is confusing here; it only means information represented by a sequence of symbols.)
Was the Cambrian Explosion unique in some absolute sense, or was it the extreme endpoint of a spectrum? After all, there were infusions of new genetic information before and after. Meyer himself writes that “the sudden appearance of the Cambrian animals was merely the most outstanding instance of a pattern of discontinuity that extends throughout the geologic column.”
It’s not easy to decide whether something stands alone or at the far end of some spectrum. Consider Meyer’s “functionally specified digital information.” Information intended for one specific purpose and spelled out in a sequence of symbols is a rare bird in nature. It’s an outlier in the world of intelligence, too. We nearly always communicate in symbols that are used for many purposes; it’s hard for us to confine any symbol system to a single purpose. Even digits are used to represent numbers of many kinds, to express order as well as magnitude, as names (2001: A Space Odyssey) or parts of English phrases (“second rate”). A line of music can be heard in the head, hummed or sung, played on a zither or performed by a large orchestra. Or it can serve as a single graphic symbol meaning “music.” But the genetic code is used to specify the structure of certain molecules only (albeit in a series of separate steps and information-transfers within the cell). Nature, for its part, encodes information in many ways: airborne scents are important to bees, butterflies, elephants seeking to mate, birds avoiding trouble, and untold other creatures. The scent is a symbol; it’s not the scent that threatens the bird. Channels in sand dunes encode information about the passing breezes—and so on. There are endless examples—none approaching the sophistication and complexity of DNA coding.
If Meyer were invoking a single intervention by an intelligent designer at the invention of life, or of consciousness, or rationality, or self-aware consciousness, the idea might seem more natural. But then we still haven’t explained the Cambrian explosion. An intelligent designer who interferes repeatedly, on the other hand, poses an even harder problem of explaining why he chose to act when he did. Such a cause would necessarily have some sense of the big picture of life on earth. What was his strategy? How did he manage to back himself into so many corners, wasting energy on so many doomed organisms? Granted, they might each have contributed genes to our common stockpile—but could hardly have done so in the most efficient way. What was his purpose? And why did he do such an awfully slipshod job? Why are we so disease prone, heartbreak prone, and so on? An intelligent designer makes perfect sense in the abstract. The real challenge is how to fit this designer into life as we know it. Intelligent design might well be the ultimate answer. But as a theory, it would seem to have a long way to go.
A Final Challenge
I might, myself, expect to find the answer in a phenomenon that acts as if it were a new and (thus far) unknown force or field associated with consciousness. I’d expect complex biochemistry to be consistently biased in the direction that leads closer to consciousness, as gravitation biases motion towards massive objects. I have no evidence for this idea. It’s just the way biology seems to work.
Although Stephen Meyer’s book is a landmark in the intellectual history of Darwinism, the theory will be with us for a long time, exerting enormous cultural force. Darwin is no Newton. Newton’s physics survived Einstein and will always survive, because it explains the cases that dominate all of space-time except for the extreme ends of the spectrum, at the very smallest and largest scales. It’s just these most important cases, the ones we see all around us, that Darwin cannot explain. Yet his theory does explain cases of real significance. And Darwin’s intellectual daring will always be inspiring. The man will always be admired.
He now poses a final challenge. Whether biology will rise to this last one as well as it did to the first, when his theory upset every apple cart, remains to be seen. How cleanly and quickly can the field get over Darwin, and move on?—with due allowance for every Darwinist’s having to study all the evidence for himself? That is one of the most important questions facing science in the 21st century.
Today is Tuesday, Oct. 18, the 291st day of 2011. There are 74 days left in the year.
Today's Highlight in History:
On Oct. 18, 1961, the movie musical "West Side Story," starring Natalie Wood and Richard Beymer, premiered in New York, the film's setting.
On this date:
In 1685, King Louis XIV signed the Edict of Fontainebleau, revoking the Edict of Nantes that had established legal toleration of France's Protestant population, the Huguenots.
In 1867, the United States took formal possession of Alaska from Russia.
In 1892, the first long-distance telephone line between New York and Chicago was officially opened (it could only handle one call at a time).
In 1931, inventor Thomas Alva Edison died in West Orange, N.J., at age 84.
In 1944, Soviet troops invaded Czechoslovakia during World War II.
In 1962, James D. Watson, Francis Crick and Maurice Wilkins were honored with the Nobel Prize in Physiology or Medicine for determining the double-helix molecular structure of DNA.
In 1969, the federal government banned artificial sweeteners known as cyclamates (SY'-kluh-maytz) because of evidence they caused cancer in laboratory rats.
In 1971, the Knapp Commission began public hearings into allegations of corruption in the New York City police department (the witnesses included Frank Serpico).
In 1977, West German commandos stormed a hijacked Lufthansa jetliner on the ground in Mogadishu, Somalia, freeing all 86 hostages and killing three of the four hijackers.
In 1982, former first lady Bess Truman died at her home in Independence, Mo., at age 97.
Ten years ago: CBS News announced that an employee in anchorman Dan Rather's office had tested positive for skin anthrax. Four disciples of Osama bin Laden were sentenced in New York to life without parole for their roles in the deadly 1998 bombings of two U.S. embassies in Africa.
Five years ago: Secretary of State Condoleezza Rice, visiting Tokyo, said the United States was willing to use its full military might to defend Japan in light of North Korea's nuclear test. The Dow Jones industrial average passed 12,000 for the first time before pulling back to close at 11,992.68.
One year ago: Four men snared in an FBI sting were convicted of plotting to blow up New York City synagogues and shoot down military planes with the help of a paid informant who'd convinced them he was a terror operative.
Today's Birthdays: Rock-and-roll performer Chuck Berry is 85. Sportscaster Keith Jackson is 83. Actress Dawn Wells is 73. College and Pro Football Hall-of-Famer Mike Ditka is 72. Actor Joe Morton is 64. Actress Pam Dawber is 61. Author Terry McMillan is 60. Writer-producer Chuck Lorre is 59. Gospel singer Vickie Winans is 58. International Tennis Hall of Famer Martina Navratilova is 55. Boxer Thomas Hearns is 53. Actor Jean-Claude Van Damme is 51. Actress Erin Moran is 51. Jazz musician Wynton Marsalis is 50. Actor Vincent Spano is 49. Rock musician Tim Cross is 45. Tennis player Michael Stich (shteek) is 43. Singer Nonchalant is 38. Actress Joy Bryant is 37. Rock musician Peter Svenson (The Cardigans) is 37. Actor Wesley Jonathan is 33. Rhythm-and-blues singer-actor Ne-Yo is 32. Country singer Josh Gracin is 31. Country musician Jesse Littleton (Marshall Dyllon) is 30. Jazz singer-musician Esperanza Spalding is 27. Actress-model Freida Pinto is 27. Actor Zac Efron is 24. Actress Joy Lauren is 22. Actor Tyler Posey is 20.
Thought for Today: "Only those ideas that are least truly ours can be adequately expressed in words." _ Henri Bergson, French philosopher (1859-1941).
Copyright 2011, The Associated Press. All rights reserved.
Flag of Groningen (adopted February 17, 1950)
The flag of the Netherlands province of Groningen has a white cross design with the upper hoist and lower fly sections red, and the remaining two sections blue. Inside the white cross is a thinner green cross.
The colours are based on the Groningen coat of arms. The white and green cross symbolises Groningen, the capital city of the province. Red, white, and blue are found on the coat of arms of the Ommelanden, the areas of the province surrounding the capital city. The colours of the city are placed in the centre to symbolise the central location of the city within the province.
|Subdivisions of the Kingdom of The Netherlands|
|Countries: Aruba ● Curaçao ● Netherlands ● Sint Maarten|
|Provinces of The Netherlands: Drenthe ● Flevoland ● Friesland ● Gelderland ● Groningen ● Limburg ● North Brabant ● North Holland ● Overijssel ● Utrecht ● Zeeland ● South Holland|
|Metropolitan areas: Almelo ● Almere ● Amsterdam ● Arnhem ● Barendrecht ● Dordrecht ● Eindhoven ● Gorinchem ● Leiden ● Lisse ● Maastricht ● Nieuwkoop ● Nijmegen ● Papendrecht ● Sliedrecht ● Strijen ● The Hague ● Utrecht ● Vlaardingen ● Zwijndrecht|
|Special municipalities: Bonaire ● Saba ● Sint Eustatius|
|Former countries: Netherlands Antilles (1986-2010) ● Suriname (1959-1975)|
|
Search for a relative to learn more about your family history.
Where is the Voda family from?
You can see how Voda families moved over time by selecting different census years. The Voda family name was found in the USA, the UK, and Canada between 1880 and 1920. The most Voda families were found in the USA in 1920. In 1880 there were 12 Voda families living in Illinois, which was 100% of all recorded Vodas in the USA, making Illinois the state with the highest Voda population that year.
Use census records and voter lists to see where families with the Voda surname lived. Within census records, you can often find information like name of household members, ages, birthplaces, residences, and occupations.
In 1940, Farmer and New Worker were the top reported jobs for men and women in the USA named Voda: 34% of Voda men worked as a Farmer and 34% of Voda women worked as a New Worker. Some less common occupations for Americans named Voda were Salesman and Maid.
|
Want to build your own satellite and launch it into space? It’s easier than you may think. The first in a series of four books, this do-it-yourself guide shows you the essential steps needed to design a base picosatellite platform—complete with a solar-powered computer-controlled assembly—tough enough to withstand a rocket launch and survive in orbit for three months.
Whether you want to conduct scientific experiments, run engineering tests, or present an orbital art project, you’ll select basic components such as an antenna, radio transmitter, solar cells, battery, power bus, processor, sensors, and an extremely small picosatellite chassis. This entertaining series takes you through the entire process, from planning to launch.
- Prototype and fabricate printed circuit boards to handle your payload
- Choose a prefab satellite kit, complete with solar cells, power system, and on-board computer
- Calculate your power budget: how much you need vs. what the solar cells collect
- Select between the Arduino or BasicX-24 onboard processors, and determine how to use the radio transmitter and sensors
- Learn your launch options, including the providers and cost required
- Use milestones to keep your project schedule in motion
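The power-budget step is simple arithmetic: average generation from the solar cells versus the average draw of the subsystems. Here is a minimal sketch; every number in it is an invented placeholder, not a figure from the book:

```python
# Toy picosatellite power budget: average solar generation vs. average load.
# All numbers below are illustrative assumptions, not the book's figures.

SOLAR_PEAK_W = 1.5        # assumed peak output of body-mounted cells
SUNLIT_FRACTION = 0.6     # assumed fraction of a low-Earth orbit in sunlight

loads_w = {               # assumed average subsystem draws, in watts
    "processor": 0.10,
    "radio_rx": 0.15,
    "radio_tx_avg": 0.30, # transmitter duty-cycled, averaged over an orbit
    "sensors": 0.05,
}

generated = SOLAR_PEAK_W * SUNLIT_FRACTION
consumed = sum(loads_w.values())
margin = generated - consumed   # must stay positive for a viable design

print(f"average generation: {generated:.2f} W")
print(f"average load:       {consumed:.2f} W")
print(f"margin:             {margin:+.2f} W")
```

If the margin comes out negative, either the loads must be duty-cycled harder or more cell area is needed; the same arithmetic, repeated for worst-case eclipse orbits, also sizes the battery.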
|
Many people mistakenly think that Hitler was either anti-religious or an atheist. This is untrue. Hitler saw himself as doing God's work, was inspired by Martin Luther (the father of the Protestant Reformation), and was in constant contact with the church.
- Hitler was an atheist
- Hitler shows that atheism and secularism are dangerous
- Hitler persecuted Christians
Hitler was an atheist
Hitler was not an atheist. Hitler said in his famous book, Mein Kampf, that he was doing the work of God:
- I am convinced that I am acting as the agent of our Creator. By fighting off the Jews, I am doing the Lord's work.
And in 1938 Hitler declared "I am now as before a Catholic and will always remain so." Hitler also drew much of his inspiration and anti-Semitic hate from the works of the protestant reformer Martin Luther.
Hitler shows that atheism and secularism are dangerous
Hitler's Nazi Germany was anything but secular. Hitler championed religious indoctrination in public schools, negotiated a treaty with the Vatican under which German tax money goes to the church, and created special protections for Catholic churches and priests, which were de facto extended to German Protestant churches and ministers as well.
- Secular schools can never be tolerated because such schools have no religious instruction, and a general moral instruction without a religious foundation is built on air; consequently, all character training and religion must be derived from faith. . . we need believing people.
- Hitler, April 26, 1933, during negotiations which led to the Nazi-Vatican Concordat of 1933.
- Imbued with the desire to secure for the German people the great religious, moral, and cultural values rooted in the two Christian Confessions, we have abolished the political organizations but strengthened the religious institutions.
- Adolf Hitler, speaking in the Reichstag on Jan. 30, 1934
The German Army enlisted men's belt buckle also read "Gott Mit Uns" (God With Us). To say that Nazi Germany was secular is factually incorrect.
Hitler persecuted Christians for their beliefs
As the above shows, Hitler and Nazi Germany were neither atheistic nor secular. Christians have claimed that many Christians were sent to the death camps. The only Christians sent to the death camps because of their religious beliefs were the Jehovah's Witnesses, who were pacifists and therefore a threat to Germany's war effort. Most other Christians in the death camps were the German administrators. Atheists, on the other hand, were targeted and sent to death camps:
- We were convinced that the people needs and requires this faith. We have therefore undertaken the fight against the atheistic movement, and that not merely with a few theoretical declarations: we have stamped it out.
- Adolf Hitler, in a speech in Berlin on Oct. 24, 1933
The situation was different in Poland. Polish churches were shut down, but that was because they resisted Nazi influence.
|
Difference between revisions of "Zaragoza"
Revision as of 21:47, 24 April 2013
Zaragoza is a warm and inviting city strategically located between Madrid, Barcelona, Bilbao, Valencia and Toulouse. In people's haste to see the big cities, this gem is often passed by without so much as a second look. The city welcomes visitors with its rich culture, shopping, eating and sightseeing. Its more than 2,000 years of history have left the city one of the greatest historical and artistic legacies in Spain. It is situated in Aragon, one of the historic kingdoms of Spain.
Signs of the city’s founding, when the city was named after Emperor Augustus, are still visible and can be enjoyed by tourists even today. Two thousand years later, the architectural remains of large public buildings testify to Caesar Augustus’ influence over the city. Today you can still admire the city’s Forum, Thermal Baths, River Port and Great Theatre, archaeological remains which reflect the splendour of the city as it was during the Roman Empire.
Later on, during the Muslim occupation of Spain, Zaragoza was the capital of a kingdom in which art, music and science formed the cornerstones of life at Court. From this period you can still see the Aljafería Palace, a marvellous example of Muslim art, which has been witness to Zaragoza and its rich history right up to the present day. From the early days of Christianity, Zaragoza still possesses a multitude of indicators of the grandeur of the city: thanks to the Mudéjar style, a World Heritage legacy of the tolerance whereby different cultures were able to live side by side, you can still enjoy beautiful enclaves such as the San Salvador Cathedral (the Seo) or the San Pablo church. From the Renaissance period there is a multitude of palatial houses which tell of the sumptuousness of Zaragoza in the 16th century. Museums, such as the one dedicated to sculptor Pablo Gargallo, or exhibition halls, such as the monumental Lonja, are archetypal of Aragonese Renaissance art.
Zaragoza is known worldwide as the home to the magnificent Roman Catholic Basilica–Cathedral of Our Lady of the Pillar, heir to a tradition which is over 2,000 years old, and a destination for Christian pilgrims of all denominations.
Zaragoza has a continental Mediterranean climate: very dry, with cold winters and hot summers. With an average of 318 mm per year, rainfall is a rarity, occurring mostly in spring. Summers are dry, with only a few storms in the late afternoon. In July and August, temperatures are typically above 30°C (86°F), reaching up to 40°C (104°F) a few days per year. On those days you will quickly understand the purpose of the siesta: hiding away after lunch, during the hottest part of the day, so you can enjoy the evenings and nights at a delightful 18-22°C.
In winter the temperatures are low, usually between 0 and 10°C (32-50°F), with some frosts during the night. Snow occurs only once every couple of years, but fog is not uncommon (about 20 days from November to January). The worst part, however, is the Cierzo, a cold and dry wind blowing from the NW that is quite common on clear days and can make your stay really unpleasant. Beware also of sunny days in spring and autumn: if the Cierzo blows, you will regret not having warm clothes with you.
Detailed weather forecasts including wind speed can be found in .
When to Visit
The best time to visit Zaragoza is during spring (April to mid-June) and autumn (September to October). In late June and July the days can be quite hot, but in the evenings the city is bustling with people going out for dinner or having a beer with friends on a terrace. In August the city is almost deserted, with most people on holiday in the mountains or on the coast, and more than half the bars, restaurants and small businesses closed.
The major city festival is El Pilar, which takes place every year in the week of 12 October, with lots of concerts, performances and street entertainment. It is also the best time to see a bullfight in Zaragoza.
The Easter week, although not in the same league as its Andalucía or Calanda counterparts, is very scenic, with several processions going through the city centre every day with their dramatic sculptures, black-dressed praying women and hundreds of hooded people playing drums.
The main carriers are Ryanair, with flights from Alicante, Brussels-Charleroi, Milan-Orio al Serio, London-Stansted, and Rome-Ciampino; Iberia/Air Nostrum, with flights from Madrid, Paris-Orly, Frankfurt, La Coruña and Vigo; and Air Europa, with flights from Palma de Mallorca, Lanzarote and Tenerife. For most of these destinations there is a daily flight, while others are served 3 or 4 times a week.
There is also a web blog with more information concerning arrivals and departures, Zaragoza Airport Blog .
Transfer to/from the airport: The cheapest option is the airport bus stopping at Los Enlaces, Delicias train station, Avenida de Navarra 12, and Paseo de María Agustín 7 in the city centre (a 45-minute ride). The bus costs €1.85 and runs every 30 minutes Mo-Sa and every hour on Sundays and holidays. Alternatively, a taxi will cost around €25-30 and take around 20 minutes to the city centre.
As most flights to Zaragoza only run once a day, it is sometimes more convenient to fly to Madrid or Barcelona airports, from where you can reach Zaragoza in less than 3 hours.
From Madrid Barajas Airport: go to Atocha RENFE train station either by taxi (30 minutes, around €25) or by metro (45 minutes, €2) and then take the high-speed train AVE to Zaragoza (1h30, around €50). A cheaper but less comfortable alternative is the ALSA coach that runs between Barajas terminal T4 and Zaragoza every 2-3 hours (3h45 trip, single/return: €15/€26). If you are in terminal T1, T2 or T3, take the free airport shuttle bus to terminal T4. The bus to Zaragoza stops in the same place as the airport shuttle. Note that there are no ticket counters, information posts or timetables: place yourself with your back to the T4 terminal exit, look to your right, and you will see the ALSA ticket vending machine.
From Barcelona Airport: The easiest way is to take the half-hourly RENFE C-10 suburban train to Barcelona Sants (20 minutes, €2.20), and then take the high speed train AVE to Zaragoza (1h45, around €60). If you already have your AVE ticket, you can get the suburban train ticket for free in the automatic vending machines, by typing the code for “cercanías” that appears in your AVE ticket.
Zaragoza is served by the high speed train AVE that reaches Madrid in approximately 1 hour 20 minutes and Barcelona in approx. 1 hour 30 minutes. There are up to 19 trains a day in each direction for Madrid and 12 for Barcelona. Regular rates start at about €50 to Madrid and €60 to Barcelona, but you can get up to a 60% discount if you book through the web 15 days in advance.
A cheaper way to get to Zaragoza from Barcelona is the "Regional Express", a slow train on an ancient track, stopping at every small village and the odd post-industrial ghost town, and passing through really astonishing landscapes. The ride takes 5 hours and costs €22.
For more information on train schedules and prices, visit the website of RENFE .
All trains and buses arrive at Delicias station. The city centre is some 2 km away and can be reached by urban buses 34 and 51 or by taxi (10 minutes, around €10).
You can reach Zaragoza from either Madrid or Barcelona in 3:45 hours. The coach company is ALSA and a single/return ticket costs around €15/€26. Zaragoza is also well connected with other main cities, such as Valencia and Bilbao. It is also possible to get to Zaragoza from France by bus; the main lines travel from Lourdes, Tarbes, Pau and Oloron.
For bus schedules from Barcelona, also try Barcelona Nord .
Zaragoza is very well connected by toll-free highways with Huesca (1h), Teruel (2h) and Madrid (3h), and by toll highways with Barcelona (3h, €30), Pamplona and Bilbao. Traffic around the city is relatively light except on some weekends and holidays.
Free parking in the city centre is very scarce. Most streets have metered parking limited to 1 or 2 hours. Paid underground car parks are scattered across the city and usually have spaces available.
If you stay in or near the old town, most of the city is walkable.
If you plan on bussing around, a travel card costs seven euros at any tobacco kiosk (this includes an initial card fee of two euros, so topping it up next time will cost just €5). With the card you can change lines within an hour without being charged again. Single tickets cost €1.05.
The city's taxi drivers are plentiful and mostly honest. Zaragoza Taxi phone numbers
The sightseeing bus is another option: more than just a great way to travel around the city, it is affordable for all budgets. It costs €7 (free if you have the Zaragoza Card) and the ticket can be used all day.
The Zaragoza Card provides, from €7.66 per day:
Swimming Pools for Hot Days
Summer days can be very hot in Zaragoza. If you prefer relaxing by the swimming pool over a sightseeing program, here are a few suggestions. Public swimming pools in Zaragoza are generally clean and well maintained. The entrance fee is some €3 for an adult. Open-air pools are open until 9 or 10PM in the evening.
Zaragoza has much to offer in the way of shopping, with most central streets in Zaragoza being lined with shopping opportunities.
Zaragoza's shopping area stretches from Residencial Paraiso in Sagasta to the Plaza de España. The most exclusive shops are on Francisco de Vitoria, San Ignacio de Loyola, Cadiz, Isaac Peral and the streets crossing them.
Zaragoza's craft and souvenir shops are located at Anticuarios de la Plaza de San Brun.
Mercadillo La Romareda behind the La Romareda Football Stadium is the largest open-air market in Zaragoza, but if you are looking for food and fresh produce head for Mercado Central and Lanuza Market.
If you are looking for everything under one roof, then El Corte Inglés is located next to Plaza de Paraíso, and Centro Comercial Gran Casa is a one-stop super mall where you can find everything including shops, restaurants a bowling alley and cinemas.
Mercado Central is on a site which has been a market place since the Middle Ages. It is the perfect place to buy Zaragozan products as well as observe the atmosphere of a traditional Spanish market. The Misericordia Bullring is the place to go on Sunday as it is the venue for the traditional flea market.
Some of the best known regional specialities are: Bacalao al Ajoarriero, codfish with garlic and eggs; Huevos al Salmorejo, eggs with cold tomato cream; Longanizas y Chorizos, highly appreciated kinds of sausage; Ternasco Asado, roast young lamb; Pollo al Chilindrón, chicken in a sauce of cured ham, tomatoes, onions and paprika; Cordero a la Pastora, lamb shepherd's style; Lomo de Cerdo a la Zaragozana, pork loin; and Migas a la Aragonesa, a dish of crumbs scrambled with egg and chorizo. People even eat rabbit stewed in rabbit blood. Borrajas is a vegetable which can only be found in Aragon; it is usually eaten with olive oil. Melocotón con vino, peaches in wine, is also a good dessert option, though sometimes it is hard to find a restaurant serving it.
Zaragoza is well known for its many tapas bars. The best place to eat is the old city, commonly called the "Casco Viejo", a cluster of small streets similar to a souk (zoco).
One excellent choice is Calle de los Mártires, where one tapas bar serves just a single tapa, the mushroom; close to it is the Taberna de Doña Casta, known for its "huevos rotos con foie", which is mainly scrambled egg with fries and foie or jamón serrano. Plaza Santa Marta is in the old town as well; it is a little more expensive, but the food is of high quality. A "tabla" is a wooden board on which different tapas such as cheese and sausages are served, often with a bottle of wine included in the price.
Seafood tapas are not that common, but they can be very good and cheap. Casa de Mar, located on Eusebio Blasco Street, is a local favorite: cheap crayfish, cuttlefish and a great cold white wine. A four-person meal with two bottles of wine costs less than €12 each.
A number of good wines are produced in Aragon.
The area around Calle de Espoz y Mina and Calle Mayor, a stone's throw from Plaza del Pilar, has plenty of varied bars from which to choose.
The following places are located in Huesca province, not more than 2 hours away by car, in the middle of the Pyrenees. Charming places in the heart of nature.
|
Sometimes people use alcohol to help them cope with a problem. Although alcohol can make some people feel good at first and help someone forget about their problems for a bit, that feeling doesn’t last. Alcohol can mess with our bodies and our brains and can make people’s problems even worse, not better.
Alcohol can make some people feel like they're invincible and make decisions that they would never normally make. These can be things that they regret afterwards and make them feel bad.
There are also laws about buying and drinking alcohol. One of the reasons these laws exist is to protect young people from getting hurt in any way.
The best way to deal with a problem for good is to talk to someone about it, and to get advice and support from people who are trained to help.
Who can help
On the ChildLine website you can also find information and advice if you're worried about how much alcohol your parent or carer might be drinking.
|
Explore energy and circuits through augmented reality, individually or together, with these interactive playing cards and free app. Includes six sets of twenty cards.
The MindLabs Energy and Circuits Class Set is a magical STEM learning tool for children ages 8 and older. It combines six sets of twenty game cards, a free digital app, and augmented reality to provide students with a fun and immersive learning experience. Play alone, or collaborate with friends, as you add and remove cards, draw wires, and create circuits that come to life in 3D! The ideal learning tool for solo or team play, MindLabs enables students to explore energy sources, circuits, and more.
Create, play, and collaborate from any location!
- Assemble cards picturing batteries, light bulbs, fans, and more into working circuits
- Draw and connect wires on a mobile device to bring circuits to life
- Explore forms of energy with animated vocabulary cards
- Investigate energy resources with animated idea cards
- Step-by-step lessons guide students through basic circuit concepts
- Work independently or collaborate with students in any location
- Includes 6 card sets (20 cards each) and more than 20 interactive lessons with step-by-step guidance
- Extra thick cards will withstand years of use in your classroom
- Free application is intuitive and engaging for young learners
How It Works
- Download the free app in the App Store or Google Play Store – MindLabs: Energy and Circuits.
- Open the app and select Challenge or Create from the menu.
- Find the character card in your card deck and point your mobile device at the card.
- Progress through the Challenges to learn circuit and energy basics.
- Create your own unique circuits, independently or collaborate with friends.
- 6x Game Piece Card Sets (20 cards per set)
- 1x Storage Box
- 1x Free MindLabs Augmented Reality Application for Mobile Devices
|
New Space Co-Creation Manager and Associate Professor
VTT Technical Research Centre of Finland Ltd, Finland
Satellite communications and terrestrial mobile communications systems have traditionally been designed and operated as distinctly separate systems. During the third generation of cellular technology (3G), the first steps toward convergence of satellite and terrestrial systems were taken, and the satellite air interface was made compatible with the terrestrial Universal Mobile Telecommunications System (UMTS) infrastructure. In 4G, satellite systems have been used to enable global roaming in places where a terrestrial 4G network is impossible or too expensive to install. With 5G the progress continues, and there are real promises of wide-scale use of integrated systems in the future.
What will integration of networks mean for end users and operators? First of all, a joint radio interface for the terrestrial and satellite links would mean that we could use the same handset to connect via satellite in any location, including outside terrestrial coverage. There are still plenty of locations where building terrestrial infrastructure is not economically feasible, or not possible at all, such as in marine environments. The ability to use the same equipment without the need to buy specific satellite terminals would lower barriers to adopting the technology and provide resiliency and a greater sense of safety to consumers, e.g. during hiking trips in remote locations. Secondly, it is possible to create private networks for public safety use, harbor areas, etc., and use satellites as backhaul connections to link those private networks to the outside world.
The research community is already looking actively towards 6G networks. The aim is to enable new applications, increase capacity, reduce latency and provide even higher mobility compared to previous generations. One of the planned aspects is that the vertical dimension, i.e. the integration of terrestrial, aerial, and satellite networks, is taken into account in network design and operations from the beginning, leading to a three-dimensional (3D) architecture (Dang 2020; Höyhtyä et al., 2022). Therefore, 6G systems can be tailored to support both the connectivity and the positioning needs of future users and applications accurately and efficiently.
In addition to technical advantages, 6G systems are designed with sustainable development goals in mind. The multi-layer systems are developed from economic, social and environmental viewpoints. There are currently large satellite constellations under development aiming to provide services to developing countries, enabling remote healthcare, and supporting e-learning, e.g. in Congo. From the environmental viewpoint, satellites can help preserve Arctic areas, since no terrestrial infrastructure needs to be built in that fragile environment. In addition, the increasing number of satellites in orbit should not increase space debris in a way that endangers satellite services for future generations.
From the frequency point of view, there are many spectrum-management challenges ahead for 6G systems in order to support future needs and ensure interference-free operation alongside existing systems. Dynamic spectrum management needs to be updated for the 6G era, and many topics must be addressed to make this happen. First, defining the most suitable frequency bands for systems and links. Second, developing spectrum-sharing mechanisms to manage the complexity of a dynamic and mobile 3D network; most probably, artificial-intelligence-based solutions will be required. 6G SatCom-related spectrum sharing may include coexistence a) between different satellite systems, b) between satellite and terrestrial systems, and c) between systems in different layers of the multi-layer network. There have been studies on database-assisted spectrum sharing and on predictive approaches, e.g. licensed shared access (Höyhtyä et al., 2021). However, plenty of technical studies and regulatory decisions will be required to realise the visions put forward by the research community.
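The database-assisted sharing idea can be sketched in a few lines. The channel plan, the incumbents and the latitude-based protection rule below are all illustrative assumptions for the sketch, not part of any real LSA interface:

```python
# Minimal sketch of database-assisted spectrum sharing: before transmitting,
# a terminal asks a (hypothetical) geolocation database which channels are
# free at its position. Incumbents from different network layers (satellite,
# terrestrial, aerial) are protected inside latitude bands.

from dataclasses import dataclass

@dataclass(frozen=True)
class Assignment:
    channel: int
    lat_range: tuple   # (min_lat, max_lat) band where the incumbent is protected
    layer: str         # "satellite", "terrestrial" or "aerial"

# Assumed incumbent assignments held by the database.
INCUMBENTS = [
    Assignment(channel=1, lat_range=(40.0, 50.0), layer="terrestrial"),
    Assignment(channel=2, lat_range=(60.0, 70.0), layer="satellite"),
]

ALL_CHANNELS = {1, 2, 3}

def available_channels(lat: float) -> set:
    """Channels a new user may take at latitude `lat` without overlapping
    a protected incumbent, whichever network layer it belongs to."""
    blocked = {a.channel for a in INCUMBENTS
               if a.lat_range[0] <= lat <= a.lat_range[1]}
    return ALL_CHANNELS - blocked

print(available_channels(45.0))  # terrestrial incumbent blocks channel 1
print(available_channels(65.0))  # satellite incumbent blocks channel 2
```

A real system would add time-limited grants, interference margins and prediction of terminal movement; the point of the sketch is only the query-before-transmit pattern the text describes.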
M. Höyhtyä et al., “Licensed shared access field trial and a testbed for integrated satellite-terrestrial communications including research directions for 5G and beyond” Wiley International Journal of Satellite Communications and Networking, vol. 39, pp. 455–472, July/August 2021 https://doi.org/10.1002/sat.1380.
M. Höyhtyä, S. Boumard, A. Yastrebova, P. Järvensivu, M. Kiviranta, and A. Anttonen, “Sustainable Satellite Communications in the 6G Era: A European View for Multi-Layer Systems and Space Safety,” submitted for consideration in IEEE journal, preprint available at https://arxiv.org/pdf/2201.02408
Marko Höyhtyä received the D.Sc. (Tech.) degree in telecommunications engineering from the University of Oulu, where he currently holds an associate professor position. He is also an associate professor at the National Defence University. He is currently working as a New Space Co-Creation Manager, coordinating space technology research at VTT. He was a Visiting Researcher at the Berkeley Wireless Research Center, CA, from 2007 to 2008, and a Visiting Research Fellow with the European Space Research and Technology Centre, the Netherlands, in 2019. His research interests include critical communications, autonomous systems, and resource management in terrestrial and satellite communication systems.
|
The Internet Explorer Security tab is used to set and change options that can help protect your computer from potentially harmful or malicious online content.
If you aren't already looking at the Internet Explorer Security settings, follow these steps to find them:
Open Internet Explorer by clicking the Start button. In the search box, type Internet Explorer, and then, in the list of results, click Internet Explorer.
Click the Tools button, and then click Internet Options.
Click the Security tab.
Internet Explorer assigns all websites to one of four security zones: Internet, Local intranet, Trusted sites, or Restricted sites. The zone to which a website is assigned specifies the security settings that are used for that site.
The following table describes the four Internet Explorer security zones.
The level of security set for the Internet zone is applied to all websites by default. The security level for this zone is set to Medium High (but you can change it to either Medium or High). The only websites for which this security setting is not used are those in the Local intranet zone or sites that you specifically entered into the Trusted or Restricted sites zones.
The level of security set for the Local intranet zone is applied to websites and content that is stored on a corporate or business network. The security level for the Local intranet zone is set to Medium (but you can change it to any level).
The level of security set for Trusted sites is applied to sites that you have specifically indicated to be ones that you trust not to damage your computer or information. The security level for Trusted sites is set to Medium (but you can change it to any level).
The level of security set for Restricted sites is applied to sites that might potentially damage your computer or your information. Adding sites to the Restricted zone does not block them, but it prevents them from using scripting or any active content. The security level for Restricted sites is set to High and can't be changed.
In addition to the default security levels, you can also customize individual security settings by clicking the Custom level button.
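The zone descriptions above can be summarised as a small lookup table. This is only a sketch of the article's table for quick reference, not Internet Explorer's internal storage format:

```python
# The four IE security zones with their default levels, as described in the
# article. "adjustable" records whether the default can be changed; this is
# a summary data structure, not how IE actually stores zone settings.

ZONES = {
    "Internet":         {"default": "Medium High", "adjustable": True},
    "Local intranet":   {"default": "Medium",      "adjustable": True},
    "Trusted sites":    {"default": "Medium",      "adjustable": True},
    "Restricted sites": {"default": "High",        "adjustable": False},
}

def default_level(zone: str) -> str:
    """Return the default security level for a named zone."""
    return ZONES[zone]["default"]

print(default_level("Internet"))          # the default applied to all sites
print(default_level("Restricted sites"))  # fixed at High, cannot be changed
```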
For more information about security, see Security and privacy features in Internet Explorer.
For information about adding or removing websites from security zones, see Security zones: adding or removing websites.
Note that privacy and cookie settings are not changed here; they are changed on the Privacy tab.
This information applies to Windows Internet Explorer 7 and Windows Internet Explorer 8.
If you aren't already looking at Internet Explorer's Security settings, follow these steps to find them:
Open Internet Explorer by clicking the Start button, and then clicking Internet Explorer.
|
Imagine a scenario in which there is a decline in aggregate demand. Identify the phase of the business cycle that corresponds to a decline in aggregate demand. Gross Domestic Product (GDP) measures the amount of new production, and a change in the amount of new production affects employment. Describe what would happen to GDP, the unemployment rate, and the inflation rate if there is a decline in aggregate demand.
Reference: Chapter 6, section 6.3: Aggregate Equilibrium and Changes in Equilibrium.
Guided Response: Review and respond to at least two of your classmates’ posts by discussing this question in terms of how a decrease in aggregate demand impacts GDP, and why the change in GDP affects unemployment.
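As a worked illustration of the scenario, a linear AD/AS model shows all three effects at once: a leftward AD shift lowers equilibrium GDP, lowers the price level, and, through an Okun's-law-style link, raises unemployment. Every parameter below is invented for the sketch:

```python
# Toy linear AD/AS model: trace a fall in aggregate demand through GDP,
# the price level, and unemployment. All parameters are illustrative.

def equilibrium(ad_a, ad_b=-2.0, as_c=40.0, as_d=3.0):
    """Intersection of AD: Y = ad_a + ad_b*P and AS: Y = as_c + as_d*P."""
    p = (ad_a - as_c) / (as_d - ad_b)   # equate the two curves, solve for P
    y = ad_a + ad_b * p                 # substitute back for output
    return y, p

y0, p0 = equilibrium(100.0)   # initial equilibrium
y1, p1 = equilibrium(90.0)    # after a decline in aggregate demand

shortfall_pct = (y0 - y1) / y0 * 100    # percentage fall in real GDP
u0 = 5.0                                # assumed natural unemployment rate
u1 = u0 + 0.5 * shortfall_pct           # Okun's-law-style response

print(f"GDP: {y0:.1f} -> {y1:.1f} (contraction)")
print(f"price level: {p0:.1f} -> {p1:.1f} (disinflation)")
print(f"unemployment: {u0:.1f}% -> {u1:.1f}%")
```

The output matches the textbook story: the economy moves into the contraction phase of the business cycle, GDP and the price level fall, and unemployment rises.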
|
This blog is intended to incorporate discussions on Lost Continents, Catastrophism, the Origin of Modern Humans and the Out of Africa theory, Genetics and Human Diversity, the Origin and Spread of Civilization, and Cultural Diffusion across the face of the Globe.
Deluge of Atlantis
Tuesday, April 30, 2013
Stonehenge Much Older Than Previously Thought.
From Robert Kline's Global Warming and Terraforming Terra site. To be fair, we've had some of these very old C14 dates for some time now; the information was just not known by the general public, because it was never publicised.
In fairness, it was reasonable to expect as much, since we know that cattle husbandry began as early as the tenth millennium. This led directly to stable land ownership and boundaries, and a natural need for monumental works.
The site then must also have been important as a population center long before the building of Stonehenge. This now becomes a fresh focus for new archeology. It is also a reminder of the productivity of cattle husbandry, even by itself.
The more interesting question is just how crude the astronomical alignments were. Just aligning a couple of poles and observing shadows and sunrise each year will reveal long-term predictable patterns, easily discernible within a lifetime. All of this had agricultural value and was certainly applied.
I do not know when grain growing made its first appearance, but surely no later than the advent of the Bronze Age and its shipping. It would be nice to know just how far that may be pushed back.
Stonehenge was occupied by humans 5,000 years EARLIER than we thought
Human beings were occupying Stonehenge thousands of years earlier than previously thought, according to archaeologists.
Research at a site around a mile from Stonehenge has found evidence of a settlement dating back to 7500BC, 5,000 years earlier than previous findings confirmed.
And carbon-dating of material at the site has revealed continuous occupation of the area between 7500BC and 4700BC, it is being revealed on BBC One's The Flying Archaeologist tonight.
Experts suggested the team conducting the research had found the community that constructed the first monument at Stonehenge, large wooden posts erected in the Mesolithic period, between 8500 and 7000BC.
Open University archaeologist David Jacques and friends began surveying the previously unexamined area, around a mile from the main monument at Stonehenge, in 1999, when they were still students.
The site contained a spring, leading him to work on the theory that it could have been a water supply for early man.
He said: 'In this landscape you can see why archaeologists and antiquarians over the last 200 years had basically honed in on the monument, there is so much to look at and explore.
'I suppose what my team did, which is a slightly fresher version of that, was look at natural places - so where are there places in the landscape where you would imagine animals might have gone to, to have a drink.
'My thinking is where you find wild animals, you tend to find people, certainly hunter-gatherer groups, coming afterwards.
'What we found was the nearest secure watering hole for animals and people, a type of all year round fresh water source.'
He described the site as 'pivotal'.
Dr Josh Pollard, from Southampton University and the Stonehenge Riverside Project, said he thought the team may have just hit the tip of the iceberg in terms of Mesolithic activity focused on the River Avon around Amesbury.
'The team have found the community who put the first monument up at Stonehenge, the Mesolithic posts 9th-7th millennia BC.
'The significance of David's work lies in finding substantial evidence of Mesolithic settlement in the Stonehenge landscape - previously largely lacking apart from the enigmatic posts - and being able to demonstrate that there were repeated visits to this area from the 9th to the 5th millennia BC.'
The Flying Archaeologist is being shown on BBC One (West and South) at 7.30pm tonight.
|
Albert Einstein is set to revolutionise science again – this time, for school children.
Researchers say it’s time to reverse Australia’s critical skills shortage in science, technology, engineering and maths (STEM), especially among young girls.
Two programs, Einstein First and Quantum Girls, will bring primary and high school science education into the 21st century.
The programs are led by Professor Susan Scott (Australian National University) and Professor David Blair (University of Western Australia), two Prime Minister’s Prize for Science recipients.
“Kids want to learn about things like black holes and not science that dates from before quantum physics was discovered,” Prof Blair said.
Trialled in Western Australia, the Einstein First program will give children basic understanding of the science behind technologies that drive the modern world.
“Our kids are curious and excited by science but they think science at school is about ‘old stuff’,” Prof Blair said.
“We must replace 19th century concepts and teach everyone the language of modern physics.
“The theories of Albert Einstein aren’t too hard for school kids.”
The second program, Quantum Girls, is bringing quantum science and quantum computing into classrooms, STEM clubs and hackathons to inspire girls.
Quantum computing is predicted to contribute $244 billion a year to the Australian economy by 2031.
But women make up less than 40 per cent of students in university STEM courses and less than 20 per cent in vocational courses, 2022 Australian industry data shows.
In the industry, men make up more than 70 per cent of senior management and 92 per cent of CEOs.
Girls’ confidence in STEM subjects is generally lower than boys’, and falls as they get older.
Prof Scott said the lack of female students studying STEM in Australia was “disturbing”.
The Quantum Girls program aims to train 200 female teachers, who will then teach quantum science and quantum computing to girls aged 11 to 15.
“We are at a critical time when it comes to developing our future STEM workforce,” Prof Scott said.
“The challenges and opportunities are already here, but at the moment our school system is failing us in what we need for the future.”
(Australian Associated Press)
|
RICHMOND, Va. (AP) — Land along the York River that archaeologists believe was the center of a vast Indian empire before the first Europeans settled in Virginia is gaining White House attention as a possible addition to the National Park System.
President Barack Obama has set aside $6 million to acquire more than 250 acres of the former Indian village in Gloucester to achieve that goal. Congress must approve the funding as part of the 2015 budget proposal.
Called Werowocomoco (pronounced Wehr-oh-woh-KAHM-uh-koh), the land is believed to have been the seat of power for Powhatan.
Powhatan oversaw an empire that included 30 political divisions and 15,000 to 20,000 Indians at the time Capt. John Smith and his fellow settlers established the first permanent English settlement in North America in 1607. Some Virginia Indians have called the site "our Washington, D.C."
It is also believed to be where Pocahontas appealed to Powhatan, her father, to spare the life of Smith. That story has its share of skeptics, however. Some historians believe Smith may have misinterpreted Indian intentions or inflated his adventures in the New World.
Archaeological digs have revealed a longhouse befitting the stature of Powhatan and the outlines of ditches that experts believe delineated sacred and secular portions of Werowocomoco, also indicative of Powhatan's stature.
Archaeologists worked with descendants of Indian tribes to understand the site. Some 58 acres have already been preserved to ensure they'll never be developed.
Archaeologist Martin Gallivan helped lead a dig at the site and is working on a book on the Algonquian chiefdoms, including Powhatan. Making the site a unit of the federal park system would elevate it to the status of other important American historic destinations, such as Jamestown and Yorktown.
"I think it deserves that status given the events that occurred there in the early Colonial period and the deeper history of the Powhatans," said Gallivan, a professor of archaeology at the College of William and Mary. "If it was included in the national park system that would give the American public the chance to learn that history."
Gallivan said it's also essential that Virginia Indians be consulted, should Congress approve the funding.
The National Park Service would work closely with tribes and others on how to best interpret the site, a spokeswoman said.
"That planning would have to consider the best approach for visitor experiences while at the same time protecting the archaeological and spiritual significance of the place," spokeswoman Cindy Chance wrote in an email.
If approved, the funding would be used to purchase the land from the existing owners and for interpretive materials, such as brochures and signs.
Werowocomoco would become a stop along the Captain John Smith Chesapeake National Historic Trail. The trail charts the exploration of the Chesapeake Bay and its tributaries by Smith after Europeans arrived at Jamestown in 1607.
Chance said the site has been on the park service's radar.
"Because of Werowocomoco's importance to Virginia, to tribes and the NPS, the NPS has identified it as a priority for years," she wrote.
The state worked with the current landowners to ensure the 58 acres are preserved forever. Owners Bob and Lynn Ripley were paid $600,000 for the development rights.
The land, which includes the Ripley home, has been the focus of extensive archaeological digs, with the steady involvement of native representatives. Burial grounds, for instance, were left undisturbed in keeping with Indian wishes.
Gallivan said no digs are underway now, "although there's a lot more to be done at Werowocomoco."
The archaeological work that has been going on for nearly a decade is being analyzed and is destined for publication, he said.
Steve Szkotak can be reached on Twitter at http://twitter.com/sszkotakap .
Captain John Smith Trail: http://www.smithtrail.net/about-the-trail/planning-process/
|
Another map of Antarctica, this time from 1531
Charles Hapgood (and those derivative of him) used other maps allegedly showing Antarctica that are, at first sight, even more convincing than the Piri Re‘is map. The first of these is a product of Orontius Finaeus Delphinus (1494-1555), whom most Bad Archaeologists consistently and incorrectly refer to as Oronteus (more properly, his name was Oronce Fine or Finé, although the Latinised version seems to be in more common use, at least among the Bad Archaeologists). The map in question was published in 1531 and its supporters claim that it shows the continent at the correct scale, placing the Weddell and Ross Seas as well as Queen Maud Land, Wilkes Land and Marie Byrd Land in their correct longitudes. Again, if these claims are correct, they would display an even more remarkable knowledge of the continent than that supposedly (but demonstrably not) shown by Piri Re’is.
Although there are fairly obvious similarities between the general depiction of the southern continent by Orontius Finaeus and modern maps of Antarctica, they do not stand up to close scrutiny; indeed, there are more differences than similarities, much as one would expect from a map drawn without genuine knowledge of the southern continent! To show that Orontius’s Terra Australis corresponds to the outline of Antarctica, it was necessary for Hapgood to rotate the depiction by about twenty degrees, move the South Pole by 7½° (1,600 km) and alter the scale, as Terra Australis is 230% the size of Antarctica. Hapgood used this change in scale to explain the absence of the Antarctic Peninsula (Palmer Land), which he believed Orontius Finaeus had to omit from his map as it would have overlapped with South America at that scale; he explained that Finaeus confused latitude 80° south with the Antarctic Circle. Just as with his treatment of Piri’s map, Hapgood also had to shuffle whole sections of coastline to make them fit. It is unclear how the hypothesised original map had become fragmented and wrongly recombined; it is even more unclear how the fringe writers can go on to claim that various geographical features are shown in their correct places and at the correct scale. Again, these writers ignore what we know about the life of Oronce Fine.
The life of Oronce Fine (Oronce Finé, Orontius Finaeus, Oronteus Finaeus) (1494-1555)
Not unexpectedly, given the way fringe writers tend to ignore inconvenient facts, a great deal is known about the biography of Oronce Fine. He was born in Briançon (France) in 1494 and educated in Paris. After a brief spell in prison in 1518, he earned a medical degree from the Collège de Navarre in Paris in 1522, although he went on to follow a career as a mathematician. In 1524, he was once again in prison, and in the same year he built an ivory sundial that still exists. Like many mathematicians of the sixteenth century, Fine was considered an expert on fortifications and worked on the defences of Milan. In 1531, he was appointed to the chair of mathematics at the Collège Royal in Paris. He wrote voluminously on scientific subjects, his publications including treatises on astronomical instruments and astronomy (he suggested in 1520 that eclipses of the moon could be used to determine the longitude of places); he also invented a map projection, producing a world map in 1519 that showcased it. He drew the first domestically published map of France in 1525, and it was on his world map of 1531 that the name Terra Australis appeared for the first time. It is this latter map that is popular with Bad Archaeologists. His other publications include works on arithmetic and geometry. In 1544, he calculated the value of π to be (22 2/9)/7, which he later refined to 47/15 and, in De rebus mathematicis of 1556, to 3 11/78. In astronomy, he believed that the earth was at the centre of the universe (in common with most of his European contemporaries) and built an astronomical clock based on this belief in 1553.
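For reference, Fine's three values of π evaluate numerically as follows (a quick check using exact rational arithmetic; the decimal approximations are mine, not part of Fine's texts):

```python
from fractions import Fraction

# Fine's successive values of pi, as given in the text:
values = {
    "(22 2/9)/7": Fraction(200, 9) / 7,           # 1544
    "47/15":      Fraction(47, 15),               # later refinement
    "3 11/78":    Fraction(3) + Fraction(11, 78), # De rebus mathematicis, 1556
}

for name, v in values.items():
    print(f"{name} = {float(v):.4f}")
# (22 2/9)/7 ≈ 3.1746, 47/15 ≈ 3.1333, 3 11/78 ≈ 3.1410,
# against the true value 3.1416
```

So his final figure was accurate to about two parts in ten thousand, though curiously his intermediate "refinement" was worse than his first.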
|
(Note: Because of the unique nature of the “arm rubbing/pounding” conditioning exercises in Uechi-ryu, I asked one of my medical doctor students to do some research into the dangers of contracting AIDS from the practice of these drills. He gave me the following information.) GEM
The cumulative epidemiologic data indicate that transmission of AIDS requires direct intimate contact with, or intravenous inoculation of, blood and blood products, semen or tissues. The mere presence of or casual contact with an infected person cannot be construed as exposure to AIDS. Although the theoretical possibility of rare or low-risk alternative modes of transmission cannot be totally excluded, these are not documented in the medical literature.
The major mode of transmission is sexual. Also important is intravenous transmission by shared needles. AIDS is not transmitted by casual contact, fecal-oral or airborne routes, or by contaminated food or drinking water.
People are at risk of AIDS to the extent that they are exposed to blood and certain bodily fluids. The AIDS virus has been isolated from blood, semen, saliva, tears, urine, vaginal secretions, cerebrospinal fluid, breast milk and amniotic fluid, but only blood and blood products, semen, vaginal secretions and possibly breast milk have been directly linked to transmission of AIDS.
Contact with saliva, tears and sweat has not been shown to result in infection. The virus is not capable of penetrating intact skin, but infection may result from infectious material coming into contact with mucous membranes or open wounds (including inapparent lesions) on the skin.
In general the source of the virus in the body fluids other than blood is lymphocytes. All body fluids that contain lymphocytes can harbor the virus. Sweat does not contain lymphocytes.
The AIDS virus can live outside the body for up to an hour but in general survives only minutes. Fresh blood must encounter an open wound in order for infection to be spread during routine exercise.
Factors included in the evaluation of risk include the type of body fluid with which there may be contact, the volume of fluid to be encountered, the probability of exposure taking place, the probable route of exposure and the virus concentration in the fluid or tissues.
If the normal routine does not involve exposure to blood or body fluids which carry the AIDS virus (although situations can be imagined or hypothesized under which anyone, anywhere might encounter potential exposure to body fluids), but does involve handling of implements or utensils, use of public or shared bathroom facilities or telephones, and personal contact such as handshaking or arm rubbing, then no protective equipment is required.
|
Due: Saturday, June 22
You may write on any of the short stories assigned up to this point. Your thesis must be an arguable statement: take a position on a point that reasonable, intelligent people, having read the story, could still disagree about. Your argument supporting this thesis must be based on clearly stated reasons, which you must show are grounded in the story by presenting brief quotes or specific details from the story as evidence to back them. Present all these features in a clearly organized, persuasive essay.
Here are some general suggestions to start your thought processes. They don’t supply you with a ready-made thesis, but help you develop one. Please note also that these hints aren’t intended to provide an outline for your essay.
1. Discuss what you take to be one important theme or idea in one of the stories. Use one or more of the basic elements of the short story, as described by Kennedy & Gioia, to find evidence for this theme. Do not try to cover every element; simply discuss one or two that seem especially productive for developing your argument.
2. Compare two stories that appear to be on a similar theme. Do the two writers handle specific elements of the story in ways that are mainly similar or different? What do the similarities (or differences) reveal about each writer's development of this central theme?
3. Discuss any problem…
|
Minor in Sociology
Sociology is the study of social life including the social causes and consequences of human behavior. It looks at groups, organizations, and societies - how they develop and change and how people interact in them. Since so much that humans do is social, the subject matter of sociology ranges from the intimate family to the hostile mob, from crime to religion, from divisions by race, gender, and social class to the shared beliefs of a common culture, from the sociology of work to the sociology of popular music. Indeed sociology seems to offer something for everyone, and its career potential is becoming increasingly recognized by today's students.
We offer coursework for those who are pursuing a variety of careers. Our students have practice in many of the skills most desired by today’s employers, including the ability to communicate effectively and promote teamwork, to frame and solve problems realistically, to plan and evaluate projects and programs, to prepare clear and concise reports and manuals, and to speak effectively in varied group situations.
A minor in sociology requires a minimum of 21 semester hours, and is to be completed with a major area of study.
Please note: Courses may be available through Online Learning (OL), which refers to semester-based online courses, Independent Learning (IL), which refers to non-semester based courses or both delivery methods. The delivery method is noted beside each course.
Minor Required Courses (12 hours)
- Sociology 100, Introductory Sociology (3 hours) - OL and IL
- Sociology 300, Using Statistics in Sociology (3 hours) - OL
- Sociology 302, Strategies of Social Research (3 hours) - OL and IL
- Sociology 304, Sociological Theory: Perspectives on Society (3 hours) - OL
Minor Electives available through OL and/or IL (9 hours required)
- Sociology 210, Interaction with Self and Society (3 hours) - OL
- Sociology 220, Marriage and Family (3 hours) - OL and IL
- Sociology 240, Contemporary Social Problems (3 hours) - IL
- Sociology 309, Social Deviance (3 hours) - OL and IL
- Sociology 322, Religion in Society (3 hours) - IL
- Sociology 324, Sociology of Sport (3 hours) - OL and IL
- Sociology 330, Criminology (3 hours) - OL and IL
- Sociology 332, Juvenile Delinquency (3 hours) - OL and IL
- Sociology 342, Aging in Society (3 hours) - IL
- Sociology 352, Technology, Work and Society (3 hours) - IL
- Sociology 360, The Community in Rural and Urban Settings (3 hours) - IL
- Sociology 375, Diversity in American Society (3 hours) - IL
- Sociology 438, Victimology (3 hours) - OL and IL
- Sociology 450, Occupations and Professions (3 hours) - IL
Department Contact Information
For more information on the undergraduate minor in Sociology
Call: (270) 745-5173
Interested in majoring in Sociology? Visit the WKU Sociology website to learn more.
|
As a follow-up to my earlier LinkedIn post on Google’s BERT model for NLP, I am writing this to explain more about BERT and the results of our experiment.
In a recent blog post, Google announced they have open-sourced BERT, their state-of-the-art training technique for natural language processing (NLP) applications. The paper (https://arxiv.org/abs/1810.04805) released along with the blog post is receiving accolades from across the machine learning community, because BERT broke several records for how well models can handle language-based NLP tasks.
Here are a few highlights that make BERT unique and powerful:
- BERT stands for Bidirectional Encoder Representations from Transformers. As the name suggests, it uses a bidirectional encoder, which lets it draw on context from both the left and the right of a word; it is also trained in an unsupervised fashion, meaning it can ingest data that is neither classified nor labeled. This is unique because previous models looked at a text sequence either from left to right only, or combined separate left-to-right and right-to-left training. It also contrasts with conventional NLP models such as word2vec and GloVe, which generate a single, context-free word embedding (a mathematical representation of a word) for each word in their vocabularies.
- BERT uses the Google Transformer, an open-source neural network architecture based on a self-attention mechanism that is optimized for NLP. The Transformer has been gaining popularity due to its training efficiency and its superior performance in capturing long-distance dependencies, compared with recurrent neural network (RNN) architectures. It uses attention (https://bit.ly/2AzmocB) to boost the speed with which these models can be trained. As opposed to directional models, which read the text input sequentially (left-to-right or right-to-left), the Transformer encoder reads the entire sequence of words at once. This characteristic allows the model to learn the context of a word based on all of its surroundings (left and right of the word).
- In the pre-training process, the researchers used a masking approach to prevent the word being predicted from indirectly “seeing itself” in a multi-layer model. A percentage of the input tokens (15% in the paper) is masked, and the model is trained to predict the original tokens at those positions, yielding a deep bidirectional representation. This method is referred to as a Masked Language Model (MLM).
- BERT builds upon recent work in pre-training contextual representations, including Semi-supervised Sequence Learning, Generative Pre-Training, ELMo, and ULMFiT. BERT is pre-trained for 40 epochs over a 3.3-billion-word corpus, including BooksCorpus (800 million words) and English Wikipedia (2.5 billion words). The large model has 24 Transformer blocks, a hidden size of 1,024, and 340M parameters. The model is trained on Cloud TPUs (https://cloud.google.com/tpu/docs/tpus), which enables quick experimentation, debugging, and model tweaking.
- It enables developers to train a “state-of-the-art” NLP model in 30 minutes on a single Cloud TPU (tensor processing unit, Google’s cloud-hosted accelerator hardware) or a few hours on a single graphics processing unit.
These are just a few highlights on what makes BERT the best NLP model so far.
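The masked-language-model idea in particular is easy to illustrate. Here is a minimal sketch in plain Python; it is illustrative only, since the real BERT pipeline works on WordPiece sub-word tokens and sometimes replaces a selected token with a random word or leaves it unchanged rather than always inserting `[MASK]`:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with [MASK]; the model is
    then trained to predict the original token at each masked position."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)   # prediction target at this position
        else:
            masked.append(tok)
            labels.append(None)  # no loss at unmasked positions
    return masked, labels

sentence = "the transformer reads the entire sequence at once".split()
masked, labels = mask_tokens(sentence)
```

Because the targets are hidden from the input, the model can attend to context on both sides of each masked position without trivially copying the answer.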
To evaluate the performance of BERT, we compared a BERT-based NER model to an IBM Watson-based NER model. The test was performed against the same set of annotated large unstructured documents: the models created using BERT and IBM Watson were each applied to those documents. The table below shows the results we achieved:
Based on our comparison and what we have seen so far, it is fairly clear that BERT is a breakthrough and a milestone in the use of Machine Learning for Natural Language Processing.
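For intuition about the self-attention mechanism at the heart of the Transformer, here is a minimal single-head sketch in NumPy. It omits the learned query/key/value projection matrices and multi-head machinery, so it is a simplification of the real layer, not BERT's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the rows of V, with
    weights derived from query-key similarity, so every position
    attends to every other position in a single step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                        # 5 tokens, dimension 8
out, attn = scaled_dot_product_attention(x, x, x)  # self-attention
# each row of attn sums to 1; out has the same shape as x
```

Because every token's score against every other token is computed at once, there is no sequential left-to-right dependency, which is what makes the architecture both bidirectional and efficient to train.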
|
Psychiatry is the only medical specialty with a longtime nemesis; it’s called “antipsychiatry,” and it has been active for almost 2 centuries. Although psychiatry has evolved into a major scientific and medical discipline, the century-old primitive stage of psychiatric treatments instigated an antagonism toward psychiatry that persists to the present day.
A recent flurry of books critical of psychiatry is evidence of how the antipsychiatry movement is being propagated by journalists and critics whose views of psychiatry are unflattering despite the abundance of scientific advances that are gradually elucidating the causes and treatments of serious mental disorders.
What are the “wrongdoings” of psychiatry that generate the long-standing protests and assaults? The original “sin” of psychiatry appears to be locking up and “abusing” mentally ill patients in asylums, which 2 centuries ago was considered a humane advance to save seriously disabled patients from homelessness, persecution, neglect, victimization, or imprisonment. The deteriorating conditions of “lunatic” asylums in the 19th and 20th centuries were blamed on psychiatry, not the poor funding of such institutions in an era of almost complete ignorance about the medical basis of mental illness. Other perceived misdeeds of psychiatry include:
- Medicalizing madness (contradicting the archaic notion that psychosis is a type of behavior, not an illness)
- Drastic measures to control severe mental illness in the pre-pharmacotherapy era, including excessive use of electroconvulsive therapy (ECT), performing lobotomies, or resecting various body parts
- Use of physical and/or chemical restraints for violent or actively suicidal patients
- Serious or intolerable side effects of some antipsychotic medications
- Labeling slaves’ healthy desire to escape from their masters in the 19th century as an illness (“drapetomania”)
- Regarding psychoanalysis as unscientific and even harmful
- Labeling homosexuality as a mental disorder until American Psychiatric Association members voted it out of DSM-II in 1973
- The arbitrariness of psychiatric diagnoses based on committee-consensus criteria rather than valid and objective scientific evidence and the lack of biomarkers (this is a legitimate complaint but many physiological tests are being developed)
- Psychoactive drugs allegedly are used to control children (antipsychiatry tends to minimize the existence of serious mental illness among children, although childhood physical diseases are readily accepted)
- Psychiatry is a pseudoscience that pathologizes normal variations of human behaviors, thoughts, or emotions
- Psychiatrists are complicit with drug companies and employ drugs of dubious efficacy (eg, antidepressants) or safety (eg, antipsychotics).
Most of the above reasons are exaggerations or attributed to psychiatry during an era of primitive understanding of psychiatric brain disorders. Harmful interventions such as frontal lobotomy—for which its neurosurgeon inventor received the 1949 Nobel Prize in Medicine—were a product of a desperate time when no effective and safe treatments were available. Although regarded as an effective treatment for mood disorders, ECT certainly was abused many decades ago when it was used (without anesthesia) in patients who were unlikely to benefit from it.
David Cooper1 coined the term “antipsychiatry” in 1967. Years before him, Michel Foucault propagated a paradigm shift that regarded delusions not as madness or illness, but as a behavioral variant or an “anomaly of judgment.”2 That antimedicalization movement was supported by the First Church of Christ, Scientist, the legal system, and even the then-new specialty of neurology, plus social workers and “reformers” who criticized mental hospitals for failing to conduct scientific investigations.3
Formerly institutionalized patients such as Clifford Beers4 demanded improvements in shabby state hospital conditions more than a century ago and generated antipsychiatry sentiments in other formerly institutionalized persons. Such antipathy was exacerbated by bizarre psychiatrists such as Henry Cotton at Trenton State Hospital in New Jersey, who advocated that removing various body parts (killing or disfiguring patients) improved mental health.5
Other ardent antipsychiatrists included French playwright and former asylum patient Antonin Artaud in the 1920s and psychoanalysts Jacques Lacan and Erich Fromm, who authored antipsychiatry writings from a “secular-humanistic” viewpoint. ECT use in the 1930s and frontal leucotomy in the 1940s understandably intensified fear toward psychiatric therapies. When antipsychotic medications were discovered in the 1950s (eventually helping to shut down most asylums), these medications’ neurologic side effects (dystonia, akathisia, parkinsonism, and tardive dyskinesia) prompted another outcry by antipsychiatry groups, although there was no better alternative to control psychosis.
In the 1950s, a right-wing antipsychiatry movement regarded psychiatry as “subversive, left-wing, anti-American, and communist” because it deprived individuals of their rights. Psychologist Hans Eysenck rejected psychiatric medical approaches in favor of errors in learning as a cause of mental illness (as if learning is not a neurobiologic event).
The 1960s witnessed a surge of antipsychiatry activities by various groups, including prominent psychiatrists such as R.D. Laing, Theodore Lidz, and Silvano Arieti, all of whom argued that psychosis is “understandable” as a method of coping with a “sick society” or due to “schizophrenogenic parents” who inflict damage on their offspring. Thomas Szasz is a prominent psychiatrist who proclaimed mental illness is a myth.6 I recall shuddering when he spoke at the University of Rochester during my residency, declaring schizophrenia a myth when I had admitted 3 patients with severe, disabling psychosis earlier that day. I summoned the chutzpah to tell him that in my experience haloperidol surely reduced the symptoms of the so-called “myth”! Szasz collaborated with the Church of Scientology to form the Citizens Commission on Human Rights. Interestingly, Christian Scientists and some fundamental Protestants3 agreed with Szasz’s contention that insanity is a moral, not a medical, issue.
|
FC: Culture: shared values and beliefs in a given group of people; a set of values, beliefs, attitudes, patterns of thought, and behavior that make up a way of life. | Hunter Konze per. 3
1: rituals A religious or solemn ceremony consisting of a series of actions performed according to a prescribed order.
2: Beliefs Something one accepts as true or real; a firmly held opinion or conviction.
3: Values The regard that something is held to deserve the importance or preciousness of something your support is of great value.
4: Ethnocentrism is making value judgments about another culture from perspectives of one's own cultural system.
5: History The study of past events particularly in human affairs.
6: Geography 1.The study of the physical features of the earth and its atmosphere and of human activity as it affects and is affected by these.
7: Economics The branch of knowledge concerned with the production, consumption, and transfer of wealth.
8: Government The system by which a nation, state, or community is governed.
9: Cultural Anthropology is the study of living peoples, their beliefs, practices, values , ideas, technologies, economies and more.
10: Multiculturalism the doctrine that several different cultures rather than one national culture can coexist peacefully and equitably in a single country.
11: Stereotype A widely held but fixed and oversimplified image or idea of a particular type of person or thing sexual and racial stereotypes.
12: Globalism refers to the increasingly global relationships of culture, people and economic activity. Most often, it refers to economics
13: Ethnicity is a group of people whose members identify with each other, through a common heritage, often consisting of a common language.
14: Culture Shock It's simply a common way to describe the confusing and nervous feelings a person may have after leaving a familiar culture to live in a new and different culture.
15: GDP The gross domestic product (GDP) is one of the primary indicators used to gauge the health of a country's economy.
|
A pelvic fracture is defined as one or more breaks, also known as fractures, of the bones that make up the pelvis. Several organs, blood vessels, and nerves are located in this area. Because of this, a pelvic fracture is a serious injury that needs immediate care to prevent current and future complications.
Pelvic fractures are caused by:
- Car, motorcycle, or pedestrian collisions
- High-impact sports injuries
Factors that may increase your chance of a pelvic fracture include:
- History of falls
- Decreased bone mass (osteoporosis)
- Decreased muscle strength
- History of trauma in young children and adolescents, especially during sports
A pelvic fracture may cause:
- Pelvic pain
- Pain upon walking, or inability to walk
- Swelling and bruising
- Feeling of a pulled muscle, especially in adolescents that participate in sports
Your doctor will ask about your symptoms and medical history. A physical exam will be done to assess the extent of your injury. You may be referred to a doctor who is a trauma specialist and/or a doctor who is a bone specialist.
Tests may include:
- Blood tests
- Urine tests
Imaging tests can evaluate the pelvic region and surrounding structures. These may include:
A pelvic fracture is a serious injury that may be complicated by injuries to other parts of your body. Proper treatment can prevent long-term complications. Treatment will depend on how serious the fracture is, but may include:
Initial treatment focuses on managing life-threatening problems, such as bleeding or shock. Your fracture may be held in place with a sheet wrap or an external fixation device. With an external fixation device, screws are inserted through the bones and connected to a frame on the outside of your body.
Traction may be used to realign and stabilize the fracture if you can't have surgery right away.
Stable fractures will heal without surgery. Unstable fractures are treated with surgery. Some fractures can be set with an external fixation device. Others may require repair with internal pins, screws, or plates.
Extra support may be needed to protect, support, and keep your pelvic bone in line while it heals. Supportive steps include using a walker or crutches to help you move around while keeping weight off your legs and pelvis.
Prescription or over-the-counter medications may be given to help reduce inflammation and pain. Blood thinners reduce the risk of blood clots.
Check with your doctor before taking nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen or aspirin.
Rest and Recovery
Healing time varies by age and your overall health. Young people and those in better overall health heal faster. It may take several months for an unstable fracture to heal.
Complications of a pelvic fracture can be temporary or permanent. These include:
Nerve damage, which can affect
- Bladder function
- Sexual function
You will need to adjust your activities while your pelvic bone heals, but complete rest is rarely required.
As you recover, you may be referred to physical therapy or rehabilitation to start range-of-motion and strengthening exercises. Do not return to activities or sports until your doctor gives you permission to do so.
To help reduce your chance of a pelvic fracture:
- Prevent falls by using a stool or stepladder to reach high places. Add handrails along stairways and place nonslip mats in your bathroom, shower, and under carpets.
- Wear a seatbelt in any vehicle you drive or ride in.
- Never drive if you have been drinking, or ride with anyone who has.
- Use proper safety gear for any high-risk sports you participate in.
- Maintain your muscle strength with regular exercise.
- Reviewer: Warren A. Bodine, DO, CAQSM
- Review Date: 08/2014 -
- Update Date: 08/29/2014 -
|
Solar System Models Lab
The NAAP Solar System Models Lab introduces the universe as envisioned by early thinkers culminating in a detailed look at the Copernican model.
First time users of NAAP materials should read the NAAP Labs – General Overview page.
Details and resources for this lab – including demonstration guides, in-class worksheets, and technical documents – can be found on the instructor's page. Some resources are not available for all modules.
- Elongations and Configurations
- Planetary Configurations Simulator [swf]
- Copernican Derivations (optional)
|
FTTx is an access network technology that uses optical fiber directly from a central point to the home. FTTx offers triple-play services: voice, video, and data.
Based on the optical distribution network (ODN) architecture, FTTx can be categorized into different types:
- FTTC = Fiber to the cabinet
- FTTW = Fiber to the wireless
- FTTH or FTTP = Fiber to the Home
- FTTB = Fiber to the building
Advantages of FTTx:
- High Data rate
- Easy to install
- Easy to upgrade
- Covers long distances (more than 20 km)
- No EMI
- Low maintenance Cost
- Low installation cost.
Fiber Optic Access Mode
Active Optical Network (P2P)
- Dedicated higher bandwidth
- Long distance, up to 100 km
- Active equipment required (Router or switch)
Passive Optical Network (P2 Multi)
- Efficient (Each fiber serve many users)
- Limited distance, up to 20 km
- No active component.
- Low cost
FTTH Passive Mode / PON Architecture
It is a point-to-multipoint architecture in which unpowered optical splitters enable a single optical fiber to serve 32-128 premises. A passive splitter takes one input and splits it to broadcast to many users. Passive mode is further divided into:
EPON = Ethernet Passive Optical Network
The Institute of Electrical and Electronics Engineers (IEEE) developed the Ethernet family of standards, which includes EPON.
GPON = Gigabit Passive Optical Network
OLT (Optical Line Terminal): Central Office (CO) equipment that provides the PON with its various network interfaces. One OLT serves multiple ONTs.
Splitter: passive equipment used to split the optical signal, at ratios between 1:2 and 1:32.
ONT (Optical Network Terminal): customer premises equipment that provides the user interface.
Optical Distribution Network
In PON technology, on the downstream side, all passive components from the PON port of the OLT to the PON port of the ONT make up the Optical Distribution Network.
Optical Access Network
The Optical Access Network is the access network towards the network side; this interface is also known as the SNI (Service Network Interface).
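The 20 km PON reach limit mentioned above is ultimately an optical power budget question: splitter loss grows with the split ratio, leaving less margin for fiber loss. A minimal sketch of that trade-off; all numeric figures here are illustrative assumptions, not values from this article:

```python
import math

# Illustrative link figures (assumptions, not from the article)
TX_POWER_DBM = 3.0            # OLT launch power
RX_SENSITIVITY_DBM = -28.0    # ONT receiver sensitivity
FIBER_LOSS_DB_PER_KM = 0.35   # typical single-mode fiber loss near 1490 nm

def splitter_loss_db(split_ratio: int, excess_db: float = 1.0) -> float:
    """Ideal 1:N split loss is 10*log10(N); add some excess loss."""
    return 10 * math.log10(split_ratio) + excess_db

def max_reach_km(split_ratio: int, margin_db: float = 3.0) -> float:
    """Distance the remaining power budget can cover after the splitter."""
    budget_db = TX_POWER_DBM - RX_SENSITIVITY_DBM
    remaining_db = budget_db - splitter_loss_db(split_ratio) - margin_db
    return remaining_db / FIBER_LOSS_DB_PER_KM

print(round(splitter_loss_db(32), 1))   # ~16.1 dB for a 1:32 split
print(round(max_reach_km(32), 1))       # reach left over after the split
```

Note that doubling the split ratio costs about 3 dB of budget, which is why higher split ratios trade directly against reach.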
|
What is a stye?
A stye (hordeolum) is a tender red bump on the edge of your eyelid.
What causes a stye?
A stye happens when a gland on the edge of your eyelid gets infected. When it occurs inside or under the eyelid, it is called an internal hordeolum.
The infection is most often caused by a bacteria or germ called staph (Staphylococcus aureus).
Who is at risk for a stye?
You are more likely to get a stye if you:
- Have had one before
- Wear contact lenses
- Are not keeping your eye area clean
- Use eye makeup that is old or contaminated
- Have other eye conditions such as an inflamed or infected eyelid (blepharitis)
- Have other conditions such as rosacea, seborrheic dermatitis, or diabetes
What are the symptoms of a stye?
Each person’s symptoms may vary. Symptoms may include:
- Swelling, redness, pain, or tenderness of the eyelid
- Feeling like there is something in your eye
- Being bothered by bright light
- Tearing and crusting of the eye
The symptoms of a stye may look like other health problems. Always see your healthcare provider to be sure.
How is a stye diagnosed?
In most cases your healthcare provider will be able to tell that you have a stye by looking at it.
You will not need to take any tests.
How is a stye treated?
In most cases a stye will go away on its own.
There are some things you can do to treat the stye at home. These include:
- Putting a warm, wet cloth (compress) on your eyelid for 5 to 10 minutes. This should be done 3 to 5 times a day.
- Washing your hands often
- Washing your face daily, including the eye area
- Not touching the area
- Not squeezing the stye
- Not wearing makeup until the infection heals
Your healthcare provider may also:
- Give you special bacteria-fighting (antibiotic) creams or ointments to put on the area. Only certain ones are safe to use near your eyes.
- Refer you to an eye specialist (ophthalmologist) if the stye does not go away.
What can I do to prevent a stye?
To prevent a stye, you should:
- Wash your hands often
- Wash your face and eye area
- Be careful when using and removing eye makeup
When should I call my healthcare provider?
Call your healthcare provider if you:
- Notice redness or swelling of your eyelid
- Have pain in your eyelid
- Feel like something is in your eye
Key points about styes
- A stye (hordeolum) is a tender red bump on the edge of the eyelid.
- It is an infection of a gland of the eyelid.
- The infection is most often caused by bacteria called staph (Staphylococcus aureus).
- The most common symptoms are redness and swelling of the eyelid.
- In most cases a stye will go away on its own.
Tips to help you get the most from a visit to your healthcare provider:
Know the reason for your visit and what you want to happen.
- Before your visit, write down questions you want answered.
- Bring someone with you to help you ask questions and remember what your provider tells you.
- At the visit, write down the name of a new diagnosis, and any new medicines, treatments, or tests. Also write down any new instructions your provider gives you.
- Know why a new medicine or treatment is prescribed, and how it will help you. Also know what the side effects are.
- Ask if your condition can be treated in other ways.
- Know why a test or procedure is recommended and what the results could mean.
- Know what to expect if you do not take the medicine or have the test or procedure.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit.
- Know how you can contact your provider if you have questions.
Online Medical Reviewer:
Berman, Kevin, MD, PhD
Online Medical Reviewer:
Sather, Rita, RN
Date Last Reviewed:
© 2000-2017 The StayWell Company, LLC. 800 Township Line Road, Yardley, PA 19067. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions.
|
New Fresnel Solar Lens
Remember as a kid playing with a magnifying glass and burning paper etc. using only the sun? A US company is using the same basic idea of a very large magnifying glass to generate power from the sun. Only difference is in the type of magnifying glass being used.
IAS (International Automated Systems) claims to have developed a (sort of) new solar technology. The company also claims to be the first to offer the possibility of competing head-to-head with fossil fuels for power production, and that its current development systems already beat the government's goal for solar power cost per kilowatt by the year 2020.
The actual idea is anything but new. What is new is the type of lens which is a segmented thin-film very large Fresnel lens. This huge lens focuses the sun's energy, in the same way as a little toy magnifying glass. It is capable of producing super-heated steam for power generation.
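The "big magnifying glass" intuition can be put in rough numbers with the geometric concentration ratio: collector area divided by focal-spot area. The figures below are made up for illustration, not IAS specifications:

```python
def concentration_ratio(lens_diameter_m: float, spot_diameter_m: float) -> float:
    """Geometric concentration: ratio of lens area to focal-spot area
    (the area terms share a pi/4 factor, so it reduces to a diameter ratio)."""
    return (lens_diameter_m / spot_diameter_m) ** 2

# A hypothetical 3 m lens focusing sunlight down to a 5 cm spot
ratio = concentration_ratio(3.0, 0.05)
print(round(ratio))  # roughly 3600 "suns" at the focus
```

A few thousand suns of concentration is comfortably in the range needed to produce superheated steam, which is the point of the design.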
The lenses are inexpensive, efficient, and require virtually no maintenance (except cleaning, I guess; the tracking gear would also need a bit of oiling from time to time ;-) ). More typical solar reflectors are quite expensive and require a good deal of maintenance. Once installed, IAS lenses need no further adjustment – so they say!
These Fresnel lenses can be run out really quickly from a manufacturing production unit. Nearly one megawatt of IAS solar panels were manufactured in a short 24-hour run, according to the company publicity blurb.
I guess the breakthrough is that the IAS solar lens can be produced at a tiny fraction of the cost of photovoltaic solar panels on a watt for watt basis.
I hope it is not just another investment magnet. We will wait and see how it pans out in the next year or two.
Have a look at their web site:
|
Many of us garden just for the sheer joy of it. But did you know that all over the country the healing aspects of gardening are being used as therapy or as an adjunct to therapy?
Although this might sound like a new concept, garden therapy has been around for decades. For example, the Garden Therapy Program at Central State Hospital in Milledgeville, and in regional hospitals in Atlanta, Augusta, Columbus, Rome, Thomasville and Savannah, has been helping people for over 40 years through gardening activities known as social and therapeutic horticulture.
So what exactly is social and therapeutic horticulture (or garden therapy)?
According to the article "Your future starts here: practitioners determine the way ahead" from Growth Point (1999) volume 79, pages 4-5, horticultural therapy is the use of plants by a trained professional as a medium through which certain clinically defined goals may be met. "...Therapeutic horticulture is the process by which individuals may develop well-being using plants and horticulture. This is achieved by active or passive involvement."
Although the physical benefits of garden therapy have not yet been fully documented through research, the overall benefits are almost overwhelming. For starters, gardening therapy programs result in increased self-esteem and self-confidence for all participants.
Social and therapeutic horticulture also develops social and work skills, literacy and numeric skills, an increased sense of general well-being and the opportunity for social interaction and the development of independence. In some instances it can also lead to employment or further training or education. Obviously different groups will achieve different results.
Groups recovering from major illness or injury, those with physical disabilities, learning disabilities and mental health problems, older people, offenders and those who misuse drugs or alcohol, can all benefit from the therapeutic aspects of gardening as presented through specific therapy related programs. In most cases, those that experience the biggest impact are vulnerable or socially excluded individuals or groups, including the ill, the elderly, and those kept in secure locations, such as hospitals or prisons.
One important benefit to using social and therapeutic horticulture is that traditional forms of communication aren't always required. This is particularly important for stroke patients, car accident victims, those with cerebral palsy, aphasia or other illnesses or accidents that hinder verbal communication. Gardening activities lend themselves easily to communicative disabled individuals. This in turn builds teamwork, self-esteem and self-confidence, while encouraging social interaction.
Another group that clearly benefits from social and therapeutic horticulture are those that misuse alcohol or substances and those in prison. Teaching horticulture not only becomes a life skill for these individuals, but also develops a wide range of additional benefits.
Social and therapeutic horticultures gives these individuals a chance to participate in a meaningful activity, which produces food, in addition to creating skills relating to responsibility, social skills and work ethic.
The same is true for juvenile offenders. Gardening therapy, as vocational horticulture curriculum, can be a tool to improve social bonding in addition to developing improved attitudes about personal success and a new awareness of personal job preparedness.
The mental benefits don't end there. Increased abilities in decision-making and self-control are common themes reported by staff in secure psychiatric hospitals. Reports of increased confidence, self-esteem and hope are also common in this environment.
Prison staff have also noticed that gardening therapy improves the social interaction of the inmates, in addition to improving mutual understanding between project staff and prisoners who shared outdoor conditions of work.
It's interesting that studies in both hospitals and prisons consistently list improving relationships between participants, integrating with the community, life skills and ownership as being some of the real benefits to participants.
But in addition to creating a myriad of emotional and social benefits, the health benefits of being outdoors, breathing in fresh air and doing physical work cannot be overlooked. In most studies, participants noted that fresh air, fitness and weight control were prime benefits that couldn't be overlooked.
Although unable to pin down a solid reason, studies have shown that human beings possess an innate attraction to nature. What we do know is that being outdoors creates feelings of appreciation, tranquility, spirituality and peace. So it would seem that just being in a garden setting is in itself restorative. Active gardening only heightens those feelings.
With so many positive benefits to gardening, isn't it time you got outside and started tending to your garden? Next time you are kneeling in fresh dirt to pull weeds or plant a new variety of a vegetable or flower, think about the tranquility you feel while being outdoors in your garden. Let the act of gardening soothe and revitalize you. Soak up the positive benefits of tending to your own garden.
If you have someone in your life that could benefit from garden therapy, contact your local health unit to find out more about programs in your area. Not only will the enjoyment of gardening help bond you together, but it will also create numerous positive mental and physical benefits for both of you.
So get gardening today for both your physical and mental health. You'll enjoy the experience so much that you'll immediately thank yourself.
|
The old road train was made up of traction engines pulling multiple wagons. In the Crimean War, a traction engine was used to pull multiple open trucks. By 1898, steam traction engine trains with up to four wagons were used in military maneuvers in England. In the 1900s, John Fowler & Co. offered armored road trains for use by British forces in the Second Boer War. In the 1940s, the Government of South Australia ran a fleet of AEC 8x8 military trucks to transport freight and supplies into the Northern Territory, replacing the Afghan camel trains that had been trekking through the deserts since the late 19th century. Kurt Johnson is known as the Australian inventor of the modern road train. His first road train comprised a U.S. Army World War II surplus Diamond T tank carrier and two home-built self-tracking trailers. Both wheel sets on each trailer could steer, allowing the train to negotiate the tight and narrow tracks and creek crossings that existed throughout Central Australia. Trailer builders in Australia took note of this invention and went on to build self-tracking trailers for Kurt and other customers.
The largest and heaviest road-legal vehicles in the world are found in Australia, with some configurations topping out at close to 200 tons; the majority are between 80 and 120 tons. Doubles, also called two-trailer road trains, are combinations permitted in most areas of Australia, operating to within the environs of Adelaide, South Australia and Perth, Western Australia. A double road train should not be confused with a B-double, which is given access to most of the country and to all major cities. Triples, or three-trailer road trains, operate in western New South Wales, western Queensland, South Australia, Western Australia and the Northern Territory. Road trains are not allowed in Tasmania and Victoria. Road trains are used for transporting all kinds of material, such as livestock, fuel, mineral ores and general freight.
When a road train gets close to a populated area, the multiple dog-trailers are detached at assembly yards, the dolly is removed, and the trailers are then connected individually to multiple trucks. When the flat-top trailers of a road train need to be transported empty, it is customary practice to stack them; this is generally referred to as "double-up" or "doubling-up". In the United States, trucks on public roads are limited to two trailers (two 28 ft. trailers and a dolly to connect them), with a limit of 63 feet end to end. Some states allow three trailers, though triples are usually restricted to less populated areas. Triples are used for long-distance less-than-truckload freight hauling or resource hauling in the interior West, and are sometimes marked with "LONG LOAD" banners both front and rear. Turnpike doubles, tractors towing two full-length trailers, are allowed on the New York Thruway and Massachusetts Turnpike as well as the Ohio and Indiana toll roads. The term road train is not usually used in the US.
|
Fresh Research Finds Organic Milk Packs In Omega-3s
While milk consumption continues to fall in the U.S., sales of organic milk are on the rise. And now organic milk accounts for about 4 percent of total fluid milk consumption.
For years, organic producers have claimed their milk is nutritionally superior to regular milk. Specifically, they say that because their cows spend a lot more time out on pasture, munching on grasses and legumes rich in omega-3 fatty acids, the animals' milk is higher in these healthy fats, which are linked to a reduced risk of heart disease.
But the evidence for this has been scant, except for some small studies from Europe.
Now, a new study evaluating organic milk produced in the U.S. finds that organic milk has about 62 percent more omega-3s, compared to milk produced by cows on conventional dairy farms. Cows raised on conventional farms typically spend a lot more time in a barn or confined, and instead of grazing, they're fed a diet of animal feed that contains a lot of corn.
"We were surprised by the magnitude of the differences," lead author Charles Benbrook of Washington State University tells The Salt.
Benbrook and his colleague analyzed about 400 samples of organic and conventional milk over a period of about a year and a half. The samples were taken at processing facilities around the country.
The findings, published in the journal PLoS ONE, come at a time when we're being told to consume more omega-3 fatty acids. Most people hear this advice and think of fatty fish — which is, of course, an excellent source of the omega-3s DHA and EPA.
What's less well known is that plant-based foods, such as leafy greens and nuts, are rich in another omega-3 called ALA. Now, it's becoming clearer that organic milk is a good source of that, too.
Benbrook says that consuming ALA-rich milk is also a good way to change the ratio of omega-3 and omega-6 fatty acids in your diet. According to the National Institutes of Health, the consensus is that, for good health, we need to be eating more omega-3s and less omega-6s.
Omega-6s are found in corn and sunflower oil, and in foods fried in these oils. While some experts don't see a problem with omega-6s, many say that the typical American diet already contains too many. And averaged over 12 months, the study found, organic milk contained 25 percent less omega-6 fatty acids than conventional milk.
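The study's two headline numbers (62 percent more omega-3 and 25 percent less omega-6 in organic milk) combine into a large shift in the omega-6 to omega-3 ratio. A back-of-envelope sketch; the absolute gram amounts below are hypothetical assumptions, only the percentage changes come from the study:

```python
# Hypothetical conventional-milk fatty acid content, grams per serving (assumption)
conv_omega6 = 0.12
conv_omega3 = 0.02

# Apply the study's reported differences for organic milk
org_omega3 = conv_omega3 * 1.62   # 62% more omega-3
org_omega6 = conv_omega6 * 0.75   # 25% less omega-6

print(round(conv_omega6 / conv_omega3, 1))  # 6.0 (conventional omega-6:omega-3)
print(round(org_omega6 / org_omega3, 1))    # 2.8 (organic omega-6:omega-3)
```

Whatever the true baseline amounts, the two percentage changes compound: the ratio falls by a factor of 1.62 / 0.75, or a bit more than half.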
So, here's the rub: if you want all of the omega-3s found in organic milk, are you better off drinking whole milk rather than skim?
Yes. That's because skimming off the fat also reduces the omega-3 content. For example, if you choose 1 percent milk, it has about one-third the fat of whole milk, so you're left with a much lower level of omega-3s. Of course, you're also getting fewer calories, so it might be a hard choice for people who are watching their weight. If they choose whole milk, they may have to trim calories elsewhere.
And there seems to be a movement towards consuming whole milk. Sales of whole, organic milk are up 10 percent this year, making it the fastest-growing category of milk, according to a spokeswoman from Organic Valley. Skim sales, meanwhile, are down 7.0 percent, she says.
As I reported earlier this year, some studies have linked fattier milk to slimmer kids, despite the fact that pediatricians routinely recommend switching kids to low-fat dairy at the age of 2 to reduce their consumption of saturated fats. These fats, which are more abundant in whole milk than in reduced fat milk, are linked to a higher risk of cardiovascular disease.
As falling sales figures show, lots of Americans have simply taken milk out of their diets — due to lactose intolerance or other reasons. Some have replaced dairy milk with alternatives such as almond milk, which many doctors say is fine, since there are plenty of other sources of calcium.
But for people who are still milk drinkers, this study suggests that yes, there is a benefit in choosing organic in terms of boosting omega-3 intake.
One thing to note: Dairy farmers of the Cooperative Regions of Organic Producer Pools, a group which markets through the Organic Valley brand, helped fund the study. But the groups had no role in its design or analysis. The analysis was funded by the Measure to Manage program at the Center for Sustaining Agriculture and Natural Resources at Washington State University.
|
Fewer people are losing their lives from severe head injuries, thanks to better critical care and injury prevention, shows a new Canadian study.
“Even though there are just as many accidents on our roads as there used to be, there are fewer serious injuries from the same accidents, which implies that our roads and our cars are somewhat safer than they used to be,” says lead author Dr. Andreas Kramer, a neurological critical care specialist in Calgary, Alta.
Widespread use of airbags and helmets has helped, but so have advancements in critical care, he says. Overall recovery and outcomes have also improved.
Although this is good news, lower rates of neurologic death have implications for organ donations and transplants, the author noted. Donation after “brain death” accounts for about half of kidney transplants, three-quarters of liver transplants, 90 per cent of lung and pancreas transplants, and all heart and small bowel transplants.
The study: Incidence of neurologic death among patients with brain injury appears in the most recent Canadian Medical Association Journal.
In the study of 2,788 patients in Calgary, researchers found that the odds of patients with brain injury deteriorating to neurologic death decreased over the 10-year study period from highs of 8.1 per cent in 2002 and 9.6 per cent in 2004 to 2.2 per cent in 2010, and was most dramatic in patients with traumatic brain injury. Brain injuries can also occur from subarachnoid hemorrhage, intracerebral hemorrhage or oxygen deprivation.
Alberta Transportation reported that annual traffic-related fatalities decreased between 2006 and 2010 (from 404 to 307), as did nonfatal injury collisions (from 18,831 to 13,552), despite consistent population growth. Similar trends have been reported nationally by Transport Canada.
Up until a decade ago, severe injuries from traffic collisions were increasing with the population, says Dr. Andrew Baker, critical care chief at Toronto’s St. Michael’s Hospital. “Now we’re seeing levels of trauma flatten out a little bit in the province.”
Campaigns about injury prevention and impaired and distracted driving are paying off, he believes.
“The nature of severe trauma is changing,” Baker says. “We’re not just allowing people to survive to a bad outcome, we’re having a better functional outcome.”
Early stage prevention — with procedures such as decompressive craniectomy, where part of the skull is removed for a time to allow for swelling — makes an enormous difference, Baker says.
Kramer also credits better organized trauma teams and more effective emergency transfers.
“It’s great news for our society in general,” Kramer says. “But for a patient who needs a new liver, or needs to get off dialysis with a kidney transplant, this is of course not great news.”
Until relatively recently, deceased organ donors were exclusively patients who died from neurological criteria. Doctors are now turning to death after cardiovascular criteria — where the heart stops. These patients are put on life support with hope that they can be cured, but the injuries may be too devastating for any meaningful recovery and families usually choose to withdraw treatment.
“It turns out that a lot of deaths occur that way in the ICU,” Baker says. “In those cases, they don’t become declared dead by neurologic criteria, they go on to die because they stop breathing and their heart stops.”
It’s called Donation after Cardiac Death, or DCD. Organs must be procured within 30 minutes of death and fewer organs are viable for transplant, but it still makes a difference for the approximately 1,500 Ontarians awaiting a life-saving transplant at any given time.
“Patients’ (families) began asking for this,” Baker says. “In Ontario the greatest rise in organ donation is in this category.”
Another way to shrink the gap is by focusing on the conversion rate, which is the number of actual donors divided by the number of potential donors. Critical care specialists only want organ donation to be facilitated when appropriate, Baker says. “If we can do that 30 out of 30 times, that’s great,” he says.
According to the just-released annual report from Trillium Gift of Life Network, the provincial agency mandated with organ and tissue donation and transplantation, Ontario’s conversion rate was 63 per cent in the 2012-2013 fiscal year. It was 60 per cent the previous year and 55 per cent in 2010-2011.
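The conversion rate discussed above is a simple quotient, but it is the lever agencies track year over year. A minimal sketch; the donor counts are made up for illustration, only the 63 percent figure echoes the Trillium report:

```python
def conversion_rate(actual_donors: int, potential_donors: int) -> float:
    """Share of potential deceased donors who became actual donors."""
    if potential_donors <= 0:
        raise ValueError("potential_donors must be positive")
    return actual_donors / potential_donors

# e.g. 63 actual donors out of 100 potential donors
print(f"{conversion_rate(63, 100):.0%}")  # 63%
```

As Baker's comment suggests, raising this quotient toward 30-out-of-30 (100 percent, when donation is appropriate) is one way to narrow the gap left by falling neurologic deaths.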
|
China Great Wall Map
Click to enlarge it.
Beijing is a city rich in Great Wall resources. The history of the wall in Beijing can be traced back to the Warring States Period (476 BC-221 BC). The Ming Dynasty (1368-1644) was the heyday of wall construction, and most sections remaining in Beijing today were built in the Ming Dynasty.
Gansu Great Wall Maps
According to a survey, the Ming Dynasty (1368-1644) wall extended about 1,738.3 km (1,080 miles) in Gansu, which covers one fifth of the total length in China. But except for a few repaired sections, many parts have been left to suffer natural erosion and human demolishment. It is urgent to protect and rescue them.
Hebei Great Wall Maps
The Wall in Hebei belongs to different dynasties, and the sections from the Ming Dynasty are the most famous. The 2,000 km (1,243 miles) of walls in Hebei wind through Qinhuangdao, Chengde, Zhangjiakou, Tangshan and Baoding in the province. Due to their proximity to Beijing, the capital of the Ming Dynasty, the walls were strongly built, reflecting the top level of building standards of that period. Among these walls, many sections are world-known scenic sights, such as "Old Dragon Head", "The First Pass under Heaven", Zhangjiakou and Jiaoshan.
Liaoning Great Wall Maps
The Wall in Liaoning has a total length of 1,460 miles (2,350 kilometers). It was built during many historical periods between the Warring States Period (475BC-221BC) and the Ming Dynasty (1368-1644). The remaining sections are mostly from the Ming Dynasty, including Hushan, Jiumenkou, and Zhuizishan.
Shaanxi Great Wall Maps
In Shaanxi, the Wall was constructed in many dynasties from the Warring States Period (475BC-221BC) to the Ming Dynasty (1368-1644). The total length adds up to over 1,243 miles (2,000 kilometers). Today, the relics are mainly found in Yulin, Yan’an and Weinan. Due to natural erosion and human activities, the wall has been badly damaged. It is time to protect and restore it.
Shanxi Great Wall Maps
The Great Wall measures over 3,500 km (2,175 miles) in Shanxi Province, with sections found widely in over 40 counties and nine cities. In history, the Warring States, Northern Qi, Northern Zhou, Sui, Song, Ming and Qing dynasties all built walls in Shanxi. The sections from the Ming Dynasty are the largest in scale; their construction took about 154 years to finish. The wall in Shanxi long served as a barrier protecting Beijing.
Throughout China's long history, many feudal dynasties built or repaired the Great Wall in order to consolidate their frontier defenses. Millions of ancient Chinese laborers contributed their wisdom, blood and sweat to make it a wonder of the world. Every constructional detail reflects the defensive thinking and the high achievement of building technology in ancient China.
Today, after thousands of years of erosion, many parts of the Wall are beyond recognition. By reading these maps, you will get to know the historical marks left by each dynasty and marvel at the long history and vast scale of this huge project in China.
|
Using magnetic resonance imaging (MRI) in a different way to look for evidence of multiple sclerosis (MS) in the brain could be a step towards earlier diagnosis of the disease, according to new research published in the journal Multiple Sclerosis.
MRI is a valuable tool for detecting and diagnosing brain abnormalities, and it is particularly useful in the evaluation of damage to the white matter of the brain. However, white matter lesions in the brains of people with MS are not always an indicator of the disease. New research has highlighted a way in which the clinical MRI scanners available in specialist neuroscience centres could be used to distinguish between MS-related white matter lesions and other ‘white spots’ in the brain.
Researchers at the University of Nottingham and the Nottingham University Hospitals NHS Trust used a special type of MRI, called T2*-weighted MRI, to reveal a distinctive feature of MS: white matter lesions with a central vein. In the first part of their study, a test cohort of 10 individuals with MS and 10 individuals with non-MS white matter lesions underwent T2*-weighted MRI. After evaluating the scans, the researchers formulated diagnostic rules based on the number and morphology of lesions with a central vein. A second cohort of 20 people (13 with MS) was then assessed by the same process.
In the first cohort, people with MS were found to have a higher proportion of lesions with a central vein than those in the non-MS group: all people with MS had central veins visible in more than 45% of lesions, while individuals with non-MS-related white spots had central veins visible in fewer than 45% of lesions (p < 0.0001). By applying the newly formulated diagnostic rules to the second cohort, all participants were correctly categorized by a blinded observer into those with MS and those without MS.
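The 45% threshold described above amounts to a simple proportion rule, which can be sketched in code. This is an explanatory toy only, not a clinical tool: the function name and structure are illustrative, and the published rules also considered lesion morphology, which is omitted here.

```python
def classify_lesions(lesions_with_central_vein: int, total_lesions: int) -> str:
    """Toy illustration of the >45% central-vein threshold reported in the study.

    Not a diagnostic tool. The paper's rules also weighed lesion morphology;
    this sketch checks only the proportion of lesions with a central vein.
    """
    if total_lesions <= 0:
        raise ValueError("at least one lesion is required")
    proportion = lesions_with_central_vein / total_lesions
    # In the test cohort, all people with MS had central veins visible in
    # more than 45% of lesions; non-MS participants fell below that mark.
    return "MS-pattern" if proportion > 0.45 else "non-MS-pattern"
```

For example, 9 of 16 lesions with a central vein (56%) falls on the MS side of the threshold, while 3 of 10 (30%) does not.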
Dr Nikos Evangelou, who led the study, said: “Our results show that clinical application of this technique could supplement existing diagnostic methods for MS.”
The recommendations outlined in Brain health: time matters in multiple sclerosis emphasize the importance of maximizing brain health in people with MS. This research represents a step towards increasing the speed of diagnosis which will directly contribute to improving brain health for people with MS.
- Mistry N, Abdel-Fahim R, Samaraweera A et al. Imaging central veins in brain lesions with 3-T T2*-weighted magnetic resonance imaging differentiates multiple sclerosis from microangiopathic brain lesions. Mult Scler 2015;Epub ahead of print.
|
The Activity schema is proposed as a way to separate a general real-world activity from other schemas such as events, places, tourist attractions, etc.
A general activity may be anything from 'take a walk in the park' to 'learn to draw', 'swim in the ocean' or 'play poker'.
Activities to do (for example when planning a party or a team building event) are commonly searched for on the web. An activity schema would help people more easily identify, filter and browse activities.
It would also help search engine users differentiate between football as an activity, a football match (event), a football club (organization) or the object football (product).
The Schema will help people who are looking for new things to do (E.g. at will, regularly or once in a lifetime).
The Schema is important because of the growing obesity epidemic. We need to help people get more physically active.
<div itemscope itemtype="http://schema.org/Activity">
Activity added by John Doe.
<img itemprop="image" src="birdwatching-thumb1.jpg" />
Get in touch with nature and observe the birds.
Typically takes <meta itemprop="takesTime" content="PT6H"> 6 hours to do.<br/>
Typically requires <meta itemprop="prepareTime" content="PT1H"> 1 hour of planning and preparation.
Typically done by <span itemprop="personCountMin">1</span> to <span itemprop="personCountMax">4</span> persons.
<h2>How to prepare for this</h2>
<ul>
<li itemprop="howToPrepare">Buy a field guide book with pictures and descriptions of birds.</li>
<li itemprop="howToPrepare">Check for "Bird Checklists" websites.</li>
</ul>
<h2>How to do this</h2>
<ul>
<li itemprop="howToDoIt">Get up early. Birds are the most active around dawn.</li>
<li itemprop="howToDoIt">Bring along binoculars and a field guide.</li>
<li itemprop="howToDoIt">Check the weather forecast and dress accordingly.</li>
<li itemprop="howToDoIt">Pack an extra layer of clothing.</li>
<li itemprop="howToDoIt">Wear boots and a hat to shield you from the sun.</li>
</ul>
<meta itemprop="isFamilyFriendly" content="True">Family friendly activity.<br/>
<meta itemprop="isEquipmentRequired" content="True">Equipment is required.<br/>
<meta itemprop="isSomethingYouCanDoAtHome" content="True">Can be done at home.<br/>
<meta itemprop="isPhysical" content="True">You are physically active doing this.<br/>
<meta itemprop="isBrainRequired" content="True">Brain exercise.<br/>
<meta itemprop="isCreative" content="True">You get to be creative.<br/>
</div>
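The same proposed mark-up could also be expressed as JSON-LD. Note that the `Activity` type and every property below (`takesTime`, `prepareTime`, `personCountMin`, and so on) come from this proposal, not from the published schema.org vocabulary, so the snippet is a hypothetical sketch rather than valid schema.org data today.

```python
import json

# Hypothetical JSON-LD rendering of the proposed Activity type.
# "Activity" and its properties mirror the microdata example above and
# are NOT part of the official schema.org vocabulary.
activity = {
    "@context": "http://schema.org",
    "@type": "Activity",
    "name": "Birdwatching",
    "description": "Get in touch with nature and observe the birds.",
    "image": "birdwatching-thumb1.jpg",
    "takesTime": "PT6H",      # ISO 8601 duration: 6 hours to do
    "prepareTime": "PT1H",    # 1 hour of planning and preparation
    "personCountMin": 1,
    "personCountMax": 4,
    "isFamilyFriendly": True,
    "isPhysical": True,
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(activity, indent=2))
```

Either serialization carries the same information; JSON-LD simply keeps the activity data out of the visible HTML.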
|
Who developed these guidelines?
The U.S. Preventive Services Task Force (USPSTF) is a group of experts that makes
recommendations about preventive health care.
What is the problem and what is known about it so far?
Vitamin D and calcium are known to be important for strong, healthy bones. Both come
from certain foods, and vitamin D is also produced in the body after exposure to sunlight.
However, many Americans have lower intake or levels of these substances than
recommended. This is concerning because low vitamin D and calcium levels put people at
risk for osteoporosis and bone fractures. Fractures, especially hip fractures, are associated
with pain, disability, loss of independence, and death. For that reason, many people take
vitamin D and calcium supplements with the hope of preventing fractures. However,
although vitamin D and calcium supplements are helpful for adults known to have
osteoporosis, whether they are helpful in adults who do not have osteoporosis is not clear.
It is important to note that the risk for osteoporosis and fractures is higher in women after
menopause than in premenopausal women. This means that the same recommendations
might not apply to both groups of women.
How did the USPSTF develop these recommendations?
The USPSTF reviewed studies about the benefits and harms of vitamin D and calcium
supplementation when taken to prevent fractures in adults who do not have known osteoporosis.
What did the authors find?
Appropriate intake of vitamin D and calcium is essential to overall health. However,
there is not enough evidence to determine the effect of combined vitamin D and calcium
supplementation on fractures in men or premenopausal women. In contrast, there is good
evidence that daily supplementation with 400 IU of vitamin D3 and 1000 mg of calcium
has no effect on the incidence of fractures in postmenopausal women. The benefits and
harms of higher doses taken to prevent fractures in postmenopausal women who do not
live in institutions are not well defined. Supplementation with 400 IU or less of vitamin
D3 and 1000 mg or less of calcium is associated with a small risk for kidney stones.
What does the USPSTF recommend that patients and doctors do?
It remains unclear whether men and premenopausal women who are not known to have
osteoporosis or vitamin D deficiency should take vitamin D and calcium supplements to
prevent fractures.
It remains unclear whether postmenopausal women living outside of institutions, such
as nursing homes, should take daily supplements containing more than 400 IU of vitamin
D3 and more than 1000 mg of calcium.
Postmenopausal women who live outside of institutions, such as nursing homes,
should not take daily doses of 400 IU or less of vitamin D3 and 1000 mg or less of
calcium to prevent fractures.
What are the cautions related to these recommendations?
These recommendations do not apply to adults with known osteoporosis or vitamin D
deficiency. There may be other reasons to take these supplements aside from fracture
prevention. For example, the USPSTF recommends vitamin D supplements to prevent
falls in older adults.
The full report is titled “Vitamin D and Calcium Supplementation to Prevent Fractures in Adults: U.S. Preventive Services Task Force Recommendation Statement.” It is in the 7 May 2013 issue of Annals of Internal Medicine (volume 158, pages 691-696). The author is V.A. Moyer, for the U.S. Preventive Services Task Force. This article was published at www.annals.org on 26 February 2013.
|
As people look for fresh strategies to cut back on calories and shed pounds, a new study suggests that simply eating more slowly can significantly reduce how much people eat in a single sitting.
The study involved a small group of both normal-weight and obese or overweight participants. All were given an opportunity to eat a meal under relaxed, slow-speed conditions, and then in a time-constrained, fast-speed environment.
The catch: Although all participants consumed less when eating slowly and all said they felt less hungry after eating a slow meal compared to a fast meal, only people who were considered normal weight actually reduced their calorie intake significantly when eating more slowly.
"One possible reason [for the calorie drop seen] may be that slower eating allows people to better sense their feelings of hunger and fullness," said study author Meena Shah, a professor in the department of kinesiology at Texas Christian University, in Fort Worth.
Slow eating also seemed to increase water intake and stomach swelling, Shah said, while also affecting the biological process that determines how much food people consume.
The study was published online Jan. 2 in the Journal of the Academy of Nutrition and Dietetics.
Just under 15 percent of Americans were obese in the early 1970s, but that figure increased to nearly 36 percent by 2010, the researchers said.
To explore a potential connection between slow eating and reduced caloric intake, the team focused on 35 normal-weight men and women and 35 overweight or obese men and women.
During a two-day study period, all were asked to consume the exact same meals under two conditions. The "slow" meal was spread over an average of 22 minutes per meal, involving small bites and deliberate chewing without concern for time. The "fast" meal involved large bites and quick chewing, under the notion that time was of the essence. The average fast-meal time was about nine minutes.
The result: Normal-weight participants were found to consume 88 fewer calories when eating slowly, a decrease deemed "significant." By contrast, the obese/overweight group saw only a 58-calorie reduction during the slow-eating session, which was not considered significant.
The researchers noted that the obese/overweight group actually consumed less food overall during both the slow- and fast-eating sessions than the normal-weight group. That lower overall intake might explain the smaller calorie drop during that group's slow-eating trial, they said.
Some self-consciousness among the participants might also have affected eating patterns, leading them to consume food in a manner that differed from a private, real-world setting. "There is always the possibility that people will eat differently when they are being observed," Shah said.
Both groups ate less when eating slowly, however, and a notable spike in water intake during the slow-eating test might be a major reason why. When eating slowly, water intake increased by 27 percent among the normal-weight group, and by 33 percent among the overweight/obese group.
Susan Roberts, a senior scientist with the U.S. Department of Agriculture, suggested that the study suffers from a number of analytic flaws.
"First of all, slow eating reduces [calorie] intake by 10 percent in the normal-weight folk and 8 percent in the obese ones," said Roberts, who works at the nutrition research center at Tufts University in Medford, Mass. "The 10 percent is [deemed] statistically significant, whereas the 8 percent is not. However, there is no significant difference between 8 percent and 10 percent, meaning ... there is no difference in the effect of eating speed on [calorie] intake according to whether you are obese or lean."
"More importantly," she added, "the obese individuals in the study substantially under-ate during the measurements, which calls into question whether the results are meaningful and repeatable."
Lona Sandon, assistant professor of clinical nutrition at the University of Texas Southwestern Medical Center at Dallas, said the study did not control for a number of factors that could have influenced the findings. That makes it impossible to conclude that there is any direct cause and effect between slower eating and lower food consumption, she said.
"However, there are other theories and camps of research that support the theory that we consume less when we eat more slowly," said Sandon, a registered dietitian. "Taking time to enjoy and be more mindful of the food we are eating is associated with eating less."
"[But] it may be a better strategy for preventing weight gain, as opposed to treating overweight and obesity," Sandon said.
SOURCES: Meena Shah, Ph.D., professor, department of kinesiology, Texas Christian University, Fort Worth; Lona Sandon, R.D., assistant professor, clinical nutrition, University of Texas Southwestern Medical Center at Dallas; Susan Roberts, Ph.D., senior scientist, USDA Human Nutrition Research Center, Tufts University, Medford, Mass.; Jan, 2, 2014, Journal of the Academy of Nutrition and Dietetics, online
|
Hepatitis B is a serious disease of the liver caused by the Hepatitis B virus (HBV). Symptoms of the acute illness caused by HBV may include loss of appetite, diarrhea and vomiting, fatigue, jaundice (yellow skin and/or eyes), and pain in the joints, muscles and stomach. HBV can also cause a long-term or chronic illness in which inflammation of the liver leads to liver damage (cirrhosis), liver cancer, and death.

According to the CDC, about 1.25 million people in the US have chronic HBV infection, and 80,000 people (most of them young adults) get infected with HBV each year. Each year, 4,000-5,000 people die of chronic Hepatitis B. HBV is spread through contact with infected blood or body fluids of a person with HBV. It can be acquired through open cuts, wounds, or mucous membranes; by having unprotected sex; by sharing needles; or by a baby during the birth process. Probably one third of people who are infected with HBV in this country do not know how they got it.

Hepatitis B vaccine can prevent Hepatitis B infection. It is considered the first anti-cancer vaccine because it can prevent a form of liver cancer. Since 1991, Hepatitis B vaccine has been included in the schedule of childhood immunizations recommended by the CDC and the Advisory Committee on Immunization Practices. Infants receive the vaccine, and many children and adolescents have already received it. It is now required in South Carolina schools and some healthcare settings. Although Hepatitis B vaccination is not mandatory for entrance at Winthrop University, we follow the advice of the CDC and the American College Health Association: we strongly recommend that our students receive the Hepatitis B vaccination series.
|
The Old Stone House is the oldest structure on its original foundation in Washington, D.C. Built in 1766 in the British colony of Maryland, the house was already 59 years old when the British invaded Washington, D.C. in 1814. Although it is preserved for its architecture today, it was originally preserved through a case of mistaken identity and a desire to remember George Washington.
In 1791, George Washington and city planner Pierre L'Enfant were surveying the newly established District of Columbia. L'Enfant's ambitious plan for the city's layout depended on negotiating with local landowners for right of way. During one of these meetings, Washington and L'Enfant stayed in Georgetown's Fountain Inn at 31st and K Streets. The inn was better known to locals as Suter's Tavern, after its owner John Suter.
Over on Bridge Street (now M Street), John Suter's son, John, Jr., ran a clock shop in today's Old Stone House. As the years passed, local memory conflated the two buildings associated with the two John Suters. Thus, the desire to honor and remember George Washington's 1791 visit combined with a fuzzy memory led to the Old Stone House's preservation while so much changed around it.
The house was the site of a car dealership when the federal government purchased the property in 1953. The National Park Service opened the house to the public in 1960. Today, the house is a rare example of pre-Revolutionary architecture. Among the House's furnishings, you may find a clock built by one-time owner John Suter, Jr.
Hours: Open daily (ground floor only), with holiday hours from 11:00 a.m.-7:00 p.m. Upper floors to reopen with new exhibits by spring 2019.
|
There is a long history of prescribed fire in the South, and opinions of this management tool are as varied as the different ecosystems that span this large region. Documentation shows that fires burned as often as once a year or more in southern pine forests and as infrequently as every 50 years or more in the mountains.
Historically, lightning served as a major fire source in most ecosystems for millennia before Native Americans arrived some 10,000 to 12,000 years ago. Native Americans were the first people of North America to use what we now call “prescribed burning.” European settlers, whose livelihood often depended on hunting and herding, quickly learned the advantages of prescribed fire for creating abundant forage and browse. By the late 19th century, the logging industry had become established throughout the South, and excessive logging, followed by wildfires fueled by logging debris, left millions of acres of forestland with no trees.
Fire needed to be suppressed on a grand scale to allow forests to regenerate; even prescribed fire was banned by numerous land management agencies. Despite suppression efforts, fire was never completely removed from the landscape, but it was used sparingly for several decades until early reports showed the advantages of prescribed fire for bobwhite quail habitat and for managing southern pine forests. By the 1950s and 1960s, prescribed burning programs began across the Coastal Plain and lower Piedmont pine and grassland habitats, but prescribed burning in the mountains did not start until the 1980s.
As many land managers know, southern forests and grasslands are well adapted to fire. Many species survive or regenerate by tolerating fire, while some actually require it. Adaptations such as thick bark, light or winged seeds, or buried buds or meristems are common. Frequently burned landscapes, such as Coastal Plain longleaf pine, often have fewer trees in the over-story but have a diverse understory of fire-adapted plants. Land managers have also applied prescribed burning techniques to many other ecosystems across the South to create or maintain healthy habitat. It is also common to utilize prescribed burning across the mixed forests and grasslands of the Piedmont or the hardwoods of the Appalachian Mountains.
In 2011, an estimated 6.4 million acres were treated by prescribed fire across the 13 Southern states, further demonstrating that prescribed burning is a desirable and economically sound practice in many forests and grasslands. Few, if any, land management practices compete with prescribed fire for its combination of economy, effectiveness and scale. For instance, chemical and mechanical treatments are more expensive than prescribed burning, which limits their applications.
Up until the 1980s and 1990s, prescribed burning was most common in Coastal Plain and lower Piedmont forests. Several generations of resource managers gained experience and knowledge with prescribed burning in those regions, and much of our current knowledge originated there. Only recently have management objectives become broader. Land managers and researchers now have a new appreciation for using prescribed fire in grasslands, in hardwood forests and on steep mountain slopes, but burning in these areas presents new challenges and complexities.
For instance, most forests and grasslands require multiple prescribed fires over a number of years to reach land management objectives, but even a single fire can provide multiple benefits. The application of one prescribed fire can reduce wildfire hazard by reducing fuels, improve habitat for some wildlife species, reduce vegetative competition, enhance aesthetics and improve access.
However, if conducted in poor conditions, prescribed fire can severely damage the resources it is intended to benefit and can temporarily reduce air quality. Each prescribed fire presents a number of tradeoffs that must be recognized and carefully weighed to reach a decision regarding if and when to burn.
Proper planning and execution are always necessary to minimize any detrimental effects and to maximize effectiveness. Always weigh off-site impacts, such as air quality, and on-site impacts to soil, aesthetics and wildlife.
Land managers should work with their neighbors through open communication, and consider their concerns when defining burn objectives. Prescribed fire is a complex tool and should only be used by those trained in its use. Its most common uses are to:
Reduce hazardous fuels
Dispose of logging debris
Prepare sites for seeding or planting
Improve wildlife habitat
Manage competing vegetation
Control insects and disease
Improve forage for grazing
Enhance appearance and access
Perpetuate species and communities that require fire
The proper application and use of prescribed fire also requires knowledge of how fire affects vegetation, wildlife, soil, water, air and an understanding of how different burning techniques and timing of burns can be varied to alter fire effects.
For example, prescribed fire effects on plants vary depending upon fire behavior, fire duration, season of burning, the pattern of fuel consumption and the amount of heat, all of which influence the degree of injury, mortality and recovery of plants after a fire. Post-fire responses can also vary by plant species, because species differ in their ability to survive fire and to regenerate afterward. Consequently, the factors that directly affect vegetative communities will also directly affect wildlife in different ways.
Many major effects of prescribed burning on wildlife are indirect: a change in the structure and composition of vegetation influences the availability of food and cover, which in turn influences the health, abundance and diversity of wildlife. Direct mortality of wildlife from prescribed burning is rare, and long-term benefits generally outweigh any short-term losses. To account for potential negative effects on wildlife populations, numerous research projects examine both the spatial and temporal impacts of burning on wildlife, providing insight into how different schedules and scales can improve wildlife responses. The season and frequency of burning and the size of the area burned are crucial to the successful use of fire to improve wildlife habitat and adequately change vegetative structure and composition.
Obtain the full report here.
|
Miriam and Aaron are jealous of Moses
1-3Although Moses was the most humble person in all the world, Miriam and Aaron started complaining, “Moses had no right to marry that woman from Ethiopia! [12.1-3 Ethiopia: The Hebrew text has “Cush”, which was a region south of Egypt that included parts of the present countries of Ethiopia and Sudan.] Who does he think he is? The LORD has spoken to us, not just to him.”
The LORD heard their complaint 4and told Moses, Aaron, and Miriam to come to the entrance of the sacred tent. 5There the LORD appeared in a cloud and told Aaron and Miriam to come closer. 6Then after commanding them to listen carefully, he said:
“I, the LORD, speak to prophets
in visions and dreams.
7But my servant Moses [12.7: He 3.2]
is the leader of my people.
8He sees me face to face,
and everything I say to him
is perfectly clear.
You have no right to criticize
my servant Moses.”
9The LORD became angry with Aaron and Miriam. And after the LORD left 10and the cloud disappeared from over the sacred tent, Miriam's skin turned white with leprosy. [12.10 leprosy: See the note at 5.2,3.] When Aaron saw what had happened to her, 11he said to Moses, “Sir, please don't punish us for doing such a foolish thing. 12Don't let Miriam's flesh rot away like a child born dead!”
13Moses prayed, “LORD God, please heal her.”
14But the LORD replied, “Miriam would be disgraced for seven days if her father had punished her by spitting in her face. So make her stay outside the camp for seven days, before coming back.” [12.14: Nu 5.2,3.]
15The people of Israel did not move their camp until Miriam returned seven days later. 16Then they left Hazeroth and set up camp in the Paran Desert.
Contemporary English Version (CEV) is copyright © American Bible Society. Psalms and Proverbs © 1991, 1992; New Testament © 1991, 1992, 1995; Old Testament © 1995; translation notes, subject headings for text © 1995; Anglicisations © The British and Foreign Bible Society 1997, 2012.
|
Cervical cancer can be prevented with the HPV vaccine, and India is soon to take steps in this area. Dr. NK Arora, head of the National Technical Advisory Group on Immunization (NTAGI), said that cervical cancer can be prevented with the HPV vaccine. He said that India will soon provide the HPV vaccine for girls aged 9-14 under a national program.
HPV vaccination is recommended for ages 11–12 years. HPV vaccines can be given starting at age 9 years. All preteens need HPV vaccination, so they are protected from HPV infections that can cause cancer later in life. Teens and young adults through age 26 years who didn’t start or finish the HPV vaccine series also need HPV vaccination.
Vaccination is not recommended for everyone older than age 26 years. Adults aged 27 to 45 years who are not already vaccinated may decide to get the HPV vaccine after speaking with their doctor about their risk for new HPV infections and the possible benefits of vaccination. Only two doses are needed if the first dose was given before the 15th birthday; teens and young adults who start the series later, at ages 15 through 26 years, need three doses of HPV vaccine.
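The age-based dose schedule described above reduces to a simple rule of thumb, sketched below. The function name is illustrative and this is not medical software; the age cut-offs are those quoted in the text.

```python
def hpv_doses_needed(age_at_first_dose: int) -> int:
    """Illustrative sketch of the dose-count rule quoted above.

    Two doses if the series starts before the 15th birthday;
    three doses if it starts at ages 15 through 26.
    Not a substitute for medical advice.
    """
    if age_at_first_dose < 9:
        # Vaccination can be given starting at age 9 years.
        raise ValueError("series starts no earlier than age 9")
    if age_at_first_dose < 15:
        return 2
    if age_at_first_dose <= 26:
        return 3
    # Ages 27-45: an individual decision after talking with a doctor.
    raise ValueError("routine vaccination is not recommended after age 26")
```

For example, starting the series at age 12 means two doses, while starting at 16 means three.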
What is Cervical Cancer?
Cervical cancer is a type of cancer that starts in the cervix. The cervix is a hollow cylinder that connects the lower part of a woman’s uterus to her vagina. Most cervical cancers begin in cells on the surface of the cervix.
Symptoms of Cervical Cancer
- Unusual bleeding, like in between periods, after sex, or after menopause
- Vaginal discharge that looks or smells different than usual
- Pain in the pelvis
- Needing to urinate more often
- Pain during urination
Who are at higher risk?
- Numerous sexual partners
- Sexual activities at a younger age
- A history of sexually transmitted infections (STIs)
Disclaimer: Tips and suggestions mentioned in the article are for general information purpose only and should not be taken as professional medical advice. Please consult a doctor before starting any fitness regime or medical advice.
|
International Convention on the Elimination of All Forms of Racial Discrimination
NEW YORK 7 March 1966
Articles
The States Parties to this Convention,
Considering that the Charter of the United Nations is based on the
principles of the dignity and equality inherent in all human beings, and
that all Member States have pledged themselves to take joint and separate
action, in co-operation with the Organization, for the achievement of one
of the purposes of the United Nations which is to promote and encourage
universal respect for and observance of human rights and fundamental
freedoms for all, without distinction as to race, sex, language or religion,
Considering that the Universal Declaration of Human Rights proclaims that
all human beings are born free and equal in dignity and rights and that
everyone is entitled to all the rights and freedoms set out therein,
without distinction of any kind, in particular as to race, colour or national origin,
Considering that all human beings are equal before the law and are entitled
to equal protection of the law against any discrimination and against any
incitement to discrimination,
Considering that the United Nations has condemned colonialism and all
practices of segregation and discrimination associated therewith, in
whatever form and wherever they exist, and that the Declaration on the
Granting of Independence to Colonial Countries and Peoples of 14 December
1960 (General Assembly resolution 1514 (XV)) has affirmed and solemnly
proclaimed the necessity of bringing them to a speedy and unconditional end,
Considering that the United Nations Declaration on the Elimination of All
Forms of Racial Discrimination of 20 November 1963 (General Assembly
resolution 1904 (XVIII) ) solemnly affirms the necessity of speedily
eliminating racial discrimination throughout the world in all its forms and
manifestations and of securing understanding of and respect for the dignity
of the human person,
Convinced that any doctrine of superiority based on racial differentiation
is scientifically false, morally condemnable, socially unjust and
dangerous, and that there is no justification for racial discrimination, in
theory or in practice, anywhere,
Reaffirming that discrimination between human beings on the grounds of
race, colour or ethnic origin is an obstacle to friendly and peaceful
relations among nations and is capable of disturbing peace and security
among peoples and the harmony of persons living side by side even within
one and the same State,
Convinced that the existence of racial barriers is repugnant to the ideals
of any human society,
Alarmed by manifestations of racial discrimination still in evidence in
some areas of the world and by governmental policies based on racial
superiority or hatred, such as policies of apartheid, segregation or separation,
Resolved to adopt all necessary measures for speedily eliminating racial
discrimination in all its forms and manifestations, and to prevent and
combat racist doctrines and practices in order to promote understanding
between races and to build an international community free from all forms
of racial segregation and racial discrimination,
Bearing in mind the Convention concerning Discrimination in respect of
Employment and Occupation adopted by the International Labour Organisation
in 1958, and the Convention against Discrimination in Education adopted by
the United Nations Educational, Scientific and Cultural Organization in 1960,
Desiring to implement the principles embodied in the United Nations
Declaration on the Elimination of All Forms of Racial Discrimination and to
secure the earliest adoption of practical measures to that end,
Have agreed as follows:
- In this Convention, the term "racial discrimination" shall mean any
distinction, exclusion, restriction or preference based on race, colour,
descent, or national or ethnic origin which has the purpose or effect of
nullifying or impairing the recognition, enjoyment or exercise, on an equal
footing, of human rights and fundamental freedoms in the political,
economic, social, cultural or any other field of public life.
- This Convention shall not apply to distinctions, exclusions,
restrictions or preferences made by a State Party to this Convention
between citizens and non-citizens.
- Nothing in this Convention may be interpreted as affecting in any way
the legal provisions of States Parties concerning nationality, citizenship
or naturalization, provided that such provisions do not discriminate
against any particular nationality.
- Special measures taken for the sole purpose of securing adequate
advancement of certain racial or ethnic groups or individuals requiring
such protection as may be necessary in order to ensure such groups or
individuals equal enjoyment or exercise of human rights and fundamental
freedoms shall not be deemed racial discrimination, provided, however, that
such measures do not, as a consequence, lead to the maintenance of separate
rights for different racial groups and that they shall not be continued
after the objectives for which they were taken have been achieved.
- States Parties condemn racial discrimination and undertake to pursue by
all appropriate means and without delay a policy of eliminating racial
discrimination in all its forms and promoting understanding among all
races, and, to this end:
- (a) Each State Party undertakes to engage in no act or practice of racial
discrimination against persons, groups of persons or institutions and
to ensure that all public authorities and public institutions,
national and local, shall act in conformity with this obligation;
- (b) Each State Party undertakes not to sponsor, defend or support racial
discrimination by any persons or organizations;
- (c) Each State Party shall take effective measures to review
governmental, national and local policies, and to amend, rescind or
nullify any laws and regulations which have the effect of creating or
perpetuating racial discrimination wherever it exists;
- (d) Each State Party shall prohibit and bring to an end, by all
appropriate means, including legislation as required by
circumstances, racial discrimination by any persons, group or organization;
- (e) Each State Party undertakes to encourage, where appropriate,
integrationist multi-racial organizations and movements and other
means of eliminating barriers between races, and to discourage
anything which tends to strengthen racial division.
- States Parties shall, when the circumstances so warrant, take, in the
social, economic, cultural and other fields, special and concrete measures
to ensure the adequate development and protection of certain racial groups
or individuals belonging to them, for the purpose of guaranteeing them the
full and equal enjoyment of human rights and fundamental freedoms. These
measures shall in no case entail as a consequence the maintenance of
unequal or separate rights for different racial groups after the objectives
for which they were taken have been achieved.
States Parties particularly condemn racial segregation and apartheid and
undertake to prevent, prohibit and eradicate all practices of this nature
in territories under their jurisdiction.
States Parties condemn all propaganda and all organizations which are based
on ideas or theories of superiority of one race or group of persons of one
colour or ethnic origin, or which attempt to justify or promote racial
hatred and discrimination in any form, and undertake to adopt immediate and
positive measures designed to eradicate all incitement to, or acts of, such
discrimination and, to this end, with due regard to the principles embodied
in the Universal Declaration of Human Rights and the rights expressly set
forth in article 5 of this Convention, inter alia:
- (a) Shall declare an offence punishable by law all dissemination of ideas
based on racial superiority or hatred, incitement to racial
discrimination, as well as all acts of violence or incitement to such
acts against any race or group of persons of another colour or ethnic
origin, and also the provision of any assistance to racist
activities, including the financing thereof;
- (b) Shall declare illegal and prohibit organizations, and also organized
and all other propaganda activities, which promote and incite racial
discrimination, and shall recognize participation in such
organizations or activities as an offence punishable by law;
- (c) Shall not permit public authorities or public institutions, national
or local, to promote or incite racial discrimination.
In compliance with the fundamental obligations laid down in article 2 of
this Convention, States Parties undertake to prohibit and to eliminate
racial discrimination in all its forms and to guarantee the right of
everyone, without distinction as to race, colour, or national or ethnic
origin, to equality before the law, notably in the enjoyment of the following rights:
- (a) The right to equal treatment before the tribunals and all other
organs administering justice;
- (b) The right to security of person and protection by the State against
violence or bodily harm, whether inflicted by government officials or
by any individual, group or institution;
- (c) Political rights, in particular the rights to participate in
elections--to vote and to stand for election--on the basis of
universal and equal suffrage, to take part in the Government as well
as in the conduct of public affairs at any level and to have equal
access to public service;
- (d) Other civil rights, in particular:
- (i) The right to freedom of movement and residence within the
border of the State;
- (ii) The right to leave any country, including one's own, and to
return to one's country;
- (iii) The right to nationality;
- (iv) The right to marriage and choice of spouse;
- (v) The right to own property alone as well as in association with others;
- (vi) The right to inherit;
- (vii) The right to freedom of thought, conscience and religion;
- (viii) The right to freedom of opinion and expression;
- (ix) The right to freedom of peaceful assembly and association;
- (e) Economic, social and cultural rights, in particular:
- (i) The rights to work, to free choice of employment, to just and
favourable conditions of work, to protection against
unemployment, to equal pay for equal work, to just and favourable remuneration;
- (ii) The right to form and join trade unions;
- (iii) The right to housing;
- (iv) The right to public health, medical care, social security and social services;
- (v) The right to education and training;
- (vi) The right to equal participation in cultural activities;
- (f) The right of access to any place or service intended for use by the
general public, such as transport, hotels, restaurants, cafes,
theatres and parks.
States Parties shall assure to everyone within their jurisdiction effective
protection and remedies, through the competent national tribunals and other
State institutions, against any acts of racial discrimination which violate
his human rights and fundamental freedoms contrary to this Convention, as
well as the right to seek from such tribunals just and adequate reparation
or satisfaction for any damage suffered as a result of such discrimination.
States Parties undertake to adopt immediate and effective measures,
particularly in the fields of teaching, education, culture and information,
with a view to combating prejudices which lead to racial discrimination and
to promoting understanding, tolerance and friendship among nations and
racial or ethnical groups, as well as to propagating the purposes and
principles of the Charter of the United Nations, the Universal Declaration
of Human Rights, the United Nations Declaration on the Elimination of All
Forms of Racial Discrimination, and this Convention.
- There shall be established a Committee on the Elimination of Racial
Discrimination (hereinafter referred to as the Committee) consisting of
eighteen experts of high moral standing and acknowledged impartiality
elected by States Parties from among their nationals, who shall serve in
their personal capacity, consideration being given to equitable
geographical distribution and to the representation of the different forms
of civilization as well as of the principal legal systems.
- The members of the Committee shall be elected by secret ballot from a
list of persons nominated by the States Parties. Each State Party may
nominate one person from among its own nationals.
- The initial election shall be held six months after the date of the
entry into force of this Convention. At least three months before the date
of each election the Secretary-General of the United Nations shall address
a letter to the States Parties inviting them to submit their nominations
within two months. The Secretary-General shall prepare a list in
alphabetical order of all persons thus nominated, indicating the States
Parties which have nominated them, and shall submit it to the States Parties.
- Elections of the members of the Committee shall be held at a meeting of
States Parties convened by the Secretary-General at United Nations
Headquarters. At that meeting, for which two-thirds of the States Parties
shall constitute a quorum, the persons elected to the Committee shall be
those nominees who obtain the largest number of votes and an absolute
majority of the votes of the representatives of States Parties present and voting.
- (a) The members of the Committee shall be elected for a term of four
years. However, the terms of nine of the members elected at the first
election shall expire at the end of two years; immediately after the first
election the names of these nine members shall be chosen by lot by the
Chairman of the Committee.
- (b) For the filling of casual vacancies, the State Party whose expert has
ceased to function as a member of the Committee shall appoint another
expert from among its nationals, subject to the approval of the Committee.
- States Parties shall be responsible for the expenses of the members of
the Committee while they are in performance of Committee duties.
- States Parties undertake to submit to the Secretary-General of the
United Nations, for consideration by the Committee, a report on the
legislative, judicial, administrative or other measures which they have
adopted and which give effect to the provisions of this Convention: (a)
within one year after the entry into force of the Convention for the State
concerned; and (b) thereafter every two years and whenever the Committee so
requests. The Committee may request further information from the States Parties.
- The Committee shall report annually, through the Secretary-General, to
the General Assembly of the United Nations on its activities and may make
suggestions and general recommendations based on the examination of the
reports and information received from the States Parties. Such suggestions
and general recommendations shall be reported to the General Assembly
together with comments, if any, from States Parties.
- The Committee shall adopt its own rules of procedure.
- The Committee shall elect its officers for a term of two years.
- The secretariat of the Committee shall be provided by the
Secretary-General of the United Nations.
- The meetings of the Committee shall normally be held at United Nations Headquarters.
- If a State Party considers that another State Party is not giving effect
to the provisions of this Convention, it may bring the matter to the
attention of the Committee. The Committee shall then transmit the
communication to the State Party concerned. Within three months, the
receiving State shall submit to the Committee written explanations or
statements clarifying the matter and the remedy, if any, that may have been
taken by that State.
- If the matter is not adjusted to the satisfaction of both parties,
either by bilateral negotiations or by any other procedure open to them,
within six months after the receipt by the receiving State of the initial
communication, either State shall have the right to refer the matter again
to the Committee by notifying the Committee and also the other State.
- The Committee shall deal with a matter referred to it in accordance with
paragraph 2 of this article after it has ascertained that all available
domestic remedies have been invoked and exhausted in the case, in
conformity with the generally recognized principles of international law.
This shall not be the rule where the application of the remedies is unreasonably prolonged.
- In any matter referred to it, the Committee may call upon the States
Parties concerned to supply any other relevant information.
- When any matter arising out of this article is being considered by the
Committee, the States Parties concerned shall be entitled to send a
representative to take part in the proceedings of the Committee, without
voting rights, while the matter is under consideration.
- (a) After the Committee has obtained and collated all the information it
deems necessary, the Chairman shall appoint an ad hoc Conciliation
Commission (hereinafter referred to as the Commission) comprising five
persons who may or may not be members of the Committee. The members of the
Commission shall be appointed with the unanimous consent of the parties to
the dispute, and its good offices shall be made available to the States
concerned with a view to an amicable solution of the matter on the basis of
respect for this Convention.
- (b) If the States parties to the dispute fail to reach agreement within
three months on all or part of the composition of the Commission, the
members of the Commission not agreed upon by the States parties to
the dispute shall be elected by secret ballot by a two-thirds
majority vote of the Committee from among its own members.
- The members of the Commission shall serve in their personal capacity.
They shall not be nationals of the States parties to the dispute or of a
State not Party to this Convention.
- The Commission shall elect its own Chairman and adopt its own rules of procedure.
- The meetings of the Commission shall normally be held at United Nations
Headquarters or at any other convenient place as determined by the Commission.
- The secretariat provided in accordance with article
10, paragraph 3, of
this Convention shall also service the Commission whenever a dispute among
States Parties brings the Commission into being.
- The States parties to the dispute shall share equally all the expenses
of the members of the Commission in accordance with estimates to be
provided by the Secretary-General of the United Nations.
- The Secretary-General shall be empowered to pay the expenses of the
members of the Commission, if necessary, before reimbursement by the States
parties to the dispute in accordance with paragraph 6 of this article.
- The information obtained and collated by the Committee shall be made
available to the Commission, and the Commission may call upon the States
concerned to supply any other relevant information.
- When the Commission has fully considered the matter, it shall prepare
and submit to the Chairman of the Committee a report embodying its findings
on all questions of fact relevant to the issue between the parties and
containing such recommendations as it may think proper for the amicable
solution of the dispute.
- The Chairman of the Committee shall communicate the report of the
Commission to each of the States parties to the dispute. These States
shall, within three months, inform the Chairman of the Committee whether or
not they accept the recommendations contained in the report of the Commission.
- After the period provided for in paragraph 2 of this article, the
Chairman of the Committee shall communicate the report of the Commission
and the declarations of the States Parties concerned to the other States
Parties to this Convention.
- A State Party may at any time declare that it recognizes the competence
of the Committee to receive and consider communications from individuals or
groups of individuals within its jurisdiction claiming to be victims of a
violation by that State Party of any of the rights set forth in this
Convention. No communication shall be received by the Committee if it
concerns a State Party which has not made such a declaration.
- Any State Party which makes a declaration as provided for in paragraph 1
of this article may establish or indicate a body within its national legal
order which shall be competent to receive and consider petitions from
individuals and groups of individuals within its jurisdiction who claim to
be victims of a violation of any of the rights set forth in this Convention
and who have exhausted other available local remedies.
- A declaration made in accordance with paragraph 1 of this article and
the name of any body established or indicated in accordance with paragraph
2 of this article shall be deposited by the State Party concerned with the
Secretary-General of the United Nations, who shall transmit copies thereof
to the other States Parties. A declaration may be withdrawn at any time by
notification to the Secretary-General, but such a withdrawal shall not
affect communications pending before the Committee.
- A register of petitions shall be kept by the body established or
indicated in accordance with paragraph 2 of this article, and certified
copies of the register shall be filed annually through appropriate channels
with the Secretary-General on the understanding that the contents shall not
be publicly disclosed.
- In the event of failure to obtain satisfaction from the body established
or indicated in accordance with paragraph 2 of this article, the petitioner
shall have the right to communicate the matter to the Committee within six months.
- (a) The Committee shall confidentially bring any communication referred
to it to the attention of the State Party alleged to be violating any
provision of this Convention, but the identity of the individual or groups
of individuals concerned shall not be revealed without his or their express
consent. The Committee shall not receive anonymous communications.
- (b) Within three months, the receiving State shall submit to the
Committee written explanations or statements clarifying the matter
and the remedy, if any, that may have been taken by that State.
- (a) The Committee shall consider communications in the light of all
information made available to it by the State Party concerned and by the
petitioner. The Committee shall not consider any communication from a
petitioner unless it has ascertained that the petitioner has exhausted all
available domestic remedies. However, this shall not be the rule where the
application of the remedies is unreasonably prolonged.
- (b) The Committee shall forward its suggestions and recommendations, if
any, to the State Party concerned and to the petitioner.
- The Committee shall include in its annual report a summary of such
communications and, where appropriate, a summary of the explanations and
statements of the States Parties concerned and of its own suggestions and recommendations.
- The Committee shall be competent to exercise the functions provided for
in this article only when at least ten States Parties to this Convention
are bound by declarations in accordance with paragraph 1 of this article.
- Pending the achievement of the objectives of the Declaration on the
Granting of Independence to Colonial Countries and Peoples, contained in
General Assembly resolution 1514 (XV) of 14 December 1960, the provisions
of this Convention shall in no way limit the right of petition granted to
these peoples by other international instruments or by the United Nations
and its specialized agencies.
- (a) The Committee established under article 8,
paragraph 1, of this
Convention shall receive copies of the petitions from, and submit
expressions of opinion and recommendations on these petitions to, the
bodies of the United Nations which deal with matters directly related to
the principles and objectives of this Convention in their consideration of
petitions from the inhabitants of Trust and Non-Self-Governing Territories
and all other territories to which General Assembly resolution 1514 (XV)
applies, relating to matters covered by this Convention which are before these bodies;
- (b) The Committee shall receive from the competent bodies of the United
Nations copies of the reports concerning the legislative, judicial,
administrative or other measures directly related to the principles
and objectives of this Convention applied by the administering Powers
within the Territories mentioned in sub-paragraph (a) of this
paragraph, and shall express opinions and make recommendations to these bodies.
- The Committee shall include in its report to the General Assembly a
summary of the petitions and reports it has received from United Nations
bodies, and the expressions of opinion and recommendations of the Committee
relating to the said petitions and reports.
- The Committee shall request from the Secretary-General of the United
Nations all information relevant to the objectives of this Convention and
available to him regarding the Territories mentioned in paragraph 2 (a) of this article.
The provisions of this Convention concerning the settlement of disputes or
complaints shall be applied without prejudice to other procedures for
settling disputes or complaints in the field of discrimination laid down in
the constituent instruments of, or in conventions adopted by, the United
Nations and its specialized agencies, and shall not prevent the States
Parties from having recourse to other procedures for settling a dispute in
accordance with general or special international agreements in force between them.
- This Convention is open for signature by any State Member of the United
Nations or member of any of its specialized agencies, by any State Party to
the Statute of the International Court of Justice, and by any other State
which has been invited by the General Assembly of the United Nations to
become a Party to this Convention.
- This Convention is subject to ratification. Instruments of ratification
shall be deposited with the Secretary-General of the United Nations.
- This Convention shall be open to accession by any State referred to in
article 17, paragraph 1, of the Convention.
- Accession shall be effected by the deposit of an instrument of accession
with the Secretary-General of the United Nations.
- This Convention shall enter into force on the thirtieth day after the
date of the deposit with the Secretary-General of the United Nations of the
twenty-seventh instrument of ratification or instrument of accession.
- For each State ratifying this Convention or acceding to it after the
deposit of the twenty-seventh instrument of ratification or instrument of
accession, the Convention shall enter into force on the thirtieth day after
the date of the deposit of its own instrument of ratification or instrument of accession.
- The Secretary-General of the United Nations shall receive and circulate
to all States which are or may become Parties to this Convention
reservations made by States at the time of ratification or accession. Any
State which objects to the reservation shall, within a period of ninety
days from the date of the said communication, notify the Secretary-General
that it does not accept it.
- A reservation incompatible with the object and purpose of this
Convention shall not be permitted, nor shall a reservation the effect of
which would inhibit the operation of any of the bodies established by this
Convention be allowed. A reservation shall be considered incompatible or
inhibitive if at least two-thirds of the States Parties to this Convention
object to it.
- Reservations may be withdrawn at any time by notification to this effect
addressed to the Secretary-General. Such notification shall take effect on
the date on which it is received.
A State Party may denounce this Convention by written notification to the
Secretary-General of the United Nations. Denunciation shall take effect one
year after the date of receipt of the notification by the Secretary-General.
Any dispute between two or more States Parties with respect to the
interpretation or application of this Convention, which is not settled by
negotiation or by the procedures expressly provided for in this Convention,
shall, at the request of any of the parties to the dispute, be referred to
the International Court of Justice for decision, unless the disputants
agree to another mode of settlement.
- A request for the revision of this Convention may be made at any time by
any State Party by means of a notification in writing addressed to the
Secretary-General of the United Nations.
- The General Assembly of the United Nations shall decide upon the steps,
if any, to be taken in respect of such a request.
The Secretary-General of the United Nations shall inform all States
referred to in article 17, paragraph 1, of this Convention of the following particulars:
- (a) Signatures, ratifications and accessions under articles 17 and 18;
- (b) The date of entry into force of this Convention under article 19;
- (c) Communications and declarations received under articles 14, 20 and 23;
- (d) Denunciations under article 21.
- This Convention, of which the Chinese, English, French, Russian and
Spanish texts are equally authentic, shall be deposited in the archives of
the United Nations.
- The Secretary-General of the United Nations shall transmit certified
copies of this Convention to all States belonging to any of the categories
mentioned in article 17, paragraph 1, of the Convention.
IN FAITH WHEREOF the undersigned, being duly authorized thereto by their
respective Governments, have signed the present Convention, opened for
signature at New York, on the seventh day of March, one thousand nine
hundred and sixty-six.
|
Significant reductions were made in diarrheal
disease fatality rates in the 1970s and 1980s.
Because there are many causes of diarrheal
disease, SAGE emphasized the importance of providing rotavirus vaccination in the context of a comprehensive diarrheal
disease control strategy, including improvement of water quality, hygiene, and sanitation; provision of oral rehydration solution and zinc supplements; and overall improved case management.
Napo is organizing a program to accelerate the development and approval of a pediatric formulation of crofelemer, targeting approval in 2011-2012 (pending additional funding). The program includes the establishment of a global advisory board to ensure that the development of an FDA-approved pediatric product incorporates the current WHO guidelines for oral rehydration solution ("ORS"), zinc, and other Essential Medicine guidelines for treating diarrheal
disease, and that it yields a formulation that is practical and safe for resource-constrained regions with limited healthcare-trained personnel.
During the wet season, the only age distinction was that children older than 2 were less likely to suffer from diarrheal
diseases, just as they were in the dry season.
Calling for action by the international community to stop needless deaths caused by diarrheal disease.
Address for correspondence: John Painter, Centers for Disease Control and Prevention, Foodborne and Diarrheal
Diseases Branch, 1600 Clifton Road, Mailstop A38, Atlanta.
Key words: diarrheal
disease, drinking water, methemoglobinemia, nitrates, nitrites, Romania.
SAN FRANCISCO & NEW DELHI -- The Institute for OneWorld Health, the US-based non-profit pharmaceutical company that develops drugs for people with infectious diseases in the developing world, today announced its Strategic Advisory Board to support the OneWorld Health Diarrheal
Disease Program (DDP).
National Institute of Cholera & Enteric Diseases (ICMR) World Health Organization Collaborating Centre for Research and Training on Diarrheal
Diseases, Kolkata, India; and ([dagger]) Instituto Oswaldo Cruz, Rio de Janerio, Brazil
Because many Pacific Islands depend on stored rainwater for drinking and hygiene, variations in rainfall and temperature affect their supplies in ways that increase the likelihood of contamination with fecal matter or bacteria or the need to resort to contaminated supplies--and contaminated water is a frequent cause of diarrheal disease.
With the pediatric death toll due to diarrheal
illnesses exceeding that of AIDS, tuberculosis, and malaria combined, OneWorld Health is working to discover and develop a novel anti-secretory diarrheal
drug to reduce fluid loss and help prevent death from dehydration caused by acute watery diarrheal disease.
International Centre for Diarrheal
Disease Research, Bangladesh-Centre for Health and Population Research, Dhaka, Bangladesh; ([dagger]) National Public Health Service for Wales, Cardiff, United Kingdom; ([double dagger]) Institute for Medical Virology, Frankfurt, Germany, ([section]) Centers for Disease Control and Prevention, Atlanta, Georgia, USA; ([paragraph]) World Health Organization, Beijing, China; and (#) University of Queensland, Brisbane, Australia (1) Drs.
Several recent papers discuss the effects of El Niño on diarrheal disease.
Symposium Highlights Current Trends in Diarrheal
Disease Research and Case Management and Efficacy of Zinc Supplementation Therapies
Polyak is an epidemiologist with the National Center for Infectious Diseases, Foodborne and Diarrheal
Diseases Branch, and a fellow with the Oak Ridge Institute for Science and Education.
|
Media stories claiming that "sitting is the new smoking" are rampant the world over: an analysis found more than 600 stories using that phrase between 2012 and 2016. But a group of researchers from Canada, Australia and the U.S. set out to analyze the existing evidence and show definitively that sitting is in no way comparable to smoking in the severity, reach and cost of its public health impact.
"The mass-media enthusiasm for condemning sitting by making comparisons to smoking has far outpaced the available scientific evidence," the scientists write in the study, published in the November 2018 issue of the American Journal of Public Health. "It is obvious from examination of smoking research that sitting and smoking are distinct behaviors with different levels of associated risk."
Excessive sitting isn't anything to be complacent about, since it is linked to a number of health issues, including increased diabetes risk and depression. However, the link pales next to smoking's increased risk of premature death and disease, including cancers at 12 different sites, dementia, Alzheimer's disease, cardiovascular disease (CVD) and chronic obstructive pulmonary disease (COPD), the study authors note. Indeed, smoking has been called one of the "greatest public health disasters of the 20th century." The 21st century is projected to see a staggering 1 billion smoking-related deaths, and the annual worldwide cost of smoking-attributable disease came to $467 billion in 2012 alone, they write.
By comparison, the cost of physical inactivity, though not insignificant, was $53.8 billion in 2013. It's much the same story for health risks. "While people who sit a lot have around a 10-20 percent increased risk of some cancers and cardiovascular disease, smokers have more than double the risk of dying from cancer and cardiovascular disease, and a more than 1,000 percent increased risk of lung cancer," says researcher and University of South Australia epidemiologist Dr. Terry Boyle in a press release.
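The figures above mix two ways of stating risk: a percent increase ("10-20 percent increased risk") and a ratio ("more than double"). A minimal sketch, using the article's rounded figures rather than exact epidemiological estimates, converts between the two so the comparison is apples to apples:

```python
def percent_increase_to_risk_ratio(pct_increase: float) -> float:
    """Convert a percent increase in risk (e.g. 20 for '20% higher') to a risk ratio."""
    return 1 + pct_increase / 100

# Prolonged sitting: roughly a 10-20 percent increased risk of some
# cancers and cardiovascular disease, i.e. a risk ratio of about 1.1-1.2.
sitting_rr = percent_increase_to_risk_ratio(20)

# Smoking: "more than double the risk" of dying from cancer and CVD
# means a risk ratio above 2.0, i.e. more than a 100 percent increase.
smoking_rr = percent_increase_to_risk_ratio(100)

# A "more than 1,000 percent increased risk" of lung cancer corresponds
# to a risk ratio above 11.
lung_cancer_rr = percent_increase_to_risk_ratio(1000)
```

Stated as ratios, the gap is plain: about 1.2 for sitting versus 2 or more (and over 11 for lung cancer) for smoking.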
The report also points out that, while smoking puts others at risk for secondhand smoke problems (2.5 million nonsmokers died of such issues from 1964 to 2014), sitting poses no such threat to others.
"An individual can most often decide to sit, stand, or move. However, an individual often cannot simply choose to avoid second- or thirdhand smoke," they write in the study, adding, "Furthermore, there is no research to suggest that an individual's sedentary behaviors provide harmful and unavoidable health consequences for another individual."
|
from The Century Dictionary and Cyclopedia
- n. A genus of fishes represented by the blind-fish (A. spelæus) of the Mammoth Cave of Kentucky, and typical of the family Amblyopsidæ.
- n. A genus of crustaceans.
Nature, therefore, has in this case compensated the amblyopsis for his loss of sight by endowing him with
|
The world has seen darkness for the first time. Black holes have been theorized since 1916, emerging from Albert Einstein's theory of general relativity. But for decades they remained hidden and elusive, impossible to capture as they swallowed the light around them with their immense gravitational pull. Observed only through their effects on their surroundings, black holes have captured the imaginations of scientists and science fiction authors alike, even featuring in prominent movies such as Interstellar. Their nature is shrouded in mystery, seemingly defying the laws of physics, which makes this image all the more precious and awe-inspiring.
The modern definition of a black hole was first thoroughly developed by Karl Schwarzschild, who found the first exact solution to Einstein's field equations. Defined as an object of almost infinite density, a black hole was a radical concept to physicists and remains a largely unexplored one. Black holes have three "layers": the outer event horizon, the inner event horizon, and the singularity. The event horizon is the boundary around the mouth of the black hole past which light loses its ability to escape; once a particle crosses it, it cannot leave. The inner region of a black hole, where its mass is concentrated at a single point in space-time, is known as the singularity. Virtually nothing can escape from a black hole; under classical physics, even light is trapped. Such a strong pull makes observation and photography impossible by traditional means.
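To give a sense of scale, the size of the event horizon follows directly from the Schwarzschild solution. The sketch below computes the Schwarzschild radius r_s = 2GM/c² for the black hole in M87, using its commonly quoted mass of roughly 6.5 billion solar masses (the constants are round textbook values, not the EHT collaboration's exact figures):

```python
# Schwarzschild radius r_s = 2 G M / c^2 for the M87 black hole.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
SOLAR_MASS_KG = 1.989e30 # mass of the Sun in kilograms
AU_M = 1.496e11          # astronomical unit in meters

mass_kg = 6.5e9 * SOLAR_MASS_KG          # ~6.5 billion solar masses
r_s_m = 2 * G * mass_kg / C**2           # Schwarzschild radius in meters

print(f"Schwarzschild radius: {r_s_m:.2e} m, about {r_s_m / AU_M:.0f} AU")
```

The result is on the order of 10^13 meters, comfortably larger than the orbit of Pluto, which is why an object this far away is imageable at all.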
However, after years of attempts and substantial investment in various methods, on April 10, 2019, the National Science Foundation released the world's first image of a black hole, captured by the Event Horizon Telescope. This image provides direct visual evidence for the existence of black holes and further solidifies the theory of relativity.
The Event Horizon Telescope is the reason humanity can visualize this long-theorized object for the first time. Imaging a black hole 57 million light years from Earth (3.35 × 10²⁰ miles) took a telescope the size of the Earth itself. A distance this large required the largest aperture humanity had access to: the planet. Using eight separate telescope arrays around the world and a technique called Very Long Baseline Interferometry (VLBI), the Event Horizon Telescope turned Earth into one giant telescope. By using atomic clocks to align the observations in time and supercomputers to compile the petabytes of data, scientists can effectively achieve the resolution of an Earth-sized telescope — but not its light-collecting capability, so the technique can only be used to observe very bright objects. The individual dishes collect radio waves while rotating with the Earth, all trained on the center of M87. While this picture may not have been the original target — the Event Horizon Telescope was focused primarily on Sagittarius A*, the black hole at the center of the Milky Way — its scientific importance is unparalleled.
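Why an Earth-sized aperture is needed can be seen from the diffraction limit. A rough back-of-the-envelope sketch, using the EHT's 1.3 mm observing wavelength and Earth's diameter as the baseline (round illustrative figures, not the collaboration's published numbers):

```python
# Diffraction-limited angular resolution of an Earth-sized radio telescope
# observing at 1.3 mm, via the Rayleigh criterion theta ~ 1.22 * lambda / D.

WAVELENGTH_M = 1.3e-3        # EHT observing wavelength: 1.3 mm
EARTH_DIAMETER_M = 1.2742e7  # Earth's mean diameter as the effective aperture
RAD_TO_ARCSEC = 206265       # arcseconds per radian

theta_rad = 1.22 * WAVELENGTH_M / EARTH_DIAMETER_M
theta_uas = theta_rad * RAD_TO_ARCSEC * 1e6  # convert to microarcseconds

print(f"Angular resolution: {theta_uas:.1f} microarcseconds")
```

This works out to roughly 25 microarcseconds — fine enough to resolve the shadow of M87's black hole, which no single dish on Earth could approach.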
The first real image of a black hole — no simulation or animation — is a huge stride for humanity, affirming not only Einstein's theories but also the existence of black holes. And this is just the start, opening a whole new field of exploration as light is shone on the darkest objects known to humanity.
|
A very popular topic in many of my courses is Oracle Database architecture for High Availability (HA). This page gives a high-level overview of that topic, covering "ordinary" Single Instance databases, Data Guard, Real Application Clusters (RAC) and Extended RAC (sometimes called "Stretched Cluster"). The combination of RAC and Data Guard is advertised by Oracle Corp. under the label Maximum Availability Architecture (MAA). In addition to these Oracle HA solutions, I will also briefly cover one third-party HA solution: Remote Mirroring. I don't intend to dive deep into the technical details of these solutions; instead, I just want to differentiate them and briefly discuss the advantages and drawbacks of each.
First, we look at the still most common Oracle Database architecture: Single Instance. An Oracle RDBMS always consists of one Database – made up of Datafiles, Online Redo Logfiles and Controlfile(s) – and at least one Instance – made up of Memory Structures (like the Database Buffer Cache) and Background Processes (like the Database Writer). If one Database is accessed by multiple Instances, that's RAC. If only one Instance accesses the Database, that's Single Instance. Small installations keep all of these components inside one Server, like this:
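To make these components concrete, they can all be listed from the data dictionary. A small sketch (run as a privileged user such as SYSTEM):

```sql
-- Physical components of the Database:
SELECT name   FROM v$datafile;     -- Datafiles
SELECT member FROM v$logfile;      -- Online Redo Logfiles
SELECT name   FROM v$controlfile;  -- Controlfile(s)

-- The Instance accessing the Database:
SELECT instance_name, host_name, status FROM v$instance;
```

On a Single Instance system, the last query returns exactly one row.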
Also common these days is the placement of the Database on a Storage Area Network (SAN) like this:
From an HA perspective this architecture is vulnerable: Server A and Server B are Single Points of Failure (SPOFs), as are Database A and Database B. The Site where these Servers are placed is also a SPOF. Should any of these SPOFs fail, the whole Database is unavailable. An "ordinary" RAC addresses the Server SPOFs like this:
Should one of the two Servers fail, Database C is still available. See here for a more detailed picture of the 11gR2 RAC architecture. HA is not the only reason to use RAC, of course. Another valid reason is Scalability: if our requirements increase in the future, we can add another Server (Node) to the cluster. We also get options like Service Management and Load Balancing with RAC. In short: RAC is not just for HA, but it is out of the scope of this article to address the other reasons in detail. The drawback of the above architecture from an HA perspective: Database C, and with it Site C, is a SPOF. Should, for example, a fire destroy Site C, Database C is unavailable. Therefore, we have the option to stretch the Database across two Sites, which is usually called Extended RAC:
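On a running RAC, the multiple Instances accessing the one Database are visible in the GV$ views, which aggregate rows across all Nodes. A sketch:

```sql
-- One row per Instance of the same Database;
-- a two-node RAC returns two rows here:
SELECT inst_id, instance_name, host_name, status
  FROM gv$instance;
```

If one Node fails, its row disappears from the result while the Database itself stays open on the surviving Instance.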
The Sites are no longer SPOFs here. Database D is mirrored across the two Sites. The drawback of this architecture is the cost of the network connection between the two Sites if long distances are desired. That is critical, because a large data volume has to be mirrored. In effect, this limits the distance to a couple of kilometers at most – which may conflict with the goal of Disaster Protection. Here, Data Guard kicks in: with Data Guard we can reach long distances for Disaster Protection more easily, because we do not transmit the whole data volume but only the (relatively small) Redo stream. In the following picture, the Servers hold Single Instances like Server A and Server B above:
The Redo stream from the Primary Database is used to keep the Standby Database current. Should the Primary fail, we can fail over to the Standby and continue production work. This failover can be done automatically by an Observer (a feature called Fast-Start Failover). The distance between the two Servers can reach thousands of kilometers, depending on the kind of redo transmission and the protection level. If we combine RAC and Data Guard, we get MAA. Obviously, MAA is an expensive solution, but it also combines the advantages of RAC and Data Guard.
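As a sketch of how such an Observer-driven setup is enabled with the Data Guard Broker command-line tool DGMGRL – the connect string and configuration are placeholders for an already-created Broker configuration:

```sql
-- Connect to the Primary of an existing Broker configuration:
CONNECT sys@prima

-- Fast-Start Failover requires a suitable protection mode:
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;

-- Arm automatic failover and start the watching process:
ENABLE FAST_START FAILOVER;
START OBSERVER;
```

The Observer is typically run on a third machine, so that it survives the loss of either the Primary or the Standby Site.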
A popular third-party HA solution is Remote Mirroring. On a high level, it looks like this:
The Site is not a SPOF here either, just as with Extended RAC. Drawbacks of this solution: the distance is usually very limited, for the same reason as with Extended RAC. Also, the Secondary Site cannot be used productively while the mirroring is in progress – in contrast to the Oracle HA solutions above. With RAC, all Servers and Sites are in productive use. Even with Data Guard, the Standby is not merely waiting for the Primary to fail. It can also be used for read-only access – effectively reducing the load on the Primary:
The picture above illustrates the 11g New Feature Real-Time Query for Physical Standby Databases. The Standby is open read-only while the redo apply continues. Additionally, it is possible to offload Backups to the Physical Standby (which was also possible before 11g).
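Enabling Real-Time Query on an 11g Physical Standby is a short sequence, run on the Standby:

```sql
-- Stop Redo Apply, open the Standby read only,
-- then restart real-time apply in the background:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```

Reporting sessions can then query the Standby while it keeps receiving and applying redo from the Primary.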
|