Contributing to the Sustainable Development Goals
The EU is committed to the 2030 Agenda and the Sustainable Development Goals. The EIB plays a vital role in the EU’s efforts toward these objectives within the EU and beyond its borders. The results of EIB-supported projects are linked to many SDGs, both in their direct effects and wider development impacts.
The 17 SDGs are intended to inspire and guide work towards greater human dignity, a sound environment, fair and resilient societies and prosperous economies. They call for a new global partnership between the public sector, the private sector and civil society to mobilise enhanced efforts and resources to make this vision a reality. As part of that partnership, international financial institutions play an important role, and the EIB is fully committed to doing its part to catalyse the acceleration in investment finance and knowledge sharing that is needed to achieve the goals.
By investing across many sectors, the EIB supports the achievement of the goals in many different ways, both within and outside the EU. Poverty reduction, supporting SDG 1 No Poverty, is an overarching objective for the Bank outside the EU, promoted at the impact level both by investments in social and economic infrastructure and by the development of thriving local businesses. For many projects in the ACP region, we are also able to track direct and indirect beneficiaries, including the number of people who live below local poverty lines.
We can also link many of the project outputs and outcomes we track directly to the achievement of different SDGs. The table below highlights selected indicators from new projects outside the EU in 2017 to illustrate some of these connections. The EIB continues to work alongside other development partners to develop the best possible way to assess our contribution to the SDGs.
Helping to implement the SDGs: some expected contributions of 2017 lending outside the EU*
SDG 5 Gender equality
The EIB’s gender action plan, adopted in January 2018, sets out how the Bank can increasingly reinforce its contribution to gender equality. Our due diligence plays an important role here, guarding against activities that could have a negative gender impact. Further, the Bank is seeking to enhance the positive impacts of our investments on gender equality, including by stepping up investment in women’s empowerment, such as by investing in female entrepreneurs.
SDG 17 Global partnership
The EIB is one of the world’s largest multilateral development banks and the world’s largest provider of climate finance. As such, and on behalf of our shareholders, the EU Member States, we have a duty to play an important role in fostering a revitalised global partnership. We use our standing as a respected triple-A borrower to mobilise financial resources on a large scale and to pass on the benefits to the projects we support around the world. We are able to combine our financing with resources from the EU and the Member States, and to provide advice to make sure that valuable projects can go ahead. EIB presence in a project often assures potential partners that it is technically sound, thereby helping to unlock additional private financing for sustainable development impacts.
As a healthcare profession, physiotherapy is often misunderstood and shrouded in myths and misconceptions. Some people may have heard certain claims or anecdotes about physiotherapy that are inaccurate or outdated. These misconceptions can create unnecessary fear or skepticism about seeking physiotherapy treatment, ultimately leading to missed opportunities for recovery and rehabilitation. In this blog post, we will debunk some of the most common misconceptions about physiotherapy and provide evidence-based truths behind them.
Misconception 1: Physiotherapy is only for athletes or those recovering from surgery.
Truth: Physiotherapy is for everyone. While physiotherapy is commonly used for sports injury prevention and post-surgical rehabilitation, it can also be beneficial for a wide range of conditions such as chronic pain, arthritis, neurological conditions, and even postural issues. Physiotherapists are trained to assess and treat a variety of conditions that can affect anyone, regardless of age or activity level.
Misconception 2: Physiotherapy is painful.
Truth: Physiotherapy should not be painful. While some techniques may involve some level of discomfort, physiotherapists prioritize patient comfort and ensure that treatment is not causing more pain or injury. In fact, physiotherapy techniques such as massage and stretching are often relaxing and can relieve pain and tension.
Misconception 3: Physiotherapy is too expensive.
Truth: Physiotherapy can be affordable and cost-effective in the long run. While physiotherapy sessions can have a fee, they can save money in the long term by reducing the need for more invasive treatments or surgeries. Additionally, many insurance plans cover physiotherapy sessions, and some clinics offer payment plans or sliding scale fees.
Misconception 4: Physiotherapy only involves exercises and stretches.
Truth: Physiotherapy is a diverse profession that encompasses many techniques beyond exercises and stretches. Physiotherapists may use manual therapy, acupuncture, dry needling, and modalities such as ultrasound or electrical stimulation to treat various conditions. Additionally, physiotherapists provide education and advice on injury prevention, posture, and ergonomics.
Misconception 5: Physiotherapy is not evidence-based.
Truth: Physiotherapy is a science-based profession that relies on the latest research and evidence-based practice. Physiotherapists undergo rigorous training and are required to stay up-to-date with current research and best practices to provide the highest quality of care.
These are just a few of the many misconceptions surrounding physiotherapy. By debunking these myths, we hope to encourage individuals to seek physiotherapy treatment without fear or hesitation. If you have any questions or concerns about physiotherapy, consult with a licensed physiotherapist who can provide accurate information and tailor treatment to your specific needs. Remember, physiotherapy is for everyone and can help you achieve your goals for health and wellness.
By Omid Ebrahimi, Registered Physiotherapist | <urn:uuid:13f196ed-d7ed-49af-bfb6-9b60df3e72a6> | CC-MAIN-2023-40 | https://physiodna.com/physiotherapy-setting-the-record-straight-and-debunking-the-myths/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510130.53/warc/CC-MAIN-20230926011608-20230926041608-00291.warc.gz | en | 0.938515 | 592 | 3.015625 | 3 |
|Source|Reign (BC)|Chronology|
|---|---|---|
|Cassin 1966|1192–1180|middle chronology|
|Gasche et al. 1998|1189–1178|ultra-short chronology|
According to the Assyrian King List, he was the son of Ilī-padâ and a descendant of Eriba-Adad, and had gone into exile in Karduniaš (Babylonia). His father Ilī-padâ was a descendant of Eriba-Adad I (1390–1364) and served as sukallu rabi'u and šar of Hanilgabat under Aššur-nārāri III (1202–1197). He must have held great power, since Adad-šuma-uṣur of Babylon addresses him as King of Assyria. Borger assumes that this was simply meant as an insult to the real king, which would be consistent with the overall tone of the letter, which accuses the kings of laziness and drunkenness and which, surprisingly, survives as a Neo-Assyrian copy. Ninurta-apil-ekur describes himself in some of his inscriptions as a son of Eriba-Adad.
Ninurta-apil-ekur came back from exile in Babylon and, probably with the support of the Kassites, seized the throne. A severe earthquake is documented during his reign that destroyed the Ishtar temple in Assur.
Ninurta-apil-ekur ruled for either 13 years (older Assyrian King List, NaKL) or three years. Hornung has chosen the short chronology, Michael Rowton the long one. Most researchers prefer the shorter duration, but Freydank assigns eleven līmu (eponym years) to his rule (Saporetti, however, not a single one). No complete lists of eponyms are available for the Middle Assyrian period (approx. 1500–1000 BC). Recently, Jaume Llop presented a sequence of eponyms that confirms the long reign, though it does not cover the entire 13 years.
In the following year, Ninurta-apil-ekur's son Aššur-dan served as eponym. Llop assumes that the Middle Assyrian kings held this office in their first year of reign.
His daughter Muballitat [...] was a high priestess; his son Aššur-dan I became king after him.
- M. Astour: The Hurrian king at the siege of Emar. In: Mark W. Chavalas (ed.): Emar, the history, religion and culture of a Syrian town in the late Bronze Age. Bethesda 1996, pp. 25–26.
- J. A. Brinkman: Materials and studies for Kassite history. Oriental Institute of the University of Chicago, Chicago 1976.
- Helmut Freydank: Contributions to Middle Assyrian chronology and history. Berlin 1991 (Writings on the History and Culture of the Ancient Orient 21).
- Helmut Freydank: On the eponym sequences of the 13th century BC in Dūr-Katlimmu. In: Archive for Orient Research 32, 2005, pp. 45–56.
- H. Gasche: Dating the fall of Babylon: A re-appraisal of Second-Millennium chronology: A joint Ghent-Chicago-Harvard project. Ghent and Chicago 1998 (Mesopotamian History and Environment, Series 2, Memoires 3).
- Albert Kirk Grayson: Assyrian Royal Inscriptions. Part 1: From the beginning to Ashur-resha-ishi I. Harrassowitz, Wiesbaden 1972.
- E. Hornung: Studies on the chronology and history of the New Kingdom. Wiesbaden 1964.
- Jaume Llop: MARV 6, 2 and the eponym sequences of the 12th century. In: Journal of Assyriology 98, 2008, pp. 20–25.
- Joan Oates: Babylon. Bergisch Gladbach 1983, ISBN 381120727X, pp. 117, 122 f.
- Julian Reade: Assyrian king-lists, the royal tombs of Ur, and Indus origins. In: Journal of Near Eastern Studies 60, 2001, pp. 1–29.
- M. B. Rowton: The material from Western Asia and the chronology of the Nineteenth Dynasty. In: Journal of Near Eastern Studies 25/4, 1966, pp. 240–258.
- C. Saporetti: Gli eponimi medio-assiri. Malibu 1979 (Bibliotheca Mesopotamica 9).
- M. B. Rowton: The Background of the Treaty between Ramesses II and Hattušiliš III. In: Journal of Cuneiform Studies 13/1, 1959, p. 5.
- H. Freydank: Contributions to Middle Assyrian Chronology and History. Berlin 1991 (Writings on the History and Culture of the Ancient Orient 21), p. 195.
- Jaume Llop: MARV 6, 2 and the eponym sequences of the 12th century. In: Journal of Assyriology 98, 2008, p. 20.
|Predecessor|Title|Successor|
|---|---|---|
|Enlil-kudurrī-uṣur|Assyrian king|Aššur-dan I|

Born: 13th century BC; died: 12th century BC.
From Our 2009 Archives
Cardiovascular Fitness May Sharpen Mind
Study Shows Link Between Healthy Body and Academic Success
By Jennifer Warner
Reviewed By Elizabeth Klodas, MD, FACC
Nov. 30, 2009 -- A healthy body may be the first step to achieving a healthy mind and appetite for learning.
Researchers say the results suggest that promoting physical and cardiovascular fitness as a public health strategy could maximize educational achievement as well as prevent disease at the societal level.
"We believe the present results provide scientific support for educational policies to maintain or increase physical education in school curricula as a means to stem the growing trend toward a sedentary lifestyle, which is accompanied by an increased risk for diseases and perhaps intellectual and academic underachievement," write researchers Maria Aberg and colleagues of the University of Gothenburg in Gothenburg, Sweden in the Proceedings of the National Academy of Sciences.
The study followed more than 1 million men born between 1950 and 1976 who were conscripted for military service in Sweden at age 18. The sample included 3,147 twin pairs, of which 1,432 were identical.
Physical fitness and intelligence were assessed at the time of conscription and linked to national databases on school achievement and socioeconomic status later in life.
The results showed that cardiovascular fitness, but not muscular strength, was associated with cognitive performance on many different measures.
For example, higher scores on measures of cardiovascular fitness were linked to higher scores on intelligence and academic achievement.
When researchers looked at twins, they found that environmental factors rather than genetics appeared to play the largest role in these associations. Non-shared environmental influences accounted for 80% or more of differences in academic achievement, whereas genetics accounted for less than 15% of these differences.
In addition, cardiovascular fitness changes between age 15 and 18 predicted cognitive performance at age 18, and cardiovascular fitness at age 18 predicted academic achievement and socioeconomic status later in life.
Researchers say many previous studies have linked physical fitness with cognitive performance in animals and humans but most have focused on young children or adults. Few studies have looked at the effect of physical and cardiovascular fitness on academic achievement in young adulthood, a critical period for cognitive development.
SOURCES: Aberg, M. Proceedings of the National Academy of Sciences, Dec. 8, 2009; vol 106: pp 20906-20911.
"..astronomers can tell the temperature of the central regions of the Sun and of many other stars within a few percentage points and be quite sure about the figures they quote." - A Star Called the Sun, George Gamow.
The Cone Nebula is a column of dark dust, six light-years long, near some newly formed hot blue stars. The edge of the column, especially the tip, is bright with red light from ionized hydrogen. This nebula and the cluster that illuminates it are about 2600 light-years away in Monoceros. Credit: Michael Gariepy/Adam Block/NOAO/AURA/NSF.
The cone nebula shows a star at the top of a conical-shaped dusty plasma, festooned with lights. The image strikes an instinctive chord—the mythical celestial world mountain around which the stars revolve; the cosmic (Christmas) tree with lights; fireworks displays against a night sky. Why? Because it reflects back to us our own prehistory when a strange drama was taking place in the sky.
The Earth was enveloped in a towering polar auroral plasma, flashing with light and with bright celestial bodies at its distant focus. How do we know? Prehistoric mankind around the globe chiselled representations of what they saw into solid rock. The effort required was prodigious, the motivation extraordinary. Modern astronomy seems unable to address the issue, offering instead a comfortable myth of cosmic stability.
Twentieth century technologies have enabled astronomers to see the stars and planets ever more clearly, but their perceptions are clouded by centuries-old beliefs about celestial harmony; that the heat and light of stars is due to some kind of internal fire; that we understand gravity sufficiently to declare that it obeys a universal law and alone governs cosmic evolution. These perceptions have become dogma and dogma hinders progress. So it is not surprising that a growing number of critics see gravitational cosmology of the “Big Bang” as sterile and irrelevant to any real understanding of our place and history in the universe. The fact that it has nothing to say about life itself—the deepest mystery of the universe—is just one of countless signs that the present field of view is too limited.
For the moment I want to feature two reports in December that show astronomers do not understand stars. The view of stars as ‘fires in the sky’ was understandable when chemical fires were the only source of light that we knew and the only question we asked of stars was ‘how do they shine?’ But that view failed when we realized that stars had to burn steadily for aeons. The discovery of nuclear energy offered an answer to this new question without having to re-evaluate the accumulation of other assumptions about stars.
The thermonuclear assumption was never proved, and observations that contradicted it were never crucial enough to compel astronomers to doubt it. It came full circle and led to a futile decades-long effort to mimic the conjectured process to provide power on the Earth. All the while, a clue to a better answer stared the experimenters and theoreticians in the face: they were using electricity to trigger thermonuclear reactions; maybe the Sun was doing that, too.
We use electricity as a convenient means of lighting and heating that doesn’t require the power to be generated on site. We’ve discovered that thin transmission lines can carry great amounts of power over long distances from generator to light bulb. Nature is parsimonious in achieving its ends; why wouldn’t stars get power from natural transmission lines? The satisfying answer is that they do. Radio astronomers can trace the telltale magnetic fields in deep space. The magnetic fields mark filamentary cosmic ‘transmission lines’ carrying electrical power between galaxies and stars.
Planetary nebula M2-9. The complex Z-pinch hourglass shape of the external circuitry of a star becomes visible in a planetary nebula where the galactic power is high enough or the plasma is dusty. Gravitational models of stars fail to explain the fine detail of planetary nebulae.
NASA’s Dim View of Stars
The latest report from NASA is a fitting end to The Year of The Electric Universe. It demonstrates that the electric model of stars envisaged the latest observations while NASA researchers again mask their assumptions by stating them as facts. Ironically, the report refers to some stars as “low-energy fluorescent light bulbs.”
As usual, all the science reporting agencies repeat NASA’s words without critical comment. Mainstream media rarely do investigative science journalism. The NASA report follows, along with my comments.
Astronomers Find the Two Dimmest Stellar Bulbs
This artist's concept shows the dimmest star-like bodies currently known -- twin brown dwarfs referred to as 2M 0939. The twins, which are about the same size, are drawn as if they were viewed close to one of the bodies. Picture credit: NASA/JPL-Caltech.
It's a tie! The new record-holder for dimmest known star-like object in the universe goes to twin "failed" stars, or brown dwarfs, each of which shines feebly with only one millionth the light of our sun.
Comment: As we shall see, the notion of “twin failed stars” is a theoretical assumption and not a fact!
In an Electric Universe there is no such thing as a “failed” star. They have no thermonuclear “engine” to fail. All bodies in the galaxy receive external electrical energy from the galactic circuit. Radio astronomers (for the most part unwittingly) trace the circuit by mapping the magnetic fields of galaxies and stars, which fields are generated by the electric current flowing in the circuit. The circuits are unrecognized due to the mistaken conviction that magnetic fields can be ‘frozen in’ to plasma. The ‘father’ of plasma physics, Hannes Alfvén, appealed against this mistaken notion in his Nobel Prize acceptance speech in 1970. But to give up this false belief would require discarding decades of theoretical work and reputations built upon it.
The report continues:
Previously, astronomers thought the pair of dim bulbs was just one typical, faint brown dwarf with no record-smashing titles. But when NASA's Spitzer Space Telescope observed the brown dwarf with its heat-seeking infrared vision, it was able to accurately measure the object's extreme faintness and low temperature for the first time. What's more, the Spitzer data revealed the brown dwarf is, in fact, twins.
"Both of these objects are the first to break the barrier of one millionth the total light-emitting power of the sun," said Adam Burgasser of the Massachusetts Institute of Technology, Cambridge. Burgasser is lead author of a new paper about the discovery appearing in the Astrophysical Journal Letters.
Brown dwarfs are the misfits of the cosmos. They are compact balls of gas floating freely in space, but they are too cool and lightweight to be stars, and too warm and massive to be planets. The name "brown dwarf" comes from the fact that these small, star-like bodies change color over time as they cool, and thus have no definitive color. In reality, most brown dwarfs would appear reddish if they could be seen with the naked eye. Their feeble light output also means they are hard to find. The first brown dwarf wasn't discovered until 1995. While hundreds are known today, astronomers say there are many more in space still waiting to be discovered.
Comment: All stars are an electrical phenomenon. There are no “misfits” in an Electric Universe. All of the assumptions being heaped upon the meagre photons received from deep space merely serve, as usual, to force fit the data to the standard model of stars. The very name, brown “dwarf,” assumes that these stars are “compact balls of gas floating freely in space.”
In stark comparison, the electric model describes them as “huge” because the light from a star is a plasma discharge phenomenon with only a loose relationship to the physical size of the star and a strong dependence on the electrical environment. Brown dwarfs do not simply cool down over time and wink out. They are externally powered electric lights.
In December 1999 I wrote, “The apparent size and color of an electric star is an electrical phenomenon. If Jupiter's magnetosphere were lit up it would appear the size of the full Moon… The light of a red star is due to the distended anode glow of an electrically low-stressed star… Red Giants are a more visible and scaled-up example of what an L-type Brown Dwarf star might look like close-up.”
The report continues:
Astronomers recently used Spitzer's ultrasensitive infrared vision to learn more about the object, which was still thought to be a solo brown dwarf. These data revealed a warm atmospheric temperature of 565 to 635 Kelvin (560 to 680 degrees Fahrenheit). While this is hundreds of degrees hotter than Jupiter, it's still downright cold as far as stars go. In fact, it is one of the coldest star-like bodies measured so far.
To calculate the object's brightness, the researchers had to first determine its distance from Earth. After three years of precise measurements with the Anglo-Australian Observatory in Australia, they concluded that the star is the fifth-closest known brown dwarf to us, 17 light-years away toward the constellation Antlia. This distance, together with Spitzer's measurements, told the astronomers the object was both cool and extremely dim.
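As an aside, distances like the quoted 17 light-years come from trigonometric parallax, where distance in parsecs is the reciprocal of the annual parallax angle in arcseconds. The report does not give the measured angle, so this sketch simply back-computes the parallax implied by the stated distance (illustrative only):

```python
# Parallax-distance relation: d [parsecs] = 1 / p [arcseconds].
# The measured angle is not quoted in the report, so we back-compute the
# parallax implied by the stated ~17 light-year distance.
LY_PER_PARSEC = 3.26156  # light-years in one parsec

d_ly = 17.0
d_pc = d_ly / LY_PER_PARSEC   # ~5.2 parsecs
parallax_arcsec = 1.0 / d_pc  # ~0.19 arcseconds

print(f"{d_pc:.2f} pc, parallax {parallax_arcsec:.3f} arcsec")
```

A fifth of an arcsecond is a large parallax by stellar standards, which is why years of careful ground-based astrometry sufficed to pin it down.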
But something was puzzling. The brightness of the object was twice what would be expected for a brown dwarf with its particular temperature. The solution? The object must have twice the surface area. In other words, it's twins, with each body shining only half as bright, and each with a mass of 30 to 40 times that of Jupiter. Both bodies are one million times fainter than the sun in total light, and at least one billion times fainter in visible light alone.
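The “twice the surface area” step follows from the Stefan-Boltzmann relation, L = 4πR²σT⁴, that conventional blackbody modelling uses: at a fixed temperature, luminosity scales with emitting area, so twice the expected brightness implies twice the area. A rough numerical sketch (the radius and temperature are illustrative round numbers, not values taken from the paper):

```python
import math

# Stefan-Boltzmann: blackbody luminosity L = 4*pi*R^2 * sigma * T^4.
SIGMA = 5.670374419e-8   # W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W

def luminosity(radius_m, temp_k):
    """Total radiated power of a blackbody sphere, in watts."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * temp_k**4

R_JUPITER = 6.9911e7  # brown dwarfs are roughly Jupiter-sized, m
T = 600.0             # mid-range of the quoted 565-635 K

single = luminosity(R_JUPITER, T)
pair = 2.0 * single   # two equal bodies at the same temperature

print(single / L_SUN)  # ~1.2e-6: about one millionth of the Sun, as quoted
print(pair / single)   # exactly 2: double the area, double the light
```

Note that the one-millionth-of-the-Sun figure drops straight out of these round numbers, which is the same arithmetic the researchers run in reverse to infer the twin.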
"These brown dwarfs are the lowest power stellar light bulbs in the sky that we know of," said Burgasser. "And like low-energy fluorescent light bulbs, they emit most of their light in a narrow range of wavelengths, in this case in the infrared."
Comment: Burgasser’s description of brown dwarfs as “low-energy fluorescent light bulbs” is the closest he comes to the truth. Like fluorescent lights, brown dwarfs require electricity! And the solution to the problem is simple—a single red dwarf with a distended red anode-glow can provide the extra brightness without postulating an unlikely twin.
The report continues:
According to the authors, there are even dimmer brown dwarfs scattered throughout the universe, most too faint to see with current sky surveys. NASA's upcoming Wide-Field Infrared Survey Explorer mission will scan the entire sky at infrared wavelengths, and is expected to uncover hundreds of these inconspicuous characters.
"The holy grail in the study of brown dwarfs is to find out how low you can go in terms of temperature, mass and brightness," said Davy Kirkpatrick, a co-author of the paper at NASA's Infrared Processing and Analysis Center at the California Institute of Technology, Pasadena. "This will tell us more about how brown dwarfs form and evolve."
Comment: In an Electric Universe, stars do not evolve. The notion of stellar evolution and the age of stars is an invention of the standard thermonuclear model of stars. And for so long as scientists cling to an unworkable theory of stellar formation by gravitational accretion, new findings will serve only to add to the confusion.
I predict that further discoveries by the Wide-Field Infrared Survey Explorer in this category will require the same ad hoc assumption that the radiant surface area, based on standard theory, must be accommodated by multiple star systems. The odds against finding so many multiple systems will become astronomical.
The Hertzsprung-Russell diagram is a plot of observations which must be explained by the chosen model of stars. The electrical model of stars reverses the direction of the x-axis to show the direct relationship between an increase in current density at the surface of a star and the higher temperature of that star, reflected by its change in color from red hot to white hot to blue hot.
The main sequence is the backbone of the observations but there are sharp discontinuities between the main sequence, the giant stars and white dwarfs. In the standard thermonuclear model of stars, the explanations for these discontinuities are beset by many observational discrepancies and ad hoc patches.
In the electric star model such discontinuities are a natural feature of a plasma discharge. Main sequence stars operate like arc lights in a cinema projector. The plasma discharge at their photospheres is in arc mode. The main sequence is a direct result of increasing the current density at the surface of a star.
The white dwarfs operate more like fluorescent lights, where a fainter coronal glow-mode discharge provides the light. If you can imagine the Sun’s bright photosphere being replaced by faint white coronal light, you have the picture. White ‘dwarfs’ are not dwarfs at all. They are faint, not because they are small but because they produce their light in a different mode of plasma discharge from stars like the Sun. The current density scale for white dwarfs is different to that of the main sequence and this is why they are scattered along a lower-luminosity sequence.
In the case of giant stars, the star’s ‘surface’ is bloated like the glow of a neon light as the star seeks to satisfy its current requirements. The red light comes from a low current density at the large diameters of the (virtual) anode of these stars.
The stellar thermonuclear evolutionary story is that a star of intermediate mass (1-8 solar masses) terminates its life as an Earth-sized white dwarf after the exhaustion of its nuclear fuel. During the transition from a nuclear-burning star to the white dwarf stage, the star collapses to about one fiftieth of the solar radius and becomes very hot. Many such objects with surface temperatures around 100,000 Kelvin (K) are known. Theories of stellar evolution predict that these stars can be much hotter. However, the probability of catching them in such an extremely hot state is low, because this phase is short-lived.
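Under the same conventional blackbody assumptions, it is easy to see why an Earth-sized star at 100,000 K is called under-luminous despite its heat: the T⁴ boost from a surface some 17 times hotter than the Sun's is almost cancelled by a radius roughly 100 times smaller. A quick check with standard values (not figures from the article):

```python
import math

# L = 4*pi*R^2 * sigma * T^4 for an Earth-sized sphere at 100,000 K,
# compared with the Sun. Standard constants; illustrative only.
SIGMA = 5.670374419e-8  # W m^-2 K^-4
L_SUN = 3.828e26        # W
R_EARTH = 6.371e6       # m (typical white-dwarf radius in the standard model)
T_WD = 1.0e5            # K

L_wd = 4.0 * math.pi * R_EARTH**2 * SIGMA * T_WD**4
print(L_wd / L_SUN)  # ~7.6: only a few solar luminosities despite 100,000 K
```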
An article was published on December 12 this year in Astronomy & Astrophysics Letters which claims to have discovered one of these white dwarfs, “one of the hottest stars ever known with a temperature of 200,000 K at its surface.” The temperature is deduced from the emission from nine-fold ionized calcium atoms thought to be in the star’s photosphere. It is the highest ionization level of a chemical element ever discovered in a photospheric stellar spectrum.
The stellar atmosphere modelling of a white dwarf based on thermodynamic equilibrium will give erroneous conclusions because charged particles in an electric field will be dethermalized (their random motion reduced while their kinetic energy increases). So it easy for a white dwarf to multiply ionize calcium atoms because the electrical energy required is equivalent to a mere 211 electron volts and not random thermal energy equal to a temperature of 200,000 to 300,000 K. Using thermal (mechanical) energy is the most difficult and unlikely way of explaining the data.
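The energy comparison in this paragraph can be put in numbers: the characteristic thermal energy at temperature T is about k_B·T, and converting the quoted temperatures to electron volts shows the gap that equilibrium models must bridge through the high-energy tail of the particle distribution (the 211 eV figure is the quoted ionization step, not re-derived here):

```python
# Characteristic thermal energy k_B * T, expressed in electron volts, for the
# quoted photospheric temperatures, versus the ~211 eV ionization step.
K_B_EV = 8.617333262e-5  # Boltzmann constant, eV per kelvin
E_ION_EV = 211.0         # quoted energy for the calcium ionization stage

for temp_k in (200_000.0, 300_000.0):
    e_thermal = K_B_EV * temp_k
    print(f"{temp_k:.0f} K -> k_B*T = {e_thermal:.1f} eV "
          f"({E_ION_EV / e_thermal:.0f}x below 211 eV)")
# A ~211 V potential drop delivers that energy to each charge directly,
# whereas a thermal plasma at these temperatures must rely on the
# high-energy tail of its distribution to reach it.
```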
The white dwarf also challenges the standard stellar evolution concepts because it has a chemical surface composition rich in calcium and helium that is not predicted by stellar evolution models. A paper in the Astrophysical Journal of February 2005 shows the surprise and confusion created by this star. As usual, mechanical energy in the form of a supposed “shocked wind” is proposed as the origin of weak X-ray emission at 1 keV. And despite the almost infinite number of “knobs” available to twiddle on the standard model, a match with observations has not been reached.
The obstacle to an understanding of white dwarfs comes from using heat (mechanical energy) from within a star to explain highly energetic phenomena outside the star. It is precisely the difficulty encountered with the Sun and its phenomenally hot corona. The conceptual hurdle is exemplified by the paradigm set out in the introduction to the above paper: “The hot 10⁶–10⁷ K coronae on the Sun and other late-type stars are believed to be sustained by mechanical energy in their outer convection zones, which is dissipated at the surface through the medium of magnetic fields generated and amplified by differential rotation and convection in the interior.” [Emphasis added].
In other words, our present understanding of the Sun and therefore most other stars is based on this simple belief that to this day has not been verified. In this circumstance it would be scientifically responsible to question that belief when new data fails to satisfy predictions. As Eddington, the theoretician who gave us the standard model of stars, wrote of white dwarfs when first discovered, “Strange objects, which persist in showing a type of spectrum entirely out of keeping with their luminosity, may ultimately teach us more than a host which radiates according to rule.” But beliefs are very difficult to shift.
In July this year I wrote, “A white dwarf is a star that is under low electrical stress so that bright ‘anode tufting’ is not required. The star appears extremely hot, white and under-luminous because it is equivalent to having the faint white corona discharge of the Sun reach down to the star’s atmosphere. As usual, a thin plasma sheath will be formed between the plasma of the star and the plasma of space. The electric field across the plasma sheath is capable of accelerating electrons to generate X-rays when they hit atoms in the atmosphere. And the power dissipated is capable of raising the temperature of a thin plasma layer to tens of thousands of degrees.”
Of course, this model will need to be reviewed in the light of new data. But at least it is a new, quite different model that easily meets the basic observational fact of high-energy phenomena outside a star. The strong magnetic fields of some white dwarfs are diagnostic of external electric currents. The spectral line broadening indicates the presence of a strong electric field in the light-emitting region. The electrical energy focussed on the white dwarf is dissipated in an extensive, cool corona instead of a hot, arc-tufted photosphere.
So it is significant that the spectrum of the white dwarf in the cited paper was interpreted as “evidence that the X-rays originated not from deeper atmospheric layers but from a coronal plasma encircling the star.” The white dwarf “became the first white dwarf thought to have a corona, albeit a cool one.” The weak X-ray emission is attributed, in ad hoc fashion, to “a shocked wind.” It’s like a dentist using a jet engine to X-ray your teeth.
The presence of anomalies in the star’s spectrum, both in the elements present and their state of ionization, is more accurately explained by the electrical model of stars, in which stars have a cool core of heavy elements. The authors note, “a coronal model requires a total luminosity more than two orders of magnitude larger than that of the star itself.” An electric white ‘dwarf’ emits light from both the corona and the thin, brighter plasma sheath that forms its photosphere.
An electric white dwarf is a far simpler model than the “collapsed degenerate stellar corpse” model. The star is not “dying.” It has not evolved from another type of star. It is not an impossible object—a Sun squeezed to twice the diameter of the Earth. Stars cannot suffer gravitational collapse to a theoretical form of ‘degenerate matter’ that has never been observed—where atoms are squeezed together so strongly that only electrons in adjacent atoms prevent further collapse because they cannot share orbits. Just how far-fetched this notion is can be gauged if we consider that the electric repulsive force exceeds the gravitational force by 39 orders of magnitude!!
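The “39 orders of magnitude” figure can be checked against the textbook comparison of the electrostatic and gravitational forces between a proton and an electron (the particular particle pair is an assumption; the text does not specify one). A quick sketch in Python, using rounded CODATA constants:

```python
import math

# Approximate physical constants (SI units, rounded CODATA values)
K_E = 8.988e9          # Coulomb constant, N·m²/C²
G = 6.674e-11          # gravitational constant, N·m²/kg²
E_CHARGE = 1.602e-19   # elementary charge, C
M_ELECTRON = 9.109e-31 # electron mass, kg
M_PROTON = 1.673e-27   # proton mass, kg

def force_ratio():
    """Ratio of the electrostatic to the gravitational force between
    a proton and an electron. Both forces fall off as 1/r², so the
    separation distance cancels and the ratio holds at any distance."""
    electric = K_E * E_CHARGE ** 2
    gravitational = G * M_ELECTRON * M_PROTON
    return electric / gravitational

ratio = force_ratio()
print(f"F_electric / F_gravity ≈ {ratio:.2e}")          # ≈ 2.27e+39
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ≈ 39.4
```

The ratio comes out near 2.3 × 10^39, consistent with the figure quoted above.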
Subrahmanyan Chandrasekhar was awarded the Nobel Prize in 1983 for his theoretical work on electron degenerate white dwarfs, which predicted the existence of a relationship between mass and radius for a degenerate white dwarf. This theoretical mass-radius relation is a generally accepted underlying assumption in nearly all studies of white dwarf properties. In turn, these studies, including the white dwarf mass distribution and luminosity function, are foundations for such varied fields as stellar evolution and galactic formation. The notion of stellar collapse led on to more extreme theoretical fictions—neutron stars and black holes. The damage wrought by such an assumption for our understanding of stars and the cosmos cannot be overstated! A recent paper in The Astrophysical Journal warned, “One might assume that a theory as basic as stellar degeneracy rests on solid observational grounds, yet this is not the case. Comparison between observation and theory has shown disturbing discrepancies.” The paper cited here adds to the discrepancies.
White ‘dwarfs,’ on the other hand, are physically larger than red dwarfs but generally smaller than the Sun. Lacking bright anode tufting they have an extended coronal type discharge and photosphere that emits faint whitish light, ultraviolet light and mild X-rays. The spectral lines are broadened, sometimes to the point of disappearance, due to the coronal electric field. This gives the misleading impression that hydrogen (whose spectral lines are smeared the most) is missing in many of these stars and that therefore they are remnants of larger stars that have lost or burned their hydrogen fuel.
Significantly, the larger the white dwarf, the lower the current density and the lower the apparent temperature. This trend has been noted with some puzzlement by researchers. White dwarfs the size of the Sun and a little larger are stars under lower electrical stress than normal. This may occur, for example, in binary star systems like that of Sirius, where one star usurps most of the available electrical energy.
There are no collapsed stars of extraordinarily high density. The story of stellar evolution is fiction. The number of small red and white stars exceeds the number of bright stars. They are formed in the same Z-pinch mechanism in dusty plasma as are all other stars. Or they may be born later by parturition (nova) of an unstable larger star. The economy and success of the Electric Universe model is readily apparent.
The Electric Universe paradigm continues its successful run of discovery and prediction in 2008
In January I declared 2008 The Year of the Electric Universe. And so it has proved to be: confirming and supportive evidence arrives almost daily. This site, along with the associated THUNDERBOLTS.INFO website, attracts tens of thousands of visitors each month, and this month set a new record. The scientific literacy of visitors is exceptionally high, and a consistent pattern has emerged, verified by hundreds of comments: when newcomers compare the direct evidence for the Electric Universe, offered here and in “Thunderbolts Picture of the Day,” to conventional interpretations of the same data, the conclusion becomes clear. We do indeed live in an Electric Universe.
The Thunderbolts Project is attracting volunteers and people wanting to undertake serious study to further their understanding of plasma and the Electric Universe. New books, educational e-books and videos are being produced and a Japanese version of Thunderbolts of the Gods is due to go on sale in that country early in the new year.
The future is bright in an Electric Universe!
- Wal Thornhill
New Illuminati – http://nexusilluminati.blogspot.com
By Shruti Kapoor
Mewat district in Haryana, India, has remained undeveloped for a long time. The footsteps of change have set in with the women of village institutions in Mewat forming block-level sangathans (women’s collectives). They include members of the School Management Committee (SMC), the Village Health, Sanitation and Nutrition Committee (VHSNC), and panchayats (village councils). The aim of the sangathans is to speed up development in their local communities. They take responsibility for improving the existing governance situation through the collective efforts of their members.
In the short span of four months, the sangathans in Mewat have made a few pertinent services available to villagers. For instance, sangathan women have revived a VHSNC and utilized its funds to undertake activities related to sanitation in the village. Other sangathan women were instrumental in getting roads constructed and restoring the electricity supply in villages.
There are many such results to the credit of the sangathans in Mewat. The collaborative effort of the sangathan women is a result of the training provided by Sehgal Foundation.
With the aim of highlighting the achievements of the sangathan women and providing them a platform to interact with government officials, Sehgal Foundation organized an interface workshop in December 2014. Deputy Commissioner of Mewat Ashok Sangwan and Civil Surgeon Dr B K Rajora attended the workshop. They interacted with the women and inspired them to work harder toward community development.
The government officials shared some health-related good practices with the women, such as eating a balanced diet, and discussed the importance of toilets. They motivated the women to avail benefits under Swachh Bharat Mission (sanitation program) and Janani Suraksha Yojna (scheme for women welfare).
The interface workshop served as a tool to increase confidence among women and motivate them to work collaboratively to solve community issues and lay a development roadmap for their village. | <urn:uuid:919c090d-7ddb-4397-9b0f-59742aff170b> | CC-MAIN-2023-40 | https://www.smsfoundation.org/mahila-sangathans-motivate-women-to-work-for-development/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510214.81/warc/CC-MAIN-20230926143354-20230926173354-00857.warc.gz | en | 0.958273 | 406 | 2.515625 | 3 |
The United States Congress, with bipartisan support, has passed a bill titled the Energy Efficiency Improvement Act of 2015. The bill aims to promote energy efficiency in commercial buildings in three ways:
- By developing a new voluntary energy program within the current Energy Star framework
- By adopting new regulations for smart grid-enabled water heaters
- By promoting benchmarking and public disclosure of energy usage for buildings, so tenants and building owners can better understand the current energy performance of their space
While it is unclear from the current language of the bill whether or how energy-usage data for individual buildings will be made accessible to the general public, the heating, ventilation and air conditioning (HVAC) industry could benefit greatly from such legislation if that turns out to be the case.
Currently in the United States, Energy Star certifications act as a way for building owners to show the general public that they are doing their part in limiting energy usage. However, these programs are voluntary and there are no penalties when standards are not reached.
If information on energy usage of buildings was accessible to the public, building owners and businesses that were found to use an exorbitant amount of energy could be put under pressure by environmental advocacy groups and the general public to find ways to cut power consumption. This could be an important development in promoting energy efficiency as many political leaders at the state and federal levels are unwilling to pass prescriptive and punitive energy reform measures for the fear of being branded anti-business.
The bad public relations that could arise from the disclosure of energy consumption could force building owners to upgrade old, inefficient HVAC equipment at a faster rate. For example, in the United States nonresidential retrofit market, IHS forecast total air conditioner units to grow from 475,000 in 2014 to 513,000 in 2017, a compound annual growth rate (CAGR) of 2.6%. With public pressure as an additional driving force behind the replacement of inefficient air conditioning units, the CAGR could reach closer to 5.0%.
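The CAGR arithmetic cited above is easy to check in a few lines of Python. (The 5.0% figure is the article's hypothetical upper bound, not an IHS forecast.)

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that
    takes `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# IHS figures cited above: 475,000 units (2014) -> 513,000 units (2017)
rate = cagr(475_000, 513_000, 3)
print(f"CAGR: {rate:.1%}")  # -> CAGR: 2.6%

# At the hypothetical 5.0% rate instead, the 2017 volume would be:
projected = 475_000 * (1 + 0.05) ** 3
print(f"{projected:,.0f} units")  # -> 549,872 units
```

Three growth periods (2014 to 2017) are assumed, which reproduces the quoted 2.6% figure.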
In addition to higher spending in HVAC equipment, buildings could also invest in more comprehensive HVAC controls systems and spend more on the service and maintenance of the new equipment and controls. This would lead to a further boost in retrofit and replacement HVAC business in cities with a high number of commercial buildings. | <urn:uuid:f03211fa-5318-4e06-bb57-3e4bf7e85b4d> | CC-MAIN-2020-16 | https://technology.informa.com/529760/energy-efficiency-improvement-act-of-2015-could-help-hvac-market | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00195.warc.gz | en | 0.961859 | 474 | 2.5625 | 3 |
The National Mapping and Resource Information Authority (NAMRIA) is the central mapping agency of the Philippines.
It is an agency attached to the Department of Environment and Natural Resources (DENR) under Executive Order No. 192 (June 10, 1987), which integrated the functions and powers of four government agencies: the Natural Resources Management Center (NRMC), the National Cartography Authority (NCA), the Bureau of Coast and Geodetic Survey (BCGS), and the Land Classification Teams of the then Bureau of Forest Development (since transformed into a Forest Management Bureau performing staff functions). The merger was intended to provide the Department and the government with map-making services.
BCGS was under the Department of National Defense before the 1987 reorganization. It was composed of personnel in the uniformed service, Commissioned Officers and Enlisted Personnel, a structure retained to this day. It is now the Hydrography Branch of NAMRIA, mandated to conduct hydrographic and coastal surveys and to gather physical oceanographic data to produce nautical charts and other maritime publications, primarily for the safety of navigation at sea, and to provide the research community with valuable data. It also provides services to delineate maritime boundaries.
Trichoderma is a genus of soil-dwelling fungi found all over the world that are highly effective at colonizing many kinds of plant roots and at inhibiting fungi that cause many types of disease. It was one of the first biofungicides commercially available.
One strain in particular, T. harzianum T-22, is the result of 15 years of research at Cornell University to create an even more powerful type of Trichoderma.
Strain T-22 will form an intimate association with plant roots and colonize them. This colonization places the fungus in a good location to outcompete and parasitize other fungi in the soil.
This fungus can inhibit a who’s who of fungal soil-borne pathogens, including Fusarium (wilts), Rhizoctonia (root rot), Sclerotinia (blight), and Pythium and Phytophthora (damping off).
Trichoderma works best on plants that are not thriving. If your plants are already at their peak, you may not see an effect from adding this microbe.
However, if conditions are suboptimal, yield increases have ranged from 10-20% to as much as 300%.
The guide below describes exactly how this fungus improves plant growth, and provides you with tips on how to best use it.
What You Will Learn
- How Trichoderma Interacts with Plants
- How Trichoderma Interacts with Other Microorganisms
- Resistance to Pesticides
- How to Use Trichoderma in Your Garden
- The Global Biocontrol Fungus
How Trichoderma Interacts with Plants
Plant Root Colonization
Once in the soil, this fungus colonizes the roots of plants. By growing on the roots and in the rhizosphere, it forms a physical barrier to prevent the growth of fungi that would otherwise cause disease on the plant.
Plants frequently produce chemicals to defend themselves, and Trichoderma is resistant to many of them, which helps it colonize the roots. And it does this without interfering with other microbes that help plants, such as mycorrhizae or Rhizobium (nitrogen-fixing bacteria).
Trichoderma can improve plant health even in the absence of pathogens. The fungus grows best in soil that is acidic, and it helps create such an environment by secreting organic acids.
These acids have an additional effect that greatly benefits plants: they can solubilize phosphates and mineral ions, such as iron, magnesium, and manganese. This means they help dissolve these minerals, making it easier for plants to absorb them. Such nutrients are often in short supply in the soil.
The increase in the yield of the plants is greater when the soil is really poor to start with.
Stimulation of Plant Defense Mechanisms
You may not know that plants have immune systems. They are able to sense invasion by pathogens and activate cascades of responses to produce chemicals to protect themselves.
Trichoderma has been shown to activate plant defense responses, which enables the plant to control some infections above the ground; its effects are not limited to soil-borne pathogens. An example is Botrytis, a debilitating aboveground fungus that is sometimes controlled using Trichoderma.
How Trichoderma Interacts with Other Microorganisms
Part of what makes Trichoderma such an effective biocontrol agent is that it uses a diversity of mechanisms. That makes it highly difficult for its target organisms to evolve resistance, since they would have to become resistant to a number of different mechanisms simultaneously.
Parasitism of Other Fungi
Trichoderma can directly parasitize other fungi. First, it attaches to them. Then it coils around them and produces structures that can penetrate them. In addition, this fungus produces enzymes that break down the fungal cell walls. This process is known as mycoparasitism, with myco meaning fungi.
Most fungal cell walls contain chitin, and strain T-22 in particular produces large amounts of an enzyme called chitinase that can degrade the cell walls of its opponents.
Trichoderma protects itself from the chitinases it produces.
In addition to physically parasitizing other fungi, Trichoderma can attack them chemically. It does so by producing chemicals that are toxic to the fungi. Some of these compounds are volatile and travel through the air.
The chitinases and antibiotics act synergistically, and affect the target fungus more strongly than the production of either one alone.
The soil is a fiercely competitive place, and microbes most commonly die by starvation. Trichoderma is unusually skilled at taking up nutrients from the soil compared to other organisms.
It can derive energy from complex compounds, like chitin from fungi or cellulose from plants, that are difficult for other organisms to break down.
One compound that is typically scarce in the soil is iron. Some strains of Trichoderma produce specialized compounds called siderophores that bind with iron and make it unavailable to other fungi, totally inhibiting their growth.
Resistance to Pesticides
Many strains of Trichoderma are unusually resistant to toxic compounds, ranging from pesticides to chemicals produced by plants. Its pesticide resistance includes herbicides, fungicides, and insecticides like DDT.
This gives an edge to using these fungi to control pathogens, since you can alternate application of strain T-22 with fungicides like benomyl or captan.
How to Use Trichoderma in Your Garden
If you apply this fungus to seed, it will colonize the plant’s root system as it grows. You can apply it directly into the furrow when planting. If you are planting turf, you can mix the fungus into the surface of the soil.
For greenhouse or nursery planting, mix with your potting medium. Apply directly into the planting hole if you are transplanting trees or shrubs.
Strain T-22 prefers warmer weather, so you should apply it when the temperature is above 55°F.
Trichoderma is a widespread fungus with no history of toxicity to humans or when tested on lab rats. However, to be safe and prevent allergies from developing, you should use a dust/mist filtering respirator if you are working with large quantities. The powder can cause eye irritation, so you should wear protective eyewear.
For home gardeners, we recommend RootShield® Home & Garden (as shown above) via Arbico Organics.
Larger quantities and products with various application methods for commercial agriculture use are also available.
Store in a refrigerator in the original container until ready for use. You may also keep it above 75°F for short periods without any loss of performance.
The Global Biocontrol Fungus
Trichoderma species are found in most types of soil around the world, and control other fungi in the soil using a variety of mechanisms. These range from direct parasitism to the production of antibiotics.
Fifteen years of research at Cornell University produced the powerhouse Trichoderma harzianum strain T-22, which can be used on an immense array of crops.
Strain T-22 can improve the nutritional status of crops in addition to controlling pathogens.
This broad-based biocontrol agent was one of the first biofungicides on the market, and remains a highly efficient fungus to add to your arsenal.
Have you used Trichoderma in the garden? If so, let us know how it worked for you.
© Ask the Experts, LLC. ALL RIGHTS RESERVED. See our TOS for more details. Product photo courtesy BioWorks. Uncredited photos: Shutterstock. | <urn:uuid:fec684a1-15d4-41d6-a9ad-cbb365cb713d> | CC-MAIN-2023-06 | https://gardenerspath.com/how-to/organic/trichoderma/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00094.warc.gz | en | 0.925319 | 1,672 | 3.546875 | 4 |
The Ram's Horn was published in Chicago, Illinois during the 1890s and the early years of the twentieth century by Frederick L. Chapman & Company.
Frank Beard was its principal illustrator, and most of the images that follow are his work.
In 1896 The Ram's Horn reported its circulation with optimism.
Groups of Images by Subject
Views of Immigrants
Views of the Wealthy
Views of Smoking
Views of the Liquor Traffic and Support for Prohibition
Views of the Trusts
Views of Political Bosses
Views of America in the World
The Religious Views of Frank Beard and the Ram's Horn.
We have scanned these images from The Ram's Horn because the visual material in the magazine seemed especially vivid. We found these issues in the Center for Research Libraries, a cooperative institution among libraries to which scholars at Ohio State University have access. Professor Austin Kerr first encountered these visual materials during his research on the prohibition movement; the Anti-Saloon League and other dry organizations often reprinted cartoons by Frank Beard that appeared in The Ram's Horn. We offer these materials in our World Wide Web service because we find them useful in highlighting a social gospel viewpoint important during the Gilded Age and Progressive Era. | <urn:uuid:5aee9f8d-172b-4d47-a14b-b36bdf76a2d1> | CC-MAIN-2018-05 | http://ehistory.osu.edu/exhibitions/rams_horn/default | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886794.24/warc/CC-MAIN-20180117023532-20180117043532-00592.warc.gz | en | 0.953807 | 254 | 2.75 | 3 |
In 1948, the Israeli army marched through Wadi Fukin and forcibly evacuated the small Palestinian town. With no weapons to resist, its residents fled to the surrounding hills. Among them was 25-year-old Yousef Manasra. He and his budding family moved into tents provided by the United Nations. They survived by drinking water from nearby streams and sneaking into their village at night to harvest their fields.
More than once Manasra and the rest of the townspeople tried to return to their homes, only to be driven out again.
“The people of Wadi Fukin were kicked out of their homes so many times, I can’t even count,” Manasra says.
Finally, in 1953, an Israeli border patrol unit dynamited the town, destroying all but a few of the houses.
By that time Manasra was living in the Dheisheh refugee camp a few miles away, near Bethlehem and beyond the border of the new state of Israel. He and thousands of other refugees had been forced to flee their towns by the Israelis.
In 1972, he finally returned home. The camp in Dheisheh was overflowing with refugees from Gaza, so the residents of Wadi Fukin were told to go home.
Neighbors all worked together until new houses were rebuilt. It was the only time, as far as Palestinians can recall, that residents rebuilt a town destroyed in the 1948 Arab-Israeli war.
Today, at 82, Manasra fears history is about to repeat itself in Wadi Fukin. At the end of 2006, Israel’s separation barrier is scheduled to reach this town, which sits just on the Palestinian West Bank side of the Green Line. The barrier will cut off Wadi Fukin and four nearby towns from the rest of the West Bank, which is these towns’ main source for health care, jobs and higher education.
Instead of the 30-foot-high concrete wall in other parts of the West Bank, the barrier around Wadi Fukin will be a complex series of electrified fences, razor wire, motion sensors, military patrol roads and trenches up to 300 feet wide.
The Barrier’s Impact
Manasra and the rest of the 1,200 people who live in the small town will have to pass through two tunnels and a guarded checkpoint to reach Bethlehem. On a good day, officials estimate, the checkpoint alone will take 30 minutes to get through. The entire trip could take hours. If security is escalated, residents could be prevented from crossing altogether.
The wall the Israelis are building now and the town’s being destroyed by them back during the war are the same thing, says Manasra. But this time, he says, instead of two-inch mortars destroying people’s houses, “the wall will destroy people’s will to live here.”
Until the late 1970s, Wadi Fukin and surrounding villages were the breadbasket for Bethlehem. The town grew everything from grapes to cauliflower and eggplant to wheat.
But more recently, because of flying checkpoints, “it’s not feasible for people to work on their land all year long and then simply to be closed off, denied the markets to market their harvest by an Israeli checkpoint,” says Suhail Khalilieh, a research assistant with the Applied Research Institute-Jerusalem, a Palestinian think tank. Unlike permanent barriers, flying checkpoints are mobile -- often nothing more than a military jeep blocking the road -- and they can be set up anywhere at any time.
Many residents have been forced out of farming and into other jobs to make a living. More than half of the men in Wadi Fukin now work either in Israel or in an Israeli settlement. Most of them earn their living working construction. In the Palestinian-controlled areas of the West Bank, only 14 percent of the men from Wadi Fukin still survive in farming.
And many who find work in Israel or Israeli settlements don’t have permits to enter. “Residents of the western rural villages of Bethlehem, or any other Palestinians for that matter, have a financial drain,” says Khalilieh. “Ironically, they have to sneak into Israel to work in settlements or work on the wall.”
Wael Manasra, 34 and the father of four boys, is Yousef Manasra’s grandson. He supports his family by working as a repairman in the Abu Ghneim settlement near Jerusalem. He’s tried twice to get a permit and was turned down both times.
Wael earns about 150 shekels, or $32, each day he works. Along with his father, Ibrahim, who drives a dump truck in the nearby settlement of Betar Illit, Wael helps support his entire extended family of 11 people. Altogether, the family usually lives on about 6,000 shekels a month, or a little over $1,300.
Because it’s too dangerous to sneak into and out of the settlement each day, Wael often stays in the settlement for four or five days at a stretch, hiding at night in any dark nook he can find in one of the buildings where he works. Each night, he sets his cell phone to ring at 12:40 a.m. to warn him that the next shift of guards is coming on duty. They often come looking for workers hiding in the buildings.
“Of course if working in agriculture would allow me to survive, I would stay in Wadi Fukin,” he says. “But it doesn’t.”
The wall also poses a problem for medical care. Wadi Fukin has a medical clinic, but, according to a World Bank study, most people still travel to Bethlehem for major treatments. All the women receive their prenatal care in Bethlehem, and most deliver their children there. When the wall is built and the only access to Bethlehem is through a permanent checkpoint, many fear that during an emergency, they will be unable to reach medical services in time.
There is a school in Wadi Fukin that serves grades K-12, but students must travel to Bethlehem or other West Bank cities to go to college.
Community organizers suspect the barrier will eventually destroy Wadi Fukin’s economy.
“I think the Palestinian people have a high level of resilience, but life in this area is going to be very difficult,” says Ibrahim Ibraigheth, the director of the Community Development Program for the five villages.
Residents have already started resistance efforts. Their first target is the expansion of the Betar Illit Israeli settlement, one of 19 settlements in the area, which sits just to the southeast. Built on land that used to belong to Wadi Fukin and other villages, Betar Illit is home to 26,300 people and scheduled to more than double in population.
Every morning except Saturday, the Jewish Sabbath, the people of Wadi Fukin wake to the sound of jackhammers digging out chunks of the hill to make room for rows of five-story, tan-and-red-roof settlement apartments. The expansion of Betar Illit will encircle Wadi Fukin on the opposite side from the barrier and most likely will cut off the town’s southern exit.
The settlement’s sewage system has already clogged several times, sending raw sewage flowing into Wadi Fukin, ruining valuable farmland. The millions of tons of dirt moved to make room for the new apartment buildings have piled up and threaten to come sliding onto Wadi Fukin’s land, according to villagers.
Along with Friends of the Earth Middle East, an international environmental nonprofit organization, and residents of the nearby Israeli town of Tsur Hadassah, which sits just across the Green Line, activists in Wadi Fukin have hired lawyers to investigate the environmental impacts of the settlement, including increasing strain on the valley’s water resources.
Some Tsur Hadassah residents oppose the barrier as well, and they have circulated a petition asking Israeli officials not to build it or to at least construct it in such a way so as not to affect Wadi Fukin so adversely.
“I didn’t want to feel that I witnessed what was going on and didn’t do anything about it,” says Dudy Yehuda Tzfati, 44, one of several Israeli residents who have mobilized to help. “I feel responsible as an Israeli for what I see as crimes being done in my name.”
What the Future Holds
The Israeli government defends the barrier, saying it’s essential to combating terrorism. “The Security Fence is a central component in Israel’s response to the horrific wave of terrorism emanating from the West Bank, resulting in suicide bombers who enter into Israel with the sole intention of killing innocent people, says Israel’s Ministry of Defense.”
Others, including B’tselem, an Israeli human rights group, have argued that Israel based the route not on security concerns but on extraneous considerations unrelated to the security of Israeli citizens, and that a major aim was to build the barrier east of as many settlements as possible, to make it easier to annex them into Israel.
The United Nations has reported that approximately 5,000 Palestinians already live on land that lies between the barrier and the Green Line. If the barrier follows its planned route, including the sections that are still under consideration, 10 percent of the entire landmass of the West Bank and East Jerusalem, along with nearly 50,000 Palestinians in 38 villages, will be stranded. If Israeli Prime Minister Ehud Olmert follows his proposed plan to make Israel’s permanent border follow the barrier, these areas will be annexed.
The U.N. report also says that in the north, where the wall is already built, access across the barrier has been slow and unreliable. Ibraigheth and Khalilieh fear that if the same is true for Wadi Fukin, much of the town will eventually give up and move to Bethlehem or other cities on the Palestinian side of the barrier in search of a better life.
But Wael Manasra is resolved to stay. “We suffered to return to Wadi Fukin, so I will never go,” he says. “This is my village and my land. Even if I can only eat one small piece of bread, and I can’t get more than this, I will never leave.”
His younger brother, Wisam, doesn’t agree. “If I have to leave to find work, I will, because I think any other life is better than the one here,” he says. “I want to help my family, and I want to help myself.”
Wisam says he might eventually like to return to Wadi Fukin, but he also would like to attend a university and study journalism. And for now, supporting his family is most important. “I feel bad about all of this, working in settlements or leaving,” he says. “But there really isn’t any other way.”
Ibraigheth and Khalilieh believe that if the younger generation decides to leave, much of Wadi Fukin’s land will go uncultivated and could be taken over by Israel. According to the United Nations, Israel uses a law based on an old Ottoman code that allows the state to claim any land that goes uncultivated for three years.
“The older generations are going to be completely isolated,” says Khalilieh. “They will be the only ones there to protect the land, but they will not be able to do so for so many years. That’s when the Israelis will move in and declare the lands as state land under the absentee property law and take it over.”
Some of the younger generation still have hope. Madi Manasra, 13, a cousin of Wael and Wisam, says he wants to finish high school, then attend the university in Bethlehem. He eventually would like to work as a tourist guide. “I’m afraid sometimes the separation will keep me from completing my education,” he says. “But I want to tell people that a just solution is not impossible.”
Others, such as Mohammed Manasra, Wael and Wisam’s youngest brother, see no good options and don’t know what they’ll do. Mohammed wants to be near his family, but fears he’ll never find work. He doesn’t give much thought to attending a university -- or even to what the future may hold.
“With the barrier about to arrive,” he says, “we cannot imagine anything. We cannot dream about anything.”
SOURCES: United Nations Office for the Coordination of Humanitarian Affairs; PLO Negotiations Affairs Department; Israeli Ministry of Defense; Friends of the Earth Middle East; World Bank; Applied Research Institute-Jerusalem; B’tselem.
Jakob Schiller is from New Mexico and grew up in a secular Jewish family. He is a graduate of the University of California at Santa Cruz and is about to complete a master’s degree in print and documentary photography at the U.C. Berkeley Graduate School of Journalism. For the past four years, Schiller has reported in Arab and Jewish communities here in the United States, documenting their connection to the Middle East conflict. This trip to Israel and Palestine was his first.
Animals living in highly disturbed environments experience declining populations and possible extinction without efforts to protect them. Such a situation confronts Snowy Plovers nesting on coastal beaches heavily used for human recreation. In 1993, the U.S. Fish and Wildlife Service (USFWS) listed the Snowy Plover on the U.S. Pacific Coast as a threatened distinct population segment under the Endangered Species Act (ESA). A decade later, this designation was challenged under a 1996 USFWS rule more precisely defining a distinct population. To argue for preservation of the listing, Lynne Stenzel and Catherine Hickey summarized decades of data collected by PRBO biologists, volunteers and colleagues. Recently USFWS agreed, in a decision reaffirming that Pacific Coast plovers warrant protection under the ESA. PRBO biologists like Jenny Erbes work closely with federal, state and county management agencies at several locations to study and protect nesting plovers. --Gary Page, Co-Director, PRBO Wetland Ecology Division
There's a male speeding toward a dune! Most of the bay's Snowy Plovers sport a unique color-band combination. Although his legs are swift, by peering through my spotting scope I can identify this rusty-capped, white-breasted beacon in the fog: Violet/White-Aqua/Pink. Just the one I'm looking for! Pencil nub in hand, I scribble the essentials into my sandy notebook--date, time, location, color-band combination, and behavior.
PRBO has had its binoculars--and color bands--on Monterey Bay's Snowy Plovers since 1978, so these small details serve a valuable purpose. They will be incorporated into a data set spanning 29 years! This intensive and long-term monitoring project has generated scientific information crucial to the management and conservation of these imperiled beach-nesters.
Now in my fifth year as a researcher here, I have become well acquainted with the local birds. I remember when this particular male, Violet/White-Aqua/Pink (now one year old with his own chicks) was a freshly hatched chick himself. Just hours old, he sauntered away while I was banding his siblings. After a few feet he paused, then spun around and faced me, as though sizing me up. Silhouetted against the sky, with fresh color bands on his legs and drooping winglets at his side, he looked like a little cowboy in chaps, ready for a showdown.
Jenny Erbes at work on the Monterey shore. Photo by Tiffany Worthington
"The Cowboy" has neighbors like "Swirly-head" (a.k.a. "Punk Rocker") because of some unruly cap feathers, "Bob G." due to his combo of Blue/Orange-Blue/Green, and "Mystery Male" who is quite predictable in his unpredictability. These are just a few of the 330 breeding "Snowies" currently in Monterey Bay, and I'm one of a handful of plover biologists here. Imagine the wealth of stories and data that have accrued over the years! Individual birds' behaviors, and their mate selections throughout their lifetimes, are so intriguing that there are even Web pages dedicated to these Snowy Plover "soap operas" (including at www.prbo.org).
Recently though, it's the data that have been in the spotlight. When the 2004 petition to "delist" this population was released, PRBO was primed to respond: our comprehensive data showed that the coastal population segment is distinct and requires continued protections under the Endangered Species Act (see box).
The prrrt prrrt of Violet/White-Aqua/Pink softens as he lifts off and flies beyond a low dune. Aiming my scope in that direction, I scan for his fuzzy, three-week-old chicks. Sure enough! Two squirrelly chick heads are bobbing in the distance, popping in and out of the vegetation. Soon "The Cowboy's" chicks will be winging their way through the soggy skies of Monterey Bay!
In IPCC 1995 [SAR] – An Extended Excerpt, I quoted the following key statements from IPCC 1995:
Alpine glacier advance and retreat chronologies (Wigley and Kelly, 1990) suggest that in at least alpine areas, global 20th century temperatures may be warmer than any century since 1000 AD and perhaps as warm as any extended period (of several centuries) in the past 10,000 years. Crowley and Kim (1995) estimate the variability of global mean temperature on century time scales over the past millennium as less than ±0.5 deg C. [my italic]
Then I wondered:
These last two lines are the ones that jettison the issue of attribution of past climate change. I’ll try to get to comments on the two sources quoted here. It would be nice if they were relying on sources that were not so closely associated with the campaign. I wonder how strong Wigley and Kelly, 1990 and Crowley and Kim, 1995 really are, especially relative to some of the recent work from Joerin and Nicolussi noted up recently on this site.
Wigley and Kelly, 1990 did not support the claims attributed to them at all, as discussed here. Here’s the rest of the story on Crowley and Kim, 1995, which proves to be no better than a secondary and perhaps tertiary source. I have now consulted Crowley and Kim 1995 and will post up a pdf if there is any demand. This is a model study based on an Energy Balance Model and does not contain any discussion of proxies. It compares projected temperature increases under this model to past temperatures. The main lifting is done in their Figure 3, shown below with the original caption. The historical part of this graph is interesting since it shows the striking and long-term decline in global temperature over the past 50 million years, with temperatures in the most "recent" period (on say a 100,000 year step) being the lowest in the entire period (perhaps also in the 500 million year period).

Figure 3. Comparison of future greenhouse projections against the geologic record. Curves on the right represent estimates of global temperature change in the past from the oxygen isotope record (see Fig. 4); green represents differences from the Holocene core tops and magenta represents differences from the observed global average temperature. Crossbars indicate fitting points for calibration of oxygen isotope curve in terms of global temperature (See text). Labeled scale on right-hand side represents calculated values for peak warming on left-hand side of figure. Curves on the left represent estimates from restricted and unrestricted CO2 scenarios, utilizing the standard IPCC range and best guess for sensitivities equivalent to CO2 doubling. "Error bars" represents a generous estimate of the range of natural variability based on records of the last 1000 years [Crowley and North, 1991]. [my bold – see the scale denoted ±0.5 degree C in the bottom left-hand corner of the figure.]

The comment in the caption to Figure 3 is made in passing once more in the running text as follows:
The projected increase in global average temperatures for all scenarios also greatly exceeds the past record of climate variations over the last millennium [Crowley and North 1991] (Paleoclimatology, Oxford University Press, 339 pp). A generous estimate of that natural variability for that time is ±0.5 degree C.
So Crowley and Kim is not a primary source for the estimate of "variability of global mean temperature on century time scales over the past millennium"; it is a secondary source relying on a textbook, which is probably not a primary source either. One would have hoped for a little more from the numerous review iterations and thousands of scientists involved in an IPCC assessment report. Anyway on to Crowley and North, 1991 to see what it says. | <urn:uuid:04dd4e9c-fc9b-4145-b651-2994a300912f> | CC-MAIN-2015-22 | http://climateaudit.org/2005/07/01/crowley-and-kim-1995/?like=1&source=post_flair&_wpnonce=c1ce40f6e1 | s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00299-ip-10-180-206-219.ec2.internal.warc.gz | en | 0.926674 | 781 | 2.875 | 3 |
Sports organisations like to see themselves as promoters of social development and high ethical standards. But sport must also actively avoid harm and improve the human rights situation wherever sport is linked to risks and abuses. Sylvia Schenk writes on the higher expectations that sports organisations face nowadays.
When it comes to ambitions, sports organisations tend to think big. Take the Olympic Charter and its Fundamental Principles of Olympism:
"Olympism seeks to create a way of life based on social responsibility and respect for universal fundamental ethical principles" and its "goal is to place sport at the service of the harmonious development of humankind, with a view to promoting a peaceful society concerned with the preservation of human dignity”.
The Olympic movement, comprising the International Federations (IF), National Olympic Committees (NOC), and other organisations recognised by the International Olympic Committee (IOC), has to comply with the Olympic Charter. So, one should expect that human rights are high on the agenda in international sport. And indeed, the Fundamental Principle number 4 states "The practice of sport is a human right." while number 6 prohibits all kind of discrimination.
No doubt – the IOC and its constituents promote and support sport for all as well as the development of athletes all over the world, and have, step by step, opened doors for discriminated and vulnerable groups, thus championing diversity. Aiming at 50% women at the Olympic Games (with female athletes from all participating countries, albeit with contested effect), the Paralympics demonstrating the admirable abilities of persons with disabilities, and the Refugee Team at the Olympics 2016 in Rio de Janeiro are examples that send important signals, having an impact even beyond sport.
Nevertheless, the Olympic movement has come under increased public scrutiny with regard to its human rights track record in recent years. This is also a consequence of the United Nations Guiding Principles on Business and Human Rights (UNGP), approved in 2011, confirming that the states' obligation "to protect and fulfil human rights" must be complemented by the responsibility of "business enterprises" … "to respect human rights": They should "avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved."
This has put the business sector as an actor in the field of human rights continuously under pressure, especially when catastrophes like Rana Plaza in April 2013, killing more than 1,000 workers in Bangladesh, led to campaigns as well as law suits by trade unions and non-governmental organisations.
Do no harm is an obligation
So, while the sports organisations during the first half of this decade continued to focus on the good sport can do, the global approach towards responsibility of non-state-actors was broadened by the UNGP. Risk assessment, prevention, and remedy – for abuses occurring despite prevention measures being in place – are seen as an obligation. Whatever a company's business entails, the overall requirement is "Do no harm!", meaning that doing good in some aspects cannot outweigh harm done in other areas: Inviting a refugee team to Rio is fine, having Favelas evicted to build the Olympic village is not and undermines anything else the IOC – and/or an Olympic host – is doing.
No wonder that the contribution of sport to, among others, empowerment and health, and its acknowledged role as an "enabler of sustainable development" in the UN's 2030 Agenda for Sustainable Development does no longer suffice to fulfil the human rights responsibilities. Higher expectations regarding the human rights activities of companies raised the bar as well for sport organisations. Not only because major sports events like the Olympics, the FIFA World Cup, and UEFA EURO are big business with the sports organisations behind them earning a lot of money. For the ‘autonomy of sport’, a principle the IOC and others defend rigorously, responsibility is a prerequisite as stated by the European Union in the Commission's Communication on Developing the European Dimension in Sport 2011 and confirmed by IOC-President Dr. Thomas Bach in a speech delivered to the UN General Assembly 2013.
This leads to the conclusion that sports organisations have the same – or, given their high ambitions, even greater – obligation to respect human rights as the business sector.
The United Nations Guiding Principles on Business and Human Rights (UNGP)
First of all, the UNGP do not contain any new rights – they are based on the International Bill of Human Rights (consisting of the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights), existing UN conventions on Women's Rights, Children Rights etc., and the International Labour Organisation’s Declaration on Fundamental Principles and Rights at Work.
Secondly, the UNGP define the scope: A business enterprise is not only responsible for "adverse human rights impacts" it is "causing or contributing to" through its own activities, including acts and omissions. It must also "seek to prevent or mitigate adverse human rights impacts" directly linked to it, even if it has "not contributed to those impacts". The responsibility covers all business relationships, for example within the supply chain.
Thirdly, the UNGP are about implementation, listing concrete steps for how companies can live up to their obligation. It starts with a leadership commitment and a comprehensive risk assessment. It is not expected (and usually not possible/helpful) to work on all risks and/or negative impacts at once. One has to set priorities with regard to the most salient risks on the one hand and measures with specific great impact, i.e. that can easily make a big difference for affected individuals, on the other. Internal and external stakeholders should be involved in the whole process.
Finally, this should feed into a human rights policy and a detailed programme, including – beside prevention measures – grievance mechanisms, continuous monitoring of initiatives taken, reporting and effective remedy. To ‘mainstream’ human rights in daily operations, i.e. making it an integral part of corporate governance, is the overwhelming task.
Adaptation to sport
The same applies for sports organisations: Make respect for human rights a natural element of Good Governance.
The UNGP can be easily adapted, as shown by FIFA, the first International Federation to approve a human rights policy in line with the UNGP, in 2017; since then it has striven to embed a human rights-based approach in all its activities, with some important steps already taken.
Also in February 2017, the IOC, after consultation with the Sport and Rights Alliance, added to the new Host City Contract "a section designed to strengthen provisions protecting human rights and countering fraud and corruption related to the organisation of the Olympic Games", expressly mentioning the UNGP; in September 2018 the IOC based its Supplier Code on the UNGP, and it is currently, supported by experts, developing its human rights policy.
The UNGP cover a wide range of topics already discussed and dealt with in sport, for example racism (e.g. in football), discrimination against women (e.g. the use of hijabs in competition), and sexual abuse (e.g. the ‘Nassar case’ in gymnastics in the United States, or initiatives regarding child safeguarding). So, no sports organisation has to reinvent the wheel but can use the new instrument to identify gaps and take a systematic approach. Already initiated programmes (e.g. anti-racism campaigns, gender and safeguarding activities) can be integrated, complemented, and improved. This will not only give clear guidance internally but also enhance the profile externally.
While this seems easy, the Olympic movement still struggles to involve stakeholders, especially the most important group: the athletes. Because their human and labour rights are heavily affected by any decision of the IOC, IFs and NOCs – e.g. the anti-doping system or eligibility questions – it is necessary to develop a system of representation that guarantees the right to a say for all types of athletes.
With regard to major sports events, the entire life cycle of the event has to be covered – not just the period between the opening and closing ceremonies. Injured or killed workers on construction sites due to insufficient safety measures (e.g. caused by time pressure to get ready for the event), detained protesters or journalists, discrimination against LGBT+ individuals, forced evictions – all these are examples of human rights risks that can be directly linked to the sport organisation responsible for awarding the event to the specific host.
To apply the UNGP does not necessarily lead to the exclusion of countries from hosting, but requires bidding criteria and host contract clauses that oblige the host city/country/organising committee to respect human rights. This must be followed by an action plan for the organisation and delivery of the event, e.g. as laid down in the successful United Bid for the FIFA World Cup 2026, and close monitoring.
Challenge and opportunity
The biggest challenge in this context remains how to deal with politics.
No one expects sports organisations (or business enterprises) to solve all the world's problems, to campaign against a government or organise protests. The ability to uphold, or even to open, dialogue in hostile situations is an advantage of international sport. But diplomacy as the best option must be grounded on a consistent attitude on human rights. To find the right balance between talks behind closed doors (in order not to embarrass a government and even worsen the problem) and a clear stand in public (to avoid ‘sportswashing’ and gain the trust of affected groups) will not always be easy. This was demonstrated in 2019 in the Hakeem al-Araibi case, where joint but varying efforts by FIFA, the IOC, governments and civil society finally led to a solution.
This has also been the first test for the newly established Centre for Sport and Human Rights, which under the leadership of CEO Mary Harvey convened a series of conference calls with all actors, from sports organisations to governments and non-governmental organisations.
The huge global attention that the leading sports organisations enjoy – and which they gladly use to find generous hosts for their events and to make money from sponsors and broadcasters – gives them leverage that can and must be used to avoid harm and improve the human rights situation wherever sport is linked to risks and abuses. Seizing this opportunity not just for sport but for the "harmonious development of humankind" will make the Olympic movement walk the talk.
Olympic Charter, page 11
UNGP (FN 5), General Principles (a)
UNGP (FN 5), General Principles (b)
UNGP (FN 5), II.A.11
https://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E – Number 37
UNGP (FN 5), Number 13 (a)
UNGP (FN 5), Number 13 (b)
UNGP, (FN 5), Numbers 18 (b), 20 (b)
UNGP (FN 5), Number 15
https://www.theguardian.com/football/2018/dec/06/hakeem-al-araibis-case-is-a-true-test-of-fifas-new-human-rights-policy; https://www.fifa.com/about-fifa/who-we-are/news/fifa-statement-on-hakeem-al-araibi; https://www.bbc.com/sport/football/46995272
I was recently asked to provide some more information about medusa worms in one of the forums, and while searching the web to provide some links, I realized that there is very little information out there about the biology and aquarium care of these fascinating animals. So, I decided that I would take this opportunity to write a column about one of my favorite critters to try to provide some information about these interesting animals.
OK, granted they are probably not as colorful or attractive as many of the animals that we find on coral reefs, and to most people they do look pretty much like a worm. But ever since I first saw one, back while cruising the local pet shop in high school, I have always had a soft spot (my wife tells me it is between my ears) for these animals. Regardless of whether or not they are the prettiest of reef organisms, I have always been fascinated by Medusa worms! I think that given the appropriate conditions and care, the unique body form of these animals, combined with the pinnate feeding tentacles and high level of activity make for an attractive and popular addition to many reef aquaria. I’ll get to what the appropriate conditions and care are by the end of the article, but I should start off by explaining that the name is misleading, as medusa worms are not worms at all, but rather are legless (Apodid) sea cucumbers. These animals are members of the Family Synaptidae in the Class Holothuroidea and the Phylum Echinodermata.
Although medusa worms generally don’t look much like an echinoderm, other members of the Phylum Echinodermata ought to be very familiar to most reef aquarists. The roughly 7-10,000 living species of Echinoderms have a diverse arrangement of body plans including: sea stars (Asteroids), brittle stars (Ophiuroids), sea urchins (Echinoids), feather stars (Crinoids) and the group to which medusa worms belong: the sea cucumbers (Holothuroids). One thing that links all these diverse body forms is a complex water vascular system that is composed of a series of fluid-filled canals that allow them to move via hydraulic action. The most obvious components of this hydraulic system to us as aquarists will be the muscular podia (which are often sucker-tipped, and better known as tube feet) that the animals use to move around. If you’re interested in more information about the general biology of these groups, I suggest that you pick up a good invertebrate zoology textbook, such as Brusca and Brusca (1990) or Ruppert and Barnes (1994).
Sea cucumbers are all members of the Class Holothuroidea, but members of this Class can be surprisingly different from one another. Within the Class Holothuroidea, there are three sub-classes: the Dendrochirotacea (largely suspension-feeding sea cucumbers such as the popular sea apple, Pseudocolochirus spp.), the Aspidochirotacea (largely deposit-feeding sea cucumbers such as the popular tiger-tail cucumber, Holothuria spp.), and the Apodacea (the unusual group of legless cucumbers to which the medusa worms belong). Unlike the other Classes of echinoderms mentioned above, in which the mouth is on the “bottom side” of the animal, the sea cucumbers all lie on their side, and the mouth is located at the “front end” of the animal. Although both ends generally look pretty similar to the casual observer, the sure way to identify the head is to see the oral tentacles with which the animal feeds. Depending on the species of sea cucumber and what they eat, these oral tentacles can be either stuck out into the water column to suspension feed, or applied directly to the substrate to deposit feed. The legless synaptid cucumbers fall into this latter group, and are the most active deposit feeders of the Holothuroids. That tends to make the head end of these animals quite obvious, because they almost always have their feeding tentacles extended as they move about in search of particulate detritus to eat. Now although I mentioned above that tube feet are one of the unifying characteristics of the echinoderms, these Apodid sea cucumbers are called “legless” because they lack obvious tube feet. In this case, it is because the ancestor of these cucumbers actually had tube feet, but they have been lost through time in this group. That is not to say that they are always absent: some Apodid cukes really do lack any detectable vestiges of tube feet, but in many species the tube feet are simply reduced to the point that they are not really visible.
It is also important to note that aside from their lack of tube feet, most apodid cucumbers lack the respiratory tree and most of the associated structures characteristic of the other sub-classes of sea cucumbers. Instead, these animals tend to gain oxygen and expel carbon dioxide primarily across the body surfaces. Because these animals lack a respiratory tree and the associated tubules of Cuvier (defensive structures with which the most potent toxins of sea cucumbers are associated), they are relatively nontoxic in comparison to some of their more potent relatives such as the sea apples (Pseudocolochirus spp.) and spotted sea cucumbers (in particular, Bohadschia argus and Actinopyga agassizii). The Cuvierian tubules commonly found in species of Holothuria, Bohadschia, Stichopus and Actinopyga liberate a highly toxic saponin compound known as holothurin. This compound acts to quickly stun and even kill potential predators, which are effectively asphyxiated (suffocated) by the toxin. The toxins of some of these species are so effective that South Pacific island cultures use the macerated bodies of these sea cucumbers to stun and capture fishes, crabs and lobsters as a method of traditional fishing (Frey 1951; Ruppert and Barnes 1994)! Despite the fact that the medusa worms lack these structures and are considered ‘relatively non-toxic’ by comparison to many of these species, this does not mean that they are by any means non-toxic. For example, the beautiful sea apples (e.g., Pseudocolochirus violaceus and P. tricolor) also lack these Cuvierian tubules, but along with Bohadschia argus and Actinopyga agassizii, are considered among the most toxic sea cucumbers in the world. In fact, according to Wilkens (1998), it takes only about 1g of tissue from any of these particularly toxic species to poison the fish in a 25g tank.
These toxins also affect humans, and many people suffer moderate to severe skin and eye irritation if they come into contact with these toxins (Cunningham and Goetz 1996). Some cases in which toxins came into direct contact with the eyes have resulted in blindness, and deaths have even been reported in cases where people have eaten these animals without the proper preparation.
In the most general terms, virtually any soft-bodied animal that would make easy prey on a coral reef, such as these sea cucumbers, will typically be defended in some way by distasteful chemicals or physical armament, and medusa worms are no exception. Like virtually all sea cucumbers, they are soft-bodied, lacking any real physical armament, and so they all tend to have a variety of nasty chemicals associated with their bodies to deter predators from feeding on them. Even without the Cuvierian tubules and their potent toxins, medusa worms have a variety of distasteful chemicals associated with the skin and body wall to protect them from being eaten by fishes, crabs and lobsters on the coral reef. Although the specific toxins associated with synaptid cucumbers are somewhat different from those of most other cucumbers studied to date (Kuznetsova et al. 1989; Ponomarenko et al. 2001), these animals are still reported to be highly toxic to fishes in marine aquaria if they are seriously injured (e.g., Delbeek and Sprung 1994; Fenner 2000; Michael undated online; Sprung 2001). In general, toxins are only released when the cucumbers are under severe stress (such as being chewed up after being sucked into a powerhead or overflow grate), and a diligent aquarist will usually prevent this from ever happening, and therefore never experience any problems with one of these animals. If, however, an accident happens, and the cucumber is stressed severely enough to release its chemical defenses, then a good water change, together with an efficient skimmer and some activated carbon, is usually sufficient to prevent any fish from asphyxiating from the soap-like holothurin. I will not detour further into the chemical defenses of sea cucumbers in this article; however, I may come back to the subject another time.
Medusa worms get their common name because they resemble a giant worm from which a “mop” of feeding tentacles is constantly being slapped across the substrate before being drawn into the mouth. These animals are impossible to positively identify by anyone other than an expert, and even then it usually involves knowing exactly where the animal was collected, and likely killing the animal and examining internal structures to be certain of the species identification. Therefore, chances are that you’ll never know which species have been imported when you see one offered for sale in the local pet shop. The most common genera to be imported for the aquarium trade are Euapta, Synapta, Synaptula, or Opheodesoma. All of these animals look fairly similar, being rather soft and flaccid, with large rounded knobs (sometimes likened to a string of pearls) along the length of the body. All of these cucumbers seen in the pet trade are typically shallow-water animals (most species are rarely seen below 50 ft), and although they look fairly similar to our eye, they can have dramatically different biology and care requirements. Although they are not usually an obvious presence on most coral reefs, apodid sea cucumbers are a common member of virtually all coral reefs throughout the Caribbean and Indo-Pacific. Many species can get quite large (some species can exceed 6 feet in length!). The reason that such a large and ubiquitous member of coral reef communities is only infrequently observed is that they are almost entirely nocturnal (only active at night). In the aquarium, however, they often lose this strict nocturnal schedule and are often seen cruising around the tank in full daylight (I will come back to this issue later in the article).
Despite the similarity in appearance, these animals have a wide range of feeding habits. For example, the Caribbean Synaptula hydriformis appears to be primarily a generalist herbivore, feeding mainly on diatoms, with the occasional piece of red or green algal detritus ingested when they can find it (Martínez 1989), and these animals are common in a variety of habitats including coral reefs, sea grass beds, mangroves, and even inland salt-water lakes (Hendler et al. 1995; Pawson 1986)! Other species, such as Synaptula lamperti, are highly specialized feeders that are associated only with living individuals of the sponge Ianthella basta (which I have never seen offered for sale in any pet shop). These cucumbers live by ingesting tiny organic particles and exudates from the surface of the sponge, and appear to require these specific sponge metabolites to defend and nourish them (Hammond and Wilkinson 1989). These two species of Synaptula appear fairly similar, but obviously it will be much easier to accommodate the first animal in an aquarium than the second! Other species, such as the Caribbean Euapta lappa, are generalist detritivores that cruise the sea floor at night collecting any tiny particles of organic detritus within a given size range from the reef rubble and base rock structure. Some species appear to graze from a wide range of sediment types and particle sizes, while others are highly specific (Hammond 1982a). For example, Leptosynapta multigranula grazes organic detritus from only carbonate sediments of varying sizes, while other species, such as Leptosynapta tenuis or L. crassipatina, appear to feed effectively only while burrowing through very fine sands and muddy bottoms (Hendler et al. 1995). In some areas of the southeastern US, Leptosynapta tenuis are the major consumers of detritus, and in suitable habitats, Ruppert and Fox (1988) estimated that these cucumbers can process as much as 25 metric tons of sediment per hectare per year!
In contrast, Chiridota rotifera is found actively grazing detritus from surfaces of tide pools, sea grass beds, calcareous algae, coral rubble fields, coral heads, and sandy mudflats – not exactly what anyone would consider a habitat specialist.
The primary problem with getting any of these animals, however, is that you cannot identify them by yourself, and will therefore have no idea of which species you are buying. That leaves you with no option but to trust your supplier about its care and requirements. That is easier for some people than others, and depending on how reliable and knowledgeable your supplier is, you may be able to obtain a generalist detritivore that will thrive in a well-established reef aquarium. If, however, you simply purchase an animal at random, remember that specialist feeders outnumber generalists in this group; there is therefore a good chance that you will get a species that is a highly specific feeder, and for which you may have no way to provide food. For this reason I generally recommend that people avoid buying one of these animals unless they can first determine what they feed upon.
Furthermore, the few studies that have been done on the feeding habits of these animals indicate that they are very active feeders, and being among the most active of echinoderms, they require a lot of food energy. Research indicates that, regardless of when they are active, these animals appear to feed only at night (Hammond 1982b), and food processing is very rapid – particles ingested by the cucumber are voided from the gut within only about 1 hour! Even when animals were seen apparently feeding during the day, the researchers found upon collecting them that these animals inevitably had digestive systems that were completely empty (Hammond 1982b). This is where I come back to the earlier statement that although these animals are normally nocturnal, they often lose their avoidance of the light and are sometimes seen cruising around the tank in full daylight. Although similar studies have not been done on any animals kept in a reef aquarium for any length of time, studies on natural reefs indicate that regardless of what looks like normal feeding behavior during the day, the animals were not actually ingesting any particles, and were therefore not feeding. We do not know why these animals lose their strict nocturnal schedule in captivity, but there are a couple of likely possibilities. First, the animals may be removed from the predation pressure that they face on the coral reef, and begin to become bolder over time in captivity. This is possible, because I have also observed that the same tends to occur in shallow lagoons and other protected habitats where, in the absence of predators, these cucumbers are found in high density during daylight hours. The other possibility is that the animals are starving to the point that they abandon any semblance of their natural behavior and start to feed during the daylight.
If the former explanation is correct, then (as in natural populations) the animals may appear to be active during daylight hours, but are probably not actually feeding. If the latter is correct, then the animals are doomed to slowly waste away in the absence of sufficient or appropriate food in the aquarium.
This is an important distinction, because even if you are fortunate enough to get a generalist feeder, unless you have a well-established tank with plenty of fine organic detritus upon which the animal can feed, it is likely to starve to death. These animals are certainly poorly suited to non-reef aquaria or even traditional Berlin-style reef tanks that are maintained with a bare or only slightly covered bottom, and from which detritus is regularly removed. Like most marine invertebrates, these animals are capable of going for long periods of time without food. Many marine invertebrates can withstand several months or more of starvation, during which time they slowly shrink while digesting their internal organs. Depending on the species in question, its initial condition when brought into captivity, and how often it manages to locate a suitable food source within the aquarium, many marine invertebrate species could take more than a year before they succumb to starvation.
In addition to the issue of proper feeding, another important consideration for getting one of these cucumbers is that these animals have a highly reduced skeletal system. Like all echinoderms, the skeletal system is composed of a series of tiny calcareous plates (ossicles) embedded in the skin of the animal. In synaptid cukes, these ossicles are reduced to simple hooks called “anchor ossicles” which project through the skin and give the animal adhesion to the substrate (and anything else they touch). Anyone who has handled one of these cucumbers can vouch for how “sticky” they are (due to their tiny anchor hooks snagging into anything they touch), and if you place the animals into a small container (like an aquarium shipping bag) they often get snagged even on themselves! Because they lack the tube feet that other echinoderms use to crawl about, synaptids must crawl about somewhat like an earthworm, using a combination of their muscular hydrostatic body and the anchor ossicles. The muscles in the body squeeze fluids around in much the same way that a water balloon is deformed when we squeeze it in our hand. However, unless there is something to push against, the animal would simply squeeze out and retract into the same location. The anchor hooks of these cucumbers function in much the same way as the bristles (chaetae) of a worm: they provide traction against which the animal can push to propel itself forwards. By sticking at their base and “squeezing” their water balloon body into an elongate shape, they are propelled forward. Then, by hooking in at the front end, and releasing the grip at the back, when they relax back into the short-n-rounded water balloon the front end is anchored while the back end is free to be sucked forward to join it. In this way, their flabby soft body allows them to crawl about and extend or retract their body with great flexibility.
Again, I would refer interested readers to a good invertebrate zoology textbook, such as Brusca and Brusca (1990) or Ruppert and Barnes (1994), for further details.
Their “half-filled baggie” look is deceptive, however, because these animals are the fastest and most active of the sea cucumbers, and are capable of rapidly crawling around the aquarium, or quickly withdrawing into a crevice when disturbed. The standard feeding behavior of reef-dwelling synaptids is to anchor about 1/3 of the body into some secure hole, and then extend the anterior 2/3 of the body to feed (they can do the water balloon trick with only part of their body if they want to). If something disturbs the animal, it can rapidly contract a set of muscles that run the length of the body to pull itself back into that hidey-hole and avoid being eaten. The problem, however, is that those anchor ossicles may be lying across some of your other invertebrates (such as a coral, for example), and when the cucumber retracts, it simply tugs those ossicles free of whatever it was previously anchored upon — this can dislodge and/or cause damage to the soft tissue of whatever the cuke was lying on at the time. Obviously this is a potential concern to aquarists who want their corals and rock-work to remain in place and undamaged.
So, with those warnings and caveats, I’ll come back to what I said at the beginning – these really are one of my favorite odd-ball invertebrates in a reef aquarium. It’s not because of any service or function they provide (although some species make excellent grazers or detritivores), but rather because I think that they make such an interesting addition (or at least conversation piece) for my reef aquaria. When possible, I try to find one of the generalist detritivore species, such as Euapta lappa, or a diatom grazer such as Synaptula hydriformis for my tank, because they tend to thrive in a well-established reef aquarium. If you have a moderately large reef tank with a deep sand bed and are feeding plankton on a regular basis, there ought to be plenty of organic detritus for a medusa worm such as one of these species to find. My cuke spends most of its time cleaning the underside of the rocks, and only comes out onto the surface of the sand and the rocks at night. You can supplement its feeding by adding a couple of sinking shrimp pellets to the area where it tends to hang out just before the lights go out. That will give the pellets time to soften and fall apart before the cuke starts to feed, and mine seems to really like the pellet mush (but so do the hermit crabs, brittle stars, conch, Nassarius snails and polychaete worms, and they all learn what the pellets are pretty quickly as well).
If provided with suitable conditions and sufficient food, some of these animals may even be able to reproduce in the aquarium. Unlike most echinoderms, sea cucumbers (holothurians) have only a single, well-developed gonad. In general, the majority of holothurians have separate sexes, but studies on a couple of synaptid cucumbers suggest that at least some of them are simultaneous hermaphrodites (containing both fully functional male and female reproductive tracts). In the case of Synaptula hydriformis, researchers found that these animals are not only simultaneous hermaphrodites, but are also capable of self-fertilization (Frick 1998). Even more unusual, the fertilized eggs of S. hydriformis are retained within the body (in the perivisceral coelomic cavity) of the adult and actually gain nutrition from the parent while they develop internally (Frick 1998). The young are protected and nourished in this way until they are released as fully-functional juveniles at about 8 mm in length. When combined with the generalist diatom grazing of this species (as mentioned earlier), the internal brooding makes it an ideal candidate for aquarium culture. Other species of synaptid cucumbers may also reproduce asexually (e.g., Leptosynapta tenuis – Hendler et al. 1995), and again, detritivores capable of asexual reproduction may make ideal candidates for the aquarium. Individuals of most species also have excellent regenerative capabilities, so that even when they are injured accidentally, if they are otherwise healthy and well-cared for in an aquarium, they are likely to make a full recovery. In fact, Smith (1971a,b) found that healthy animals were capable of complete regeneration from either the anterior or posterior end (head or tail) given sufficient nutrient reserves and appropriate conditions after injury.
Obviously the chances of aquarium reproduction are greatest in species that reproduce asexually or via brooded offspring. Although roughly 30 species of sea cucumbers brood their young, most of these are cold-water species, and the majority of other species spawn their gametes (eggs and sperm) into the water column to produce larvae. In one study that included four species of free-spawning synaptid sea cucumbers (Synapta maculata, Patinapta taiwaniensis, Polycheira rufescens and Opheodesoma grisea), researchers found that all of these animals tend to release their gametes late in the summer (Chao et al. 1995). This same study found that spawning is correlated with summer phytoplankton growth, and the authors suggested that direct or indirect feeding on phytoplankton-derived detritus is a critical component of the nutrition required to reach reproductive status in these animals. The mode of larval development in most species of synaptid sea cucumbers remains unstudied. However, for species such as Leptosynapta inhaerens for which the larval development is known, larvae spend a considerable time feeding in the plankton prior to metamorphosing into a tiny version of the adult. For species with this mode of development, successful reproduction in the aquarium is highly unlikely, and even if a concerted effort is made to raise the young, the chances of success are slight (see Toonen 2002 for a detailed explanation of why this is so, and my Home Breeders FAQ for details on how this is done).
All in all, despite how fascinating these animals are, they are not really recommended for any aquarium unless you are confident of the identification of the animal, can provide suitable conditions for the animal to feed, and are willing to live with the potential drawbacks of keeping one. If you have a well-established reef tank, take adequate precautions to protect any pump intakes and/or overflow drains, and can locate one of the generalist detritivores that are suitable for the aquarium, then I think that these animals make a fantastic addition to a reef tank, because they are both active and fascinating to watch.
- Brusca, R. C., and G. J. Brusca. 1990. Invertebrates. Sinauer Associates, Inc, Sunderland, Mass.
- Chao, S.-M., C.-P. Chen, and P. S. Alexander. 1995. Reproductive cycles of tropical sea cucumbers (Echinodermata: Holothurioidea) in southern Taiwan. Marine Biology 122:289-295.
- Cunningham, P., and P. Goetz. 1996. Venomous & Toxic Marine Life of the World. Pisces Books, Houston, TX.
- Delbeek, J.C., and J. Sprung. 1994. The Reef Aquarium, Vol. 1. Ricordea Publishing, Coconut Grove, FL.
- Fenner, B. 2000. Gad-Zooks Cukes! Sea Cucumbers: Not A Pretty Picture. WetWebMedia. http://www.wetwebmedia.com/seacukes.htm
- Frey, D. G. 1951. The use of sea cucumbers in poisoning fishes. Copeia:175-176.
- Frick, J. E. 1998. Evidence of matrotrophy in the viviparous holothuroid echinoderm Synaptula hydriformis. Invertebrate Biology 117:169-179.
- Hammond, L. S. 1982a. Analysis of grain-size selection by deposit-feeding holothurians and echinoids (Echinodermata) from a shallow reef lagoon, Discovery Bay, Jamaica. Marine Ecology Progress Series 8:25-36.
- Hammond, L. S. 1982b. Patterns of feeding and activity in deposit-feeding holothurians and echinoids (Echinodermata) from a shallow back-reef lagoon, Discovery Bay, Jamaica. Bulletin of Marine Science 32:549-571.
- Hammond, L. S., and C. R. Wilkinson. 1989. Exploitation of sponge exudates by coral reef holothuroids. Journal of Experimental Marine Biology and Ecology 94:1-10.
- Hendler, G., J. E. Miller, D. L. Pawson, and M. K. Porter. 1995. Sea Stars, Sea Urchins, and Allies: Echinoderms of Florida and the Caribbean. Smithsonian Institution Press, Washington DC.
- Kuznetsova, T. A., N. I. Kalinovskaya, A. I. Kalinovskii, and G. B. Elyakov. 1989. Structure of synaptogenin B, the artifact aglycone of the glycosides of the sea cucumber Synapta maculata. Khimiya Prirodnykh Soedinenii 5:667-670.
- Martínez, M. A. 1989. Holothuroideos (Echinodermata, Holothuroidea) de la region nororiental de Venezuela y algunas dependencias federales. Boletin Instituto Oceanografico Universidad de Oriente Cumana 28:105-112.
- Michael, S. undated online article. Sea Cucumbers. Aquarium Fish Magazine. http://www.animalnetwork.com/fish/library/articleview.asp?Section=&RecordNo=3597
- Pawson, D. L. 1986. Phylum Echinodermata. Pp. 522-541 in W. Sterrer, ed. Marine Fauna and Flora of Bermuda: A Systematic Guide to the Identification of Marine Organisms. John Wiley & Sons, New York, NY.
- Ponomarenko, L. P., A. I. Kalinovsky, O. P. Moiseenko, and V. A. Stonik. 2001. Free sterols from the holothurians Synapta maculata, Cladolabes bifurcatus and Cucumaria sp. Comparative Biochemistry and Physiology Part B: Biochemistry & Molecular Biology 128B:53-62.
- Ruppert, E. E., and R. D. Barnes. 1994. Invertebrate Zoology. Saunders College Publishing, Harcourt Brace Jovanovich Publishers, Orlando, FL.
- Ruppert, E. E., and R. Fox. 1988. Seashore Animals of the Southeast: A Guide to Common Shallow-water Invertebrates of the Southeastern Atlantic Coast. University of South Carolina Press, Columbia, SC.
- Smith, G. N., Jr. 1971a. Regeneration in the sea cucumber Leptosynapta. I. The process of regeneration. Journal of Experimental Zoology 177:319-330.
- Smith, G. N., Jr. 1971b. Regeneration in the sea cucumber Leptosynapta. II. The regenerative capacity. Journal of Experimental Zoology 177:331-342.
- Sprung, J. 2001. Invertebrates: A Quick Reference Guide. Sea Challengers, Danville, CA.
- Toonen, R. 2002. Aquarium Science: The captive breeding of tropical reef species for the aquarium trade, with specific attention to long-term planktotrophic larvae. Tropical Fish Hobbyist #557:66-72.
- Wilkens, P. (translated by K. Wood & D. Hagner) 1998. Death in a Colorful Package. Aquarium Frontiers Online May: http://www.animalnetwork.com/fish2/aqfm/1998/may/features/2/default.asp | <urn:uuid:e25d0f06-1be6-4c89-9366-269f2d63de6b> | CC-MAIN-2023-50 | https://reefs.com/magazine/aquarium-invertebrates-the-medusa-worms/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00180.warc.gz | en | 0.932866 | 6,642 | 2.921875 | 3 |
boycott
a group's refusal to have commercial dealings with some organization in protest against its policies
writs of assistance
legal document that enabled officers to search homes and warehouses for goods that might be smuggled
Sugar Act
halved the duty on foreign-made molasses, placed duties on certain imports, and strengthened the enforcement of the law allowing prosecutors to try smuggling cases in a vice-admiralty court
Stamp Act
an act passed by the British Parliament in 1765 that raised revenue from the American colonies by a duty in the form of a stamp required on all newspapers and legal or commercial documents
Boston Massacre
British soldiers fired into a crowd of colonists who were teasing and taunting them. Five colonists were killed. The colonists blamed the British, and the Sons of Liberty used this incident as an excuse to promote the Revolution.
Boston Tea Party
demonstration (1773) by citizens of Boston who (disguised as Indians) raided three British ships in Boston harbor and dumped hundreds of chests of tea into the harbor
Tea Act
Law passed by parliament allowing the British East India Company to sell its low-cost tea directly to the colonies - undermining colonial tea merchants; led to the Boston Tea Party
Intolerable Acts
This series of harsh laws was intended to make Massachusetts pay for its resistance. It closed down the Boston Harbor until the Massachusetts colonists paid for the ruined tea, and also forced Bostonians to shelter soldiers in their own homes.
Continental Congress
the legislative assembly composed of delegates from the rebel colonies who met during and after the American Revolution
Battle of Lexington and Concord
These two battles occurred on the same day, and were the first military conflicts of the war. Lexington came first: a shot suddenly rang out as minutemen were leaving the scene, and fighting broke out. The British won the brief fight. In the second battle, the British had gone on to Concord and, finding no arms, left to go back to Boston. On the bridge back, they met 300 minutemen. The British were forced to retreat, and the Americans claimed victory.
Battle of Bunker Hill
First major battle of the Revolution. It showed that the Americans could hold their own, but the British were also not easy to defeat. Ultimately, the Americans were forced to withdraw after running out of ammunition, and Bunker Hill was in British hands. However, the British suffered more deaths.
Olive Branch Petition
The colonists still pledged loyalty to King George III but asked Britain to respect the rights and liberties of the colonies, repeal oppressive legislation, and withdraw British troops from the colonies; George wanted nothing to do with the petition and declared all the colonies in a state of rebellion
Declaration of Independence
the document recording the proclamation of the second Continental Congress (4 July 1776) asserting the independence of the colonies from Great Britain
Samuel Adams
Founder of the Sons of Liberty and one of the most vocal patriots for independence; signed the Declaration of Independence
Sons of Liberty
A radical political organization for colonial independence which formed in 1765 after the passage of the Stamp Act. They incited riots and burned the customs houses where the stamped British paper was kept. After the repeal of the Stamp Act, many of the local chapters formed the Committees of Correspondence, which continued to promote opposition to British policies towards the colonies. The Sons' leaders included Samuel Adams and Paul Revere.
Patrick Henry
a leader of the American Revolution and a famous orator who spoke out against British rule of the American colonies (1736-1799)
Paul Revere
American silversmith remembered for his midnight ride (celebrated in a poem by Longfellow) to warn the colonists in Lexington and Concord that British troops were coming (1735-1818)
John Adams
America's first Vice-President and second President. Sponsor of the American Revolution in Massachusetts, and wrote the Massachusetts guarantee that freedom of press "ought not to be restrained."
John Jay
United States diplomat and jurist who negotiated peace treaties with Britain and served as the first chief justice of the United States Supreme Court (1745-1829)
George Washington
Virginian, patriot, general, and president. Lived at Mount Vernon. Led the Revolutionary Army in the fight for independence. First President of the United States.
Benjamin Franklin
American public official, writer, scientist, and printer. After the success of his Poor Richard's Almanac (1732-1757), he entered politics and played a major part in the American Revolution. Franklin negotiated French support for the colonists, signed the Treaty of Paris (1783), and helped draft the Constitution (1787-1789). His numerous scientific and practical innovations include the lightning rod, bifocal spectacles, and a stove.
Thomas Paine
Revolutionary leader who wrote the pamphlet Common Sense (1776) arguing for American independence from Britain. In England he published The Rights of Man
In 1916, a German U-boat sank a merchant marine ship flying Allied colors off the coast of Antarctica, somewhere between Elephant Island and Deception Island in the South Shetland Archipelago.
It was believed that all souls aboard the ship had been lost, along with its cargo of food and medical supplies bound for the Western front. That is, until a lone survivor was recovered some two years later in 1918 on an unnamed tidal island just off the north-west coast of the Antarctic Peninsula.
The survivor identified himself as Edward Allen Oxford, a British Imperial citizen. Despite two years having passed, he claimed to have been marooned for no more than six weeks on a nearby larger island which he insisted was warm and tropical, with abundant vegetation and wildlife.
Since the island on which he was discovered was a tidal island, it was not understood how he had survived there for so long. Moreover, no such island as he described was known to exist that far south, and there was a significant discrepancy between his account of the time that had passed and reality.
Oxford was therefore decreed ‘mad’ by Imperial authorities, an unsurprising conclusion given the circumstances, and was sent to a convalescence facility in Nova Scotia to recover.
At that facility, he met and fell in love with one Mildred Constance Landsmire, a so-called “bluebird” or Nursing Sister with the Canadian Army Medical Corps. He was released after 18 months, and the two married and moved westward to live near a cousin of Oxford’s who ran a small dairy farm in the province of Quebec, where Oxford helped his cousin with farm chores.
Oxford later took a job as a forester, as he did not have a knack for agriculture and farming. This work caused him to be away from his beloved Mildred for weeks and sometimes months at a time, a lifestyle with which he had been well acquainted as a merchant marine.
During this period, he penned many letters to his wife, in which he professed his undying devotion to her, and in which he extensively recorded his memories of having been marooned on his supposed tropical island off the coast of Antarctica.
Despite official denials of any such geographical anomaly in the region, Oxford stuck to his story throughout his whole life, and is believed to have written some two hundred letters to his wife describing various aspects of the fabulous land he supposedly discovered there.
Many of the letters found recently in their Quebec house described his life in the lumber camps of the region, along with detailed, vivid recollections of having been marooned on a supposed tropical island off the coast of Antarctica during the Great War.
Official Imperial records, now over a hundred years old, eventually confirmed that Edward Allen Oxford was a merchant marine, that his ship had been torpedoed, and that he was indeed recovered some two years later, without any rational explanation for how he had been able to survive for so long in such a harsh environment.
Today Oxford’s story has been largely forgotten, and what the world chiefly remembers of it is that officials called him “insane”. Yet no one could offer any explanation for how he’d survived in supposedly sub-zero temperatures without food for so long.
To know more about the strange case of Edward Allen Oxford, read this interesting article on Lost Books/Medium
This article has been republished in brief from Quatrian Folkways Institute/Medium | <urn:uuid:0afe7a9f-bb20-4a09-991f-504d8426de09> | CC-MAIN-2022-40 | https://mysteriesrunsolved.com/2022/07/edward-allen-oxford-antarctica-deception-island.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00429.warc.gz | en | 0.988601 | 707 | 2.953125 | 3 |
Lichen Collection and Identification
To collect lichens, you must first obtain permission from the landowner. Then it is on to collecting your specimens, which is not as easy as it sounds. Lichens grow slowly and take a long time to cover an area. You must be ethical when you collect. Please visit our Ethics and Native Plants web page for reasons to leave plants behind.
- Just as with plants, collect lichens only when you intend to use them later for identification or for adding to your collection.
- Never take more than you need, and never take the entire population in the area.
- Always leave some behind to recolonize the open space. If there is not enough to collect, then leave it behind. You do not want to remove the lichen from the environment completely.
- Remember that the lichen will die eventually when it is collected and stored indoors. Lichens are alive and should be treated with care.
- Wet the lichen first to prevent breakage during removal from its substrate. Remember that wet or damp lichens are more pliable and forgiving of mishandling.
- When storing the lichen for the trip to the lab or for long-term storage in your collection, always use a paper bag or paper envelopes. Do not use plastic bags, especially for wet specimens; the lichens will die more quickly and turn to mush.
- For more information about lichen collecting, please visit Harvard University, Farlow Herbarium: Lichens.
Remember that on National Forest Lands, it is illegal to collect vegetation without a permit. For collecting permit information on National Forests, please visit our Collection Permits web page.
Identifying lichens is much more difficult than identifying vascular plants. Each lichen thallus is a complete microscopic world with unique characteristics separating it from the other lichens.
Lichens are classified based on the fungus and fungal features. When identifying lichens, keep in mind that one species of fungus can have two different forms if paired with two different "photobionts". It is not common but it does happen.
In order to identify a lichen to species, lichenologists use common household chemicals and some not-so-common chemicals to test the color reactions of the unique compounds found in the structure of the lichen, as well as a lichen key to distinguish between species. Although a few of the chemicals are common, such as bleach and iodine, others are harder to obtain, costly, and dangerous. However, just about anyone can use a botanical identification key and a hand lens to identify the genus of a lichen and appreciate their collection.
Even if you are not interested in identifying lichens, they are still interesting and amazing organisms to look at with the naked eye as well as under a hand lens or microscope. Realizing the roles lichens play in our environment will give you a greater appreciation of the world around you.
Lichen Identification Links
- Wisconsin State Herbarium: Lichen Specimen Database
- USDA Natural Resources Conservation Service: PLANTS Database
- Lichenland - Fun with Lichens from Oregon State University. | <urn:uuid:cc1c39bd-298f-492e-a5b7-a6d3f3b27c77> | CC-MAIN-2020-29 | https://www.fs.fed.us/wildflowers/beauty/lichens/identification.shtml | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655933254.67/warc/CC-MAIN-20200711130351-20200711160351-00481.warc.gz | en | 0.933067 | 647 | 3.640625 | 4 |
Thirty years ago, when I took AP Chemistry, I cannot remember ever having a classroom lesson in Chemistry. What theory I had was given standing around the front bench; theory was the explanation offered to justify the laboratory work for that day. There were no chairs or tables, except for the electronic balances kept behind panelled glass. Coming back to High School, we have come full circle: labs are now the explanation offered to justify the paper work and theory for the lesson. Previously I had never thought of chemistry as theoretical, but that was the heyday of kitchen chemistry, when photographic memories were a blessing. What I learned then is hardly in the books anymore. Memory plays a small part. Chemistry is now more like Physics: theory learned with mathematics.
Chemistry was exciting and dangerous, even if poorly understood. Now its danger is recognized as more insidious and its liability incalculable. Contemporary intellectual elegance in Chemistry, together with technological advances, has streamlined and made redundant the endless chemical analyses and titrations of a former era. However, I feel we are left with a crisis: the swing from kitchen to computerised chemistry has emptied the emotionality, motivation and drama from Chemistry.
Here is not the place to take up the implications fully; the purpose of this unit is to address part of the issue of motivation and emotionality. What was once called physical chemistry is now center stage and all but defines Chemistry. Its theoretical elegance seems lost on my students, but maybe this could be changed if it were taught in the context of the great crisis and drama of our day and age: the struggle to preserve the biosphere. Water chemistry, solubility, atmospheric oxidation/reduction, environmental acids/bases, salts, toxic elements, fuel hydrocarbons and so forth could be the context in which core chemistry comes alive. Rather than teaching environmental chemistry as an extra, in the line margins or in pretty boxes in the text, one puts it center stage and uses it to justify physical chemistry or inorganic chemistry. The laboratory becomes the natural elements of the environment (water, air and earth) from which the theory takes shape for the student.
The idea is to take existing divisions in typical High School or University texts but, instead of letting the environment (or other issue/context) be the application or illustration of the theory, to let a study of the environment raise the issues that cause us to look at theory—just as, as a teenager, I took a break from lab work to appreciate theory. Rather than the environment, we could equally begin with new inventions, products on the market, local firms and enterprises, or chemicals used in the home, in the pharmacy or on the street. To some extent this is achieved already in undergraduate courses in Environmental Chemistry, but the focus even in Manahan's excellent book, 'Fundamentals of Environmental Chemistry', is on creating a separate field of study rather than on addressing the pedagogical issues in teaching General Chemistry. The following unit is an example of contextual chemistry with a difference, in that each class is structured by an open-ended experiment and the units are designed as a simulated journey through a real-world situation. In this case I have chosen the hydrological cycle and follow an imaginary water molecule as it moves from ocean to sky to earth, and through its encounters with 'civilization', before it returns to the sea. Each class is divided into a pre-lab (coming up with a hypothesis to test), a lab (students design an experiment to test the hypothesis), and a post-lab discussion (students integrate theory with their findings). The initiation is a discussion in which students talk about what they already know about the problem of the day. Relevant bites of theory are introduced, and an aspect of this is offered to the students to test. The suggested experiments are intended to be mini-labs that can be performed in a single or double 50-minute class session.
By tying problems, ideas, experiment, conclusions and theory into a single session, each class becomes relevant, experiential, hands-on, analytical, contextual, student-centred and theoretical.
In the first instance I was motivated to devise this unit to teach chemistry that was interesting to me as a teacher, to help recapture for students the love of chemistry that I had gained in a former time and place but that now seemed jeopardised by the inexorable march of 'progress', and to find a way to give meaning to chemical theory without it being intimidating to students in the normal everyday range of ability and intelligence. In this latter respect I was helped by the SESAP theories of teaching science, but also by my own philosophy of education, centred on the dictum that 'experience teaches'. The following unit therefore seeks to exemplify four pedagogical truths that I believe to be self-evident and critical in devising any successful chemistry course.
A.S. Neill, in 'Hearts Not Heads', provides my first proposition: that feeling awareness (experience) is prior to meaningful cognition. Mind, from birth onwards, can only find order from within experiences previously given by our feeling senses. Mind is otherwise empty abstraction or conditioned response (as in brainwashing). Textbooks can describe other people's experiences, and with empathy and imagination we can to some extent enter into their mindsets. In chemistry, where we are learning about the behavior of matter, we need to recapitulate the journeys of exploration of former scientists or enter into simulated journeys of discovery. The aim is not to learn facts but for the mind to find a meaningful order within a multi-dimensional experience.
My second proposition is given by Nietzsche. He was deeply opposed to reducing reality to a web of self-consistent logic. Truth is essentially fragmentary and is only grasped as we continuously subject it to experience. Theoretical chemistry is often presented as if reality consists of mathematical theorems of which chemistry supplies examples. We are lulled into believing that there is such a thing as a mechanistic metaphysic, with absolute laws constituting the bedrock. Chemistry is then a closed book or bible, of which the outer edges are gradually being filled in by scientists to provide the actual constitution of reality. In theory at least, one day there will be nothing more to be known. The consequence for the classroom is to turn teachers into catechists and students into learners of sacred scientific scripture. Ironically, science has been turned on its head by its own success. We can correct this by teaching it in the manner by which it gained its success, i.e. by subjecting mind always to experience.
Something, of course, is being built up in the grand edifice called science. For Robin Barrow in 'An Introduction to the Philosophy of Education', it is a body of knowledge agreed upon by a professional caucus of experts. A student is initiated as a neophyte, and the teacher guides him or her through the curriculum given by the academy. In the sense that experience is always mediated through culture (language belongs to culture, not necessarily to an individual), Barrow's views are profoundly true. Culture, like mind, is no less bound to the need to be subject to experience. The problem with raising curriculum to such an exalted position (as is typical in the classroom) is that the student is not really allowed to think until the end of the exercise. Only once we are brainwashed are we safe to ask questions. The resolution of our problem is to see the relationship of individual experience to corporate experience as a dialectical one. The teacher, as representative of the academy, and the student, as neophyte, are both confronted with and subjected to the universe of experience. The teacher acts as a guide, and in conversation with the student along the way of simulated experience such as experiments, a cathedral of thought is built up. The foundation, however, is not the academy or textbook, but experience generated in the classroom or field exercises.
It is basic to my theory that the central element in an instructional event is an experience that is inviting to the student and from which the student can learn. Contrary to popular notions, science is not about objectivity. It is about finding explanatory models (the subjective pole) that best fit experience (the objective pole) so as to build up knowledge (science). Science does not allow mere private experience, but it is only through private experience that we can make connections to public domains of experience. Subjectivity turns into objectivity once others confirm our grasp of experience by reproducing the events and findings. No amount of accurate use of SI units makes anything more objective; units are arbitrary, relative and based on convenience. It is their use in describing what happens in an experience that enhances objectivity. As we all know, however, experience cannot be taught. It is something that teaches us. Hence the style of these classes is open-ended. Experience is the best teacher.
Human Interference: Does It Lead to Sustainable Development?
Over the course of history, human interference has brought drastic change to the environment. Since Homo sapiens first walked the earth, we have been modifying the environment around us through agriculture and travel, and eventually through urbanization and commercial networks. At this point in earth's physical history, our impact on the environment is so substantial that scientists believe "pristine nature," or ecosystems untouched by human intervention, no longer exists. Despite this change, only through human interference will it be possible to develop new solutions. Addressing climate change and progressing towards sustainable development depend on economic growth that works with, rather than against, the environment. The ever-greater attention paid to intervention tools and techniques somehow obscures the fact that those creating these technical elements and benefiting from them should be made responsible for, and aware of, their role. Therefore, it is important to know whether these human interventions help us meet the needs of the present while remaining sufficient for the future, or whether they simply lead us to a less sustainable life.
As cited by Bird et al. (2011) in their study, when the Harvard biologist E.O. Wilson put forward the "Biophilia" hypothesis, he captured a notion that has long been embedded in diverse cultures around the world: briefly, the idea that throughout our evolution humans have sought contact with nature. Kusakabe (2012) noted that local governments in many countries have been seeking to improve sustainability since the launch of LA 21 in 1992. Sustainability is a moving target, dynamic and ever-changing, as the planet is changing. In the urge to improve sustainability, humans have undertaken many activities that have affected the environment both positively and negatively.
Humans have interfered with almost every aspect of the environment. A study by Baratt (2018) was conducted to evaluate energy budgeting, carbon footprints, and gaseous and soil health under conservation tillage with residue retention, in order to identify cleaner production technology in a rice-maize system. The world's main energy resources, meanwhile, come from fossil fuels such as petroleum, coal and natural gas. Fossil fuels supply 80% of the world's energy needs. Most industries use diesel machines for the production process, and the transportation sector also consumes significant amounts of diesel and gasoline. This situation leads to a strong dependence of everyday life on fossil fuels. However, the demand of a growing population is not covered by domestic crude oil production. Fossil fuels come from ancient animals and microorganisms, and their formation requires millions of years; thus, fossil fuels belong to the non-renewable energy sources. An increase in the oil price often leads to economic recessions, as well as global and international conflicts (Zhou et al., 2019).
On the economic side, there is ecofeminism, with its practice of involving women in the wise consumption of energy and environmental resources. With an ecofeminist framework, ERC research would take a more emancipatory approach. The research agenda would be focused on changing corporate and public policy so that the burden for the ecological crisis would not be placed on women alone. Policy changes would include mandatory ecological labeling for all consumer goods and stricter pollution regulations. Education programs designed to benefit consumers would be developed in order to liberate consumers from the complexities of a marketplace that has profit as its primary motive. These educational programs would teach the consumer not just to consume differently, but to consume less. This redirection would aid in the development of the "green citizen" and not merely the "green consumer" (Dobscha, 2013). Human interference also seeks to improve human quality of life in relation to the environment. The impact of forests on human health is a specific issue which has not been very visible within the larger framework of biodiversity, climate change, poverty, and human well-being. Forests and trees supply an abundance of ecosystem services that help in creating healthy living environments and in restoring degraded ecosystems. In addition to tangible products, forests mitigate floods, droughts, and the effects of noise; purify water; bind toxic substances; maintain water quality and soil fertility; help in erosion control; protect drinking-water resources; and can assist with processing waste water. Forests can mitigate climate change and may help in regulating infectious diseases. However, the ecosystem services and goods that forests provide are threatened by deforestation, pollution, biodiversity degradation, and climate change. Forests may advance the achievement of the UN Millennium Development Goals, especially combating hunger, poverty, and poor health (Sarjala, 2009).
People have been making great efforts to achieve sustainable development as a common goal. Against the many assertions that human interference assures the best quality of life, its adverse effects on the environment must also be considered, knowing that what we have now comes from that environment. Thus, will this human interference lead to sustainable development or not?
The researchers chose this study because it would be of great help to us humans, living in this beautiful yet now dying world, to identify human interference with the environment in terms of improving industry, human quality of life, the economy and agriculture. By doing this, the researchers will be able to know whether the environment can still sustain us in the future. To address this issue, the researchers gathered relevant articles. The researchers aim to offer further information on this topic and to identify the positive and negative effects of human interference on the environment. This study aims to determine the negative and positive effects of human interference on sustainable development. Specifically, its objectives are to identify existing human interference, to describe its positive and negative impacts on the environment, and to reach a reasonable conclusion as to whether these effects lead us to sustainable development.
Framework Of The Study
One of the first steps in conducting this study was to look for articles that discuss the various human interferences in the fields of agriculture (environment), economy, human life and industry. These are the four pillars of sustainability as stated by Dave Kepler. Knowledge of the existing human interference with these factors will allow the researchers to weigh whether each interference brings a positive or negative impact to the environment, and to analyze whether these interferences are enough to meet the needs of the present without degrading the quality of the future. Given this, the researchers will be able to come up with a reasonable conclusion that helps identify the effects of human interference on the four pillars of sustainability.
Human Interference to the Environment
According to Rapport (2009), controlling emissions of gases and particulate matter from agriculture is notoriously difficult as this sector affects the most basic need of humans, i.e., food. He also added that the current policies combine an inadequate science covering a very disparate range of activities in a complex industry with social and political overlays. Moreover, agricultural emissions derive from both area and point sources. Agricultural emissions play an important role in several atmospherically mediated processes of environmental and public health concerns. It also contributes to the global problems caused by greenhouse gas emissions. Given the serious concerns raised regarding the amount and the impacts of agricultural air emissions, policies must be pursued and regulations must be enacted in order to make real progress in reducing these emissions and their associated environmental impacts (Russell, 2013).
A global food revolution based on a new paradigm for agricultural development is urgently required. Without this shift, we are unlikely to attain the twin objectives of feeding humanity and living within boundaries of biophysical processes that define the safe operating space of a stable and resilient Earth system. Global sustainability is increasingly understood as a prerequisite to attain human development at all scales, from local farming communities to cities, nations, and the world. The reason is that we have entered a new geological epoch, the Anthropocene, where human pressures are causing rising global environmental risks and for the first time constitute the largest driver of planetary change. Agriculture is at the heart of this challenge. It is the world's single largest driver of global environmental change and, at the same time, is most affected by these changes. Agriculture is the key to attaining the UN Sustainable Development Goals of eradicating hunger and securing food for a growing world population of 9–10 billion by 2050, which may require an increase in global food production of between 60 and 110% in a world of rising global environmental risks. Agriculture is also the direct livelihood of 2.5 billion smallholder farmers, and the resilience of these livelihoods to rising shocks and stresses is currently gravely under-addressed (Stone, 2011).

The main causes for the depletion of forestry resources in the developing countries are their use by an ever-growing population for infrastructure and industrial development. The use of forestry resources by the human population occurs generally in two ways. Firstly, the human population uses forestry resources for its intrinsic growth in the form of fuel, medicine, etc., directly by cutting trees, plants, herbs and so on without clearing the forest land. Secondly, for the development of infrastructure, forest stands are cut in large segments to construct farm houses, housing colonies, and health and recreation centers, to set up industrial units, to use land for agriculture, etc. The number of such development projects increases immensely as population density increases, leading to greater industrialization. Where wood-based industries are concerned, forest trees are used for manufacturing logs, planks, wooden tiles, furniture, etc., by cutting forest stands (Horwitz, 2009).
Rapid economic growth, industrialization, and urbanization have led to extremely severe air pollution that causes increasing negative effects on human health, visibility, and climate change. It may be noted here that due to excessive increase in population density many kinds of precursors, both social and environmental, appear in the habitat. One environmental precursor is pollution, the effects of which on forest resources have been studied by many investigators. The other is population pressure which is caused by excessive increase in population density in and around industrial units in forest habitat leading to augmentation of industrialization. There are very good local assessments of social vulnerability and exposure to environmental hazards.
According to Modak et al. (2019), a serious problem, especially in Eastern and some Southern EU Member States, is the ongoing use of coal as a source of domestic heating, which causes significant air pollution. There are, however, various national programs of subsidies supporting the switch from coal-based domestic heating to gas and other less polluting sources that target the poorest households.
In Geldermann et al. (1999), the environmental impact of kerosene burning during the flight of an aircraft is examined with an LCA. The ecological evaluation shows the substances which contribute to the potential environmental impacts caused by the kerosene burning. Brentrup et al. (2004) describe an LCA method to evaluate the environmental effects which are relevant to crop production. The study summarizes the environmental impacts into the following two indicators: human health and resource depletion and impacts on ecosystems.
Human Interference to Sustainable Development
The term ‘sustainable development’ is increasingly in frequent use in a wide range of settings, but it can be a nebulous and ambiguous concept. Although sustainable development is often interpreted as being driven by a purely environmental agenda, one of its key features is that it focuses on the relationship between social justice, human health and well-being, and economic development. In essence, it is about an integrated approach to development that aims to improve quality of life and meet the needs of current and future generations, whilst simultaneously protecting and enhancing the natural environment upon which we all depend.
Protecting and preserving the natural ecosystem's resources has been a main priority of decision-makers and top managers in various business fields, according to Howard-Grenville et al. (2014). Creating a balance between resource consumption and economic development is a challenge that obliges firms to implement environmentally friendly business activities that improve their economic, social, and environmental performance (Chan et al., 2012). The rapid increase in pollution from industrial practices, accompanied by a decline in natural resources, has driven governments, governmental associations, environmental agencies and society as a whole to push firms and corporations to adopt green practices on a larger scale, where implementing such practices will lead to operational development, economic gain, and improvement of organizations' environmental performance and competitive advantage (Mousa and Othman, 2019).
Even if cleaner technology can be implemented, will the reductions in pollution be enough? John Fien (2015) argues that they will not, if production continues to grow. Unless substantial change occurs, the present generation may not be able to pass on an equivalent stock of environmental goods to the next generation. “Firstly, the rates of loss of animal and plant species, arable land, water quality, tropical forests and cultural heritage are especially serious. Secondly, and perhaps more widely recognized, is the fact that we will not pass on to future generations the ozone layer or global climate system that the current generation inherited. A third factor that contributes overwhelmingly to the anxieties about the first two is the prospective impact of continuing population growth and the environmental consequences if rising standards of material income around the world produce the same sorts of consumption patterns that are characteristic of the currently industrialized countries. Even if people put their faith in the ability of human ingenuity in the form of technology to be able to preserve their lifestyles and ensure an ever increasing level of the needs for everyone, they cannot ignore the necessity to redesign our technological systems rather than continue to apply technological fixes that are seldom satisfactory in the long term.”
The main objective of the study is to identify human interference with the environment. This is important because we need to know the good and bad effects of our actions on the environment, and to learn whether those actions will help us sustain the world so that the world can still sustain us in the future, without our needing to migrate from this planet. Based on the findings, the burning of fossil fuels in the industrial sector, the emissions of gases and particulate matter from the agricultural sector, and pollution are the most notable human interferences with the environment. Among the three, the burning of fossil fuels and pollution were found to have the most destructive effects on the environment, which calls for drastic measures to be taken against both. Genetic modification, meanwhile, is the most notable human interference on the positive end of the spectrum, with the least negative effect in comparison to the others examined.
Osteoarthritis (OA) is the most common joint disorder.
Hypertrophic osteoarthritis; Osteoarthrosis; Degenerative joint disease; DJD; OA; Arthritis - osteoarthritis
Causes, incidence, and risk factors
Osteoarthritis is caused by 'wear and tear' on a joint.
- Cartilage is the firm, rubbery tissue that cushions your bones at the joints, and allows bones to glide over one another.
- Cartilage can break down and wear away. As a result, the bones rub together, causing pain, swelling, and stiffness.
- Bony spurs or extra bone may form around the joint, and the ligaments and muscles around the hip become weaker and stiffer.
Often, the cause of OA is unknown. It is mainly related to aging. The symptoms of OA usually appear in middle age. Almost everyone has some symptoms by age 70. However, these symptoms may be minor. Before age 55, OA occurs equally in men and women. After age 55, it is more common in women.
Other factors can also lead to OA.
- OA tends to run in families
- Being overweight increases the risk of OA in the hip, knee, ankle, and foot joints
- Fractures or other joint injuries can lead to OA later in life
- Long-term overuse at work or in sports can lead to OA
Medical conditions that can lead to OA include:
- Bleeding disorders that cause bleeding in the joint, such as hemophilia
- Disorders that block the blood supply near a joint, which can lead to avascular necrosis
- Other types of arthritis, such as chronic gout, pseudogout, or rheumatoid arthritis
Review Date: 10/28/2010
Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc. | <urn:uuid:59da9ac6-48aa-483c-a5fb-ddb32d6b6954> | CC-MAIN-2013-20 | http://www.healthcentral.com/osteoarthritis/risk-factors-3766-108.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700563008/warc/CC-MAIN-20130516103603-00046-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.933506 | 437 | 3.390625 | 3 |
The Neurotic Dog
A neurosis may be defined as a functional nervous disorder with no sign of disease of the central nervous system. Psychoneurosis is described as an "emotional maladaptation" due to unresolved unconscious conflicts, and may also be used to describe the condition of many so-called neurotic dogs. This means that, to recognize a neurotic dog, we must identify some defective nervous behavioral functions while ruling out physical injury or disease, such as hydrocephalus, brain tumors, etc.
This can be done in some cases through neurological examinations. Urine and blood analysis can often indicate internal chemical imbalances which have an organic cause. On the other hand, they may also indicate the presence of severe environmental stressors. Combined with behavioral information, physiologic examinations might indicate a neurosis or the basis for a psychosis. For practical purposes, a dog may be considered neurotic if he shows signs of a functional nervous disorder combined with behavior that is both abnormal and maladaptive for dogs in general.
But how is a functional nervous disorder described in behavioral terms? The following descriptions are helpful:
* The dog that fails to inhibit the orienting (alerting) response to stimuli that occur repeatedly and are known to the animal to be neither harmful nor rewarding. These dogs are almost always in a state of anxiety.
* The dog that responds to novel objects, sounds, touches, movements and even odors with exaggerated active or passive defensive responses. These dogs often lack adequate early social experience.
* The dog that fails to retain (in some cases, even to develop) voluntary or involuntary conditioned reflexes. This cannot be applied to the dog's total behavior, but usually is pertinent to a failure to form and/or retain learned associations involving defense and social behaviorisms.
* The dog that displays hyperkinesis. Signs include excessive salivation, elevated pulse and respiration, abnormally low urine output, and increased energy metabolism revealed through excessive, sometimes stereotyped activity, especially in close confinement.
* The dog that displays fixations on objects, exhibiting ritualized behavior, usually repetitive and with no apparent objective. "Obsessive-compulsive" is the current diagnostic label of choice. While it is often treated with drugs, careful diagnosis shows that these dogs are suffering from frustration due to a lack of function in their lives. They are "making work," and receiving internal neurochemical rewards.
Once upon a time, our ancestors ate only raw food. Then, about 2 million years ago, they learnt how to use the energy of heat. The result: not only did prehistoric humans change their food habits—the new food habits also changed humans.
No one can say when and where the first piece of meat was grilled. However, this much is certain: we began to play with fire over some prehistoric embers, more than 2 million years ago.
And what happened to the wild game—probably freshly slain—must have seemed like a miniature natural disaster to the first observer of this process: the fibres of the meat shrivelled in the heat of the flames within a short time; soon thereafter, the water contained in the muscles seethed, turned to steam, expanded to more than a thousand times its volume, and tore through the fibres to escape.
Hot fat came out of the fissures that had burst open in the meat and dripped with a sizzle into the flame. The pigments deteriorated. Bright red first became a soft pink and then grey.
The blaze was particularly savage and destructive at the edges of the tissue. Within a few minutes, it broke down the proteins and sugars there. Their components subsequently combined to form hundreds of new substances that left a brownish crust full of intensive aromas on the grilled meat.
What might seem banal to our contemporaries and could hardly be any more mundane—barbecuing or cooking food—brought several advantages to the humans in the Stone Age who were the first to master the new technique; they lived longer and produced more offspring.
And it probably played a decisive part in the evolutionary success of Homo sapiens.
“After all, the preparation of food is a skill that no other animal possesses,” says British anthropologist Richard Wrangham. “Without it, our brain would probably never have developed so magnificently and we would probably never have evolved as a species.”
To reconstruct the beginnings of cooking, one must first analyse ancient ashes and bones discovered in Africa and carry out anatomical studies. This is the only way to find the evidence required to understand why humans developed in such a different way from their closest relatives, the apes—not only with respect to their brain volume and upright walk, but also in the way in which they consume their food.
The teeth of Homo sapiens as well as their mouth are almost delicate as compared to those of a chimpanzee, for example. In addition, their jaw muscles and the bones to which these muscles are attached are rather weak. It would be impossible to crush hard-fibred plant stems with such a kit.
Besides, the human large intestine is extraordinarily small when compared to that of a gorilla. This is why humans digest fresh leaves, fruits and vegetables much less effectively than other large primates. The digestive tract also makes them vulnerable—the consumption of raw meat can sometimes even be life-threatening for humans.
But how could a species with so many deficiencies conquer the entire planet? How do the two fit together: weakness and ascent?
Until 2.5 million years ago, the ancestors of Homo sapiens still resembled apes: fossils from that time show that prehistoric humans had long arms and short legs. With an average size of 450 cm³, their brains were hardly bigger than those of present-day chimpanzees.
Cooked food takes the load off the intestines, giving extra energy to the brain
In contrast, they had a prominent ribcage and pelvis, which hints at a spacious gastrointestinal tract. And such a long digestive system was especially suited to processing a plant-based diet, which is why palaeoanthropologists believe that these ape-humans primarily survived on plants.
However, more than 600,000 years later, our ancestors changed enormously: they became more human-like. Their legs became longer and their arms shorter. The thorax and pelvis became smaller—an indication of a reduced gastrointestinal tract. And the chewing surface of the molars shrank by more than 20 per cent, which researchers were able to deduce from the fragments of fossilised teeth. At the same time, the brain gained about 40 per cent volume.
These were radical changes. And according to Richard Wrangham, it was the art of cooking that resulted in these coming about.
According to his theory (which is accepted by several researchers), prehistoric humans had learnt in the course of these 600,000 years how to use stone tools to cut up dead animals and to smash their bones and scrape them out in order to get to the marrow. Incisions in fossilised bones testify to the use of stone blades.
Obviously, our predecessors tapped a rich source of calories—meat—which also led to the brain becoming bigger over several generations.
“However, this alone is not enough to explain the immense increase in cerebral matter,” says Wrangham, “and it does not explain the shortening of the intestine at all.”
Hence, he believes that our ancestors learnt something else in these years: optimising their meat consumption through fire.
It is still unclear exactly when the first barbecue in human history took place. However, the oldest fire pits in Africa are more than 1.9 million years old. It is possible that nature provided an object lesson back then: it could be that roasted big game was left behind after bush fires and it turned out to be delicious.
Heat alters meat: it becomes softer, can be chewed more easily, and acquires a more intensive taste. In addition, the cell walls of microbes in the tissue dissolve and they die away en masse. This is why roasted meat can be eaten even days later without the risk of a microbial contamination.
Besides, fire increases the nutritional value of meat, tubers and roots since the nutrients of heated food are much better exploited in the human digestive tract: the human body absorbs only about 65 per cent of the components of raw eggs and more than 90 per cent from boiled eggs.
Even starch, such as in wheat or potatoes, is sometimes twice as valuable when cooked as it is otherwise. It is the heat that makes the tiny starch grains swell and absorb large quantities of water; their strong bonds dissolve in this process. Hence, the enzymes present in the digestive tract can break down the nutritious granules more easily.
Through the use of heat, our predecessors were able to use their foodstuff much better—and also to acquire new sources of nutrition since heating made several plants that are otherwise too hard, too fibrous or full of bitter-tasting compounds softer, sweeter or at least less bitter. Also, numerous plants only lose their toxins through heating—for example, cassava and bamboo (which contain hydrocyanic acid) as well as unpeeled potatoes and string beans.
According to Wrangham, the preparation of food henceforth had an ever stronger influence on the digestive organs of prehistoric humans. Now they could carry out a part of the process of digestion outside the body since they processed their food beforehand, making it softer and more digestible.
Since humans cook their food, they spend just 5% of the day eating
Gradually, their lips, mouth, teeth, jaw bones and gastrointestinal tract adapted to the reduced requirements and became smaller.
All these changes facilitated the development of the brain: the shorter the intestine became in the course of evolution, the less energy it required. Thus, the surplus energy benefited other organs, primarily the brain, which was now better supplied, and thus increased in volume. To be more precise, an enormous expansion of the human brain came about.
In addition, our ancestors no longer needed to spend hours chewing stringy meat or fibrous plants any more. Hence, they had more time to prepare their food and to look for new kinds of food, to gather fruits, or lie in wait for animals. And even better: those individuals who got more calories from the food improved their chances of survival, produced healthier offspring and thus ensured the continued existence of their species.
This physical change continued to scale new heights until the anatomically modern version of Homo sapiens developed in Africa about 200,000 years ago. Their intestine was even smaller and consumed 10 per cent less energy than that of the Great apes.
And although the human brain accounted for only about 2 per cent of body mass, it now required up to a quarter of the nutritional energy.
From Africa, Homo sapiens finally conquered the rest of the world. Another 80,000 years later, modern humans were already using elaborate techniques for cooking. They cooked meat in natural vessels, such as tortoise shells. And they probably collected the fat of roasted seals in clam shells. In any case, researchers later observed this technique among the Yamana tribe in Tierra del Fuego, who lived in a very traditional way for a long time.
In addition, sanding marks on stones suggest that Homo sapiens of the Stone Age also pulverised hard grass seeds into flour.
And perhaps the flour congealed into a thick paste one day in the rain and later hardened in the sun: this could be how the first flatbread, which could be preserved for several days, came into being.
Then, at some point, our ancestors discovered another important procedure in using flour: fermentation.
Since the flour sometimes remained moist for several days in the rainy season, fungus and bacteria spread within it and it began fermenting. Some of the microbes produced high-quality proteins that humans could now ingest. During the process of fermentation, carbon dioxide was also produced, which aerated the sticky dough and made it easier to digest.
With the passage of time, humans further developed their cooking methods. They learnt how to cook their food more efficiently with the help of earth ovens, which appeared some 30,000 years ago. The glowing stones in the cooking pit supplied an even heat that enabled them to control the cooking of tubers and meat over a longer time.
Well sealed and filled with water, the pits were probably also where the first soup was cooked.
Then, 16,000 years ago, humans made the first ceramic vessels to cook in. However, it was probably only in ancient times that they learnt how to preserve food for months. They hung up strips of meat or fish in the smoke from a fire or cured them with salt. Thanks to this technique, the tissue acquired more taste, simultaneously losing its juice—which is why bacteria could not multiply in it.
It was another breakthrough, a kind of cooking without heat: smoking made it possible to stock up food, which helped humans to survive in inhospitable regions, or during long sea voyages.
Another culinary revolution occurred at the same time: the cooks discovered the effects of spices. These made all food more delicious and also prevented bacteria from multiplying in the meat. Onions, garlic, oregano or pepper killed several kinds of microbes, and in doing so acted against a variety of diseases and contamination. Pepper was also used to promote digestion.
Since non-refrigerated meat spoilt fast in hotter regions, the people in the tropics started to flavour their food particularly strongly, which enabled them to make it last longer.
Thanks to the development of ever more refined culinary techniques, it was probably not only human anatomy that changed but also the human immune system, leaving people more helpless against plant toxins.
Now, after several generations of surviving on cooked and roasted food, the microbes present in raw meat or eggs are also dangerous for Homo sapiens. Obviously, humans have gradually lost some of the defence mechanisms that once effectively protected them from salmonella or E. coli.
And yet, cooking their food has helped Homo sapiens to emancipate themselves from their direct environment and to inhabit the most barren zones of the planet, the high mountain regions, the rocky deserts, the Arctic Zone. The South American Andes are also an inhospitable habitat. Very few edible plants grow there. One of them is the potato, by far the most important food resource—which, however, would be inedible unless cooked.
It is true that humans have made themselves dependent on their new culinary techniques and become physically vulnerable. However, they have also more than offset these weaknesses with the same culinary art.
Today, there is no society that lives exclusively on uncooked food (people who only eat raw fruits, meat or vegetables almost inevitably lose weight).
And despite all the differences in the food preferences of various human cultures, no other primate today enjoys such a premium-quality, high-energy diet as humans—regardless of whether their diet consists primarily of plant-based or meat-based food.
And another thing: today we need just 5 per cent of our waking time per day on average to nourish ourselves.
Without cooked, softer food we would be spending almost half the day sitting at the dinner table.
A five-volume boxed set, Modernist Cuisine: The Art and Science of Cooking by Nathan Myhrvold, Chris Young and Maxime Bilet is published by Taschen Books. | <urn:uuid:0f4df87a-7df4-4075-9f47-7881f45f6f47> | CC-MAIN-2018-05 | https://www.outlookindia.com/magazine/story/how-cooking-made-humans-smart/283255 | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890893.58/warc/CC-MAIN-20180121214857-20180121234857-00404.warc.gz | en | 0.966923 | 2,681 | 3.625 | 4 |
Dec. 18, 2009 Research Highlight Physics / Astronomy
Supernovae host a pasta dinner
The nuclei in the core of a collapsing supernova can form a range of unusual ‘pasta-like’ structures
The dense matter at the interior of a collapsing star—or ‘supernova’—is unlike anything that can be replicated in a laboratory. Scientists therefore rely on simulations to predict the behavior of electrons, protons and neutrons in these stellar explosions.
Now, scientists at the RIKEN Nishina Center for Accelerator-Based Science in Wako, and several other institutions in Japan, have shown that the proton- and neutron-containing nuclei at the core of supernovae are likely to form a range of unusual shapes1. These structures, called ‘pasta phases’ because of their similarity to strands of spaghetti or flat slabs of lasagna, are different from the mostly spherical nuclei found at the center of atoms and are likely to affect the dynamics of supernova explosions.
Predictions that pasta phases form in supernovae are not new, but the earlier work was based on models that assumed the nuclear structures did not change through time. In an actual collapsing supernova, however, the core density is not static. Gentaro Watanabe, a lead author of the paper, and his colleagues therefore performed new simulations to look for pasta phases in a supernova. They used a method called ‘quantum molecular dynamics’ that takes into account the realistic time evolution of nuclei at the supernova core.
The researchers started by assuming that large spherical nuclei are distributed periodically in a lattice such that the total density is about 15% that of normal nuclear matter. They then simulated what happens to the nuclei as the lattice is compressed. They found that the spherical nuclei merge into zigzag shapes and ultimately into columns (Fig.1), confirming that pasta phases should exist in supernovae.
Scientists had long believed that what caused spherical nuclei to deform into longer rod-like shapes was a so-called fission instability that forced the nuclei to break apart. “The actual formation process of the pasta phases is very different from this generally accepted scenario,” says Watanabe. He and his team have shown that the attraction between neighboring nuclei is what drives the shape changes.
In addition to advancing the understanding of nuclear structures, the results will also be important for astrophysics. “Supernova explosions are very complicated phenomena, and we do not know exactly how pasta phases change the dynamics of supernova explosions,” explains Watanabe. “In the future, we would like to simulate the collapse of the supernova core, taking into account the effect of the pasta phases.”
- 1. Watanabe, G., Sonoda, H., Maruyama, T., Sato, K., Yasuoka, K. & Ebisuzaki, T. Formation of nuclear “pasta” in supernovae. Physical Review Letters 103, 121101 (2009). doi: 10.1103/PhysRevLett.103.121101 | <urn:uuid:b2cf3ad6-869e-41de-80ac-72611e1d3181> | CC-MAIN-2023-40 | https://www.riken.jp/en/news_pubs/research_news/rr/6125/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.35/warc/CC-MAIN-20230923062631-20230923092631-00324.warc.gz | en | 0.93497 | 657 | 4.09375 | 4 |
Hi All, I'm working thru RHE & on p39 there's a discussion on integers overflowing. I can see what the code's doing but when I try to work it by hand I get the wrong answer. Although the text says don't worry about the detail, I am interested in the detail. The code looks like this: int a = 12345, b = 234567, c; c = a * b / b; System.out.println(c); results in -5965. OK my questions are: 1) does a * b get evaluated first? Is left-to-right always the case with evaluation? 2) a * b results in 2895729615, which I know is larger than the integer range, but how does Java arrive at -1399237681? I know this is tedious detail but I'm just getting a bit hung up on it. Regards Paul
Hi Paul, 1) When operators have the same precedence, evaluation is left to right. * and / have the same precedence. 2) 2895729615 = 10101100100110010101001111001111 in binary. Note that the most significant bit is 1, which means the number is negative. Bala.
1) does a * b get evaluated first? Is Left to Right always the case with evaluation? Order of evaluation is described in detail in the JLS (§15.7). For multiplication and division, yes, it is from left to right.
2) a * b results in 2895729615. Which I know is larger than the integer range but how does Java arrive at -1399237681. If you are running under Windows, pull up the Calculator and set the view to Scientific. Select the "Dec" radio button, then type in 2895729615. Now select the "Bin" radio button. You will see the binary equivalent, which is (spaces added for readability): 1010 1100 1001 1001 0101 0011 1100 1111 The leftmost bit is the sign bit. 0 in the sign bit indicates a positive number, 1 indicates a negative number. That makes the binary representation above a negative number. To find out what the magnitude of that number is, you need to calculate its two's complement: To get the two's complement of 1010 1100 1001 1001 0101 0011 1100 1111 invert all the bits (if you still have Calculator up, click on the "Not" button) 0101 0011 0110 0110 1010 1100 0011 0000 then add 1, which will give you 0101 0011 0110 0110 1010 1100 0011 0001 Now, select the "Dec" radio button and you should see 1399237681 That is the magnitude of the negative number represented by the original: 1010 1100 1001 1001 0101 0011 1100 1111
[This message has been edited by JUNILU LACAR (edited June 21, 2001).] | <urn:uuid:1bf78a74-b663-4ae5-8ddd-bd5937d27278> | CC-MAIN-2023-50 | https://coderanch.com/t/200000/certification/Arithmetic-Operators | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100146.5/warc/CC-MAIN-20231129204528-20231129234528-00810.warc.gz | en | 0.881645 | 605 | 3.515625 | 4 |
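The arithmetic Junilu walks through can be confirmed directly in Java. The following standalone class is a small illustrative sketch (not code from the book under discussion) showing the wrapped product, the resulting division, and how widening to long avoids the overflow:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int a = 12345, b = 234567;

        // a * b mathematically equals 2,895,729,615, which exceeds
        // Integer.MAX_VALUE (2,147,483,647). Java keeps only the low
        // 32 bits, so the result wraps around to a negative number.
        int product = a * b;
        System.out.println(product);      // -1399237681

        // Dividing the wrapped product by b gives the "wrong" answer.
        int c = a * b / b;
        System.out.println(c);            // -5965

        // Doing the multiplication in 64-bit long avoids the overflow.
        long wide = (long) a * b;
        System.out.println(wide);         // 2895729615
        System.out.println(wide / b);     // 12345

        // The wrapped value is the true product minus 2^32.
        System.out.println(2895729615L - (1L << 32)); // -1399237681
    }
}
```

Run with `java OverflowDemo`; the cast to long before the multiplication is the usual fix when intermediate products can exceed the int range.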
Thomas Jonathan Jackson was born on January 21, 1824 in Clarksburg, Virginia. He entered West Point in July 1842 and, in spite of his poor childhood education, worked hard to graduate seventeenth in his class in 1846. Upon graduation, Jackson was sent on military duty to Mexico, and continued his service in the United States Army in positions in New York and Florida. In 1851, Jackson became professor of artillery tactics and natural philosophy at Virginia Military Institute in Lexington, Virginia. He resigned from the army as of February 29, 1852.
Jackson's summer vacations from teaching were often spent vacationing in the North and in Europe where his interests were aroused in art and culture rather than military or political aspects. This somewhat calm, domestic period in his life came to a close on April 21, 1861 when he was ordered to go to Richmond as part of the cadet corps. Since military aspirations had faded from his life, he was virtually unknown in this sphere.
It was during the Battle of Bull Run in the Civil War that Jackson acquired his nickname. Amidst the tumult of battle, Brigadier-General Barnard E. Bee stated, "There is Jackson standing like a stone wall." As the war continued, Jackson continually impressed his Confederate compatriots with his skill on the battlefield and in planning conferences. He distinguished himself in the Valley campaign of early 1862, the Battle of Second Manassas in August 1862, and the Battle of Fredericksburg in December 1862. Jackson was a Southern hero, and in spite of his eccentricities, he was loved and respected by his soldiers. He strictly observed the Sabbath, and his religiosity was constant in all facets of his life.
On May 2, 1863, in his last march of the Civil War, Jackson was wounded by friendly fire. He died of pneumonia several days later, on May 10, at Guiney's Station, Virginia. His body was carried to Richmond and then to Lexington, where it was buried. It is said that the Army of Northern Virginia never fully recovered from the loss of Stonewall Jackson's leadership in battle. General Robert E. Lee believed Jackson was irreplaceable.
sources: Dictionary of American Biography
Each one of our cells has the same 22,000 or so genes in its genome, but each uses different combinations of those same genes, turning them on and off as their role and situation demand. It is these patterns of expressed and repressed genes that determine what kind of cell – kidney, brain, skin, heart – each will become.
To control these shifting patterns, our genomes contain regulatory sequences that turn genes on and off in response to specific chemical cues. Among these are “enhancers,” sequences that can sit tens of thousands of genetic letters away from a gene, yet still force it into overdrive when activated. Missteps in this delicate choreography can lead cells to take on the wrong role, causing debilitating diseases, but the regulatory regions involved are difficult to find and study since they only play a role in specific cells, often under very specific conditions.
Now a research team led by University of California scientists has used a modified version of the gene-editing technique CRISPR to find enhancers – not by editing them but by prompting them into action. As reported online Aug. 30, 2017, in Nature, a team from UC San Francisco and UC Berkeley used a tool called CRISPR activation (CRISPRa), developed at UCSF in 2013, to search for enhancers of a gene that affects development of the immune cells known as T cells. The sequences they found illuminate fundamental circuitry of autoimmune disorders such as inflammatory bowel disease (IBD) and Crohn’s disease.
The work was conducted in the laboratories of Alexander Marson, MD, PhD, assistant professor of microbiology and immunology at UCSF, and Jacob Corn, PhD, assistant adjunct professor of molecular and cell biology at Berkeley.
“Not only can we now find these regulatory regions, but we can do it so quickly and easily that it’s mind-blowing,” said Corn. “It would have taken years to find just one before, but now it takes a single person just a few months to find several.”
Corn is co-founder and scientific director of the Innovative Genomics Institute (IGI), a Berkeley-UCSF initiative of which Marson is an affiliate member. The IGI aims to advance CRISPR-based genome editing in medicine and agriculture to cure human disease, end hunger, and protect the environment.
CRISPRa Turns on Enhancers
The advent of CRISPR has allowed researchers to make rapid progress in understanding protein-coding genes. In the most common application of CRISPR, an enzyme called Cas9 snips DNA at particular sequences specified by the sequence of a “guide RNA.” Using the technology, scientists can excise or edit any gene, and observe how these changes affect cells or whole organisms.
But sequences that code directly for proteins make up only 2 percent of our genome. Enhancers and other regulatory DNA elements spread throughout the other 98 percent are more difficult to study, but are implicated in a large number of genetic disorders. Scientists can look for potential enhancer sequences based on how they interact with proteins that bind to DNA, but figuring out which enhancers work with which genes is much more challenging. Simply cutting out an enhancer with CRISPR-Cas9 doesn’t help, because it won’t have a noticeable effect if the enhancer is inactive in the particular cell type used in an experiment.
If you think of the genome as a model home with 22,000 lightbulbs (the genes) and hundreds of thousands of switches (the enhancers), the challenges have been finding all of the switches and figuring out which lightbulbs they control and when. Previously, CRISPR has been used to cut out wires looking for those that would cause a bulb to go dark, giving a good idea of what that section of the circuit was doing. However, cutting out a light switch when it’s off doesn’t tell you anything about what it controls. So, in order to find certain light switches, it has been common to try to mimic the complicated chemical cues that activate an enhancer.
But using this method, “you can quickly go insane trying to find an enhancer,” said Benjamin Gowen, a postdoctoral fellow in Corn’s lab at Berkeley and one of the study’s lead authors.
A better approach would be a universal “on” switch that could target any part of the genome and, if that part included an enhancer, could activate that enhancer. Fortunately, CRISPRa, recently developed by Jonathan Weissman, PhD, professor of cellular and molecular pharmacology at UCSF and co-director of the IGI, is just such a tool. CRISPRa uses a "blunted" version of the DNA-cutting Cas9 protein, strapped to a chain of activating proteins. Although CRISPRa also uses guide RNA to target precise locations in the genome, instead of cutting DNA, CRISPRa can activate any enhancers in the area.
While the first applications of CRISPRa involved using a single guide RNA to find promoters – sequences right next to genes that help turn them on – the UCSF/Berkeley team behind the new study realized that CRISPRa could help find enhancers too. By targeting the CRISPRa complex to thousands of different potential enhancer sites, they reasoned, they would be able to determine which had the ability to turn on a particular gene, even if that gene was far away from the enhancer on the chromosome.
“This is a fundamentally different way of looking at non-coding regulatory sequences,” said Dimitre Simeonov, a PhD student in Marson’s lab at UCSF and the study’s other lead author.
Performing 20,000 Experiments at Once
The gene the team chose to study produces a protein called IL2RA, which is critical to the function of immune cells called T cells. Depending on conditions in the body, T cells have the ability to either trigger inflammation or suppress it. The IL2RA gene produces a protein that tells T cells that it’s time to put on their anti-inflammatory hats. If the enhancers that should turn on the gene have errors, the cells fail to suppress inflammation, potentially leading to autoimmune disorders like Crohn’s disease.
To track down locations of the enhancers that control IL2RA, the UCSF and Berkeley team produced over 20,000 different guide RNAs and put them into T cells with a modified Cas9 protein.
“We essentially performed 20,000 experiments in parallel to find all the sequences that turn on this gene,” Marson said.
Sure enough, targeting some of the sequences with CRISPRa increased IL2RA production, yielding a short list of locations that might be important for regulating the fate of T cells.
“Whenever you get a chance to ask a question in a totally new way, you can suddenly discover things that you would have missed with older methods,” said Gowen. “We found these enhancers without having to make any assumption of what they looked like.”
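In pooled screens of this kind, candidate sites are generally ranked by how strongly the guide RNAs targeting them are enriched in cells with high expression of the gene of interest. The sketch below is a simplified, hypothetical illustration of that idea only — the guide names and read counts are invented, and this is not the analysis pipeline the authors used:

```python
import math

# Invented sequencing read counts for a handful of guides: how often each
# guide appeared in the IL2RA-high sorted population vs. the unsorted pool.
guide_counts = {
    #  guide name        (high, unsorted)
    "promoter_g1":       (950, 300),
    "enh_candidate_g7":  (720, 310),
    "intergenic_g3":     (305, 295),
    "nontargeting_g9":   (280, 290),
}

def log2_enrichment(high, unsorted, pseudocount=1):
    """Log2 ratio of counts; strongly positive values suggest the
    targeted site can turn the gene on when activated by CRISPRa."""
    return math.log2((high + pseudocount) / (unsorted + pseudocount))

scores = {g: round(log2_enrichment(h, u), 2)
          for g, (h, u) in guide_counts.items()}
for guide, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(guide, score)
```

Guides with strongly positive scores would point to candidate enhancers, while near-zero scores, like the non-targeting control here, suggest no regulatory effect; real pipelines also normalize for library size and test statistical significance.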
Tying Mutant Enhancers to Inflammatory Disease
One of the likely enhancer sequences the team identified included the site of a common genetic variant that was already known to increase the risk of IBD, though how it did so was not understood. Marson and Corn’s teams wondered whether this genetic variation might alter the switch regulating the amount of IL2RA protein present in T cells. To test this, they modified mouse T cells so they contained the genetic variant associated with human disease, and found that these T cells indeed produced less IL2RA.
“This starts to unlock the fundamental circuitry of immune cell regulation, which will dramatically increase our understanding of disease,” said Marson.
The team next hopes to expand the method, perhaps by finding ways to search for enhancers of many different genes at once, making the search for regulators of immune disorders that much faster. And they expect the method to be a widely applicable tool for untying genetic interactions in all kinds of cells.
“We believe this is going to be a very generally useful method,” said Corn. “It would be easy for someone interested in neurons or any other cell type to pick it up and look for the enhancers involved in programming those cells’ behavior.”
Other authors on the study include Theodore L. Roth, Youjin Lee, John D. Gagnon, Alice Y. Chan, Dmytro S. Lituiev, Michelle L. Nguyen, Rachel E. Gate, Eric Boyer, Frederic Van Gool, Meena Subramaniam, Zhongmei Li, Jonathan M. Woo, Victoria R. Tobin, Kathrin Schumann, K. Mark Ansel, Chun Ye, William J. Greenleaf, Mark S. Anderson, and Jeffrey A. Bluestone of UCSF; Mandy Boontanrart, Nicolas L. Bray, Therese Mitros, Graham J. Ray, Gemma L. Curie, Nicki Naddaf, Julia S. Chu, and Hong Ma of Berkeley; Maxwell R. Mumbach, Howard Y. Chang, and Ansuman T. Satpathy of Stanford University; Hailiang Huang, Ruize Liu and Mark J. Daly of Harvard University; and Kyle K. Farh of Illumina Inc.
This research was supported by the National Institutes of Health (grants DP3DK111914-01, R01HG0081410-01, R01HL109102, P50-HG007735, S10RR029668, S10RR027303, and P30 DK063720), the Scleroderma Research Foundation, the UCSF Sandler Fellowship, a gift from Jake Aronov, a National Multiple Sclerosis Society grant (CA 1074-A-21), and the Marcus Program in Precision Medicine Innovation. Marson and Schumann have filed a patent on the use of Cas9 ribonuclear proteins to edit the genome of human primary hematopoietic cells. Chang and Greenleaf are co-founders of Epinomics. Marson serves as an advisor to Juno Therapeutics and PACT Therapeutics, and the Marson lab has received sponsored research support from Juno Therapeutics and Epinomics.
UC San Francisco (UCSF) is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. It includes top-ranked graduate schools of dentistry, medicine, nursing and pharmacy; a graduate division with nationally renowned programs in basic, biomedical, translational and population sciences; and a preeminent biomedical research enterprise. It also includes UCSF Health, which comprises three top-ranked hospitals, UCSF Medical Center and UCSF Benioff Children’s Hospitals in San Francisco and Oakland, and other partner and affiliated hospitals and healthcare providers throughout the Bay Area. | <urn:uuid:bbfdfc93-e512-4229-ba3f-298326d54297> | CC-MAIN-2022-33 | https://www.ucsf.edu/news/2017/08/408146/blunting-crisprs-scissors-gives-new-insight-autoimmune-disorders | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00137.warc.gz | en | 0.931772 | 2,301 | 3.46875 | 3 |
In your own language you know many words that sound the same but do not mean the same thing. They are homophones (= "same sound"). In English, too, there are many homophones, and it's important to learn to recognize and understand them. We use homophones all the time, even in everyday speech. They are also a common source of humour in jokes, and frequently occur in riddles.
These pages explain homophones and give examples with audio, and also list many homophones by level and by type.
E2 F6 is chess notation used to describe a particular move in the game of chess. In coordinate notation, a move is written as the square a piece starts from followed by the square it lands on, so E2 F6 refers to a piece travelling from the square e2 to the square f6. Depending on the position, such a move can be used to attack an opposing piece or to place a piece on a more advantageous square.
The notation E2 F6 is used to explain a chess move in an unambiguous way, so that both players can easily understand it. It is also useful for recording moves of a game for later analysis, or for studying different opening strategies.
To make this move, the piece must start on the square labeled ‘e2’ (file e, rank 2) on the board; its final destination is the square labeled ‘f6’ (file f, rank 6). Depending on the position of other pieces, this move may be more or less advantageous for the player making it.
In summary, E2 F6 is a notation used in chess to describe a piece’s movement from one square to another. By using this notation, players can quickly and unambiguously communicate their moves to each other, helping them better understand the game and improve their strategy.
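As a rough illustration (a sketch only, not taken from any particular chess library), a few lines of Python can split a coordinate-notation string such as "E2 F6" into its origin and destination squares and check that both are real board coordinates:

```python
def parse_move(move: str) -> tuple[str, str]:
    """Split a coordinate-notation move like 'E2 F6' or 'e2f6'
    into (origin, destination) squares, validating each one."""
    s = move.lower().replace(" ", "")
    if len(s) != 4:
        raise ValueError(f"expected an origin and a destination square: {move!r}")
    origin, dest = s[:2], s[2:]
    for square in (origin, dest):
        # A valid square is a file a-h followed by a rank 1-8.
        if square[0] not in "abcdefgh" or square[1] not in "12345678":
            raise ValueError(f"not a board square: {square!r}")
    return origin, dest

print(parse_move("E2 F6"))  # ('e2', 'f6')
```

Note that the parser only checks that the squares exist on the board; whether the move is actually legal depends on which piece stands on the origin square and on the rest of the position.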
What happens when a circuit won’t reset
When a circuit won’t reset, it means that something has gone wrong with the circuit’s ability to reset itself. This is usually caused by a malfunctioning component in the circuit, such as a capacitor or resistor.
The first step in troubleshooting a circuit that won’t reset is to identify the source of the problem. If you have access to a multimeter, you can measure the voltage levels across different parts of the circuit to identify any discrepancies. You may also need to check for any broken wires or components that have become dislodged.
Once you have identified the source of the problem, you can start to work on repairing it. Depending on the complexity of the circuit, this could involve replacing components, soldering connections, or even replacing entire sections of the circuit board. In some cases, it might be necessary to completely rebuild the circuit from scratch.
If you are unable to repair the circuit yourself, then you may need to contact a professional for assistance. A technician will be able to diagnose and repair the issue more quickly and efficiently than you can.
No matter what method you use to troubleshoot a circuit that won’t reset, it is important to remember that safety should always come first. Take care when working with electronics and always follow all safety guidelines provided by the device manufacturer.
How do you reset a circuit
Resetting a circuit is an important process used to restore a circuit to its normal working condition after it has been damaged or malfunctioned. It is a very simple process that requires a few basic steps and can be done in a matter of minutes.
First, you will need to identify the exact location of the circuit breaker or fuse box. This is typically located in the basement or garage of the home, or on the outside wall near the main electrical panel. Once you have identified the location, turn off all power to the circuit by flipping the breaker switch off. Then, reset the breaker by turning it back on. If a fuse was used as part of the circuit, you will need to locate and replace it with an identical one.
Once power has been restored to the circuit, you should visually inspect the circuit for any signs of damage. Look for loose connections, melted wires, burnt components, or any other visual signs of damage. If any damage is found, you should contact a qualified electrician to repair it before attempting to use the circuit again.
Resetting a circuit is an important step when troubleshooting electrical problems in your home. By following these simple steps and taking necessary precautions, you can help ensure that your circuits remain safe and functioning properly.
How do I reset my electrical system
Resetting your electrical system can be a daunting task, but it doesn’t have to be. With the right information and the right tools, you can reset your electrical system quickly and safely.
The first step in resetting your electrical system is to disconnect power from the main breaker. This will ensure that no electricity is flowing through your house while you are resetting your system. You should also turn off any individual circuit breakers in order to avoid power surges when resetting the system.
Once you have safely disconnected power from the main breaker, you can begin resetting the electrical system. Depending on the type of system you have, you may need to reset the circuit breakers, wiring systems, and other components of the system. In most cases, you will need to unscrew and remove the cover from each circuit breaker in order to reset it. Then, you will need to turn off all switches and reset each breaker individually. Once each breaker has been reset, you can then reconnect power using a new breaker if necessary.
Additionally, if your home has a fuse box, you may need to check for burnt or broken fuses and replace them as necessary. If you are unsure how to do this, you may want to consult a professional electrician who can help you safely and efficiently reset your electrical system.
Finally, once your electrical system has been successfully reset, it is important to inspect all wiring systems and components of your home’s electrical system in order to ensure everything is functioning properly. This will help prevent future problems with your home’s electrical system.
Resetting your electrical system doesn’t have to be complicated or overwhelming – with the right information and tools, it can be done quickly and safely!
Is it safe to reset breaker
Resetting a breaker is a relatively simple task that is often necessary when an electrical circuit in your home or office stops working. Resetting a breaker is a safe process, as long as you take the proper precautions. First, it is important to make sure that the breaker you are resetting has not tripped due to an overload. If the breaker has tripped due to an overload, then resetting it will not fix the issue. Instead, you will need to identify and address the cause of the overload first.
Once you have determined that the breaker has not tripped due to an overload, it is time to reset it. Locate the circuit breaker panel and find the specific breaker that needs resetting; there is normally no need to remove the panel cover for this. On most panels, a tripped breaker’s handle rests in the middle or OFF position rather than fully ON. To reset it, push the handle firmly to OFF first, then switch it back to ON.
When resetting a breaker, it is important to remember to never touch any of the exposed wires inside of the panel. If there are any exposed wires, contact a professional electrician for help rather than attempting to reset the breaker yourself. Additionally, if a breaker trips repeatedly or if you have any doubt about what you are doing, contact an electrician immediately for assistance. With these safety tips in mind, resetting a breaker can be done safely and correctly.
When taken in moderation, chocolate actually has its upside. Cocoa, the raw ingredient of chocolate, is loaded with antioxidants, which fight off free radicals in the body. It is both a brain and energy booster, and it can also help regulate blood sugar levels. However, some types of chocolate provide more health benefits than others. For starters, in a dark chocolate vs milk chocolate comparison, dark chocolate comes out the winner on several parameters.
Chocolate, as one would expect, is brimming with fats. That is why moderation is essential. Just because some types of chocolate are good for the body does not mean a person can go overboard with chocolate consumption. Undesirable weight gain might be the end-result of frenzied intake of such sweet treats.
However, there is such a thing as healthy fats: monounsaturated fatty acids, or MUFAs. This type of fat has a number of health benefits. MUFAs are known to aid in weight loss, since they can replace less healthy, calorie-dense fats in the diet. They also help reduce heart disease risk, since they have the ability to bring down levels of cholesterol and triglycerides. They are even said to help lessen the risk of some forms of cancer.
This is one of the reasons why dark chocolate will win in a dark chocolate vs milk chocolate bout. Dark chocolate contains more MUFAs than milk chocolate. In fact, half a bar of dark chocolate would contain approximately four to five grams of MUFAs as compared to two to three grams in milk chocolate.
Fiber is essential to a person’s diet because it provides a lot of health advantages, among them promoting bowel health, regulating cholesterol and blood sugar, and managing weight. This is yet another parameter that proves dark chocolate is better for you than milk chocolate: the dark variety packs about four times as much fiber as its milk-based equivalent. Half a bar of dark chocolate is roughly sixteen to seventeen percent fiber, while the same amount of milk chocolate yields only about four percent.
Iron is an essential mineral and a key component of hemoglobin, the substance in red blood cells tasked with transporting oxygen throughout the body. This is why an iron deficiency can lead to a whole host of negative outcomes, including fatigue and a weakened immune system. It is, therefore, very important to maintain healthy levels of iron in the body. According to WebMD, the recommended amounts of iron are as follows:
· 10 mg of iron daily for children 4 to 8 years old
· 8 mg of iron daily for children 9 to 13 years old
· 18 mg of iron daily for women 19 to 50 years old (Women lose blood during their period so they need more iron.)
· 8 mg of iron daily for men 19 to 50 years old
· 8 mg of iron daily for both men and women 50 years old and onwards
Dark chocolate scores a home run with this one. At half a bar, the dark type provides about twenty-eight percent of the recommended daily iron, while its milk-based counterpart, at the same quantity, packs a measly six percent. Dark chocolate has an iron content more than four times the amount found in milk chocolate.
Dark chocolate also contains more minerals than milk chocolate. While milk chocolate outperforms dark chocolate when it comes to bones and teeth-friendly calcium, dark chocolate trumps milk chocolate pretty much in everything else. It has more magnesium, phosphorus, potassium, and zinc.
Magnesium is essential to keep body processes running at an optimum level. It is an all-around regulator that takes care of muscle and nerve function, blood sugar level, and even blood pressure. Phosphorus, meanwhile, works like calcium. It keeps bones and teeth healthy and strong. It also aids in the use and absorption of carbohydrates, fats, and proteins.
Potassium is another notable mineral because it is not produced naturally by the body, so it is essential to consume potassium-rich food, though not excessively. Potassium does a lot of good things for the body: it aids in regulating water levels, blood pressure, nerve impulses, and the acidity and alkalinity of the body.
Finally, there’s zinc. Zinc does a lot of good things for our body. It helps boost one’s immune system, aids in DNA generation, and helps heal wounds.
Carbohydrates, Cholesterol, Sugar
The old adage, “Less is more” definitely applies when comparing the health benefits of dark chocolate with milk chocolate. Carbohydrates, cholesterol and sugar, in excess, are bad for the body. They can cause short-term health problems and, when not treated, can lead to long-term concerns such as diabetes, heart ailments, and high blood pressure.
This is yet another compelling reason why dark chocolate wins in a dark chocolate vs milk chocolate fight. The presence of this so-called “Big Three” is notably lower in dark chocolate than in milk chocolate. Dark chocolate is only about 6% carbohydrates compared to milk chocolate’s 8%. Dark chocolate also has significantly less sugar, with only 10g per half bar compared to the 21g present in half a bar of milk chocolate. It is the same story when it comes to cholesterol: milk chocolate has a whopping 3% cholesterol count compared to dark chocolate’s 0.3%. With fewer carbohydrates and less cholesterol and sugar, dark chocolate once again emerges as the better choice.
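To make the half-bar comparisons quoted in this article easier to scan, the figures can be dropped into a small script that computes the dark-to-milk ratio for each nutrient. (The numbers below are the article's own illustrative values, not official nutrition data.)

```python
# Half-bar figures as quoted in the text above (illustrative only).
half_bar = {
    "MUFAs (g)":    {"dark": 4.5,  "milk": 2.5},
    "fiber (%)":    {"dark": 16.5, "milk": 4.0},
    "iron (% RDI)": {"dark": 28.0, "milk": 6.0},
    "sugar (g)":    {"dark": 10.0, "milk": 21.0},
}

for nutrient, v in half_bar.items():
    ratio = v["dark"] / v["milk"]  # ratio > 1 means dark has more
    print(f"{nutrient}: dark {v['dark']}, milk {v['milk']}, dark/milk = {ratio:.1f}x")
```

Run as-is, this shows dark chocolate ahead on MUFAs, fiber, and iron (ratios above 1) and behind on sugar (ratio below 1), matching the comparisons in the text.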
It cannot be stressed enough that the health advantages brought by dark chocolate should not be used as an excuse for eating it in excess. It still packs so many calories which can lead to unhealthy weight gain. For as long as you take it in moderation, you can fully enjoy the benefits that dark chocolates can offer.
Given all the reasons above, one does not need to be a rocket scientist to see that dark chocolate should be the choice when deciding between dark chocolate and milk chocolate. So go ahead. Enjoy all the benefits dark chocolate has to offer, but always remember: moderation is the key.
The brain changes as we age, but it's not all bad! Experience changes our brains in a good way.
Data from the very large and long-running Cognitive Function and Ageing Study, a U.K. study involving 13,004 older adults (65+), from which 329 brains are now available for analysis, has found that cognitive lifestyle score (CLS) had no effect on Alzheimer’s pathology. Characteristics typical of Alzheimer’s, such as plaques, neurofibrillary tangles, and hippocampal atrophy, were similar in all CLS groups.
However, while cognitive lifestyle may have no effect on the development of Alzheimer's pathology, that is not to say it has no effect on the brain. In men, an active cognitive lifestyle was associated with less microvascular disease. In particular, the high CLS group showed an 80% relative reduction in deep white matter lesions. These associations remained after taking into account cardiovascular risk factors and APOE status.
This association was not found in women. However, women in the high CLS group tended to have greater brain weight.
In both genders, high CLS was associated with greater neuronal density and cortical thickness in Brodmann area 9 in the prefrontal lobe (but not, interestingly, in the hippocampus).
Cognitive lifestyle score is produced from years of education, occupational complexity coded according to social class and socioeconomic grouping, and social engagement based on frequency of contact with relatives, neighbors, and social events.
The findings provide more support for the ‘cognitive reserve’ theory, and shed some light on the mechanism, which appears to be rather different than we imagined. It may be that the changes in the prefrontal lobe (that we expected to see in the hippocampus) are a sign that greater cognitive activity helps you develop compensatory networks, rather than building up established ones. This would be consistent with research suggesting that older adults who maintain their cognitive fitness do so by developing new strategies that involve different regions, compensating for failing regions.
Multiple Biological Pathways Link Cognitive Lifestyle to Protection from Dementia. Biological Psychiatry [Internet]. 2012;71(9):783-791. Available from: http://www.sciencedirect.com/science/article/pii/S0006322311009218
A study involving 125 younger (average age 19) and older (average age 69) adults has revealed that while younger adults showed better explicit learning, older adults were better at implicit learning. Implicit memory is our unconscious memory, which influences behavior without our awareness.
In the study, participants pressed buttons in response to the colors of words and random letter strings — only the colors were relevant, not the words themselves. They then completed word fragments. In one condition, they were told to use words from the earlier color task to complete the fragments (a test of explicit memory); in the other, this task wasn’t mentioned (a test of implicit memory).
Older adults showed better implicit than explicit memory, and better implicit memory than the younger adults, while the reverse was true for the younger adults. However, on a further test, which required younger participants to engage in a number task simultaneously with the color task, younger adults behaved like older ones.
The findings indicate that shallower and less focused processing goes on during multitasking, and (but not inevitably!) with age. The fact that younger adults behaved like older ones when distracted points to the problem, for which we now have quite a body of evidence: with age, we tend to become more easily distracted.
A Double Dissociation of Implicit and Explicit Memory in Younger and Older Adults. Psychological Science [Internet]. 2011. Available from: http://pss.sagepub.com/content/early/2011/03/17/0956797611403321.abstract
Older news items (pre-2010) brought over from the old website
Experienced air traffic controllers work smarter, not harder, making up for normal mental aging
A study involving 36 air traffic controllers and 36 age- and education-matched non-controllers, with 18 older (average age 57) and 18 younger adults (average age 24) per group, has found that although predictable age-related declines were observed in most of the standard tests of cognitive function, in the simulated air traffic control task, experience helped the older controllers to compensate to a significant degree for age-related declines, especially in their performance of the more complex simulations.
Experience-based mitigation of age-related performance declines: evidence from air traffic control. Journal of Experimental Psychology: Applied [Internet]. 2009;15(1):12-24. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19309213
When emotions involved, older adults may perform memory tasks better than young adults
A study involving 72 young adults (20-30 years old) and 72 older adults (60-75) has found that regulating emotions – such as reducing negative emotions or inhibiting unwanted thoughts – is a resource-demanding process that disrupts the ability of young adults to simultaneously or subsequently perform tasks, but doesn’t affect older adults. In the study, most of the participants watched a two-minute video designed to induce disgust, while the rest watched a neutral two-minute clip. Participants then played a computer memory game. Before playing 2 further memory games, those who had watched the disgusting video were instructed either to change their negative reaction into positive feelings as quickly as possible or to maintain the intensity of their negative reaction, or given no instructions. Those young adults who had been told to turn their disgust into positive feelings, performed significantly worse on the subsequent memory tasks, but older adults were not affected. The feelings of disgust in themselves did not affect performance in either group. It’s speculated that older adults’ greater experience allows them to regulate their emotions without cognitive effort.
Effects of regulating emotions on cognitive performance: what is costly for young adults is not so costly for older adults. Psychology and Aging [Internet]. 2009;24(1):217-223. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19290754
Aging brains allow negative memories to fade
Another study has found that older adults (average age 70) remember fewer negative images than younger adults (average age 24), and that this has to do with differences in brain activity. When shown negative images, the older participants had reduced interactions between the amygdala and the hippocampus, and increased interactions between the amygdala and the dorsolateral frontal cortex. It seems that the older participants were using thinking rather than feeling processes to store these emotional memories, sacrificing information for emotional stability. The findings are consistent with earlier research showing that healthy seniors are able to regulate emotion better than younger people.
Effects of aging on functional connectivity of the amygdala for subsequent memory of negative pictures: a network analysis of functional magnetic resonance imaging data. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2009;20(1):74-84. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19152542
While we take for granted that we’ll lose some cognitive ability as we get older, it’s also true that some very old people have brains just as quick as they always were. Now a post-mortem study of the brains of five of these "super aged" has revealed that these brains do indeed differ from normal elderly brains; specifically, by having much fewer tau tangles. Tau tangles are characteristic of Alzheimer's patients, but they are not restricted to them; until now, it’s been assumed that aging brings about the accumulation of these tangles. However, amyloid plaques, also characteristic of Alzheimer’s and found in smaller quantities in aging brains, were found in “normal” quantities, pointing to the tangles as the critical factor.
The findings were presented November 16, at the Society for Neuroscience annual meeting in Washington, D.C.
Confidence in memory performance helps older adults remember
A study involving 335 adults aged 21 to 83 found that control beliefs were related to memory performance on a word list recall task for middle-aged and older adults, but not for younger adults. This was partly because middle-aged and older adults who perceived greater control over cognitive functioning were more likely to use strategies to help their memory. In other words, the more you believe there are things you can do to remember information, the more likely you are to make an effort to remember.
Lachman, M.E. & Andreoletti, C. 2006. Strategy Use Mediates the Relationship Between Control Beliefs and Memory Performance for Middle-Aged and Older Adults. J Gerontol B Psychol Sci Soc Sci, 61, P88-P94.
'Sharp' older brains are not the same as younger brains
We know that many older adults still retain the mental sharpness of younger people, but studies comparing brain activity in older and younger adults suggest they perform differently. A rat study has now found the first solid evidence that still "sharp" older brains do indeed store and encode memories differently than younger brains. Comparison of the older rats that had retained their cognitive abilities with those that had not also revealed that those with impaired cognition had lost the ability to modify the strength of the communications between synapses (synaptic communication is the means by which memories are encoded and stored). The competent seniors, however, also differed from the younger rats in the mechanism most used to bring about synaptic change.
NMDA receptor-independent long-term depression correlates with successful aging in rats. Nat Neurosci [Internet]. 2005;8(12):1657-1659. Available from: http://dx.doi.org/10.1038/nn1586
An advantage of age
A study comparing the ability of young and older adults to indicate which direction a set of bars moved across a computer screen has found that although younger participants were faster when the bars were small or low in contrast, when the bars were large and high in contrast, the older people were faster. The results suggest that the ability of one neuron to inhibit another is reduced as we age (inhibition helps us find objects within clutter, but makes it hard to see the clutter itself). The loss of inhibition as we age has previously been seen in connection with cognition and speech studies, and is reflected in our greater inability to tune out distraction as we age. Now we see the same process in vision.
Aging Reduces Center-Surround Antagonism in Visual Motion Processing. Neuron [Internet]. 2005;45(3):361-366. Available from: http://www.cell.com/neuron/abstract/S0896-6273(05)00013-9
Effect of expectations on older adults’ memory performance
A study involving 193 participants and two experiments, each with a younger (17–35 years old) and older (57–82 years old) group of adults, has investigated how negative stereotypes about aging influence older adults' memory. Participants were exposed to stereotype-related words in the context of another task (scrambled sentence, word judgment) in order to prime positive and negative stereotypes of aging. Results show that memory performance in older adults was lower when they were primed with negative stereotypes than when they were primed with positive stereotypes. Age differences in memory between young and older adults were significantly reduced following a positive stereotype prime, with young and older adults performing at almost identical levels in some situations.
Explicit and implicit stereotype activation effects on memory: do age and awareness moderate the impact of priming? Psychology and Aging [Internet]. 2004;19(3):495-505. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15382999
Cognitive abilities are fairly stable and may be correlated with longevity
The Scottish Mental Survey assessed 87,498 eleven-year-olds in 1932, and another 70,805 in 1947. In a fascinating follow-up to this study, over 1000 of these students have been contacted and re-assessed, on the exact same tests. It was found that, first of all, the seniors did rather better than they had at age 11, and that differences in mental ability remained fairly stable with age. Mental ability at 11 was also found to be significantly correlated with survival — those who scored highly were more likely to have survived, with the exception that men with high ability were more likely to die in active service in World War II. People of lower ability had a greater tendency to lung and stomach cancer. More results from this landmark study are expected.
These preliminary findings were presented by Professor Ian Deary from the Department of Psychology, University of Edinburgh at a symposium on aging at the Australian National University.
Compensating strategies for aging memories
PET scans of the prefrontal cortex reveal that older adults who perform better on a simple memory task display more activity on both sides of the brain, compared to both older adults who do less well and younger adults. Although this seems counter-intuitive (the older adults who perform less well show activity patterns more similar to those of younger adults), it supports recent theory that the brain may change tactics as it ages, and that older people whose brains are more flexible can compensate for some aspects of memory decline. Whether this flexibility is neurological, or something that can be taught, is still unknown.
Aging gracefully: compensatory brain activity in high-performing older adults. NeuroImage [Internet]. 2002;17(3):1394-1402. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12414279
Training can improve age-related memory decline in elderly
Older adults show two kinds of cognitive-processing deficits: under-recruitment, where appropriate areas of the brain are less likely to be utilized without specific instruction, and non-selective recruitment, where non-relevant regions of the brain are more likely to be used. A recent imaging study confirmed that older adults were less likely than younger ones to use the critical frontal regions when performing a memory task, and more likely to use cortical regions that are not as useful. However, when subjects were given specific strategy instructions, the older adults showed increased activity in the frontal regions, and their remembering improved. Even with this support however, older adults still showed a greater tendency to use brain regions that were not useful.
Under-recruitment and nonselective recruitment: dissociable neural mechanisms associated with aging. Neuron [Internet]. 2002;33(5):827-840. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11879658
How aging brains compensate for cognitive decline
Evidence from a series of studies using functional positron emission tomography (PET) images suggests that one way older adults may compensate for age-related cognitive decline is by using additional regions of the brain to perform memory and information processing tasks. For example, simple short-term memory tasks involve the same brain regions in both older and younger adults, but older adults also activate a frontal cortex region that young adults use only when performing complex short-term memory tasks. This may explain why performance of older adults on complex memory tasks is usually significantly poorer than that of younger adults - the frontal cortex region that young adults will activate to help with complex short-term memory tasks is already preoccupied in older adults, and is less available to help when the task becomes more complex.
The research was conducted by University of Michigan researchers under the leadership of cognitive neuroscientist Patricia Reuter-Lorenz, and presented at the annual meeting of the American Psychological Association in San Francisco.
Scientists announced they have found that dinosaurs suffered from osteosarcoma, an aggressive bone cancer that still affects humans today. The disease was found in the fossilized lower leg bone, the fibula, of a horned dinosaur called Centrosaurus apertus that lived 76 to 77 million years ago in what is now Dinosaur Provincial Park in Alberta, Canada. When the fossil was first discovered in 1989, the malformed end of the bone was believed to be just a healing fracture, until more detailed analysis using modern technology revealed it to actually be cancer.
The analysis was conducted by scientists from many fields, including pathology, radiology, orthopedic surgery, and paleopathology. The bone was examined, cast, and CT-scanned before the scientists took a thin slice of it and studied it under a microscope. After the full examination, the scientists concluded their findings were consistent with osteosarcoma. As confirmation, the team compared the fossilized fibula side by side with a normal fibula from a dinosaur of the same species, along with one from a 19-year-old man who had been diagnosed with the disease.
EDITORIAL NOTE: This blog post was originally published on the Huffington Post.
One hundred sixty-five years ago this week, on July 19-20, 1848, 300 women and men met in Seneca Falls, New York, to discuss "the social, civil and religious condition and rights of Woman." The gathering, called the Seneca Falls Convention, produced one of the nation's most important historical documents advocating women's rights - the monumental "Declaration of Sentiments" - planting the seed for the fight for women's suffrage in America, and indirectly for the formation of the League of Women Voters, which would later champion the issue.
Written primarily by Elizabeth Cady Stanton, the Declaration of Sentiments parodies the Declaration of Independence, which Congress had passed over 70 years earlier. But instead of arguing for America's freedom from the "tyranny" of British control, the Declaration of Sentiments argues for women's freedom from the "tyranny" of patriarchy. Whereas the Founding Fathers wrote in the Declaration of Independence that it is self-evident that "all men are created equal," the Declaration of Sentiments boldly asserts that "all men and women" are equal. The document points out the "patient sufferance" not of "men" or "mankind," but of American women, who were oppressed by an undemocratic government that failed to allow them to possess property rights, speak in public, file for divorce, manage their own wages and attend college.
To continue reading, please visit Huffington Post. | <urn:uuid:332645cf-5560-494d-a00b-ce555b40c825> | CC-MAIN-2017-43 | http://lwv.org/blog/165-years-seneca-falls-continuing-organize-equality | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824820.28/warc/CC-MAIN-20171021152723-20171021172723-00592.warc.gz | en | 0.95524 | 313 | 4 | 4 |
Trying to manage and raise an anxious child is very difficult and stressful for all parents. Every good parent wants their child to enjoy their life and escape anxiety. However, many parents will fall into a cycle of trying to protect their child but escalating the anxiety at the same time. The key to helping your child manage anxiety is to respect their feelings and fears. Here are some tips on what to do if your child has anxiety.
- Don’t avoid activities that make your child anxious.
The first step in learning how to manage anxiety is to be in the situation that feels uncomfortable. He or she will slowly learn the coping mechanisms that will be very helpful the next time the situation arises. If you keep him or her away from that situation because you want to offer protection, then he or she will not learn.
- Be positive and encouraging.
This is where parents excel! Parents will naturally encourage their child even in things that are not pleasant. We always reassure them that they will do well on tests or be great on stage. Your confidence will help them build their confidence and overcome anxiety.
- Ask open-ended questions.
Another way to break the cycle of anxiety is to avoid leading questions. You want him or her to talk about his or her feelings openly and not only when questioned.
- Avoid reinforcing fears and anxieties.
Even if you are not explicitly voicing your opinions, your tone or body language might give the impression that his or her fears are justified. Remember, you want your child to overcome anxiety by boosting his or her confidence!
- Keep the anxiety as short as possible.
You do not want your child to dwell on their anxiety for too long. If you have to take them to the doctor’s office in three hours, you might not want to tell them immediately. By shortening this period, you can help them overcome that anxiety while minimizing it in general.
If your child seems to be anxious about everything, you might want to seek help from a children’s counselor or psychologist. However, there is nothing wrong with being anxious about something. The important thing is to encourage him or her to build confidence and learn how to manage anxiety. | <urn:uuid:fd435410-0826-42a4-b488-bd6fadb90fde> | CC-MAIN-2018-17 | http://carlscounseling.com/parenting-an-anxious-child/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947033.92/warc/CC-MAIN-20180424174351-20180424194351-00023.warc.gz | en | 0.972098 | 439 | 3.203125 | 3
Voltage Multipliers
If you're trying to build a high voltage power supply, a 'voltage multiplier' may sound like just the thing you're looking for. Voltage multipliers take an AC voltage and multiply it by some factor, giving a larger DC voltage. In order to build an effective and SAFE voltage multiplier, you need a transformer to isolate the circuit from the mains power source. This site is about building coilguns, which require very large capacitor banks to be effective. This means that you need a high power, high voltage power supply. If you choose to build a voltage multiplier in order to get a high voltage, you need a high power transformer. High power transformers can be quite expensive, which is why I recommend building a switching power supply instead.
The Greinacher voltage doubler
Villard Cascade Voltage Multiplier
This is my Villard cascade voltage quadrupler. As you can see, it is very compact, simple, and easy to build. The only necessary materials are wire, capacitors, diodes, and a prototype board. The prototype board and wire came from Radioshack, and the capacitors and diodes came from Jameco. Each capacitor is 10 uF rated for 350 V. It is very important that the polarity of the capacitors is correct, and that you use capacitors rated for a sufficiently high voltage. Otherwise, they WILL explode. I am not saying this because I read it on some other website; I am saying this because I have experienced it first hand.
Why my Villard voltage multiplier failed
My Villard cascade voltage multiplier was successful in that it did multiply the voltage correctly. However, because it multiplied the voltage 4 times, this circuit was simply not able to supply enough total power to charge my coilgun's capacitor banks. Perhaps it would have been better if I had a high power transformer, but alas, I could only afford a cheap transformer from Radioshack.
Before you decide to build a voltage multiplier
The single most important thing you need to keep in mind when building any of these voltage multipliers is that AC voltage is not equal to DC voltage. 120 volts AC actually has a peak voltage of about 170 volts. Likewise, for any AC voltage, you need to multiply the AC (RMS) voltage by the square root of two, and this will give you the actual peak DC voltage. If your capacitors aren't rated for such a voltage, they will explode. So be absolutely sure that your capacitors are of sufficient rating and quality to be used in a high voltage multiplier.
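That RMS-to-peak arithmetic can be sketched in a few lines of Python. This is only an illustrative helper (the function names are made up for this sketch), and the cascade figure is the ideal no-load output, ignoring diode drops and sag under load:

```python
import math

def peak_voltage(v_rms):
    """Peak voltage of a sinusoidal AC supply from its RMS value."""
    return v_rms * math.sqrt(2)

def cascade_output(v_rms, stages):
    """Ideal (no-load) output of a Villard cascade: each stage adds 2 x Vpeak."""
    return 2 * stages * peak_voltage(v_rms)

# 120 V AC mains actually peaks near 170 V:
print(round(peak_voltage(120)))        # ~170
# A two-stage (quadrupler) cascade on 120 V AC, ideally 4 x Vpeak:
print(round(cascade_output(120, 2)))   # ~679
```

In practice the real output is lower, but the capacitors must still be rated for the peak values this arithmetic gives, not the RMS figure printed on the transformer.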
Coilgun projects can be extremely dangerous if you don't know what you're doing. Capacitors can unleash massive amounts of electricity, which can seriously injure or kill. Please use this information with caution, as I cannot be held responsible for your actions. | <urn:uuid:fea2a749-c21d-4d3b-944f-aba6979c0660> | CC-MAIN-2016-22 | http://www.savedpennies.com/coilgun/multipliers.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464052946109.59/warc/CC-MAIN-20160524012226-00083-ip-10-185-217-139.ec2.internal.warc.gz | en | 0.944879 | 595 | 3.046875 | 3
If shoes could talk, and lift their top away from their base like the soup can in Wet Hot American Summer, what would they say? I’d be worried that they might cry, “‘Shoe’ don’t know me! ‘Shoe’ don’t know anything about me!” To which I’d have to confess, “That’s true.”
To prevent such a frightening event from happening to me or you, I’ve diagrammed the Once Upon a Time Boot in Parable and Yes I Twill Heel, defining each of the terms therein below!
Shaft
The part of the boot that covers the leg.
Eyelets
The small holes in a shoe through which laces pass. Often reinforced with metal, cord, or fabric eyestays.
Toe box [toh boks]
Also known as the toe cap, the front tip of the shoe which covers and creates space for the toes.
Shank
1. Slim part of shoe under arch, running from heel to ball of foot.
2. The material, often metal, that reinforces this part of the shoe.
Heel counter [heel koun·ter]
Material that forms the back of a shoe. Generally stiff or reinforced to give the foot support.
Insole
The interior of a shoe upon which the foot rests. Sometimes, this is covered by a piece of material known as sock lining.
Quarter
The part of a shoe upper that wraps around the sides and back.
Vamp
The portion of the shoe upper that covers the top of the foot, from toes to ankle, or just over the top of the toes if the shoe doesn’t cover the entire foot. | <urn:uuid:a8ea50a5-5c9c-48d0-ad40-3dde580cd569> | CC-MAIN-2018-47 | https://blog.modcloth.com/beauty/wouldnt-shoe-like-to-know/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746800.89/warc/CC-MAIN-20181120211528-20181120233528-00467.warc.gz | en | 0.938197 | 354 | 2.796875 | 3
A response paper is often referred to as a reaction essay. Your response paper should neither be overly reactionary nor autobiographical. Although you may use "I" in response papers, always check with your instructor or prompt. The response paper will help you begin to see how to pinpoint and assess the kinds of issues that most interest you in the texts you read. This is a great opportunity to improve your understanding of work standards and broaden your horizons. When reading reaction documents, emphasize successful elements and failed structures. As you state your main idea, subject, and purpose, remember to capture the reader's interest. It is not surprising if you have no idea how to write a response when you have been assigned this kind of assignment for the first time. A response can also be called a reaction paper because your write-up is seen as your reaction to the reviewed work: what you think and why. In a reaction or response paper, writers respond to one or more texts they have read.
| <urn:uuid:71d223b5-dba7-46b6-a7a4-d85992306f8f> | CC-MAIN-2021-31 | https://samedayessay.co/2021/07/07/examples-of-a-response-paper_fg/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150134.86/warc/CC-MAIN-20210724063259-20210724093259-00131.warc.gz | en | 0.949759 | 342 | 3.484375 | 3 |
UT Southwestern Medical Center’s ultrasound services allow physicians and specialists to make accurate diagnoses and prescribe targeted treatments and therapies for a wide range of conditions, from the simple to the complex.
Our radiology team performs more than 800,000 inpatient and outpatient exams every year. We specialize in advanced technologies and the latest clinical innovations in today’s changing field of medical imaging.
Fast, Safe, and Reliable Imaging
Ultrasound, or sonography, is a scan that uses high-frequency sound waves to show the inside of the body. The ultrasound reveals movement and live function in the body’s organs in real time. The test is safe and easy and does not use X-rays or radiation.
Because the body is composed largely of water, sound waves can travel through it just as sonar is used in the ocean. As sound waves from the ultrasound machine pass through the body, they create an echo when they hit various tissues. The returning echoes are recorded by a computer, which then displays them on a screen to an ultrasound technologist.
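As a rough illustration of the echo principle (not part of any clinical workflow), the depth of a reflecting structure can be estimated from the round-trip echo time, assuming the commonly used average speed of sound in soft tissue of about 1,540 meters per second:

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, average for soft tissue

def reflector_depth(echo_delay_s):
    """Depth of a reflecting structure from the round-trip echo time.

    The pulse travels down to the reflector and back to the transducer,
    hence the division by two."""
    return SPEED_OF_SOUND_TISSUE * echo_delay_s / 2

# An echo returning 65 microseconds after the pulse leaves the transducer
# corresponds to a reflector roughly 5 cm deep:
depth_m = reflector_depth(65e-6)
print(f"{depth_m * 100:.1f} cm")
```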
UT Southwestern offers experience and expertise in all types of ultrasound, including technologies and techniques that might not be available at other medical facilities.
Conditions We Diagnose with Ultrasound
We can use the results of an ultrasound to detect and diagnose a wide range of medical conditions.
Ultrasound is most effective in diagnosing conditions by:
- Examining the heart
- Evaluating vascular disease
- Revealing information about the size and shape of tumors and cysts
- Evaluating the gallbladder and related organs
- Evaluating the uterus and ovaries
- Examining the fetus during pregnancy
Ultrasound: What to Expect
An ultrasound is safe and painless. Patients might be asked to fast for several hours before the exam or drink several glasses of water to create fullness in the bladder. We will give patients any specific instructions they need.
Patients should avoid carbonated beverages before the exam, because bubbles in the body can interfere with the ultrasound images. An ultrasound technologist can answer any questions a patient might have about a health condition that could affect the exam.
An ultrasound exam usually takes 30 minutes. The technologist will ask the patient to lie or sit on an examination table. The technologist can then lower the lights in the room to make the computer display easier to see. A gel will be applied to the patient’s skin over the area to be scanned. This gel allows the ultrasound transducer, which transmits images to the computer, to slide easily over the skin.
The patient might feel some discomfort if he or she has a full bladder and the technologist is pressing the transducer wand over the abdomen.
For pelvic examinations, such as those for the prostate gland, uterus, or ovaries, the technologist will explain the use of an ultrasound probe. This probe is placed in the rectum or vagina to better capture images of internal structures. Patients can ask for a third person, or chaperone, to be present at these types of intimate exams, if they wish.
As the transducer transmits live images of the patient’s body to the computer, the technologist will capture pictures for permanent reference.
A radiologist will review the images and send a report to the doctor, who will notify the patient of any findings. The patient can also request to receive the images on CD.
Showing 6 locations
Fort Worth, Texas 76104 817-429-3050
Plano, Texas 75024 469-303-3591 | <urn:uuid:3d91d98b-9fef-488e-9e0d-1e96499911f7> | CC-MAIN-2019-39 | https://utswmed.org/conditions-treatments/ultrasound/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575751.84/warc/CC-MAIN-20190922221623-20190923003623-00477.warc.gz | en | 0.911208 | 752 | 2.78125 | 3 |
Over the years, most people will agree that Margaret Thatcher did whatever she could to bring out the best in people. Along with this, she made many changes during her time in office to not only help others but to change the way of thinking of an entire nation.
You don’t have to look far to realize that Thatcher was determined to fight homelessness while increasing the overall number of homeowners. While this was not an easy task, it is safe to say that she was always up for the challenge.
During the 1970s, many tenants were interested in purchasing the homes in which they were living. However, there was one major obstacle: strong resistance due to an obligation to house the homeless. This is something she tackled with aggression.
In the video below, you will get a better idea of what Thatcher had to say about the homeless (from the late 1980s into 1990). If you are not interested in watching the video in its entirety, here is a brief excerpt that will help you better understand her point of view:
“May I point out what precisely has been done. I understand that there are something like 1,000 drunk who sometimes sleep out in London. Some of them have other problems such as mental retarded. Some of them are genuine social cases, others are not. But during the lifetime of this government, a great many more hostel places have been built so that we now have more than 21,000 hostel places in London including some 3,000 emergency cases and direct access beds in London.”
As you can see, Thatcher uses some language in this video that would not be deemed acceptable in today’s day and age. However, she definitely got the point across while helping people better understand what she was doing, at the time (1990), for homelessness in London.
What are your thoughts on what Margaret Thatcher did for homelessness during her time in charge? Share your opinion in the comment section below. | <urn:uuid:efaf6c22-b42d-40bd-997a-83b8bf01c993> | CC-MAIN-2017-30 | http://www.insidermonkey.com/blog/heres-what-margaret-thatcher-thought-about-the-homeless-111234/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423992.48/warc/CC-MAIN-20170722102800-20170722122800-00405.warc.gz | en | 0.986297 | 403 | 3.046875 | 3 |
Gross Margins for Remodeling in the Construction Industry
Caron Beesley, moderator of the U.S. Small Business Administration Community blog, defines gross margin as "a measure of production efficiencies and a determination of a company's break even point." Simply put, it's the revenue minus the cost of goods sold divided by sales revenue. Gross margins, especially those in the remodeling industry, vary and depend largely on economic conditions.
Calculating Gross Margin
Having a gross margin of 40 percent simply means that the sum of total overhead and profit equals 40 percent of the selling price. Gross margin is gross profit expressed as a percentage of revenue. The remaining percentage covers job costs such as labor and material. For example, your company sells a kitchen remodel for $65,000; overhead, the ongoing expense of the remodeling company, is 25 percent, or $16,250, and you make a 10 percent, or $6,500, net profit. In this example, the gross margin would be 35 percent -- overhead plus net profit. Job costs would equal 65 percent of the total job price.
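The kitchen-remodel arithmetic can be checked in a few lines of Python (an illustrative sketch; the function name is made up for this example):

```python
def gross_margin_pct(selling_price, job_costs):
    # Gross margin = (revenue - cost of goods sold) / revenue, as a percent
    return (selling_price - job_costs) * 100 / selling_price

selling_price = 65_000
overhead = selling_price * 25 // 100     # $16,250 ongoing company expense
net_profit = selling_price * 10 // 100   # $6,500 profit on the job
job_costs = selling_price - overhead - net_profit  # $42,250 labor and material

print(gross_margin_pct(selling_price, job_costs))  # 35.0
```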
One of the leading factors that affect the gross margin of remodeling companies is the state of the economy. Times of progressive economic growth see the highest remodeling gross margins, while recessions may force companies to lower their gross margins to stay competitive.
Recommended Gross Margins
A remodeling company should aim to keep its gross margins within industry standards. Keeping gross margins within standard ranges ensures that such a business stays competitive and its pricing model consistent. Typically, the gross margin range for remodeling jobs run 34 percent to 42 percent.
Avoid Common Mistakes
Knowing the difference between markup and gross margin is important to your bottom line. Markup is the difference between the selling price and the total cost divided by the total cost. For example, a remodeling job with a $25,000 selling price and a total cost of $20,000 represents a 25 percent markup. Calculating gross margin with the same criteria yields 20 percent. To avoid common pitfalls, keep the methodology of your pricing consistent. A pricing model used for a company's estimating structure can be beneficial.
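The markup-versus-margin distinction above can be verified with the same $25,000 job (again, a hypothetical sketch rather than any standard estimating tool):

```python
def markup_pct(selling_price, total_cost):
    # Markup measures the profit relative to the cost
    return (selling_price - total_cost) * 100 / total_cost

def margin_pct(selling_price, total_cost):
    # Gross margin measures the same profit relative to the selling price
    return (selling_price - total_cost) * 100 / selling_price

# Same $5,000 profit, two different percentages:
print(markup_pct(25_000, 20_000))  # 25.0
print(margin_pct(25_000, 20_000))  # 20.0
```

Confusing the two overstates profitability: pricing a job for a 25 percent "margin" by applying a 25 percent markup actually yields only a 20 percent margin.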
- U.S. Small Business Administration: Understanding Gross Margin
- Construction Programs and Results Inc: Using Gross Margin to Price Jobs? Better Use It Correctly
- Butler Consultants: Free Industry Statistics -- Sorted by Highest Gross Margin
- Panoramic Business Solutions: 5 Tips to Minimize Markup vs. Gross Margin Mistakes
- NA/Photos.com/Getty Images | <urn:uuid:75f350de-da82-4204-a876-7a0a02c4fb5a> | CC-MAIN-2017-13 | http://yourbusiness.azcentral.com/gross-margins-remodeling-construction-industry-27643.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00125-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.889008 | 522 | 2.609375 | 3 |
The video above is from the September 2012 iPad edition of National Geographic magazine.
Choosing a map projection is a major challenge for cartographers. Features such as size, shape, distance, or scale can be measured accurately on Earth. Once projected on a flat surface, however, only some of these qualities can be accurately represented. Every map has some sort of distortion. The larger the area covered by a map, the greater the distortion.
Depending on the map's purpose, cartographers must decide what elements of accuracy are most important to preserve. This determines which projection to use. For example, conformal maps show true shapes of small areas but distort size. Equal area maps distort shape and direction but display the true relative sizes of all areas. There are three basic kinds of projections: planar, conical, and cylindrical. Each is useful in different situations.
Cartographers at National Geographic chose to use a version of the Mollweide projection for their map highlighting ocean floors, published as the map supplement in the September 2012 issue of National Geographic magazine. This Mollweide projection is referred to as a pseudocylindrical projection. The specific version of the Mollweide projection used is called an interrupted Mollweide, because lines of longitude, or meridians, are interrupted. The map is pulled apart at specific meridians to minimize distortion in areas where the cartographer would like the map reader to focus their attention.
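To see concretely why a cylindrical projection such as the Mercator distorts area while an equal-area design like the Mollweide does not, one can compute the Mercator's local area exaggeration, which grows as 1/cos²(latitude). This uses the standard spherical Mercator formulas, not any National Geographic tooling:

```python
import math

def mercator_xy(lat_deg, lon_deg, radius=1.0):
    """Spherical Mercator: x = R*lon, y = R*ln(tan(pi/4 + lat/2))."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return radius * lon, radius * math.log(math.tan(math.pi / 4 + lat / 2))

def mercator_area_inflation(lat_deg):
    """Local area exaggeration on a Mercator map: 1 / cos^2(latitude)."""
    return 1 / math.cos(math.radians(lat_deg)) ** 2

# Land near Greenland's latitude (~72 N) is inflated roughly tenfold,
# while the equator (where most of Africa lies) is shown at true scale:
print(round(mercator_area_inflation(72), 1))  # ~10.5
print(round(mercator_area_inflation(0), 1))   # 1.0
```

This is the arithmetic behind the familiar illusion that Greenland rivals Africa in size on classroom wall maps.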
Find more interactive content, photos, and videos in the iPad version of National Geographic magazine.
When did Flemish cartographer Gerardus Mercator first design the famous projection named after him?
According to the video, how many times larger than Greenland is Africa?
What map projection was chosen for the National Geographic Magazine September 2012 map supplement and which ocean was chosen as the center point of the map?
Why did cartographers at National Geographic choose the map projection they did?
What are two characteristics of the Mollweide map projection that a cartographer would consider when creating a map?
- In 1922, the National Geographic Society adopted the Van der Grinten projection, which depicts the globe by projecting it in a circle rather than a rectangle (as in the well-known Mercator projection) or an ellipse, common in other projections. The Van der Grinten projection was used by National Geographic until 1988.
- In 1995, the Winkel Tripel projection replaced the Robinson projection on the Society's signature world maps. Long used in various European atlases, the Winkel Tripel, first published as a map supplement in National Geographic Magazine in April 1995, is one of the most accurate representations of the round globe on flat paper.
- The "Map of the Moon," published in the February 1969 issue of National Geographic Magazine, was the first map to show the entire lunar surfaceincluding the far side of the moonon a single sheet of paper.
- Many popular online map services like Google Maps and ArcGIS Online use a variation of the Mercator projection. This projection is very good for preserving angles in maps, but is not good for viewing areas of the world close to the North and South Poles.
Vocabulary
accuracy (noun): condition of being exact or correct.
bathymetric map (noun): representation of spatial information displaying depth underwater.
cartographer (noun): person who makes maps.
cartography (noun): art and science of making maps.
conformal map (noun): representation of spatial information where angles, scale, and shape are preserved.
cylindrical projection (noun): map projection where the Earth's surface is projected onto a tube, or cylinder, shape.
distortion (noun): representation that is twisted, mistaken, or false.
ellipse (noun): shape of an elongated oval with some dimension of depth.
equal area map (noun): maps that show true relative sizes but distort shape and direction.
Goode projection (noun): representation of a sphere that does not distort land masses and divides spatial information into six unequal lobes. Also called an orange-peel map.
longitude (noun): distance east or west of the prime meridian, measured in degrees.
map (noun): symbolic representation of selected characteristics of a place, usually drawn on a flat surface.
map projection (noun): method by which shapes on a globe are transferred to a flat surface.
Mercator projection (noun): representation of a sphere where lines of latitude and longitude are straight and at right angles to one another.
meridian (noun): line of longitude, dividing the Earth by north-south.
Mollweide projection (noun): representation of a sphere where area is shown accurately but directions and shapes are distorted.
navigation (noun): art and science of determining an object's position, course, and distance traveled. | <urn:uuid:207167ad-1771-42e2-8cfb-55202125de94> | CC-MAIN-2019-09 | https://www.nationalgeographic.org/media/selecting-map-projection/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00276.warc.gz | en | 0.902612 | 1003 | 4.09375 | 4
This page provides an introduction to author rights: the typical rights that are included in copyright, as well as information on how to negotiate the rights you retain when working with publishers.
What are Author Rights?
Author rights refer to the rights you retain over your work when you sign a publication agreement. When you publish a book or a paper, many publishers will typically ask you to transfer all your copyrights to the work as a condition of publication. This contract may also be referred to as a “copyright transfer agreement”.
It is important to look carefully at your agreement. Unless the agreement indicates otherwise, you may be forbidden to:
- Make copies to distribute to your students.
- Post your work to a personal website or online archive.
- Give copies of your work to colleagues.
- Use parts of the work in future publications.
- Grant permission for others to use your work.
Remember that publication agreements are negotiable. See the information under Managing Copyright for author addendums, which can be attached to a publication agreement to retain certain rights over your work.
If you’d like assistance interpreting a publication agreement, or assistance with negotiating rights, contact us.
For information about how to post copies of your research online in a copyright-compliant manner, please see the following: How to avoid receiving a takedown notice from a publisher (and what to do if you get one!)
Who owns the copyright to my research?
Copyright protection arises automatically for all original literary, artistic, dramatic and musical works, computer programs, translations and compilations of works, as well as sound recordings, performances and communication signals.
As the creator of the work, you automatically have copyright over your intellectual work, unless you sign an agreement transferring certain rights to another individual or organization, which is often the purpose of the publication agreement you sign with a publisher.
Publication agreements are negotiable: if the agreement does not allow for certain key rights (the right to archive a peer-reviewed copy of your work, for instance), you can request an amendment to your agreement. For more information, see the information under Managing Copyright.
How do I retain my rights when working with publishers?
Copyright is actually a bundle of rights, including the rights over reproduction, creation of derivative works, distribution, and public display. Read your publication agreement (sometimes also called a copyright transfer agreement) to see if the publisher allows you to retain rights that are important to you.
Remember that publication agreements are negotiable. If your agreement asks you to forfeit important rights over your work, there are tools available to negotiate a more balanced agreement: the SPARC Canadian Author Addendum and Scholar’s Copyright Addendum Engine generate PDF documents that may be attached to a publisher agreement and allow you to retain certain rights, including:
- To reproduce your work for non-commercial purposes.
- To reuse portions of your article in derivative works.
- To grant permission for others to make non-commercial use of your work.
For more information about copyright in general, see the UBC copyright site.
Can I share my research online through a personal website or digital repository?
This will depend on the copyright policies of your publisher, the rights you retained through your publication agreement, and the version of the work you’d like to share.
Some publisher agreements allows authors to archive pre-prints (your manuscript prior to peer review), and/or post prints (your final peer-reviewed manuscripts). Open Access journals often allow you to retain copyright over your article with very few or no restrictions (authors are often able to freely share the PDF of the final article). Publishers may also specify conditions such as where you can archive, how soon after publication, and how to cite the resource when archiving.
The best way to determine what you can do is to read your publication agreement. You can also refer to Sherpa/Romeo, which provides details of the archiving rights normally given by the publishing agreements of various publishers. Sherpa/Romeo ranks publishers on the following scale:
- Green: authors can archive post-prints (the final draft of an article after peer review) and the publisher’s final PDF version of articles.
- Blue: authors can archive post-prints or the publisher’s final PDF version of the article.
- Yellow: authors can archive pre-prints (the version of the paper before peer-review).
Can I use parts of an article I have published in a new work?
Again, this will depend on your publication agreement.
These agreements typically establish who owns the copyright for the article (usually the publisher), and they often also outline how the article may be reused by the authors in the future. If you did sign such an agreement and you still have a copy on file, then you should check to see (1) who owns the copyright and (2) whether the agreement provides advance permission for authors to reuse content from the article in future publications.
If you don’t have a copy of your publication agreement, or if the agreement does not provide advance permission for reuse in future publications, then you will need to seek permission from the publisher to republish content. For most large publishers, you can usually obtain permission through the Copyright Clearance Center, or by contacting the permissions office of your publisher directly. | <urn:uuid:fa3c9d55-573b-4474-9434-cc750dbe9072> | CC-MAIN-2017-22 | http://scholcomm.ubc.ca/author-rights/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608668.51/warc/CC-MAIN-20170526144316-20170526164316-00401.warc.gz | en | 0.924257 | 1,099 | 3.0625 | 3 |
Sample English literature essays. An essay in an edited volume. Definition of Literature and Other Essays. Definition, Format Examples Related Study Materials. Definition of essay in English literature: we have thousands of free essays across a wide range of subject areas. Database of FREE English literature essays. American Literature 9th Grade English.
Definition of essay in english literature literature Department. Experienced scholars, quality services, timely delivery and other benefits can be. Lang Performing Arts Center. Essay definition in literature.
In order to have a deeper insight about the reasons that urge women to rebel, we need first to understand the meaning of rebel. A short piece of writing on a particular subject. Professionally crafted and definition of essay in english literature. HQ academic writings. An attempt or effort.
If you get stuck in the writing process, we can help. It definition of essay in english literature is an essential reference tool for students of literature in any language. My personal definition of war is an act of bravery and heroism of citizens who have a sincere and deep. War and Its Influence on Literature. English Dictionaries and Thesauri. How do you write the perfect literature essay.
The Definition of Literature. Custom Essay Writing Service. Follow tips definition of essay in english literature from professional writers. Free examples of Definition essay on. And The Oxford English.
Important historical periods in English literature include Old English. Definition of Literature Essay. Definition of definition of essay in english literature Literature. Literature is an outlet of escape from.
Definition of literature for English Language Learners. All terms in this dictionary are bookmarked and it is possible to place a link to any term that will then open the definition in. Since Montaigne adopted the term essay in the 16th century, this slippery form has resisted any sort of precise, universal definition. Line English Literature. Such as poems, plays, and novels.
That are considered to be very good and to have lasting. Written works, especially those considered of superior or lasting artistic merit. Definition of literature. GCSE English Literature will introduce you to some of.
Sample Definition Essay. English literature is hundreds of years old and continues to be one of the most popular courses of study in high. AP English Sample Essays. When you are writing a for an. AP English Language or AP English Literature prompt you need to make sure that. GCSE English Literature.
Definition, Usage and a list of Essay Examples in common speech and literature. English Literature Writing Guide. Essay definition, a short literary composition on a particular theme or subject, usually in prose and generally analytic, speculative, or interpretative. An essay is a short form of literary composition based. What is English Literature. As a term to cover the most distinctive writers who flourished in the last years of the 18th century and the first decades.
Need help writing your English Literature essay. English Literature essay at University level, including. Essay Writing Writing. No plagiarism or hidden fees.
Essay meaning, definition, what is essay. A short piece of writing on a particular subject, especially one done by students as part. Department of English. Literature definition, writings in which expression and form, in connection with ideas of permanent and universal interest, are characteristic or essential features. Britannica Web sites. More than two hundred professional academic writers are ready to assist you 24. Articles from Britannica encyclopedias for elementary and high school students.
In a definition essay, its origins and the evolution of its use in literature. That will fully illustrate and explain your definition. A genre is a particular type of literature, painting, music, film, or other art. We provide superior quality original and custom essays with high. Meaning, pronunciation, translations and examples.
Using Elements of Literature. Try our best English essay writing service features that you can imagine. S choices and attempt to explain their significance. Your essay should point out the author.
An essay has been defined in a variety of ways. T confuse Modernism with the standard definition of modern. Daria channeled her struggle into a college admissions essay that talks about losing herself in literature to cope with. Definition of essay for English. One definition is a prose composition with a focused subject of discussion or a long, systematic discourse.
The definition provided is as simple as comprehensive. Essay Definition Literature. Literature, for example, Great English Lesson Plans for High School. Essay and Resume Service provides professional writing services for students, executive, management and entry level positions in USA, CA, GB.
The Modernist Period in English. It contains alphabetical lists of literary terms, the vocabulary of literature. Travel writing as a literary genre. The York Dictionary Literary Terms and Their Origin. Literature occupied the years from shortly after the beginning of the twentieth century through roughly. Travel Writing As A Literary Genre English Literature.
English, French, German. What is a Definition Essay. Do not use any examples that will not support the definition. A definition essay is writing that. Definition Of Literature Essay Definition of Literature.
English dictionary definition of literature. The body of written works. Literature synonyms, literature pronunciation, literature translation. Definition what is literature essay. Essay definition, meaning, what is essay. Oxford English Dictionary offers several short definitions that can be used to build one ultimate.
Tema m, composizione f. Old English in Anglo. Old English literature, or Anglo. Saxon literature, encompasses the surviving literature written in. An essay is a short form of literary composition. Saxon England, in the period after the. | <urn:uuid:9509d9a4-bf51-424d-839f-4d2da37af9bf> | CC-MAIN-2018-09 | http://usual-freezer.ml/VW0rHucdjefe0q.php | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813691.14/warc/CC-MAIN-20180221163021-20180221183021-00663.warc.gz | en | 0.907519 | 1,141 | 2.6875 | 3 |
Abundance and size of the sea scallop population in the Mid-Atlantic Bight
University of Delaware
The stock of the Mid-Atlantic Bight sea scallop fishery is assessed every year through the use of various dredging and imaging techniques. The sustainability of the fishery depends on the proper setting of the yearly catch limits based on the assessment of the preceding year. Within the past 10 years, digital image surveys have been explored as a potential method to supplement the yearly dredge-based surveys. AUVs have been shown to be a successful platform for rapidly and accurately performing seafloor image surveys of benthic habitats. In 2011, a Teledyne Gavia autonomous underwater vehicle (AUV) with a hull-mounted camera was used to non-invasively image, both optically and acoustically, 313 km of the seafloor within the Mid-Atlantic Bight at a constant altitude of 2 m. Survey transects were completed at 24 open-access ground locations and 3 additional locations within the Elephant Trunk Access Area. Trained image analysts, using a scallop counting and sizing algorithm developed for this stock assessment, enumerated and sized sea scallops within the 250,000 collected seafloor images, finding that the region had an overall scallop density of 0.027 scallops/m². Georeferenced data from the AUV's inertial navigation system (INS) were tagged to every seafloor image, allowing for unprecedented meter-scale spatial analysis of the sea scallop distribution. The relationship between image subsampling and the accuracy of the resulting scallop density was explored via simulations run on the image analysis results. Eight AUV transects were resurveyed by a New Bedford commercial scallop dredge to obtain shell-height calibration data and to calculate the harvest efficiency of the dredge (0.60). Image analysis and backscatter data collected by the AUV's 900 kHz side-scan sonar were used to classify seafloor substrate types. The surveyed scallop strata were classified as 98.6% sandy seafloor, with the remaining 1.4% representing intermittent shell hash, mounds, and ripples.

The side-scan backscatter data revealed other varied seafloor textures, including escarpments from scallop dredge trawling and wave-created sorted bedforms. Seafloor dredge-scar area measured from the side-scan backscatter data and National Oceanic and Atmospheric Administration (NOAA) vessel monitoring system (VMS) tracking data were used as a proxy for fishing effort. Increased dredging was found to positively skew shell-height distributions.
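The subsampling question, how the accuracy of the density estimate degrades when only a fraction of the images is analyzed, can be sketched with a toy Monte Carlo simulation on synthetic counts. The image footprint, number of images, and Poisson count model below are illustrative assumptions, not the survey's actual parameters; only the 0.027 scallops/m² figure comes from the abstract.

```python
import math
import random

TRUE_DENSITY = 0.027   # scallops per m^2, the survey-wide figure
IMAGE_AREA = 1.5       # m^2 of seafloor per image (assumed)
N_IMAGES = 20000       # synthetic survey size (assumed)

def poisson(lam: float, rng: random.Random) -> int:
    """Knuth's algorithm: draw one Poisson-distributed count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
counts = [poisson(TRUE_DENSITY * IMAGE_AREA, rng) for _ in range(N_IMAGES)]

def density_estimate(fraction: float) -> float:
    """Estimate density from a random subsample of the analyzed images."""
    k = max(1, int(N_IMAGES * fraction))
    sample = rng.sample(counts, k)
    return sum(sample) / (k * IMAGE_AREA)

# The spread of the estimate grows as the analyzed fraction shrinks.
spread = {}
for frac in (0.25, 0.05):
    ests = [density_estimate(frac) for _ in range(50)]
    mean = sum(ests) / len(ests)
    spread[frac] = math.sqrt(sum((e - mean) ** 2 for e in ests) / len(ests))

full_estimate = sum(counts) / (N_IMAGES * IMAGE_AREA)
```

With a synthetic survey this size, the full-sample estimate lands close to the true density, while the 5% subsample shows noticeably more scatter than the 25% subsample, which is the trade-off the simulations in the study quantify on real data.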
The Human Brain Project, an ambitious undertaking to better understand the human brain, has officially kicked off. The goal of the project is to use a super computer to simulate a complete human brain, something that will not only aid in the treatment of a variety of ailments, but will also be used to help create new computing technologies. One of the areas of focus is neurorobotics.
If all goes as planned, the scientists involved will create the most accurate simulation of the human brain ever developed. The work is being done by 130 research institutions, with the entire project being coordinated by the École Polytechnique Fédérale de Lausanne, more commonly called the EPFL. For the next 30 months, the various platforms will be tested, opening up to researchers working under the Human Brain Project globally in 2016.
There are six platforms that are being focused on: brain simulation, neuroinformatics, medical informatics, high-performance computing, neuromorphic computing, and neurorobotics. Each platform has its own objectives to meet, and each will work with its own share of the data gathered, eventually forming a cohesive whole.
The neurorobotics platform, in particular, will work on taking the simulated neural networks and implementing them in robots, starting with virtual ones and working up to physical constructs. With such a project, the information gathered and assembled could eventually lead to, for example, microchips that handle specific tasks by imitating particular neural network functions.
PORTLAND, Ore. — Solar cell designs today pursue performance at the lowest possible cost, neglecting the dimensions of thin-and-lightweight, according to Massachusetts Institute of Technology (MIT) researchers who aim to design the world's thinnest solar cells.
For mobile electronics, thin-and-lightweight are prime design goals, but solar cells have aimed instead at the highest efficiency. Today, making solar cells thinner and lighter would be welcome for applications in aviation, space exploration, and in remote areas where transportation costs are high, according to MIT. In the future, as materials become more scarce, the conservation achieved with ultra-thin solar cells could cost-reduce even urban installations.
"Our predictions are for what may very well be the thinnest solar cells possible, ones out of only two layers of materials," professor Jeffrey Grossman told EE Times. Grossman performed the work with post-doctorate researcher Marco Bernardi and Maurizia Palummo, a visiting researcher from the University of Rome.
"There are indeed applications where weight is crucial, where the thinnest possible amount of active layer material with minimal encapsulation may change the installation game, because it could get us onto [thinner, more durable] substrates," Grossman said. "In addition, this gets to the heart of what I think is an important question: namely, what is the most power we can squeeze out of each and every atom or bond of a given material?"
MIT researchers use computer simulations to shuffle through different materials in the search for the thinnest possible solar cells. (Source: MIT)
MIT estimates that its ultra-thin solar cell films -- essentially two-dimensional (2D) layers as thin as one nanometer -- can deliver 1,000 times more energy-per-pound than conventional solar cells. The tradeoff is that their efficiency is lower, requiring about 10 times the area of a conventional solar cell to produce the same amount of energy, since ultra-thin solar cells have an efficiency of up to 2 percent, compared with up to 20 percent for conventional photovoltaic (PV) solar cells. However, the researchers have plans for stacking the ultra-thin 2D solar cells in layered structures to improve their efficiency.
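That area tradeoff follows directly from the efficiency ratio. A quick check, assuming a nominal "one sun" insolation of 1,000 W/m² (a standard test value, not a figure from the MIT team):

```python
INSOLATION = 1000.0  # W/m^2, nominal "one sun" test condition (assumed)

def panel_area_m2(target_watts: float, efficiency: float) -> float:
    """Area needed to produce target_watts at a given cell efficiency."""
    return target_watts / (INSOLATION * efficiency)

thin_film = panel_area_m2(100, 0.02)      # 2% ultra-thin cell -> 5.0 m^2
conventional = panel_area_m2(100, 0.20)   # 20% conventional cell -> 0.5 m^2
ratio = thin_film / conventional          # 10.0, matching the estimate above
```

The tenfold area penalty scales inversely with efficiency, which is why stacking more 2D layers to raise efficiency would shrink the required area proportionally.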
"These two-sheet stacks we predict could have efficiencies of 1 to 2 percent. However, it is certainly possible to make stacks that consist of more than just two layers, and in that case the efficiency would go up," said Grossman. "There is no reason efficiencies of cells made from 2D materials couldn't be just as efficient as current 'traditional' PV -- in the 10 to 20 percent range."
The ultra-thin solar cell design is still in simulation while the researchers decide which material to use for prototypes. In detailed simulations, various topologies of stacked sheets use atomically thin graphene, molybdenum disulfide, and molybdenum diselenide. The best of these designs not only provide a weight advantage over conventional solar cells, but are also immune to oxygen, ultraviolet radiation, and moisture in the environment -- the three killers of long-term stability in conventional solar cells -- giving the new ultra-thin designs the additional advantage of eliminating the need for glass covers or standoff mounting, which consumes over half the cost of conventional PV installations.
"Ultralight solar cells (with extremely high power/weight in our case) have the potential to reduce installation costs. Current solar modules based on silicon are heavy and made heavier by the glass protecting them. Their installation amounts to 60 percent of the total cost of a solar array, largely due to the high weight," said Bernardi. "By finding ultra-thin and mechanically flexible materials, the hope is to make very light solar cells, which can be encapsulated with plastics rather than glass, and hence create new paradigms for photovoltaic installation."
The material cost for ultra-thin solar cells would be minimal, compared to conventional solar cells, but the researchers have yet to create prototypes in the lab or to work on making the materials manufacturable in high volume. Next they plan to test their formulations in the lab by measuring the efficiency and long-term stability of various formulations and stacking structures. | <urn:uuid:46627758-b13b-4299-9105-982309a57bff> | CC-MAIN-2017-34 | http://www.eetimes.com/document.asp?doc_id=1318841&piddl_msgorder= | s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00357.warc.gz | en | 0.940394 | 870 | 3.84375 | 4 |
Ethan Crumlin, Andrew Hoy, Joseph Walish, Peter Weigele, John Craven, Gerardo Jose, and Jungik Kim of Team BioVolt
Team BioVolt from MIT has created a prototype device that generates cheap electricity from grass clippings. They won first place in the MIT and Dow Materials Engineering Contest (MADMEC) 2007:
The device the team invented for the competition generates electricity from cellulosic biomass. It is intended to generate enough electricity to charge a cell phone in developing countries. Team members say that at its current power output, the device would take about six months to recharge a cell phone.
However, BioVolt is quick to point out that the materials in the device only cost about $2 to obtain and the biomass "fuel" can be found everywhere in nature as leaves and grass clippings. The team members say that multiple units could be connected together to increase power output and that refinements in the design of the device could yield a 100 times increase in efficiency. | <urn:uuid:3cfa7a83-1c77-4ce3-bddf-bf35ad497e5e> | CC-MAIN-2017-34 | http://www.neatorama.com/2007/10/07/mit-students-device-creates-electricity-from-grass-clippings/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105955.66/warc/CC-MAIN-20170819235943-20170820015943-00553.warc.gz | en | 0.939559 | 211 | 2.796875 | 3 |
There have been theories put forward that cryptids are Tulpas. So what is a tulpa?
A tulpa is a thought form: a manifestation of mental energy, a being or object created out of thought that sometimes manifests in the "real world" so that others can see it and interact with it. Some people have found that tulpas tend to reflect the worst of their creator: the petty fears, desires and anger that we all bury inside ourselves can come out in the tulpa.
What evidence is there for their existence?
When Alexandra David-Neel journeyed through Tibet, one of the many techniques she studied was that of tulpa creation. A tulpa, according to Tibetan doctrines, is an entity created by an act of imagination. David-Neel decided to try to create one. Her tulpa began its existence as a plump, benign little monk. Then the apparition slipped from her conscious control. She discovered that the monk would appear from time to time when she had not willed it, and the friendly little figure was taking on a distinctly sinister aspect. Eventually her companions, who were unaware of the mental disciplines she was practicing, began to ask about the "stranger" who had turned up in their camp, a clear indication that the creature had become real. David-Neel decided things had gone too far and applied different techniques, a sort of exorcism, to reabsorb the creature into her own mind. The tulpa proved very unwilling to comply, so the process took several weeks and left its creator exhausted.
See: Eileen Campbell and J. H. Brennan, Body Mind & Spirit: A Dictionary of New Age Ideas, People, Places, and Terms (Tuttle Publishing, rev. ed., 1994).
So, in theory, anyone who concentrated hard enough could produce their own tulpa. Usually, though, when people see cryptids they are not concentrating, and quite often they are not expecting to see anything at all. Could the subconscious mind produce the tulpa/cryptid without the need for concentration? If you are in a creepy place you are more likely to think you have seen things, as atmosphere affects our emotions. There is no proof either way as to whether cryptids are real creatures or tulpas, but the idea adds another layer to the world of cryptozoology. Could something like the bigfoot creature have been a tulpa created out of one mind and now seen by others? Tulpas do seem to develop a life and a will of their own, according to David-Neel. Something more for discussion in the cryptozoology ring, and perhaps fuel for the sceptics.
HUINCHIRI, Peru—The Incas didn't invent the wheel, never figured out the arch, and never discovered iron. But they were masters of fiber. They built ships out of fiber. (You can still find reed boats sailing on Lake Titicaca.) They made armor out of fiber, which was stronger, pound for pound, than the steel worn by the conquistadors. Their greatest weapon, the sling, was woven from fibers, and it was powerful enough to split a Spanish sword. They even communicated in fiber, developing a language of knotted strings known as quipo, which has yet to be decoded. And so when it came to solving a problem like how to get people and goods across the steep gorges of the Andes, it was only natural that the Incas would look to fiber for a solution.
Five centuries ago, the Andes were strung with suspension bridges. By some estimates, there were as many as 200 of them, braided from nothing more than twisted mountain grass and other vegetation, with cables that were sometimes as thick as a human torso. At least 300 years before Europe saw its first suspension bridge, the Incas were spanning longer distances and deeper gorges than anything that the best European engineers, working with stone, were capable of.
Over the centuries, most of the empire's grass bridges gradually gave way and were replaced with more conventional works of modern engineering. The most famous Incan bridge—the 148-footer immortalized by Thornton Wilder in The Bridge of San Luis Rey—lasted until the 19th century, but it, too, eventually collapsed. Today, there is just one Incan grass bridge left, the keshwa chaca, and just one last Incan bridge-keeper. His name is Victoriano Arisapana.
We drove four and a half hours south of Cusco with anthropologist Jean-Jacques Decoster to find Arisapana and the keshwa chaca. With the help of a hitchhiker who knew the route, we made our way down a series of hairpin curves to a sagging 90-foot-long bridge that stretched between two sides of a rocky canyon. According to locals, it has been there for at least 500 years.
Here in the desolate, 2-mile-high Andean altiplano, little else grows besides ichu, a tall needly grass that covers the mountainsides, feeds the llamas, and is the raw material from which the keshwa chaca is constructed. Unlike a modern suspension bridge, where the roadway hangs from suspended cables, the roadways of Incan bridges are the cables themselves. The keshwa chaca consists of just four parallel ropes with a mat of small twigs laid across, anchored at both ends by a platform of larger rocks. Two other thick cables act as arm rails and are connected to the roadway with a cobweb of smaller cord.
When we finally found him, the keeper of the last Incan bridge was tending his three cows on a barren hillside not far from the bridge. After sharing some coca leaves with us, and consecrating them to the spirit of the keshwa chaca, Arisapana guided us to his home, a tiny one-room mud brick house with a small garden.
Five centuries ago, at the height of the Incan empire, there would have been dozens, perhaps hundreds, of men like Arisapana throughout the Andes; they carried the title of chacacamayoc, or bridge-keeper. Each was responsible not only for maintaining and administering a bridge, but also for collecting tolls and helping frightened travelers across.
Exposed to the weather, the grass ropes of the keshwa chaca wouldn't hold up for more than a couple of years. Unlike the Golden Gate or George Washington bridges, which are almost constantly being repaired, an Incan bridge can't be patched up or have parts swapped out. It can only be replaced wholesale. The largest Incan bridges were maintained by the state and supported by a system of compulsory public service that demanded several weeks' labor by every grown male each year. In the case of smaller communal bridges, like the keshwa chaca, the work of regularly rebuilding the bridge fell to local communities. To this day, the four surrounding villages convene in the valley each June for a three-day festival to cut down the previous year's keshwa chaca and replace it with cables twisted from fresh ichu. Each household is responsible for bringing 90 feet of braided cord to the ceremony. The entire process happens under the orchestration of the chacacamayoc, Arisapana.
Arisapana, who says he is 48, inherited the job of chacacamayoc from his father. Back at his home, he offered to show us how the bridge's grass rope is woven. He disappeared inside and came back a few minutes later with a bundle of fresh ichu, a plastic jug of water, and a stone slightly bigger than a softball. He sat down on a rock and demonstrated how the 2-foot-long blades of grass are first soaked and pounded to make them supple, and then twisted into cords as thick as a finger. Three dozen of those cords are braided together to form a rope about 6 inches in diameter. It takes about 10 miles of cord to make the entire bridge.
The first conquistadors to encounter grass bridges pronounced them "the work of the devil" and trembled at the thought of crossing. Ultimately, the Spanish discovered that the largest bridges were strong enough to carry not only horses but also cannons, as well as an army marching two-by-two. Indeed, modern load-testing by John Ochsendorf, an MIT professor and MacArthur fellow who researches the engineering accomplishments of ancient civilizations, has found that a length of keshwa chaca cable can sustain 4,000 pounds of tension. Ochsendorf estimates that in peak condition, the small bridge could support the weight of 56 people spread out evenly across its length.
Knowing that the strength of the keshwa chaca had been verified by no less of an authority than Ochsendorf and had been field-tested for no fewer than five centuries, I had no hesitation about stepping out onto the precarious-looking structure. It had been five months since the bridge had last been rebuilt, which meant it was almost halfway through its life cycle, and it was already showing wear. At its midpoint, the bridge sagged about 10 feet, and one of the arm rails had dropped about a foot lower than the other. Most of the small twigs that constitute the bridge's floor had long ago fallen into the Apurímac River, 60 feet below, which meant I had to step carefully to avoid dropping my foot through the floor. It was easy to imagine why even the bravest Spanish soldiers would sometimes crawl across on all fours. But as Dylan and I knew from our time in Colombia, there are scarier ways to cross an Andean valley.
In 1968, the government built a steel truss bridge just a few hundred yards upstream from the keshwa chaca. Though most locals now use it rather than the grass bridge to cross the gorge, the tradition of rebuilding the keshwa chaca each year still continues. Divorced from practical necessity, its annual renewal has become an impassioned, highly ritualized act of cultural preservation.
Today, the nearby metal truss bridge shows signs of wear. Its orange paint has grown rusty. Its wood planks are rickety. One of the metal barriers is badly warped from a vehicle collision. It was built as a permanent replacement for an inherently impermanent structure. But nothing is truly permanent. All bridges someday collapse. And it's not impossible to imagine that it may yet be outlived by its more fragile neighbor downstream, whose very ephemeralness seems to be the source of its staying power.
GoPro provided the travelers with some camera equipment free of charge.
Peace is an elusive thing. Everyone wants peace, yet few seem to actually possess it in any substantive form. For many, the attraction of the Christmas season is the momentary fulfillment of that dream: the wonderful moment of "Peace on Earth." For one night, it seems possible. A few times in history, this sense of peace at Christmas has had real impact on human affairs. During World War I, a series of widespread, unofficial ceasefires took place along the Western Front around Christmas 1914. This so-called Christmas truce is seen as a symbolic moment of peace and humanity amidst one of the most violent events in modern history. Another, little-known example is the signing of the Treaty of Ghent on Christmas Eve, December 24, 1814, ending the War of 1812 between the United States and the British Empire and their allies.
The War of 1812
This war was an intense military conflict which resulted in no territorial change, but it resolved many issues remaining from the American War of Independence. The separation, begun in 1776 and sealed by treaty in 1783, had been made by a long war, which left angry feelings. Besides these unpleasant memories, there were also controversies over important material interests that emerged from time to time: trade restrictions brought about by Britain's ongoing war with the French Empire, the impressment of American merchant sailors into the Royal Navy, British support of American Indian tribes against American expansion, and so on. The United States declared war on June 18, 1812. This was the first time that the United States had declared war on another nation.
By 1814, both sides were weary of a costly war that offered little but stalemate. They both sent delegations to the neutral Flemish town of Ghent, then part of the Netherlands. Diplomatic negotiations began in early August. The Americans sent top commissioners: John Quincy Adams, James A. Bayard, Henry Clay, Jonathan Russell and Albert Gallatin. The British sent minor officials who kept in close touch with their superiors in London: the Right Honourable James Lord Gambier, late Admiral of the White, then Admiral of the Red Squadron of His Majesty's Fleet; Henry Goulburn, Esq., member of the Imperial Parliament and Under Secretary of State; and William Adams, Esq., Doctor of Civil Laws.
Peace on Christmas Eve
In the late afternoon of December 24, 1814, on Christmas Eve, the commissioners agreed to sign a peace treaty, the Treaty of Ghent. However, the treaty was hardly more than an instrument that served to end the war and restore the status quo ante bellum. It said nothing about impressment, nothing about indemnities. Boundary problems, territorial claims, and all other outstanding questions were to be settled later by joint commissions. At six o'clock, with darkness spreading over Ghent and the carillon of St. Bavon pealing its Christmas message, three English and five American commissioners gathered about a long table and officially attested to a Treaty of Peace and Amity Between His Britannic Majesty and the United States of America. When all had signed, Lord Gambier expressed his hope that the treaty would be permanent. John Quincy Adams, as he tells us in his diary, in turn assured Lord Gambier of his hope that it would be the last treaty of peace between Great Britain and the United States. At six thirty the American delegation disappeared into the solemn night of Christmas Eve with peace in their pockets. Almost 200 years later, the Treaty of Ghent has indeed proved to be a lasting peace between Great Britain and the United States. A little-known but most durable monument it has been.
Microcontrollers are the computers at the heart of embedded systems. A microcontroller is a computer on a chip (a single IC) that has all the main features of a computer: RAM, ROM, I/O ports, serial communication, and a processor with its arithmetic and logic unit (ALU). These are integrated onto a single chip through semiconductor technology. A microcontroller is a programmable device, and the programs written for it are called embedded code or firmware. A program for a microcontroller can be developed in assembly language, C, C++, BASIC, or Pascal.
Hex file: the file produced after writing and compiling your program. "Hex" is short for hexadecimal (base 16). This is the format used to program the memory of the chip.
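For illustration, each line of an Intel HEX file is a record whose final byte is a checksum: the two's complement of the sum of all the preceding bytes, so every byte in a valid record sums to zero modulo 256. A minimal checker (the sample records are standard textbook examples, not taken from any particular firmware):

```python
def hex_record_ok(record: str) -> bool:
    """Verify the checksum of one Intel HEX record, e.g. ':0300300002337A1E'."""
    data = bytes.fromhex(record.lstrip(":"))
    return sum(data) % 256 == 0  # data bytes plus checksum sum to 0 mod 256

# A data record and the standard end-of-file record:
print(hex_record_ok(":0300300002337A1E"))  # True
print(hex_record_ok(":00000001FF"))        # True
```

Device programmers run exactly this kind of check on every record before writing the bytes into the chip's flash memory.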
The Families of Microcontrollers: Types and Manufacturers
1. The 8051: The microcontrollers in this family are
- 8051, 8052, 8053 by Intel.
- AT89S51, AT89S52, AT89C51, AT89C52 by Atmel.
- DS89C4X0 and DS5000 by Dallas Semiconductor.
Other manufacturers of the 8051 family of microcontrollers include Philips, AMD, Infineon, Matra, Zilog, National Semiconductor, and Texas Instruments.
2. The PIC microcontroller: PIC stands for Peripheral Interface Controller, a family manufactured by Microchip Technology. Microchip produces many families of microcontrollers, including the PIC10, PIC12, PIC16, PIC18, PIC24, PIC32, dsPIC30 and dsPIC33.

The PIC10 through PIC18 families are 8-bit microcontrollers; the PIC24, dsPIC30 and dsPIC33 are 16-bit; and the PIC32 is a 32-bit family.
3. The AVR: The AVR family of microcontrollers is manufactured by Atmel Corporation; AVR is commonly expanded as Advanced Virtual RISC. AVR microcontrollers range from 8-bit parts to the 32-bit AVR32 line. Examples include the ATmega8, ATmega16, and ATmega32. Atmel also produces the compact tinyAVR series of 8-bit microcontrollers, for example the ATtiny13 and ATtiny10.
4. The ARM: ARM stands for Advanced RISC Machine. There is a wide range of ARM processors, known for low power consumption, with operating voltage ratings from 1.8 V to 3.7 V. The advanced features of ARM cores have made them the most widely used processor architecture in the world, spanning 32-bit microcontrollers to 64-bit application processors. Examples include the STM32F030F4 and other chips built on ARM Cortex-M cores (such as the Cortex-M0, Cortex-M3, and Cortex-M4) and Cortex-A cores.
Magnesium is one of the most abundant minerals in the body. It plays several roles in the health of the body.
Magnesium is important for many body processes, including those that control how the muscles and nerves work. It helps keep the bones strong, keeps the heart healthy, and helps normalize blood sugar. In addition, magnesium helps maintain energy levels. Magnesium can be obtained from various foods and drinks; however, physicians may prescribe magnesium-rich supplements to patients if there is a need for them. This piece contains some of the reasons why you need magnesium.
How Much Magnesium Do You Need?
Studies have shown that an adult woman needs about 310 milligrams of magnesium per day, and about 320 milligrams after age 30. Pregnant women need an extra 40 milligrams. Adult men need 400 milligrams up to age 30, and 420 milligrams from age 31. Kids need between 30 and 410 milligrams, depending on their age and gender. You should consult your physician about how much magnesium you need.
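The figures above can be encoded as a small lookup. This is a hedged sketch in Python using only the numbers stated in this section (actual recommendations vary by source and individual; consult a physician):

```python
def magnesium_rda_mg(sex: str, age: int, pregnant: bool = False) -> int:
    """Daily magnesium (mg) per the adult figures quoted above.

    These thresholds come straight from the article's text; they are
    illustrative, not medical advice.
    """
    if sex == "female":
        base = 310 if age <= 30 else 320
        return base + (40 if pregnant else 0)  # pregnancy adds 40 mg
    if sex == "male":
        return 400 if age <= 30 else 420
    raise ValueError("sex must be 'male' or 'female'")

print(magnesium_rda_mg("female", 28, pregnant=True))  # 350
```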
Are You Getting Enough Magnesium?
Statistics have shown that about half of Americans don't get sufficient magnesium from their diet. This may lead to various health problems if it continues for a long time. Some of the health conditions that may develop due to magnesium deficiency include type 2 diabetes mellitus, high blood pressure, and migraines. People prone to developing magnesium deficiency include older adults, alcoholics, and individuals with type 2 diabetes.
Is It Possible To Have Too Much Of Magnesium?
It is difficult for a healthy individual to get too much magnesium from food, because the kidneys remove any extra magnesium from the body. However, an excess amount of magnesium, such as from laxatives or antacids that contain magnesium, can lead to cramps or nausea; magnesium can make people sick at high doses. You shouldn't take magnesium supplements without a doctor's prescription or the guidance of your physician.
Reasons You Need Magnesium
- Magnesium is involved in biochemical reactions in the body: Magnesium is one of the most abundant minerals on earth, found in the sea, plants, animals, and humans. Studies have shown that about 60% of the magnesium in the body is found in bone, while the rest is distributed among the soft tissues, fluids, and blood. Every cell in the body contains magnesium and needs it to function. One of magnesium's main functions is to act as a cofactor in the various biochemical reactions carried out by enzymes. Some of the roles magnesium plays in these reactions include the following:
- Energy creation: Magnesium helps in converting food into energy.
- Protein formation: Magnesium is important in creating proteins from amino acids.
- Gene maintenance: Magnesium helps in creating and repairing DNA and RNA.
- Nervous system regulation: It helps regulate neurotransmitters, which send messages through the central nervous system.
- Muscle movements: It’s involved in the contraction and relaxation of muscles.
- Boost exercise performance: Magnesium helps improve exercise performance. Studies have shown that humans need about 10-20% more magnesium during exercise than at rest. Magnesium helps move blood sugar into the muscles and dispose of lactic acid, which can accumulate in muscles during exercise and cause pain. Supplementing with magnesium can boost exercise performance for athletes, the elderly, and people affected by chronic disease.
- Fight depression: Magnesium plays an important role in brain function and mood. Low levels of magnesium have been linked to an increased risk of depression. Some experts suggest that the low magnesium content of modern food may contribute to many cases of depression and mental illness. However, more research is needed in this area.
- Type 2 diabetes mellitus: It's believed that magnesium is connected with type 2 diabetes mellitus. Studies suggest that about 48% of people with type 2 diabetes have low magnesium in their blood, which affects insulin's ability to keep blood sugar levels under control. Another study showed that people with type 2 diabetes mellitus taking high doses of magnesium each day experienced significant improvements in their blood sugar and hemoglobin A1c levels.
- Calcium absorption: As mentioned earlier, magnesium and calcium are necessary for maintaining bone health and preventing osteoporosis. Without adequate magnesium, a high intake of calcium can increase the risk of arterial calcification and cardiovascular disease. It can also lead to the formation of kidney stones.
- Heart health: Magnesium is important for the maintenance of the health of muscles. This includes the heart muscles, and also the transmission of electric signals in the body. Studies have linked magnesium intake with a lower risk of some diseases such as the following:
- Atherosclerosis, or the build-up of fatty material on the walls of the arteries.
- Magnesium can lower blood pressure: Studies have shown that taking magnesium can help reduce blood pressure. According to one study, individuals who took over 450 mg of magnesium per day experienced a significant reduction in systolic and diastolic blood pressure. However, this may only be effective in people affected by high blood pressure: overall, magnesium helps reduce blood pressure in people with hypertension but doesn't have the same effect in those with normal levels.
- Anti-inflammatory effects: Low magnesium intake has been linked to chronic inflammation, one of the drivers of aging, obesity, and chronic disease. Magnesium supplements can help reduce C-reactive protein and other markers of inflammation in older adults.
- Migraine: Some researchers believe that magnesium can be helpful in treating migraine and its associated symptoms, such as nausea, vomiting, and sensitivity to light and noise.
- Relieving anxiety: One reason to maintain a normal, healthy level of magnesium is that it helps relieve anxiety. There is a link between magnesium levels and anxiety: studies have shown that people with low levels of magnesium are prone to developing anxiety.
Northern coasts of the Atlantic and Pacific oceans
Whole part of plant
Kelp has been eaten by various peoples in different parts of the world for thousands of years. In Asia it was used as food and medicine as far back as 3,000 BCE, where it was used to treat uterine problems, genital tract disorders, and kidney, bladder, and prostate problems. Kelp has been a staple food of Icelanders for centuries and was used by the Ancient Greeks to feed their cattle. In Hawaii, the ancient nobles grew gardens of edible seaweed, where it was highly prized as a nutritious foodstuff; they seldom ate a meal that did not contain some kind of seaweed.
Kelp is perhaps most well known for its rich iodine content. Iodine is a critical nutrient for the thyroid: insufficient iodine levels cause goiter (an enlarged thyroid), a precursor to many thyroid disorders. In Dr Gabriel Cousens' essay, “Iodine – The Universal & Holistic Super Mineral”, he noted that iodine helps synthesise thyroid hormones and prevents both hypo- and hyperthyroidism. He goes on to say, “There is little awareness of the importance of iodine in the synthesis of thyroid hormones, particularly T3 and T4. Thyroid hormones control metabolism, temperature, heart rate, glucose consumption, and even blood lipids. Iodine also helps to regulate cortisol.”
Dr David Brownstein, author of “Iodine – Why You Need It & Why You Can’t Live Without It”, highlights the fact that as iodine levels have fallen by over 50% during the last 40 years, thyroid disorders, including hypothyroidism, Hashimoto’s disease, Graves’ disease and thyroid cancer have been increasing at near-epidemic rates.
Kelp is a wonderful way to keep your iodine levels topped up, thus ensuring a healthy thyroid. Powdered Kelp can be added to many savoury dishes and as a wholefood it provides the full spectrum of nutrition maximising absorption of all the minerals it contains.
Another powerful nutrient found in kelp is fucoidan, with studies showing its effectiveness in many blood-related disorders. It has been found to help prevent the blood clotting that can lead to many dangerous health problems, including strokes and heart attacks. It is so effective that researchers cite its potential as an antithrombotic agent, reducing the need for prescription drugs.
Fucoidan also protects cells in your body from ischemic damage, meaning damage caused by improper levels of blood flow to certain parts of the body.
The aforementioned fucoidans are being widely studied for their ability to reduce inflammation within the body. These sulfated fucoidans have been shown to reduce pain, fight viruses and prevent atherosclerosis.
Fucoidans produce their anti-inflammatory effects by blocking selectin production and inhibiting pro-inflammatory prostaglandins and enzymes. Selectins are glycoproteins (sugar-protein molecules) that often signal inflammatory processes in the body. Fucoidans also inhibit the enzyme phospholipase A2 (PLA2), which turns on inflammatory processes.
A dense source of many important minerals, kelp can help keep your bones strong and healthy. Calcium, magnesium, iron, zinc, and manganese all contribute to building and maintaining bone structure and strength; iron and zinc play a crucial role in maintaining the strength and elasticity of bones. Kelp has more calcium than many other vegetables, including kale and collard greens, though calcium only builds strong bones when in proper balance with magnesium.
For medicinal purposes, ½ gram of kelp powder can be taken daily. Since kelp contains large amounts of iodine, it is a good idea to keep servings small. Although getting enough iodine is essential for good health, too much can be detrimental to the body.
Kelp contains alpha-carotene, beta-carotene, vitamin C, cobalt, iodine, and iron. Kelp also contains lutein, manganese, magnesium, calcium, chromium, niacin, phosphorus, potassium, riboflavin, selenium, silicon, sodium, tin, zeaxanthin, and zinc.
It is recommended you speak to your health care professional, naturopath, or herbalist before supplementing with kelp if you suffer from a thyroid condition.
When it comes to baking, weights and measurements are critical, and scales are the key to accuracy. In cooking, it is easier to correct mistakes than in baking. Recently, a reader made my Angel Food cake and wrote to tell me it didn't rise as she expected. Could I tell her what she might have done wrong, she asked. My first guess was that she incorrectly measured the flour. I suggested she use a scale if possible; she did on the next one, and happily it came out perfectly for her. This is the best case I can make for scales. Anyone who knows me knows this has been my mantra in baking for years: scales, scales, scales.
Mine goes up to 10 pounds but it should go up to at least 5 pounds. When you get your scale home, test it for accuracy by weighing a pound of butter. Take the butter out of its carton first, turn on the scale and weigh it. It should be 16 ounces or 454 grams. We tested all of our scales at the shop this way.
Ounces by weight and ounces by volume are not interchangeable. A pound of all purpose flour (16 ounces or 454 grams) weighed on a scale equals 3 1/2 cups of flour by volume. However, 3 1/2 cups of flour may not weigh a pound if measured by volume. Flour is particularly difficult to measure without a scale. It depends upon how it is put into the cup. Because flour packs down in shipping and stacking on the grocer’s shelf, if it is not weighed, it needs to be lightly stirred in its container first then spooned into a dry measuring cup to overflowing. The excess should be removed by sweeping it off with the back of a knife or any flat utensil.
Making things more complicated, it depends upon which flour you are weighing and how it is to be measured. For baking purposes, here are the weights I use. Also, note there are differences between sifted and unsifted. If the recipe reads 1 cup sifted flour, the flour has to be sifted into the cup before being measured. If the recipe reads 1 cup flour, sifted, then measure the flour and sift it afterwards. If it doesn't say, assume it is unsifted.
Type of flour       Cup, sifted              Cup, unsifted
All-purpose flour   115 grams (4 oz.)        140 grams (5 oz.)
Bread flour         115 grams (4 oz.)        140 grams (5 oz.)
Cake flour          100 grams (3 1/2 oz.)    114 grams (4 oz.)
Pastry flour        115 grams (4 oz.)        125 grams (4 1/3 oz.)
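The table above can be captured in a small helper. This is an illustrative Python sketch using the per-cup weights listed (the names and structure are my own, not a standard API):

```python
# Grams per cup, keyed by (flour type, sifted?), from the table above.
GRAMS_PER_CUP = {
    ("all_purpose", True): 115, ("all_purpose", False): 140,
    ("bread", True): 115, ("bread", False): 140,
    ("cake", True): 100, ("cake", False): 114,
    ("pastry", True): 115, ("pastry", False): 125,
}

def flour_grams(flour: str, cups: float, sifted: bool = False) -> float:
    """Convert a volume of flour to its approximate weight in grams."""
    return GRAMS_PER_CUP[(flour, sifted)] * cups

print(flour_grams("cake", 2, sifted=True))  # 200.0
```

A lookup like this makes the sifted/unsifted distinction explicit, which is exactly the detail that volume measuring tends to blur.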
You will need measuring spoons for small amounts of ingredients. I like metal spoons rather than plastic, which can become distorted in the dishwasher. The shape of the spoon doesn't matter; they come in all shapes. While timers for baking may not be thought of as a measuring device, they are. I have extra timers in addition to the oven and microwave timers so I can take a timer with me if I am going out of the kitchen. I prefer a timer that shows minutes and seconds.
Thermometers are yet another measuring device. An oven thermometer is very helpful for baking. Ovens can drift in temperature from time to time, and it is good to know the current temperature and adjust up or down. I also have thermometers in my refrigerator and freezer, as well as instant-read, candy, and meat thermometers. With the tools above, accurate measurement of ingredients, time, and temperature will be assured for your baking.
It makes sense that, as more households opt for wireless connections, landlines will be cut. A new study by the Centers for Disease Control confirms that.
Preliminary results from the January–June 2008 National Health Interview Survey indicate the number of American homes with only wireless telephones continues to grow. More than one out of every six American homes had only wireless telephones during the first half of 2008. The 2008 figure is an increase of 1.7 percentage points since the second half of 2007, when one out of every eight homes was completely landline-free. It is about 10 percentage points higher than the same figure for the first six months of 2005, when only one out of every 15 adults lived in wireless-only households.
In addition, more than one out of every eight American homes last year received all or almost all calls on wireless telephones despite having a landline telephone in the home.
The study revealed variations depending on different demographic groups. For example, nearly two-thirds of all adults living only with unrelated adult roommates were in households with only wireless telephones. This is the highest prevalence rate among the population subgroups examined.
The trend also appeared strongest in younger demographics, with more than one in three adults aged 25–29 years living in households with only wireless telephones. The percentage of adults aged 18-24 years living in a wireless-only household was almost the same, at approximately 31 percent.
Among other things, the study reveals that the migration to a wireless-only environment is stronger in metropolitan, well-to-do households. Adults living in metropolitan areas were more likely to be living in wireless-mostly households than adults living in more rural areas (15 percent versus 12 percent), and adults living in or near poverty were less likely than higher-income adults to be living in wireless-mostly households (approximately 10 percent versus 17 percent).
Resource: To educate people on the importance of valuing natural resources and tapping into natural capital, a four-day training on “Introduction to the Natural Capital Project Approach and InVEST” (Integrated Valuation of Ecosystem Services and Tradeoffs) concluded in Paro yesterday.
A team of Bhutanese professionals attended the training to build their capacity on valuation of ecosystem services.
According to a press release, InVEST will enable Bhutan to value ecosystem services such as water quality and yield, timber production, and habitat quality, among others, which will put a value on these resources and enable the introduction of Payments for Ecosystem Services (PES) mechanisms.
PES is broadly defined as incentives offered to farmers or landowners in exchange for managing their land to provide some sort of ecological service. The PES system will be imperative for the long-term conservation of terrestrial and freshwater ecosystems and the services they provide.
For instance, maintaining the watersheds upstream will result in good water yield and quality, which will benefit the hydropower stations downstream through increased power generation and lower maintenance costs due to low sedimentation.
Similarly, water is highly essential for irrigation and drinking water supply. In turn, the communities upstream receive incentives to ensure the protection of watersheds, which in the long run will protect the ecosystems.
The valuation of ecosystem services, the press release stated, will go a long way in developing and managing the country’s natural capital, as all stakeholders work together for the sustainable use of natural resources.
Accounting for ecosystem services reveals the diverse benefits provided by nature, clarifies tradeoffs between alternative development scenarios, and can enable practitioners and policy makers to make more informed decisions in managing valuable natural resources.
Planning Officers, GIS Experts, Engineers and Program Officers from the Gross National Happiness Commission, Department of Forest and Park Services, Ministry of Economic Affairs and UWICE, DGPC and Department of Hydropower and Power Systems attended the workshop.
The Watershed Management Division and UWICE conducted the training, with technical and financial support from WWF Bhutan and the technical team from the Natural Capital Project at Stanford University.
The Natural Capital Project is based at Stanford University and the University of Minnesota, combining conservation science and policy through partnerships with The Nature Conservancy and the World Wildlife Fund.
By Staff reporter
According to realist theory, the international system operates on a checks-and-balances method that is flawed to a certain extent, given the ethnic and cultural dissimilarities between the peoples of the world. Samuel Huntington’s “The Clash of Civilizations” argues that the fault lines between civilizations lead to war. This was borne out in the aftermath of the 9/11 attacks against the US. Other commentators, like Robert Kagan, have pointed to the resurgence of Russia and the recent conflict in the Caucasus between Russia and Georgia as an example of how “history returns” whenever certain nations fall from pre-eminence and then assert themselves to regain lost glory. In his recent work, “The Return of History and the End of Dreams”, Kagan forcefully makes the point about what the 21st century might look like when it comes to international relations. The realist perspective seems a good prism through which to look at the complex dynamics shaping war. This can be seen from the fact that in the aftermath of the fall of the Berlin Wall and the collapse of Communism, the then US president George Bush Sr. made a case for a “New World Order” and proclaimed that “we are at the threshold of a new era that has been dreamed by generations of men but has always eluded them”. However, the euphoria was short-lived, as Iraq under Saddam Hussein attacked Kuwait, which led to American intervention and the first Gulf War. Thus, we had history repeating itself in 2008 when Russia asserted itself in South Ossetia.
Whether it be for cutting down trees or providing a fantastic Halloween prop, chainsaws are mainly associated with either hacking through dense woodland areas or featuring in the more gruesome scenes in horror flicks.
The last thing that springs to mind is the process of childbirth.
However, a viral video on TikTok has claimed that is exactly what chainsaws were originally invented to do.
A warning to those who are squeamish – there is some truth to it.
Ask anyone who has done it and they will tell you giving birth is extremely difficult, even in 2021 with the most advanced medicine and hospital care.
Back in the days before anaesthetic, a caesarean section was barely used given the danger surrounding effectively opening a woman’s stomach while fully conscious.
This did, unfortunately, prove an obstacle for breech births or for larger babies, and the method deemed most suitable by doctors at the time was the now thankfully retired "chainsaw approach".
To help release the baby, small parts of pelvic bone and cartilage would be removed in an operation called a "symphysiotomy". This used to be completed via a small knife, which made the excruciatingly painful process even longer.
Aiming to "ease" some of the pain for the pregnant woman and speed up the process, doctors invented an early version of the chainsaw in 1780.
Though it followed the same mechanical approach, it was significantly smaller and less intimidating than the kind of chainsaws we see cutting down oak trees today. This one was powered by hand, with a spiky chain that moved around the edge.
The invention did prove to be a revelation, with the chainsaw going on to be deployed in other medical situations such as amputations, before it was eventually used in the woodwork industry to hack through stern objects.
As time has gone by, the contraption has evolved to become the kind of powerful machine we see in forests and horror films alike.
The Ethiopian Orthodox community celebrated Fasika, Ethiopian Easter on May 1.
Fasika is arguably the most celebrated Holiday in the Eastern African country.
After the long period of strict fasting, avoiding meat or animal products to represent Christ’s fasting for forty days and forty nights, the Orthodox community celebrated the event weeks after the Western church marked Easter.
Abba Hailemariam, priest at Holy Trinity church explains why they celebrate Fasika with such rigor.
“Depriving ourselves of food and drink strengthens our spirituality, our inner being. This gives spiritual strength, so during the period of Lent it is enriched enormously,” Hailemariam said.
From Friday morning to Saturday evening, the Orthodox prayed in memory of Jesus.
In Addis Ababa, the faithful flocked to parishes to celebrate the resurrection of Christ.
The religious festival quickly turns into a family celebration.
After a strict period of fasting, families purchase an animal, which is meant to be slaughtered by the head of the family.
“In order to ensure that the goat is killed by a Christian, the head of the family does it. He must do the killing, though not necessarily the skinning; any professional can do that,” said Amha, a businessman in the city.
During this season, family members and friends travel from faraway places to be with their loved ones.
How Teachers Saved a Nation
I am gazing at an image of a Japan long gone. The Temple of the Golden Pavilion was built in 1397 for Shogun Ashikaga Yoshimitsu as part of a great estate he used for a retreat and later as a retirement villa. The temple is coated with gold leaf and is placed picturesquely in a garden at the edge of a large pond. The pavilion extends partly over the pond and is brilliantly reflected in the calm water. The gold leaf reflects the autumn sun and the warm mood of the many tourists who flock to Kyoto to see this iconic site. I take a few photographs and marvel at the pristine condition of a temple built 100 years before Columbus sailed to the Americas.
An elderly Japanese man politely offers to photograph me with the temple in the background.
"I can't believe this temple is over six hundred years old," I remarked.
The man smiled and told me to read the brochure in my hand. My awe-inspiring moment is deflated when I read the brochure and learn about the history of the Temple of the Golden Pavilion. On July 2, 1950, at 2:30 am, the original Golden Pavilion was burned down by a monk named Hayashi Yoken. The pavilion I marvel at was built in 1955, and the coating of lacquer and gold-leaf veneer was completed in 1987. The ornate roof was restored in 2003.
I asked the Japanese gentleman why the young monk set fire to the temple.
"The monk's motive is not clear because he tried to commit suicide behind the building," he answered.
"Did the police investigate the arson?" I asked.
"Yes," he replied. "The police questioned the monk's mother and asked her to explain why her son would commit such a sacrilegious act."
"What did she say?"
The man took my photograph and then replied, "She committed suicide by jumping from a train."
I learned the monk later died in a mental institution and the police probably thought it wise not to question another Yoken family member. Why risk further decreasing the family clan?
I notice a tall bronze statue of a phoenix on the roof of the temple and realize how it symbolizes the most important lesson I have learned while visiting Japan: The Japanese people understand the importance of education more than any other country in the world because education is the phoenix of their nation.
The elderly Japanese man hands back my camera and I look at his face. I see warmth and wisdom in his eyes and possibly a window to the past.
"Do you remember World War Two?" I asked.
The man did not seem surprised by my off-topic question, so I followed it up. I asked him how a nation decimated by war could literally rise from the ashes and become the world's second largest economy.
The old man is quick to answer. "We never lost our school system," he said. "During the war and after the war, children went to school. Japan lost much during the war, but we never abandoned our schools."
I believe the greatest institution of social change is the school and the greatest instrument of change is the teacher. No other true democracy designed by the hand of man has ever existed. And the Japanese people practiced my belief after World War II by displaying an indomitable desire to rebuild a country through its schools.
War often destroys much more than lives and buildings. Social and governmental institutions are shattered, infrastructure decimated, food and clean water scarce, and people grow weak with the laborious task of burying the dead. Defeated nations seldom arise from the cinders of battle with the physical or psychological strength necessary to survive, let alone prosper, and Japan is one of the few exceptions in recorded history in which a nation found a collective resolve to not allow a vanquished people to become a vanished people.
Foreign aid and investment helped rebuild the many cities destroyed by Allied bombs but the will to endure and to thrive in a post-war economy was instilled in children by teachers. A people turned to its education system to renew a nation and teachers and schools were there to answer the call. The so-called "Japanese Economic Miracle" could not have occurred without schools and teachers.
And this is the greatest and most profound lesson I learned during my visit to Japan.
Machu Picchu, Peru
Before we meet each other in 1976, Bruce travels through Africa with his sister, and Tass spends a winter in South America with her best friend, Suzanne.
Tass and Suzanne go to South America to study birds, and also to see some of the old Inca ruins and cities.
At that time, in 1975, very few people visit Machu Picchu, and even fewer hike in to see the ruins via the Inca Trail.
Tass and Suzanne hike the Inca Trail for three days following a hand-drawn map. They don't see a single other tourist. They hike through frequent rainstorms, and often the surrounding valleys are obscured by clouds and mist.
Imagine their excitement when they look down and see Machu Picchu through the clouds.
Archeologists believe Machu Picchu was constructed in the 1400s by the Inca ruler Pachacuti.
When the Spanish first arrived in South America, they fought and conquered the Inca.
But the Spanish never learned about Machu Picchu, and the city high in the Andes Mountains of Peru is not rediscovered until over 400 years later.
The walls of the buildings are built of granite rock.
The roofs of the buildings were all covered with grass thatching that has long since rotted away.
No one knows what happened to the people who lived here.
The system, dubbed a "nanosponge," uses nanoparticle-sized carriers to deliver the drug payload. These nanoparticles circulate in the body until they encounter the surface of a tumor cell, where they adhere and begin releasing the drug in a controllable and predictable fashion. The controlled-release nanoparticle drug-delivery system used a targeting peptide that recognized a radiation-induced cell-surface receptor. This targeting agent combined a recombinant peptide with a paclitaxel-encapsulating nanoparticle that specifically targeted irradiated tumors, thereby increasing apoptosis and tumor-growth delay. Phage-display biopanning identified Gly-Ile-Arg-Leu-Arg-Gly (GIRLRG) as a peptide that selectively recognizes GRP78, a receptor on certain tumor cells. Antibodies to GRP78 blocked the binding of GIRLRG in vitro and in vivo. The conjugation of GIRLRG to a sustained-release nanoparticle drug-delivery system increased paclitaxel concentration and apoptosis (1).
When loaded with an anticancer drug, the delivery system is three to five times more effective than direct injection at reducing tumor growth (2). The sponge acts as a three-dimensional network or scaffold. The backbone is a long polyester chain, which is mixed in solution with cross-linkers to form the polymer. The net effect is to form spherically shaped particles filled with cavities where drug molecules can be stored. The polyester is biodegradable, so it breaks down gradually in the body. As it breaks down, it releases its drug payload in a predictable fashion (2). Targeted delivery systems of this type have several basic advantages. Because the drug is released at the tumor site instead of circulating widely through the body, it should be more effective for a given dosage. It also should have fewer harmful side effects because smaller amounts of the drug come into contact with healthy tissue. Another advantage is that the nanosponge particles are soluble in water. Encapsulating the anticancer drug in the nanosponge allows the use of hydrophobic drugs that do not dissolve readily in water. Currently, these drugs must be mixed with adjuvant reagents, which can potentially reduce the efficacy of the drug or cause side effects (2).
The nanosponge is produced through fairly simple chemistry. The researchers developed simple, high-yield so-called "click chemistry" methods for making the nanosponge particles and for attaching the linkers. The drug used for the animal studies was paclitaxel, the active ingredient in the anticancer therapy Taxol. The researchers recorded the response of two different tumor types—slow-growing human breast cancer and fast-acting mouse glioma—to single injections. In both cases, they found that the delivery through nanosponges increased the death of cancer cells and delayed tumor growth compared with other chemotherapy approaches.
1. E. Harth et al., Cancer Res. 70 (11), 4550–4559 (2010).
2. D. Salisbury, Exploration: Research News at Vanderbilt University, June 1, 2010. | <urn:uuid:e065f23d-cc04-4ca9-9153-2884654cc103> | CC-MAIN-2014-52 | http://www.pharmtech.com/formulation-development-forum-nanosponges | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447562981.54/warc/CC-MAIN-20141224185922-00058-ip-10-231-17-201.ec2.internal.warc.gz | en | 0.91729 | 654 | 3.21875 | 3 |
An autonomous agent is something that can both reproduce itself and do at least one thermodynamic work cycle. It turns out that this is true of all free-living cells, excepting weird special cases. They all do work cycles, just like the bacterium spinning its flagellum as it swims up the glucose gradient. The cells in your body are busy doing work cycles all the time.
Stuart Kauffman is a theoretical biologist who studies the origin of life and the origins of molecular organization. Thirty-five years ago, he developed the Kauffman models, which are random networks exhibiting a kind of self-organization that he terms "order for free." Kauffman is not easy. His models are rigorous, mathematical, and, to many of his colleagues, somewhat difficult to understand. A key to his worldview is the notion that convergent rather than divergent flow plays the deciding role in the evolution of life. He believes that the complex systems best able to adapt are those poised on the border between order and chaos.
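The "Kauffman models" mentioned here are what the literature usually calls random Boolean networks. As a rough, hypothetical illustration (mine, not Kauffman's own code or parameters), a minimal such network can be simulated in a few lines: each node reads K randomly chosen inputs, applies a random Boolean rule, and the whole system settles onto a repeating cycle, the kind of spontaneous order Kauffman calls "order for free":

```python
import random

def make_network(n, k, seed=0):
    """Build a random Boolean network: each of n nodes reads k
    randomly chosen input nodes and applies a randomly chosen
    Boolean function (a lookup table with 2**k entries)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Advance every node synchronously by one tick."""
    new = []
    for node in range(len(state)):
        idx = 0
        for i in inputs[node]:
            idx = (idx << 1) | state[i]
        new.append(tables[node][idx])
    return tuple(new)

def find_attractor(n=12, k=2, seed=0):
    """Iterate from a random initial state until a state repeats;
    the repeating loop is the network's attractor."""
    inputs, tables = make_network(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}  # state -> first time it was visited
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]  # length of the attractor cycle

cycle_len = find_attractor()
```

In Kauffman's published work, sparsely connected networks (K around 2) tend to fall onto short attractors relative to the enormous state space; this sketch only illustrates the mechanics, not those results.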
Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a "new" biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.
Kauffman says that he has been "hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?"
STUART A. KAUFFMAN, a theoretical biologist, is emeritus professor of biochemistry at the University of Pennsylvania, a MacArthur Fellow and an external professor at the Santa Fe Institute. Dr. Kauffman was the founding general partner and chief scientific officer of The Bios Group, a company (acquired in 2003 by NuTech Solutions) that applies the science of complexity to business management problems. He is the author of The Origins of Order, Investigations, and At Home in the Universe: The Search for the Laws of Self-Organization.
"THE ADJACENT POSSIBLE"
STUART A. KAUFFMAN: In his famous book, What is Life?, Erwin Schrödinger asks, "What is the source of the order in biology?" He arrives at the idea that it depends upon quantum mechanics and a microcode carried in some sort of aperiodic crystal—which turned out to be DNA and RNA—so he is brilliantly right. But if you ask if he got to the essence of what makes something alive, it's clear that he didn't. Although today we know bits and pieces about the machinery of cells, we don't know what makes them living things. However, it is possible that I've stumbled upon a definition of what it means for something to be alive.
Right now I'm busy thinking about this incredibly important problem. The frustration I'm facing is that it's not clear how to build mathematical theories, so I have to talk about what Darwin called adaptations and then what he called pre-adaptations.
You might look at a heart and ask, what is its function? Darwin would answer that the function of the heart is to pump blood, and that's true—it's the cause for which the heart was selected. However, your heart also makes sounds, which is not the function of your heart. This leads us to the easy but puzzling conclusion that the function of a part of an organism is a subset of its causal consequences, meaning that to analyze the function of a part of an organism you need to know the whole organism and its environment. That's the easy part; there's an inalienable holism about organisms.
But here's the strange part: Darwin talked about pre-adaptations, by which he meant a causal consequence of a part of an organism that might turn out to be useful in some funny environment and therefore be selected. The story of Gertrude the flying squirrel illustrates this: About 63 million years ago there was an incredibly ugly squirrel that had flaps of skin connecting her wrists to her ankles. She was so ugly that none of her squirrel colleagues would play or mate with her, so one day she was eating lunch all alone in a magnolia tree. There was an owl named Bertha in the neighboring pine tree, and Bertha took a look at Gertrude and thought, "Lunch!" and came flashing down out of the sunlight with her claws extended. Gertrude was very scared and she jumped out of the magnolia tree and, surprised, she flew! She escaped from the befuddled Bertha, landed, and became a heroine to her clan. She was married in a civil ceremony a month later to a very handsome squirrel, and because the gene for the flaps of skin was Mendelian dominant, all of their kids had the same flaps. That's roughly why we now have flying squirrels.
The question is, could one have said ahead of time that Gertrude's flaps could function as wings? Well, maybe. Could we say that some molecular mutation in a bacterium that allows it to pick up calcium currents, thereby allowing it to detect a paramecium in its vicinity and to escape the paramecium, could function as a paramecium detector? No. Knowing what a Darwinian pre-adaptation is, do you think that we could say ahead of time what all possible Darwinian pre-adaptations are? No, we can't. That means that we don't know what the configuration space of the biosphere is.
It is important to note how strange this is. In statistical mechanics we start with the famous liter volume of gas, and the molecules are bouncing back and forth, and it takes six numbers to specify the position and momentum of each particle. It's essential to begin by describing the set of all possible configurations and momenta of the gas, giving you a 6N-dimensional phase space. You then divide it up into little 6N-dimensional boxes and do statistical mechanics. But you begin by being able to say what the configuration space is. Can we do that for the biosphere?
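The bookkeeping Kauffman describes can be made concrete with a toy example (a sketch of the standard construction, not anything from the interview): each of N particles contributes three position and three momentum coordinates, so a microstate is a single point in a 6N-dimensional space, and coarse-graining assigns each particle's six coordinates to one of the "little boxes":

```python
import random

N = 100      # number of particles in the toy "gas"
DIMS = 6     # 3 position + 3 momentum coordinates per particle
CELLS = 4    # coarse-graining resolution: cells per coordinate axis

rng = random.Random(42)

# One microstate of the gas: N points, each described by 6 numbers,
# i.e. a single point in a 6N-dimensional phase space (coordinates
# rescaled into [0, 1) for simplicity).
microstate = [[rng.random() for _ in range(DIMS)] for _ in range(N)]

def cell_of(particle):
    """Map a particle's six coordinates to a discrete 6-D cell index:
    one of the 'little boxes' the phase space is divided into."""
    return tuple(min(int(c * CELLS), CELLS - 1) for c in particle)

# Occupation numbers: how many particles fall into each cell.
# Statistical mechanics proceeds by counting microstates compatible
# with a given set of occupation numbers.
occupancy = {}
for p in microstate:
    cell = cell_of(p)
    occupancy[cell] = occupancy.get(cell, 0) + 1

total_dims = DIMS * N        # the full phase space is 6N-dimensional
total_cells = CELLS ** DIMS  # boxes available to a single particle
```

The contrast Kauffman draws is that this construction is only possible because the variables (positions and momenta) are known in advance; no analogous enumeration exists for wings, ears, or paramecium detectors.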
I'm going to try two answers. Answer one is No. We don't know what Darwinian pre-adaptations are going to be, which supplies an arrow of time. The same thing is true in the economy; we can't say ahead of time what technological innovations are going to happen. Nobody was thinking of the Web 300 years ago. The Romans were using things to lob heavy rocks, but they certainly didn't have the idea of cruise missiles. So I don't think we can do it for the biosphere either, or for the econosphere.
You might say that it's just a classical phase space—leaving quantum mechanics out—and I suppose you can push me. You could say we can state the configuration space, since it's simply a classical, 6N-dimensional phase space. But we can't say what the macroscopic variables are, like wings, paramecium detectors, big brains, ears, hearing and flight, and all of the things that have come to exist in the biosphere.
All of this says to me that my tentative definition of an autonomous agent is a fruitful one, because it's led to all of these questions. I think I'm opening new scientific doors. The question of how the universe got complex is buried in this question about Maxwell's demon, for example, and how the biosphere got complex is buried in everything that I've said. We don't have any answers to these questions; I'm not sure how to get answers. This leaves me appalled by my efforts, but the fact that I'm asking what I think are fruitful questions is why I'm happy with what I'm doing.
I can begin to imagine making models of how the universe gets more complex, but at the same time I'm hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?
There is a chance that there are general laws. I've thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak's self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it.
One other problem concerns what I call the conditions of co-evolutionary assembly. Why should co-evolution work at all? Why doesn't it just wind up killing everything as everything juggles with everything and disrupts the ways of making a living that organisms have by the adaptiveness of other organisms? The same question applies to the economy. How can human beings assemble this increasing diversity and complexity of ways of making a living? Why does it work in the common law? Why does the common law stay a living body of law? There must be some very general conditions about co-evolutionary assembly. Notice that nobody is in charge of the evolution of the common law, the evolution of the biosphere, or the evolution of the econosphere. Somehow, systems get themselves to a position where they can carry out co-evolutionary assembly. That question isn't even on the books, but it's a profound question; it's not obvious that it should work at all. So I'm stuck.
Deloitte has projected that soft-skill-intensive occupations will account for two-thirds of all jobs by 2030. In previous articles, we identified the soft skills needed for entry-level jobs. Here, we pinpoint the common terms job postings use to signal which soft skills are most important for the position.
Learn the terms referring to soft skills and know what they really mean.
A continuous learner is also called “coachable.” This means having an attitude or mindset that’s focused on continuous improvement.
Being proactive is also called a “self-starter.” This means taking initiative when given a new task or presented with a problem.
Collaborative also means “being a team player.” This means awareness of how an individual’s actions contribute to the success of a team outcome or goal.
Being a strategic thinker is also called being “solutions-oriented.” This means once a problem has been identified you are determined to develop and provide answers or solutions.
Having good communication and presentation skills is also called being a “strong communicator.” This means the ability to clearly and concisely express an idea or information to a colleague, customer or listener in a manner they can readily understand.
Figurines Excavated from the Burnt Palace and Fort Shalmaneser
The most expansive text prescribing the types of figurines is the Aššur ritual KAR, no. 298. After defining the purpose of the ritual as averting evil from the house, the text begins to prescribe the types of figures to be fashioned and buried at set locations.
It begins with a long passage prescribing wooden figures of seven apkallē “Sages,” from seven Babylonian cities. No such actual figurines appear to exist, nor should we expect any if the prescription were faithfully followed, since timber figurines would have perished.
The next passage, however, prescribes apkallū figures with the faces and wings of birds. These are the bird-headed figures (Plate IXb), found appropriately in groups of seven. As well as in the Burnt Palace, a group of such figures was found in Fort Shalmaneser in a late seventh-century context; the excavator believed that the figures were redeposited ninth-century pieces, but they are rather different in style (ND 9518, figures in the round rather than flat-backed plaques) and may in fact date closer to the period suggested by their findspot.
A group of figures of the same type was found by George Smith in the so-called “S.E. Palace,” perhaps a part of the same building as Palace “AB;” the pieces are close in style to the Burnt Palace examples and may date to the late ninth century.
The ritual goes on to prescribe a set of seven figures of the apkallē cloaked in the skin of a fish. This type is represented by septenary groups of fish-garbed human figures which vary somewhat from deposit to deposit.
The usual type from the Burnt Palace, thin and fairly flat, sometimes has a fish-head and, on the reverse, a dorsal fin (Plate Xb), but often has no very obvious fish elements, so that the pieces must be identified from others in the same deposit or by comparison with those in other deposits.
Also from the Burnt Palace come some more obvious human-piscine figures of heavy solid clay (Plate Xc). Six examples of this subtype were found, together with a seventh, “leader” (?), figure of the same being but of a very different style: a tall but flat fish-garbed man, the scales and tail indicated on the back by incised cross-hatching and diagonal lines.
Over thirty figurines and metal figurine accoutrements were found not buried in boxes but loose in the fill of one of the so-called “barracks-rooms” of Fort Shalmaneser. They would seem to be remnants from disturbed deposits, but evidently reused, since the fish-cloaked figures, of incongruous styles, were nevertheless seven in number.
It is possible, therefore, that the room was a kind of sick-bay, decked out with these prophylactic images. Plate Xd shows one of the types found, rather crudely made but with the line of the fish-cloak evident enough.
It is interesting to note, in this context, that when one of the legs is exposed and set forward on figurines of this type, it is the left one, perhaps foreshadowing an Islamic custom of entering a holy place with the right foot first, but the haunts of the jinn leading with the left.
The fish-cloaked figure is known in Mesopotamian art from the Kassite period, and despite a dearth of extant sculpture was not an uncommon figure in the Neo-Assyrian palace or temple (Plate Xa).”
Anthony Green, “Neo-Assyrian Apotropaic Figures,” Iraq, Vol. 45, 1983, pp. 88-90. | <urn:uuid:9102eec1-605e-45de-ab99-60c6dfffdaa8> | CC-MAIN-2017-30 | https://therealsamizdat.com/2015/07/18/figurines-excavated-from-the-burnt-palace-and-fort-shalmaneser/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00434.warc.gz | en | 0.958597 | 823 | 3.140625 | 3 |
Run by School of Natural Sciences
10 Credits or 5 ECTS Credits
Organiser: Prof Mike Beckett
Overall aims and purpose
The aim of this module is to build on inorganic chemistry taught in year 1 with particular reference to key aspects of the chemistry of d- and f-blocks and to explore solid state chemistry (synthesis, properties, applications) in more detail.
Prof M.A. Beckett (12h): The topics covered include a revision of CFSE from year 1 and an introduction to covalency in metal-ligand bonding with ligand field theory. Binary metal carbonyls will be surveyed, and the 18-electron rule and synergic bonding in metal carbonyls will be introduced. An introduction to organometallic chemistry will be presented, including metal-alkene bonding, metal alkyls, and cyclopentadienide and arene coordination chemistry. Oxidative addition and reductive elimination reactions will be examined. A brief overview of lanthanide and actinide chemistry will also be presented.
Dr J. Thomas (12h): The uses and properties of solid-state materials will be covered, in addition to the different synthetic procedures used in their preparation and the different methods of characterisation, with an emphasis on 'state-of-the-art' methodology. The properties and utility of solid-state materials (including defects) will be discussed using specific examples such as conductors and semiconductors, black and white photography, zeolites, solid-state lasers, and superconductors.
Threshold (40%): Knowledge and understanding covered in the course is basic. Problems of a routine nature are generally adequately solved. Transferable skills are at a basic level.
Excellent (>70%): Knowledge base is extensive and extends well beyond the work covered in the programme. Conceptual understanding is outstanding. Problems of a familiar and unfamiliar nature are solved with efficiency and accuracy, and problem-solving procedures are adjusted to the nature of the problem.
Good (60%): Knowledge base covers all essential aspects of subject matter dealt with in the programme and shows good evidence of enquiry beyond this. Conceptual understanding is good. Problems of a familiar and unfamiliar nature are solved in a logical manner; solutions are generally correct and acceptable. Performance in transferable skills is sound and shows no significant deficiencies.
The student should be able to: show an understanding of binary metal carbonyls, including structural aspects and metal carbonyl bonding; know the 18-electron rule and how to apply it to organometallic compounds, and be able to distinguish between saturated and unsaturated metal centres; understand oxidative addition and reductive elimination as key reactions that occur in organometallic systems; describe, using MO diagrams, ligand field theory as a follow-on from CFSE in transition metal complexes; and show an understanding of the basic principles of lanthanide and actinide (f-block) chemistry.
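The 18-electron rule named in these outcomes is simple bookkeeping, and a short script can illustrate it. This is a hypothetical helper, not part of the module materials; it uses the neutral (covalent) counting convention, and the ligand donor numbers are the standard textbook values:

```python
# Valence electrons the neutral metal atom brings (its group number).
GROUP_ELECTRONS = {"Cr": 6, "Mn": 7, "Fe": 8, "Co": 9, "Ni": 10,
                   "Mo": 6, "W": 6}

# Electrons donated per ligand under the neutral counting convention.
LIGAND_ELECTRONS = {"CO": 2, "PR3": 2, "alkene": 2,
                    "alkyl": 1, "H": 1, "Cp": 5}

def electron_count(metal, ligands):
    """Total valence electron count: metal electrons plus ligand
    donations, e.g. Ni(CO)4 gives 10 + 4*2 = 18."""
    total = GROUP_ELECTRONS[metal]
    for ligand, n in ligands.items():
        total += n * LIGAND_ELECTRONS[ligand]
    return total

def is_saturated(metal, ligands):
    """An 18-electron centre is coordinatively saturated."""
    return electron_count(metal, ligands) == 18
```

For example, Ni(CO)4 counts 10 + 4 × 2 = 18 and is saturated, while a 16-electron fragment such as Cr(CO)5 is unsaturated and reactive.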
The student should be able to describe key processes in the synthesis and characterisation of solid-state materials and their appropriateness under specific circumstances, and demonstrate a clear understanding of the properties and utility of solid-state materials, including defects, using specific examples such as conductors and semiconductors, black and white photography, zeolites, solid-state lasers, and superconductors.
Teaching and Learning Strategy
Background reading to support learning
The module has 24 hours of 1h lectures (including two tutorials held in class), at 2 lectures per week.
- Literacy - Proficiency in reading and writing through a variety of media
- Numeracy - Proficiency in using numbers at appropriate levels of accuracy
- Computer Literacy - Proficiency in using a varied range of computer software
- Self-Management - Able to work unsupervised in an efficient, punctual and structured manner. To examine the outcomes of tasks and events, and judge levels of quality and importance
- Information retrieval - Able to access different and multiple sources of information
Subject specific skills
- PS11 Problem-solving skills including the demonstration of self-direction, initiative and originality
- SK2 Demonstrate a systematic understanding of fundamental physicochemical principles with the ability to apply that knowledge to the solution of theoretical and practical problems
- PS16 The ability to work in multi-disciplinary and multi-skilled teams
- SK4 Demonstrate, with supporting evidence, their understanding of synthesis, including related isolation, purification and characterisation techniques
- SK6 Develop an awareness of issues within chemistry that overlap with other related subjects
- SK9 Read and engage with scientific literature
- SK11. Reading and engaging with scientific literature.
- CC1 the ability to demonstrate knowledge and understanding of essential facts, concepts, principles and theories relating to the subject areas covered in their programme
- CC2 the ability to apply such knowledge and understanding to the solution of qualitative and quantitative problems that are mostly of a familiar nature
Resource implications for students
No additional resource implications
Talis Reading list: http://readinglists.bangor.ac.uk/modules/fxx-2202.html
'Inorganic Chemistry', 3rd Ed., C.E. Housecroft and A.G. Sharpe (Essential). 'Chemistry 3', Burrows, Holman, Parsons, Pilling and Price (Essential). 'Periodic Table at a Glance', M.A. Beckett and A.G.W. Platt, Blackwell, 2006 (Recommended).
Pre- and Co-requisite Modules
Courses including this module
Compulsory in courses:
- F100: BSC Chemistry year 2 (BSC/C)
- F102: Chem with Europ Exper year 2 (BSC/CEE)
- F105: BSc Chemistry with International Experience year 2 (BSC/CHIE)
- F103: BSC Chem with Ind Exper year 2 (BSC/CIE)
- F104: MChem Chemistry year 2 (MCHEM/CH)
- F106: MChem Chemistry with International Experience year 2 (MCHEM/CHIE)
- F101: MChem Chemistry with Industrial Experience year 2 (MCHEM/CIND) | <urn:uuid:4717cff9-7299-4fdb-857d-21439bdba4e6> | CC-MAIN-2019-35 | https://www.bangor.ac.uk/courses/undergrad/modules/FXX-2202 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313436.2/warc/CC-MAIN-20190817164742-20190817190742-00394.warc.gz | en | 0.864377 | 1,279 | 2.671875 | 3 |
The day is winding down and it’s getting close to bedtime. Your eyes get heavy and you just can’t stop yawning. Your circadian rhythm, also known as your biological clock, is telling your body that it’s time to sleep.
We typically run on 24-hour sleep-wake cycles cued by sunlight. Once the sun goes down and you drift off to sleep, your brain begins rotating through a series of neurological phases. This is the time when your daily experiences convert into long-term memories.
During the first stage of sleep, your body begins to relax. Your heart rate and breathing slow and your muscles may twitch.
As you relax further into sleep, your eyes stop moving and your body temperature drops. Your muscles grow limp.
“Stage three is what we like to call ‘deep sleep,’” Dr. Levinson says. “This stage of sleep is the most refreshing and what causes you to wake up feeling renewed and ready to take on the day.” Waking up is hardest during this stage, as your body is completely relaxed.
During the REM (rapid eye movement) stage, your eyes move quickly behind your eyelids and your heart rate begins to rise. You do most of your dreaming during this stage. Dr. Levinson says, "The reason we don't act out our dreams is because the brain stem releases a chemical called GABA during REM sleep that temporarily restricts movement."
REM sleep lasts about 10 minutes before cycling back to stage one and so on. We typically rotate through these stages a few times throughout the night. “Practicing good sleep hygiene will help establish a strong circadian rhythm and allow you to sleep better during the night,” says Dr. Levinson.
Talk to your doctor if you experience trouble sleeping for two or more weeks. | <urn:uuid:39c862b3-e0f7-4d05-9c86-e671307ca20a> | CC-MAIN-2020-05 | https://www.sharp.com/health-news/what-happens-to-your-body-when-you-sleep.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00068.warc.gz | en | 0.946985 | 381 | 3.171875 | 3 |
Submitted By nancy1445
Juveniles convicted as adults: Unconstitutional. How does placing a child in an adult prison, where they risk sexual abuse that can eventually lead to suicide, teach them a lesson? Juveniles who commit serious crimes are usually seen as convicts or criminals who should pay the price for what they committed, even if that means being sentenced as adults, occasionally with no parole. Some of these juveniles being tried as adults suffered psychological traumas at home, caused by their parents or their own family members. People need to know what can be done to prevent these crimes. Placing a juvenile in an adult trial is unconstitutional and abuses their rights. Many of the juveniles prosecuted as adults are placed in adult jails pretrial, where they are at risk of harm, abuse and suicide. People need to understand the importance and dangers of incarcerating a child in an adult correctional facility. The administration of justice should implement meaningful juvenile justice reforms, such as rehabilitation centers and counseling, and should likewise require psychological testing before a juvenile is prosecuted in an adult trial, so the U.S. can uphold the dignity and human rights of our children and ensure that no child in our nation is considered a throwaway person. Juvenile crime rates soared in the mid-1990s, which is why every state enacted strict laws against juveniles and began incarcerating minors as adults. Those high rates of juvenile delinquency dropped quickly by 1997; even homicide rates dropped to their lowest in 25 years. Now that delinquency rates have dropped, Hansen states, "critics say such policies cause grave harm to the nation's youth." Placing juveniles in adult trials is harming our youth, and the U.S. seems not to care about them. Even though juvenile crime rates have declined, "Still, the nation's revamped juvenile justice system continues to…
| <urn:uuid:ff3a32d7-8814-4ee7-b0e4-ecdd82e1445d> | CC-MAIN-2019-04 | http://lirykmusik.info/essay-on/Jeveniles/248790 | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00168.warc.gz | en | 0.96846 | 376 | 2.546875 | 3 |
The 'Religion and Culture' lectures are rooted in the strong assertion that the radical positivist and secular inclinations of Enlightenment humanism have failed either to uproot the religious and spiritual tendencies of Western culture or to replace them with an alternative account of moral behavior on the basis of reason. The return of philosophy in the 18th and 19th centuries to romanticism and mythology is an indication of this, as is the persistent 20th-century search both for description and transcendence of the depth of human experience.
Dawson maintains that religion has a unifying effect, even in its obvious and inherent dynamism. 'The cultural function of religion is both conservative and dynamic: it consecrates the tradition of a culture and it also provides the common aim which unifies the different social elements in it' (chapter 1, 24). However, religion is also subject to the changes of society through the ages. 'The relation between Religion and Culture is always a two-sided one. The way of life influences the approach to religion, and the religious attitude influences the way of life' (chapter 2, 46). By this argument, Dawson asserts that, just as some social sciences would suggest that religion is merely a product of social conditioning, religious historians can argue that religion initiates profound and even revolutionary changes within culture (chapter 3).
To do this Dawson demonstrates the relationship between religion and culture through: prophecy, mysticism and revolutionary social change (chapter 4); monasticism, priesthood and notions of sacrifice and self-denial (chapter 5); divine kingship and monarchy (chapter 6); natural and sacred science (chapter 7), and sacred law and the social order (chapter 8); and, finally, notions of intuitive and salvific spiritual discipline (chapter 9). The practices of Buddhism, Christianity, Islam and Judaism are considered throughout, while the conclusion emphasizes the current period (immediately following the Second World War) as a crucial turning point in the interaction of religion and culture, especially in Europe (chapter 10).
September 1, 2009
Researchers Study Possible Responses To Climate Emergencies
The future of the Earth could rest on potentially dangerous and unproven geoengineering technologies unless emissions of carbon dioxide can be greatly reduced, a new study has found.
The report (published September 1, by the Royal Society, the UK's national academy of science) found that unless future efforts to reduce greenhouse gas emissions are much more successful than they have been so far, additional action in the form of geoengineering will be necessary to cool the planet. However, the report identified major uncertainties regarding the effectiveness, costs, and environmental impacts of geoengineering technologies.
"Reducing our greenhouse gas emissions is more important than ever," said coauthor Ken Caldeira of the Carnegie Institution's Department of Global Ecology, "but even with our best efforts, the Earth is likely to continue warming throughout this century due to inertia in the climate system. Cutting emissions can reduce but cannot eliminate the risk of a climate emergency." Possible climate emergencies include rapid collapse of the Greenland ice sheet into the sea causing major sea level rise, a shift in rainfall patterns causing massive global crop failures, or melting Arctic permafrost causing catastrophic release of the powerful greenhouse gas methane.
Professor John Shepherd, who chaired the study, said, "It is an unpalatable truth that unless we can succeed in greatly reducing CO2 emissions we are headed for a very uncomfortable and challenging climate future, and geoengineering will be the only option left to limit further temperature increases. Our research found that some geoengineering techniques could have serious unintended and detrimental effects on many people and ecosystems, yet we are still failing to take the only action that will prevent us from having to rely on them. Geoengineering and its consequences are the price we may have to pay for failure to act on climate change."
The report assesses the two main kinds of geoengineering techniques: Carbon Dioxide Removal (CDR) and Solar Radiation Management (SRM). CDR techniques address the root of the problem (rising CO2) and so have fewer uncertainties and risks, as they work to return the Earth to a more normal state. They are therefore considered preferable to SRM techniques, but none has yet been demonstrated to be effective at an affordable cost, with acceptable environmental impacts, and they only work to reduce temperatures over very long timescales.
SRM techniques act by reflecting the sun's energy away from Earth, meaning they lower temperatures rapidly, but do not affect CO2 levels. They therefore fail to address the wider effects of rising CO2, such as ocean acidification, and would need to be deployed for a very long time. Although they are relatively cheap to deploy, there are considerable uncertainties about their regional consequences, and they only reduce some, but not all, of the effects of climate change, while possibly creating other problems. The report concludes that SRM techniques could be useful if a threshold is reached where action to reduce temperatures must be taken rapidly, but that they are not an alternative to emissions reductions or CDR techniques.
"If we are confronted with a climate emergency and decide we cannot tolerate any more warming, engineering some system to deflect more sunlight back to space would likely be the primary option available to cool the Earth quickly," said Caldeira. "Of course, we need to make sure that tinkering with our environment in this way would not just cause bigger problems. We need to study these options now so that we can understand the pluses and minuses in case we need to deploy them."
Professor Shepherd added, "None of the geoengineering technologies so far suggested is a magic bullet, and all have risks and uncertainties associated with them. It is essential that we strive to cut emissions now, but we must also face the very real possibility that we will fail. If 'Plan B' is to be an option in the future, considerable research and development of the different methods, their environmental impacts, and governance issues must be undertaken now. Used irresponsibly or without regard for possible side effects, geoengineering could have catastrophic consequences similar to those of climate change itself. We must ensure that a governance framework is in place to prevent this."
For every "a-ha!" moment in science, there are more than a few "hmmms."
The thrill of discovery is invariably preceded by periods of perplexity, when experiments yield confusing or contradictory results, a new approach becomes a dead end, or number-crunching leaves researchers numb.
Science is often portrayed as a dispassionate search for truth, which overlooks the passion that so many researchers bring to their work. What attracted many of them to the field, after all, is the opportunity to find new answers to knotty questions.
If the bench work of science rests on the methodical performance of well-defined procedures, the essence of scientific thinking is precisely that it is not routine, but often requires intuitive or logical leaps. The solution to a problem may lie in finding a new use for old technology, in viewing discrepancies from a different angle, or framing a question in a novel way.
At Dana-Farber, where cancer research and care are deeply entwined, the motivation for solving scientific puzzles is never purely intellectual, regardless of the modern focus on molecular networks and the genetic eccentricities of cells. The following stories help illustrate the stumbling blocks basic researchers often face, and the skills they use to overcome them.
Nature's knack for packaging DNA in ultra-tight bundles within the cell nucleus poses a challenge for scientists like Dana-Farber's Brendan Price, PhD, who studies how cells repair breaks in their DNA.
Rather than jamming genetic material haphazardly into the nucleus, the cell neatly packs the molecular equivalent of a household's worth of clothing into a single suitcase. As part of this process, the cell wraps DNA around clusters of protein molecules called histones, forming a bead-like string of structures called nucleosomes.
The arrangement is so efficient that the DNA within it takes up about 250 times less space than if it were loose. Such compactness enables the cell to store six feet of DNA in a nucleus .0002 inches in diameter.
When a section of DNA breaks – as a result of exposure to radiation or chemicals, or through mistakes in cell division – cancer can result. In response, the cell dispatches several crews of proteins to fix it. The proteins thread their way through the nucleosomes, gently loosening them to clear a path to the broken section. Once there, they summon other proteins to piece the strands of DNA back together.
Ideally, researchers would mimic nature's own techniques for DNA repair. It isn't possible to replicate in a laboratory, however, a process that took thousands of years to evolve and involves dozens of different proteins – not all of which are known.
The question of how to locate the damaged regions where the nucleosomes have been unpacked absorbed scientists for years. At first, Price and his colleagues tried making artificial histones and tagging them with fluorescent tracers, hoping damaged cells would take in the fabricated histones and splice them into the damaged section of their DNA. All such attempts failed.
Then, inspiration struck. "I've always found it useful to go back and read the original studies to understand what investigators back then were doing," says Price. "It's a way of coming to the field fresh, without any preconceptions."
While perusing scientific articles from 30 and 40 years earlier, Price read that the strength of the DNA-histone bond varies with the level of salt in the surrounding fluid. By increasing the salt concentration, Price and his colleagues were able to weaken the DNA "wrapping" at the sites of damage, allowing them to remove the histones and DNA-repair proteins and giving them easy access to the damaged DNA. The technique proved so useful it has since been adopted by scientists around the world.
The inventor who realized that microwaves – originally used in radar and long-distance telephone communication – could be harnessed to cook food has something in common with Dana-Farber's Kimberly Stegmaier, MD.
Stegmaier and her colleagues have developed powerful tools for cancer drug discovery using techniques borrowed from other areas of biological research. Her area of interest lies deep within the cell's machinery for converting genetic information – stored in DNA – into proteins that carry out the cell's business.
Malfunctions in this machinery can give rise to a variety of cancers. Sometimes, researchers are able to identify the individual protein cogs at fault and block them with targeted drugs. In other cases, the guilty proteins are unknown, and even if they were known, they may be impossible to reach with current drugs – hence their reputation as "the undruggables."
Stegmaier devised a solution. "We asked ourselves, 'Are there technologies available now, which were not available 10 years ago, that could be used to screen large numbers of chemicals as potential cancer drugs?'" she remarks. "We realized that microarrays – DNA chips that genetically 'fingerprint' cells by charting the activity or inactivity of thousands of genes at a time – would fit the bill."
The Dana-Farber team used microarray technology to compare the genetic fingerprints of normal, mature cells with those of cancer cells. They then screened entire "libraries" of drugs to find which ones converted the cancer cells' fingerprint to normal.
At this point, however, another challenge loomed: While microarrays could be used to define genetic fingerprints, they weren't a practical way of screening many thousands of molecules at a reasonable price.
Again, the solution was the novel use of an existing technology – in this case, one that uses fluorescent beads to indirectly measure minute amounts of RNA, a courier of genetic information and an indicator of the activity level of specific genes. The technology meshed perfectly with the format used in drug screening.
Since then, Stegmaier's group has identified several compounds with promise for treating acute myeloid leukemia and other cancers. "Bringing these two technologies together was like pointing spotlights from two different angles on the problem," she remarks. "The intersection was the key to narrowing our search for drug candidates."
Matthew Freedman, MD, and his colleagues had assembled a tidy case against a section of chromosome 24 as a co-conspirator in several kinds of cancer including colon, breast, and prostate.
The site, just a few units of DNA long, is located in what is known as a "gene desert," a region of chromosome devoid of genes that hold the code for making proteins. Without protein blueprints of its own, the site is thought to be involved in gene regulation – cranking the activity of nearby genes up or down to produce the proper amount of certain proteins.
Studies have shown that subtle variations at the site – changes in the placement of a handful of letters of the genetic code – are associated with many types of cancer. The question for scientists has been: Which gene or genes does the risk-increasing site regulate, and how?
Freedman and his associates thought they had the perfect candidate. The gene closest to the risk site is Myc (pronounced "mick"), one of the most notorious cancer-causing genes. The proximity suggested that the site interacts with Myc, and that changes in that interaction could have a role in cancer. Researchers needed only to show that variations in the risk site were linked to changes in Myc's activity, and the case would be solved.
However, Freedman's experiments found that the three major variations at the risk site produced no differences in Myc activity. "On paper and in theory, the connection seemed obvious, but we weren't able to prove it," Freedman relates. "We had to find another technique that would implicate Myc."
The alternative turned out to be chromosome conformational capture, a process that enables researchers to study how different parts of the genome interact in three dimensions. With this technique, Freedman discovered that the risk site has a "long-range loop" of DNA fiber that links to Myc like a molecular lasso.
"We demonstrated that these two regions are indeed co-localized [meaning they occupy the same space] and communicate with each other," Freedman says.
"Knowing that the risk site and Myc are in contact presents us with an attractive target for devising new cancer therapies."
Paths of Progress Spring/Summer 2012 Table of Contents
Dana-Farber Cancer Institute, 450 Brookline Avenue, Boston, MA 02215
Reflect: Think about the readings and activities of the course thus far. Include thoughts about your own experiences and what others have shared.
Create: Generate a genuine question about inquiry or inquiry-based learning (one that you’d like to know the answer to or to understand better). Ideally, this is indexed in some way to the readings and activities, the common experience of the class.
Discuss -> Create: Work with a partner to select one (or two) questions. Then, refine the question(s) so that they are most likely to stimulate rich discussion and inquiry by the class, as well as giving you the answers you seek. Use what you know about inquiry-based learning to make the question generative.
Ask -> Discuss: Present the question to the class and lead a discussion that starts with the question. The question may help you converge on an answer, or diverge to interesting additional issues.
Investigate: Continue the inquiry beyond the class time.
The questions below were generated by students in the Inquiry-Based Learning class on May 21, 2015. They sparked excellent discussions, which were thoughtful, connected, tied to the readings and shared experiences, and neither overly specific nor vague and general.
- What role, if any, can an outsider play with respect to situated, community-based inquiry? This is both a practical and moral question.
- How do we identify (or distinguish) between open learning in general and practices that embody the full progressive impulse, with its democratic aims? E.g., the Quest to Learn schools structure the curriculum around video games and play. They're very engaging, but may not have an explicit agenda to promote democracy.
- How can we foster inquiry experiences with limited or inconsistent time, for example, different kids showing up on different days of an after school program?
- How do we initiate an inquiry process for adults, including graduate/professional students, and working professionals?
- How can we structure a unit so that students develop important skills versus simply carrying out some activity?
- How can we use inquiry-based learning in real settings where there are many students and limited access to materials?
These really are great questions!
Arthroscopic Hip and Hip Replacement Surgery
Arthroscopic surgery is a collection of minimally invasive procedures designed around the use of an arthroscope, a long, flexible tube with a camera on the end. This tool allows the surgeon to visualize the site of surgical manipulation without making a large incision and opening up the joint. As a result, there is less risk and faster recovery associated with arthroscopic surgery.
The hip-joint (acetabulofemoral) benefits from a wide range of arthroscopic surgeries. Some of these operations include:
- Removal of bone spurs (impingement repair)
- Cartilage repair
- Loose body removal
- Torn labrum correction
Hip arthroscopy procedures are usually performed on an outpatient basis and recovery is accelerated compared to open-surgery patients. In fact, depending on the operation, the patient may be allowed to start rehabilitation immediately following surgery, stretching the day after or even the night of surgery. Range of motion can generally be reclaimed within a week and walking may be possible within two to three weeks.
It is important to note that arthroscopy will not be used to treat arthritis of the hip due to its limited effectiveness. Full hip replacement also cannot be accomplished using just arthroscopy.
All surgical procedures carry some degree of risk for the patient's well-being, and arthroscopy is no exception. The usual surgical concerns, such as infection, are minimized, but the anatomy of the hip itself presents vulnerable nerves and blood vessels that may complicate the procedure. This contributes to the fact that hip arthroscopy has progressed and evolved at a much slower rate than shoulder and knee arthroscopy: those more external joints are much easier to manipulate and contain fewer fragile structures.
A hip fracture is a break in the top of the femur (thighbone) where the bone angles toward the hip-joint. If the break occurs within two inches of the joint, it is called a femoral neck fracture. If it occurs between two and four inches from the joint, it is known as an intertrochanteric fracture. (A break further down the bone is classified as a broken femur rather than a broken hip.) Femoral neck fractures require more extensive surgery.
Hip fractures usually make it too painful for the person to stand. The leg may turn outward or shorten. They generally require hospitalization and surgical repair.
A person’s risk for suffering a hip fracture increases if he or she is over 65, female, or small-boned; has a family history of hip fractures; has osteoporosis or low calcium, which leads to bone weakness; smokes or uses alcohol excessively; is physically or mentally impaired; or takes medications that cause weakness or dizziness. Hip fractures are a common and serious problem for the elderly, for whom a simple fall in the home may be enough to break the bone.
Hip Replacement Surgery
The hip is a “ball-and-socket” joint where the “ball” at the top of the thigh bone (femur) fits inside the “socket” in the pelvis (acetabulum). A natural substance in the body called cartilage lubricates the joint. When the bone and/or cartilage of the hip becomes diseased or damaged from arthritis, hip fractures, bone death or other causes, the joint can stiffen and be very painful. A total hip replacement may be recommended for patients who experience severe hip pain and whose daily lives are affected by the pain.
In a total hip replacement, the diseased bone and cartilage are replaced with a metal ball and plastic cup. The artificial joint, called a prosthesis, may be cemented in place, may be cementless, or may be a hybrid of both. The surgery takes from two to four hours, followed by another few hours spent under observation in a recovery room. Patients usually enjoy immediate relief from joint pain after the surgery.
Physical therapy starts as soon as the first day after surgery with the goal of strengthening the muscles and preventing scarring (contracture). Therapy begins with the patient sitting in a chair and progresses to stepping, walking and climbing stairs, first with crutches or walkers and then without supportive devices. Occupational therapy and at-home exercises help patients learn how to use the prosthesis in everyday activities.
Total hip replacement is successful in over 95% of well-selected patients. On average, replacements last 15-20 years. Some patients enjoy full use of the prosthesis after 25 years or longer.
Read more about total hip replacement at www.medicinenet.com
3.7. Chapter 3 Exercises
There are 3 syntax errors in the following code. Fix it to print correctly without errors. It will print, “Your name is Carly and your favorite color is red.”.
You will get an error if you try to run the following code. Fix the code to print correctly without errors. It should print, “Your name is Carly and your age is 19.”
Note: Don’t forget that to turn an int into a string you do something like
x is the int you want to turn into a string.
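A minimal sketch of the pattern the note describes. The name and age values come from the exercise's expected output; the starter code itself isn't shown, so the variable names are illustrative:

```python
name = "Carly"
age = 19  # an int; "..." + age would raise a TypeError

# str(age) converts the int to a string so it can be concatenated
message = "Your name is " + name + " and your age is " + str(age) + "."
print(message)
```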
Use string slicing to get "giant alligator" from sentence and store it in s1. You should not be typing the string "giant alligator" yourself; you should be getting the right part of what is stored in sentence. The print will put |'s around your output to make it clear if you have a space at the start or end of s1.
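The exercise's actual sentence string isn't shown, so the one below is invented for illustration; the slicing approach is the point:

```python
sentence = "A giant alligator was sunning itself on the bank."

# find the start index instead of hard-coding it, then slice out
# exactly the length of the target phrase
start = sentence.find("giant")
s1 = sentence[start:start + len("giant alligator")]

# the |'s make a stray leading or trailing space easy to spot
print("|" + s1 + "|")
```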
Using the variables given, modify the print statement to print
"A car travelling at 70 mph takes 2.0 hours to go 140 miles."
Make sure to print the variables, not the values you know they contain.
If we were to change distanceTravelled, your program should still print a mathematically correct message.
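One way to satisfy the exercise. Since the starter code isn't shown, the variable names speed and distanceTravelled are assumed here; the time is computed from the other two values rather than hard-coded, so the sentence stays correct if either value changes:

```python
speed = 70
distanceTravelled = 140

# derive the hours from the other two values
hours = distanceTravelled / speed  # 140 / 70 == 2.0

print("A car travelling at " + str(speed) + " mph takes " + str(hours)
      + " hours to go " + str(distanceTravelled) + " miles.")
```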
Write code below to get at least 3 values from the user using the input function and output a mad lib (which will use the input to tell a silly story).
This problem is not automatically checked. Make sure you are using variables to build your output and that the story uses the values you type in as input. Try giving different inputs and make sure that the story uses them.
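One possible shape for a solution. The prompts and story here are invented, and the story-building is pulled into a function only so it can be exercised without typing anything; in the exercise itself the three values would come from input():

```python
def build_story(animal, verb, place):
    # build the output from variables, not hard-coded words
    return ("The " + animal + " decided to " + verb +
            " all the way to the " + place + "!")

# In the exercise the values would be gathered like this:
#   animal = input("Name an animal: ")
#   verb = input("Name an action: ")
#   place = input("Name a place: ")
print(build_story("walrus", "tango", "library"))
```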
Video Game Uncovers Gender-Based Work-Out Differences
A Michigan State University researcher is using a video game to uncover key differences in what motivates men and women to go the “extra mile” during a workout.
The study, led by Deborah Feltz, a University Distinguished Professor of kinesiology, gave participants the opportunity to watch an avatar of themselves exercise alongside a virtual partner, and was featured in the Games for Health Journal.
Results showed that male participants increased the duration of their exercise by an average of 12.5 minutes, while women showed almost no effect.
“For women, to have the avatar in front of them was almost enough,” Feltz said. “Women responded more positively to working out with their avatar, rather than competing with a partner.”
Eighty-two participants took part in the study and were evaluated on their performance during 12 sessions of aerobic exercise consisting of two types of partners; a consistently superior partner and another superior partner who showed signs of fatigue.
Feltz said that previous research has shown women are more motivated by the sense of being part of a team and not letting a partner down, instead of competing against someone.
“Instead of having partners, it might be more enjoyable for both men and women if they played with their partner against a team.”
But she added that because a team environment helps build social identity and comradery, this could be even more of a motivator for women in a workout setting.
The National Heart, Lung and Blood Institute at the National Institutes of Health funded the study. Other MSU researchers on the project included Karin Pfeiffer, associate professor in kinesiology, Norbert Kerr, professor of social science psychology, Brian Winn, associate professor in media and information and doctoral students Stephen Samendinger and Emery Max.
– via MSU Today
For American lawyers, the answer is simple: "The English had used their own Declaration of Rights to depose James II and these acts were deemed completely lawful and justified," they say in their summary.
To the British, however, secession isn't the legal or proper tool by which to settle internal disputes. "What if Texas decided today it wanted to secede from the Union? Lincoln made the case against secession and he was right," they argue in their brief. - BBC
Consensus Scarce on Future of Overseas Bases - Politico
Obama to Bypass Congress on Mortgages - CBS News
Note Shows Big Power Split Over Iran - CBS News
Obama Signs Free-Trade Pacts - The Washington Times
TSA Misses Loaded Gun in Bag at LAX - USA Today
Conceptos Plásticos is building homes for low-income families using recycled plastic bricks.
Following the government’s 583 million dollar investment in housing construction back in 2013, Colombia’s housing deficit has dropped substantially. According to the Departamento Administrativo Nacional de Estadistica (DANE) report, the improvement is largely due to the implementation of various social housing projects aimed at helping low-income families. Conceptos Plásticos, founded by architect and social entrepreneur Oscar Andres Mendez, is a local enterprise that is building homes and other infrastructure using sustainable bricks made out of plastic waste.
By recycling plastic from electronic waste, packaging and tyres, Conceptos Plásticos has developed a building material called Bloqueplas. The waste collected is melted down, poured into a mould and turned into plastic blocks with a joining design that enables a block to slot into another like Lego pieces.
Aside from being fire and earthquake resistant, the alternative building material provides a durable shelter that requires no maintenance and that is 30 per cent cheaper than traditional housing systems in Colombia’s low-income communities. In addition to this, Conceptos Plásticos reduces water and energy consumption, as well as CO2 emissions by recycling waste that would otherwise end up in landfills.
Skilled labour is not required to build with the plastic bricks. After initial training on how to use the blocks, as few as four community members can build a whole house in just five days, or a shelter to temporarily house many families in only ten days. The building blocks are easy to dismantle, making them also ideal for temporary or mobile shelter solutions.
To date, Conceptos Plásticos has worked with the Colombian government, various NGOs and private organisations to build homes, temporary shelters, classrooms and community halls using its signature building material.
Via Design Indaba
Indonesia is now ranked as the world's fourth-largest coffee producer. Indonesian coffee's history allegedly begins with Muslim pilgrims returning from the Middle East, who brought coffee beans to India in the early 1600s.
In 1696, the Dutch brought coffee to Batavia (modern-day Jakarta) on the island of Java. Batavia soon became the main supplier of coffee to Europe. Sumatran coffee beans are some of the heaviest, smoothest, and most complex coffees in the world. The Toraja region is the source of most of the high quality Sulawesi coffee. Coffee from Java does not display the same body and richness as coffees from Sumatra or Sulawesi, since most Java coffee is wet-processed. There is, however, a slight spicy-smokiness evident in some Java coffee to set it apart from its neighbors.
The Story of Publius Quinctilius Varus
In September 9 AD an army of three Roman legions with supporting units of cavalry and auxiliaries, around 20,000 men in all, was annihilated in a running battle which lasted for three days. Lulled into a false sense of security by the Germanic chief Arminius, the Roman governor Publius Quinctilius Varus led his army into a trap that only a handful managed to escape alive. The loss of the Varian Legions was a massive psychological blow to the Roman Empire and, after 9 AD, the Romans gave up their plans to hold Germania and withdrew to the west bank of the Rhine.
Roman Accounts of the Battle
Other Primary Source Material
The Archaeology of the Clades Variana
Modern Accounts of the Battle
The Roman Army
The Early Germans
Roman Accounts of the Battle

- Cassius Dio - Roman History (Book 56, 18-24): The longest and most detailed account of the revolt, the battle and its aftermath. Dio gives valuable details of the situation in Germany before the uprising, and his account of the battle is the best which survives.
- Gaius Velleius Paterculus - Roman History: A briefer account. Paterculus' passage lays blame for the disaster very much on Varus and gives the names of two of his senior officers.
- Cornelius Tacitus - The Annals (Book 1, 61): Here Tacitus describes Germanicus Caesar's discovery of the remains of Varus' legions while campaigning in northern Germany.
- Florus - Epitomae (Book 22, 88): Florus' account gives the background of Drusus' conquest of the province of Germania and the circumstances leading up to the revolt, as well as some lurid details of the torture and mutilation of captured Roman officers.
- Cornelius Tacitus - The Annals (Book 2, 88): In this passage Tacitus gives an account of a Chattian plot to poison Arminius and his subsequent assassination at the hands of his own people, the Cherusci. (English and Latin)

Other Primary Source Material

- Strabo - Geographica (Chapter Seven, 1.3-1.5): These three passages from the Greek geographer Strabo give valuable information on the geography of Germania. The second passage - VII 1.4 - also gives a brief account of the battle and the names of several of the chiefs involved in the uprising against Varus and the subsequent wars with Germanicus. The translation and notes are courtesy of Iris Kammerer.
- Maps of Germania: Ptolemy's Geography gives us the names of settlements and forts within Germania, many of which correspond to names found in Tacitus and Strabo. Here is a modern chart placing many of these recorded names, along with three modern maps showing the Germanic tribes at the time of Varus, the Roman campaigns in Germania and the location of various Roman bases and camps in the province.

The Archaeology of the Clades Variana

- The Kalkriese Excavations (in German): The University of Osnabrueck in Germany seems to have discovered the site of the Clades Variana. A combination of Roman military artefacts and coins dating to 9 AD indicates that the Kalkriese site is, at least, one associated with the battle. A short abstract is available in English.
- Summary of Evidence from the Kalkriese Site (in English): A summary of the archaeological evidence which links the Kalkriese site to the Clades Variana (coming soon).
- Bibliotheca Germaniae - A Reconstructed Cheruscian Village: Iris Kammerer's excellent site, including photos of a reconstructed first-century Cheruscian village in Germany. It also includes images of a reconstructed Treveran village from the same period, images of Germanic defences and much more.
- The Abandoned Roman Colony at Waldgirmes: A site detailing the ongoing excavations of a Roman colony town which was abandoned and burnt to the ground in the wake of the Varian Disaster. Includes reconstructions of the forum, the equestrian statue of the Emperor Augustus and an artist's rendition of how the colony may have looked circa 9 AD. (German-language site, with English and French versions available)
- The Kalkriese Lorica Find: One of the most important archaeological finds at the battle site at Kalkriese was a plate of Roman lorica segmentata - the earliest find of this type of armour so far. This discussion of the find and its significance is from Matthew Amt's excellent Legio XX pages (see below).
- Die Varusschlacht im Osnabrücker Land: Another German-language site devoted to the Kalkriese finds and the Varian Disaster. This one has extensive source material and other useful links.

Modern Accounts of the Battle

- Barry Darling Coins - Varus Site: An excellent and up-to-date reconstruction of the battle, with particular reference to the evidence provided by the many coins found on the site. Highly recommended.
- FalcoPhiles - The Teutoburg Massacre 9 AD: Louise Dade's site devoted to the Marcus Didius Falco mystery novels of Lindsey Davis details the background and events of the 'Teutoburg Massacre', as Roman fans call Varus' crushing defeat. The battle forms a detailed part of the background to Davis' novel The Iron Hand of Mars, which is worth a read despite Davis mistaking the Germanic tribes for 'Celts'.
- Channel 4 - Secret History: Lost Legions of Varus: In late 2001 the UK's Channel 4 'Secret History' series featured a documentary on the Varian Disaster, Lost Legions of Varus. The documentary featured extensive information about the Kalkriese excavations and a reconstruction of the battle featuring members of British Roman re-enactment groups. Some great photos of the filming can be found on the Legio Secunda Augusta page (scroll down to 'The Varus Disaster'); have a look at the rest of the Legio Secunda Augusta site while you're there.
- Hermann and the Teutoburger Wald: A very old-fashioned and highly outdated account of the battle and its background. It is one indication of the way in which the events surrounding the battle have been interpreted by Germanic nationalists and nineteenth-century Romantics. Read with caution.
- Arminius the Cheruscian: A reasonably well-researched article from the neo-pagan magazine The Runestone. It has a slightly romantic bias towards the Germans, but gives detailed information about the period after the battle and the end of Arminius' life.

Discussions

- The Varus Forum: Contribute to the Varus Forum and offer your suggestions, ideas and information about the battle, the background history and the screenplay. All contributions welcome.

The Roman Army

- The Roman Army Page: One of the most extensive and carefully compiled collections of articles and resources on the Roman Army, Sander van Dorst's invaluable site is a must for anyone interested in Roman military affairs.
- The Roman Army Forum: Lively discussion of all aspects of Roman military history by historians and re-enactors. Discuss Boudica's last battle, helmets, shield grips and, of course, the Varus Film Project.
- Legio XX Home Page: Matthew Amt's extensive and well-researched site devoted to the Legio XX Roman re-enactment group, based in the US. The site has many useful historical articles about the Roman military system and plenty of photos of Legio XX in action.
- Romanarmy.com: An extensive and rapidly expanding resource site devoted to all aspects of the Roman Army. It includes the Roman Army Talk discussion board and a large number of links, articles and useful items of interest. They have also awarded 'Clades Variana' their Corona Aurea award for website excellence.
- Loricae Romanae: Dave Pearson's careful examination of the armour used in the Roman Army, with very detailed information on Roman mail, scale, lorica segmentata and muscled cuirasses. It includes excellent primary iconographical material from sources such as Trajan's Column and reconstruction diagrams of the various finds of segmentata.
- The Calleva Film Project: Film-maker Sean Caveille is producing a documentary which aims to bring Roman Silchester to life with the help of local archaeologists, re-enactors and various Roman enthusiasts. Support another Roman-oriented historical production.

The Early Germans

- The Germanic Heritage Page: A collection of articles, links and resources devoted to the early Germanic peoples. Language, literature, history, pagan mythology, runes and even pastimes and games are extensively covered.
- Tacitus - Germania: Cornelius Tacitus drew on Livy's (lost) German Wars and the Geographica of Strabo, as well as reports and observations of Roman veterans, to write this, the most detailed account of the peoples and customs of ancient Germania. This is Thomas Gordon's translation, but a Latin text version is also available online.
- *Theudawurdò* - Home of the Germanic-L Discussion List: *Theudawurdò* ('Words of the Tribe') is the home page of the Germanic-L mailing list, devoted to discussion of all aspects of the early Germanic peoples from prehistory to circa 800 AD.
February 14, 1966 saw the introduction of decimal currency to Australia. On Changeover Day, the old system of pounds, shillings and pence gave way to the new dollars and cents. A year earlier, an extensive public education program had been launched by the Decimal Currency Board. The following advertisement is from the National Film and Sound Archives collection.
Schools introduced the new currency to their students. Television advertisements were aired featuring a cartoon character called Dollar Bill. The Retail Traders' Association held special lectures for members. In the lead-up, The Courier-Mail ran a regular column, ABC of Decimals, which advised the public on conversions and practical applications. There was even a telephone service, The Dollar Jills, on hand to answer questions.
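The arithmetic those conversion guides taught followed directly from the changeover rate of one pound to two dollars, which made a shilling worth ten cents and 1.2 old pence worth one cent. The sketch below is an illustration of that arithmetic only, not the Decimal Currency Board's official conversion tables (which used a fixed rounding schedule for odd pence amounts); the function name and rounding choice are our own:

```python
def to_decimal(pounds, shillings, pence):
    """Convert pre-1966 Australian currency to (dollars, cents).

    Changeover rate: 1 pound = 2 dollars, so 1 shilling = 10 cents
    and 1.2 pence = 1 cent.  Fractions of a cent are rounded to the
    nearest cent here, which only approximates the official tables.
    """
    total_pence = pounds * 240 + shillings * 12 + pence
    cents = round(total_pence / 1.2)
    return cents // 100, cents % 100
```

At this rate ten shillings converts to exactly one dollar, and sixpence to five cents.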
All of these preparations paid off, with the changeover remarkably smooth. Prime Minister Harold Holt said Australians deserved a “pat on the back for the good-natured way they have accepted decimal currency into their daily routine”. Brisbanites could take their share of the credit, though there were a few complaints due to confusion over conversion and suspicions of overcharging by some shopkeepers.
Three weeks before the changeover, The Courier-Mail conducted a street survey to record what Brisbane citizens thought of the new currency. It received a mixed reception: "[The notes] look like lolly wrappers. They would go better on a jam tin than in a wallet" was one response. Another man expressed his indifference: "I wouldn't really care if Ned Kelly's portrait was on them, as long as I have enough".
As part of State Library of Queensland’s collection, the John Oxley Library holds a few educational materials produced by the Decimal Currency Board including – Dollar & Cents & You (1966) and For businessmen : how to change over to dollars & cents (1965). Also part of our collection are two school textbooks from 1966, which were published in Brisbane by Jacaranda Press – Third year mathematics A : decimal currency and Third year mathematics A and B : decimal currency.
YouTube clips via National Film and Sound Archive and National Archives of Australia.
Myles Sinnamon – Project Coordinator, State Library of Queensland
[First published on 12 May 2014. Updated on 14 February 2016]
REPUBLIC ACT NO. 8976 November 7, 2000
AN ACT ESTABLISHING THE PHILIPPINE FOOD FORTIFICATION PROGRAM AND FOR OTHER PURPOSES.
Be it enacted by the Senate and House of Representatives of the Philippines in Congress assembled:
Section 1. Title. - This Act shall be known as the "Philippine Food Fortification Act of 2000."
Section 2. Declaration of Policies. - Section 15 of Article II of the Constitution provides that the State shall protect and promote the right of health of the people and instill health consciousness among them.
The State recognizes that nutritional deficiency problems in the Philippines, based on nutrition surveys, include deficiency in energy, iron, vitamin A, iodine, thiamin and riboflavin. To a minor extent, the Filipino diet is also deficient in ascorbic acid, calcium and folate.
The State recognizes that food fortification is vital where there is a demonstrated need to increase the intake of an essential nutrient by one or more population groups, as manifested in dietary, biochemical or clinical evidences of deficiency. Food fortification is considered important in the promotion of optimal health and to compensate for the loss of nutrients due to processing and/or storage of food.
Food fortification, therefore, shall be carried out to compensate for the inadequacies in the Filipino diet, based on present-day needs as measured using the most recent Recommended Dietary Allowances (RDA).
Section 3. Definition of Terms. - For purposes of this Act, the following terms shall mean:
(a) BFAD - the Bureau of Food and Drugs of the Department of Health.
(b) DOH - the Department of Health.
(c) Fortification - the addition of nutrients to processed foods or food products at levels above the natural state. As an approach to control micronutrient deficiency, food fortification is the addition of a micronutrient, deficient in the diet, to a food which is widely consumed by specific at-risk groups.
(d) Fortificant - a substance, in chemical or natural form, added to food to increase its nutrient value.
(e) Micronutrient - an essential nutrient required by the body in very small quantities; recommended intakes are in milligrams or micrograms.
(f) Manufacturer - the refinery in case of refined sugar or cooking oil, the miller in case of flour or rice, or the importer in case of imported processed foods or food products, or the processor in case of other processed foods or foods products.
(g) NNC - the Governing Board of the National Nutrition Council.
(h) Nutrient - any chemical substance needed by the body for one or more of these functions: to provide heat or energy, to build and repair tissues, and to regulate life processes. Although nutrients are found chiefly in foods, some can be synthesized in the laboratory, like vitamin and mineral supplements, or in the body through biosynthesis.
(i) Nutrition Facts - a statement or information on food labels indicating the nutrient(s) and the quantity of said nutrient found or added in the processed foods or food products.
(j) Nutrition labeling - a system of describing processed foods or food products on the basis of their selected nutrient content. It aims to provide accurate nutrition information about each food. This is printed in food labels as "Nutrition Facts."
(k) Processed food or food products - food that has been subjected to some degree of processing like milling, drying, concentrating, canning, or addition of some ingredients which changes partially or completely the physico-chemical and/or sensory characteristics of the food's raw material.
(l) Recommended Dietary Allowances (RDA) - levels of nutrient intakes which are considered adequate to maintain health and provide reasonable levels of reserves in body tissues of nearly all healthy persons in the population.
(m) Sangkap Pinoy Seal Program (SPSP) - a strategy to encourage food manufacturers to fortify processed foods or food products with essential nutrients at levels approved by the DOH. The fundamental concept of the program is to authorize food manufacturers to use the DOH seal of acceptance for processed foods or food products after these products pass a set of defined criteria. The seal is a guide used by consumers in selecting nutritious foods.
(n) Unprocessed food - food that has not undergone any treatment that results in substantial change in the original state, even if it may have been divided, boned, skinned, peeled, ground, cut, cleaned, trimmed, fresh-frozen or chilled.
Section 4. The Philippine Food fortification Program. - The Philippine Food fortification Program, hereinafter referred to as the Program, shall cover all imported or locally processed foods or food products for sale or distribution in the Philippines; Provided, That, dietary supplements for which established standards have already been prescribed by the DOH through the BFAD and which standards include specifications for nutrient composition or levels of fortification shall not be covered by this Act.
The program shall consist of (1) Voluntary Food Fortification and (2) Mandatory Food Fortification.
Section 5. Voluntary Food Fortification. - Under the Sangkap Pinoy Seal Program (SPSP), the Department shall encourage the fortification of all processed foods or food products based on rules and regulations which the DOH through the BFAD shall issue after the effectivity of this act.
Manufacturers who opt to fortify their processed foods of food products but do not apply for Sangkap Pinoy Seal shall fortify their processed food or food products based on acceptable standards on food fortification set by the DOH through the BFAD.
Section 6. Mandatory Food Fortification. - (a) The fortification of staple foods based on standards set by the DOH through the BFAD is hereby made mandatory for the following:
(1) Rice - with Iron;
(2) Wheat flour - with vitamin A and iron;
(3) Refined sugar - with vitamin A;
(4) Cooking oil - with vitamin A; and
(5) Other staple foods with nutrients as may later be required by the NNC.
The National Nutrition Council (NNC) shall require other processed foods or food products to be fortified based on the findings of nutrition surveys. Such requirement shall be promulgated through regulations to be issued by the Department of Health (DOH) through the Bureau of Food and Drugs (BFAD) and other concerned agencies.
(b) The fortification of processed foods or food products under this Section shall be undertaken by the manufacturers: Provided, That in the case of imported processed foods or food products, the required fortification shall be done by the producers/manufacturers of such imported processed foods or food products. Otherwise, the importer shall have responsibility of fortifying the imported processed foods or food products before said products are allowed to be distributed or sold to the public: Provided, further, That the implementation of the mandatory fortification for wheat flour, refined sugar, cooking oil and rice, including those milled and/or distributed by the National Food Authority, shall commence after four (4) years from the effectivity of this Act.
(c) The DOH guidelines on micronutrient fortification of processed food or food products included in Administrative Order No. 4-A series of 1995 and such other necessary guidelines that may be issued by the DOH, shall serve as a basis for the addition of micronutrient(s) to processed foods or food products to avoid over or under fortification that may create imbalance in the diet as well as avoid misleading label claims to gain competitive marketing advantage.
(d) Manufacturers of processed foods or food products shall include on the label a statement of "nutrition facts" indicating the nutrient(s) and the quantities of said nutrients added in the food.
(e) Imported rice, wheat flour, refined sugar, cooking oil and other processed foods or food products that may be identified later by the NNC shall comply with the requirements of this Act upon entry into the country, at the end of the manufacturing process and/or at all points of sale or distribution.
Section 7. Quality Assurance. - The agencies charged with the implementation of this Act shall establish a quality assurance system. Likewise, the manufacturers and importers of processed foods or food products shall also establish their own quality assurance system in accordance with the quality assurance system of the implementing agencies.
Section 8. Implementation, Monitoring and Review. - The DOH through the BFAD shall be the lead agency responsible for the implementation and monitoring of this Act while the NNC, the policy-making and coordinating body of nutrition, shall serve as the advisory board on food fortification.
The DOH shall also be responsible for the conduct of promotional and advocacy activities on the use of fortified processed foods or food products through its Sangkap Pinoy Seal Program (SPSP) and/or other programs designed to promote nutrition. Products approved under the SPSP shall be allowed to use the Sangkap Pinoy Seal. Further, the DOH is hereby authorized to charge reasonable fees for applications in the SPSP and to use such fees in the promotion and advocacy activities of nutrition.
The NCC shall conduct a periodic review of the micronutrients added to food. This review will provide the basis for determining if the mandatory fortification is still required or not. The review shall be done at least every five (5) years to coincide with the conduct of the Food and Nutrition Research Institute's (FNRI) national nutrition survey and/or the assessment of the Philippine Plan of Action for Nutrition (PPAN).
The local government units, through their health officers, agricultural officers, nutritionist-dieticians or sanitary inspectors, shall assist in monitoring and checking that foods mandated to be fortified, like rice, refined sugar, wheat flour and cooking oil, are properly fortified and labeled with "nutrition facts" indicating the specific micronutrient with which they were fortified.
The local food industries shall report on the production, marketing and distribution of fortified foods. They shall submit annual reports to the DOH, also indicating their industrial concerns and recommendations.
Section 9. Support to Affected Manufacturers. - The following government agencies shall support the implementation of this Act through their respective programs:
(a) The Department of Trade and Industry (DTI) is hereby required to assist and support affected manufacturers in upgrading their technologies by helping them obtain soft loans and financial assistance for the procurement of technologies and machines to comply with the provision of this Act;
(b) The Department of Science and Technology (DOST) shall develop and implement comprehensive programs for the acquisition, design and manufacture of machines and technologies and transfer said machines and technologies to manufacturers;
(c) The Land Bank of the Philippines (LBP) and the Livelihood Corporation (LIVECOR) are hereby required to assist and support the implementation of this Act by granting loans to affected manufacturers at preferential rates; and
(d) The various agencies/institutions with accredited analytical laboratories for nutrient analysis and other technology development generators shall provide the necessary services that may be required by the food industry in compliance with this Act.
Section 10. Noncompliance with Fortification Process. - The following shall be considered noncompliance with the fortification process:
(a) If the food fortification levels do not comply with the DOH requirements, except when the deviation from the fortification levels is justified and properly declared in the labeling;
(b) If the fortificant used is different from that approved by the DOH; and
(c) If the process of fortification does not conform to the DOH standard.
Section 11. Administrative Sanctions. - The DOH through the BFAD, after notice and hearing, shall impose any or all of the following administrative sanctions in cases of noncompliance with the food fortification guidelines it has set:
(a) Denial of registration of the processed foods or food products by the DOH through the BFAD if the processed foods or food products do not comply with the food fortification requirements. Said processed foods or food products shall not be allowed to be put in the market;
(b) Order the recall of the processed foods or food product(s); and
(c) Impose a fine of not less than Three hundred thousand pesos (P300,000.00) and suspension of registration for the first violation; not more than Six hundred thousand pesos (P600,000.00) and suspension of registration for the second violation; and not more than One million pesos (P1,000,000.00) and cancellation of the registration of the product for the third violation of the provisions of this Act or its Implementing Rules and Regulations (IRR).
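Section 11(c) amounts to a simple escalating penalty ladder. The sketch below is purely illustrative (the statute itself governs; the function name, return format, and the choice to treat violations beyond the third as the harshest tier are our own assumptions):

```python
def sanction(violation_number):
    """Illustrative reading of RA 8976 Section 11(c): returns a
    (fine bound in Philippine pesos, registration consequence) pair
    for a given violation count."""
    schedule = {
        1: ("fine of at least P300,000", "suspension of registration"),
        2: ("fine of up to P600,000", "suspension of registration"),
        3: ("fine of up to P1,000,000", "cancellation of registration"),
    }
    # Assumption: a fourth or later violation falls under the top tier.
    return schedule[min(violation_number, 3)]
```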
Section 12. Implementing Rules and Regulations. - The DOH through the BFAD and in consultation with other concerned government agencies, nongovernment organizations, private sectors and consumer groups involved in nutrition, shall formulate the implementing rules and regulations (IRR) necessary to implement the provisions of this Act within ninety (90) days from the approval of this Act. The IRR issued pursuant to this Section shall take effect thirty (30) days after publication in a national newspaper of general application.
Section 13. International Commitments. - Nothing in this Act is intended to violate provisions of Treaties and International Agreements to which the Philippines is a party.
Section 14. Repealing Clause. - All laws, decrees, rules and regulations, executive orders inconsistent with the provisions of this Act are hereby repealed or modified accordingly.
Section 15. Separability Clause. - If any provision of this Act is declared unconstitutional or unlawful, the remaining provisions shall remain legal and in full effect.
Section 16. Effectivity. - This Act shall take effect upon its approval.
Approved: November 7, 2000
(Sgd.)JOSEPH EJERCITO ESTRADA
President of the Philippines
The Lawphil Project - Arellano Law Foundation
Microsoft today detailed the new Storage Spaces feature in Windows 8. Storage Spaces will dramatically improve how you manage large volumes of storage in your PC at work or home, and it may effectively replace the Drive Extender technology from Windows Home Server. Storage Spaces allows:
- Organization of physical disks into storage pools, which can be easily expanded by simply adding disks. These disks can be connected either through USB, SATA (Serial ATA), or SAS (Serial Attached SCSI). A storage pool can be composed of heterogeneous physical disks – different sized physical disks accessible via different storage interconnects.
- Usage of virtual disks (also known as spaces), which behave just like physical disks for all purposes. However, spaces also have powerful new capabilities associated with them such as thin provisioning (more about that later), as well as resiliency to failures of underlying physical media.
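To make the pool/space relationship above concrete, here is a toy model in Python. It is an illustration of the concepts only: Storage Spaces itself is implemented inside Windows, and every class and method name here is invented. The key idea is thin provisioning, where a space can be created larger than the pool's current physical capacity, with real capacity consumed only as data is written:

```python
class StoragePool:
    """Toy model of a storage pool built from heterogeneous disks."""

    def __init__(self):
        self.disks = []      # physical disk capacities, in GB
        self.allocated = 0   # GB actually backed by physical disk

    def add_disk(self, capacity_gb):
        # Disks of any size (USB, SATA, or SAS) can join the pool.
        self.disks.append(capacity_gb)

    @property
    def physical_capacity(self):
        return sum(self.disks)

    def create_space(self, provisioned_gb):
        # Thin provisioning: the space may be larger than the pool's
        # current physical capacity.
        return Space(self, provisioned_gb)


class Space:
    """A virtual disk ('space') carved from a pool."""

    def __init__(self, pool, provisioned_gb):
        self.pool = pool
        self.provisioned_gb = provisioned_gb
        self.used_gb = 0

    def write(self, gb):
        # Physical blocks are committed only when data actually lands.
        if self.pool.allocated + gb > self.pool.physical_capacity:
            raise RuntimeError("pool exhausted; add another disk")
        self.pool.allocated += gb
        self.used_gb += gb
```

When writes approach the pool's physical limit, the remedy matches the expansion model described above: simply add another disk to the pool.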
You can read more on this at the Building Windows 8 blog.
Windows for Pen Computing
Stable release: Microsoft Windows for Pen Computing 2.0 / 1995
Operating system: Microsoft Windows
Windows for Pen Computing was a software suite for Windows 3.1x, that Microsoft designed to incorporate pen computing capabilities into the Windows operating environment. Windows for Pen Computing was the second major pen computing platform for x86 tablet PCs; GO Corporation released their operating system, PenPoint OS, shortly before Microsoft published Windows for Pen Computing 1.0 in 1991.
The software features of Windows for Pen Computing 1.0 included an on-screen keyboard, a notepad program for writing with the stylus, and a program for training the system to respond accurately to the user's handwriting. Microsoft included Windows for Pen Computing 1.0 in the Windows SDK, and the operating environment was also bundled with compatible devices.
Microsoft published Windows 95 in 1995, and later released Windows for Pen Computing 2.0 for this new operating system. Windows XP Tablet PC Edition superseded Windows for Pen Computing in 2002. Subsequent Windows versions, such as Windows Vista and Windows 7, supported pen computing intrinsically.
- The Unknown History of Pen Computing contains a history of pen computing, including touch and gesture technology, from approximately 1917 to 1992.
- About Tablet Computing Old and New - an article that mentions Windows Pen in passing
- Annotated bibliography of references to handwriting recognition and pen computing
- Windows für Pen Computer (German)
- Windows for Pen Computer (German link above translated by Google)
- Notes on the History of Pen-based Computing (YouTube)
HONG KONG – A mutated strain of scarlet fever more resistant to antibiotics has killed a second child in Hong Kong, the first deaths from the illness in the southern Chinese city in at least a decade, authorities said Wednesday.
Certain characteristics of the new strain likely make it more contagious, and it may be responsible for an outbreak sweeping Hong Kong, said Professor Kwok-yung Yuen, head of Hong Kong University's microbiology department.
The new strain has about 60 percent resistance to the antibiotics used to treat it, compared with 10 to 30 percent in previous strains, he said.
A 5-year-old boy who died at a hospital Tuesday was confirmed to have scarlet fever Wednesday. A 7-year-old girl who died in May was the first patient in Hong Kong to die of the illness in at least 10 years.
Hong Kong has had 466 reported cases of scarlet fever so far this year, about double the annual total. The outbreak may have spread to neighboring Macau and mainland China.
About 9,000 cases have been reported on the mainland, about double the average from recent years, although no information is available on deaths, the Hong Kong Standard newspaper reported, citing health officials. Macau has 49 cases, a jump from 29 cases in 2009 and 16 in 2010, but no deaths have been reported, the Macau Daily Times said.
"We are facing an epidemic because the bacteria causing scarlet fever is widely circulating in the region - not only in Hong Kong but neighboring places such as the mainland and Macau," said Thomas Tsang, controller of Hong Kong's Centre for Health Protection, the Standard reported.
Scarlet fever is a streptococcal disease characterized by a bright red skin rash, fever and sore throat. It's most common in children under 10.
Infectious diseases are a particular concern in Hong Kong, where the 2003 SARS outbreak killed 299 people. Nearly 500 more deaths were reported in other countries.
In the fall of 2014, a wave of mass spice poisonings swept the country. Many of the victims were teenagers and young adults, an age at which lack of life experience and unhealthy curiosity make themselves felt. So what is spice, and why is it dangerous to the body?

"Spice" is the current name for smoking mixtures with a narcotic effect. At first these were herbal compositions; later, manufacturers began producing herbs impregnated with chemicals. The composition of smoking mixtures is completely unpredictable, and dosing is done by eye. Spices are sometimes called "designer" drugs, since the composition of the mixture changes frequently, leaving consumers in the role of guinea pigs.
Where did spice come from
In Russia, poisonings from smoking mixtures have been observed since 2008. Those first-generation spices were imported from Europe; dependence on them developed slowly, and in many respects they were comparable to marijuana. In large cities, victims of such mixtures made up roughly 5% of those seeking drug treatment. A year and a half later, after such mixtures were banned, the psychotropic compound JWH-018 was synthesized in the USA, and after it too was banned, a wave of analogues from China began.

Spices are widespread and, until recently, were traded with impunity anywhere in the country. The synthetic drug arrives in powder form; dealers then impregnate the most innocuous pharmacy herbs, plain tobacco, or even sawdust with it. Intoxication of the body by the synthetic substances in smoking mixtures occurs very quickly and can have irreversible consequences. According to official statistics, about 8 million Russians use narcotic smoking mixtures. More than 700 types of spice are known, of which only 44 had been banned. After the mass poisonings at the end of 2014, a law was passed prohibiting the distribution, storage and use of narcotic smoking mixtures and allowing the relevant services to respond quickly when new types of spice appear.
Why smoking mixtures are dangerous
Doctors say that spice is far more dangerous than traditional drugs, which medicine has been studying for decades. Several factors account for this.
- Spice is a new threat. Treatment regimens have not been developed, and its mechanism of action is poorly understood.
- Until recently, public opinion took smoking mixtures lightly. There is no prevention effort among adolescents and their parents as widespread as there is for long-known drugs.
- Mixing several drugs in a smoking blend can provoke different bodily reactions. This makes it difficult for doctors to predict the course of the illness and adjust the treatment of spice poisoning.
- New compositions of smoking mixtures appear constantly.
A fatal outcome from using spice is possible both from the poisoning itself and under the influence of hallucinations. There are cases of smokers stepping out of windows, confident in their ability to fly.
Symptoms of Spice Poisoning
Intoxication with smoking mixtures differs from ordinary drug intoxication. Because the composition and combinations of narcotic substances in a mix are unstable, an overdose or severe spice poisoning can occur. Symptoms appear after 1-2 puffs:
- loss of consciousness;
- mental disorder;
With long-term use of spices, signs of dependence and chronic poisoning of the body are noted:
- poor sleep;
- sharp fluctuations in appetite;
- pale skin;
- hair loss;
- dark circles under the eyes.
During the period of the drug's action, symptoms of disturbed brain activity are characteristic:
- sudden falling asleep in uncomfortable poses;
- inadequate behavior;
- bad memory;
- speech disorders.
In case of poisoning with smoking mixtures, death is possible. Smokers need urgent medical attention.
What should you do in case of spice poisoning? Remember that self-treatment is ineffective and dangerous! The mixture may contain a high dose of a hazardous substance, and a doctor's help is required to neutralize it.
First, take measures so that the addict cannot harm himself (by stepping out of a window and the like) or other people (in case of heightened aggressiveness). If the victim loses consciousness, lay him on his side, and make sure he does not choke on vomit and that his tongue does not block his airway. Call an ambulance immediately!
A person who has developed dependence cannot cope with it alone, so those close to him should help. In certain cases it is possible to overcome the addiction at home under the supervision of an experienced psychiatrist-narcologist.
Spice Poisoning Treatment
If a smoker has been poisoned by spice, hospitalization is necessary. Admitted patients are treated in the toxicology department. Experienced smokers who have already become addicted often resume taking the same mixtures after leaving the hospital, without a thought for the sad consequences. On readmission, such patients are sent to the narcological (addiction) department.
Poisoning with smoking mixtures requires symptomatic treatment and, in some cases, the administration of an antidote. As a rule, such poisonings occur in mass outbreaks. At the initial stage of an outbreak, physicians run the necessary toxicological studies of the mixture and determine which chemical substance caused it. A detoxification and therapy scheme is then developed accordingly. Most victims who are taken to hospital for treatment in time survive.
Spice poisoning treatment is carried out in 2 stages.
- Detoxification of the body, lasting from 5 to 10 days. It can be carried out in hospital or at home, and includes drug therapy and management of depression.
- The rehabilitation period. Dependence on smoking mixtures is quite individual, so rehabilitation lasts from several weeks to three months. During this period it is better for the addict to stay in a rehabilitation center: a change of situation and environment is important.
Damage to the body and long-term effects
When narcotic mixtures are smoked, toxic substances pass through the lungs into the blood. Spice poisoning then sets off a series of destructive processes in the body.
- The liver takes the first blow, trying to neutralize the toxins and accumulating their decomposition products.
- The drug does its main harm to the brain. Cerebral blood circulation is disturbed, the capillaries narrow, the oxygen supply is cut off and nerve cells die.
- Smoking spice strongly affects sexual function. Testosterone production falls and impotence develops in men; sexual desire disappears. In women, the sexual cycle is disrupted and menstruation stops.
A spice smoker passes through every stage of dependence: from curiosity and experimentation to addiction, escalating doses and the perception of the drug as the only goal in life. Conflicts arise in the family, and the circle of acquaintances changes. Addicts' perception of reality is distorted; murder and suicide are possible. With prolonged use, irreversible damage occurs. Possible consequences of spice poisoning:
- persecution mania;
- loss of interest;
- a sharp change in the psyche due to brain degradation;
- reduced intelligence;
- memory impairment;
- stroke or heart attack at the age of 20-30 years;
- death from overdose.
When narcotic mixtures are smoked, severe spice poisoning is possible. Signs of poisoning, such as vomiting, loss of consciousness and convulsions, appear after 1-2 puffs. Under no circumstances should you treat the victim yourself. Call an ambulance and get the victim to hospital immediately. Often a smoker is unable to ask for help on his own, so those close to him, and anyone who cares, play a large role.
Spices are drugs, and therefore habit-forming and addictive. To be rid of the habit, a smoker requires treatment according to the same principles as other drug addicts.
Veterinary: Career and Education Opportunities in Mississippi
Veterinary: Veterinary Assistants help manage, feed and care for the animals in a veterinarian's care. Along with their overall care for the animals, they assist in the office and in the administration of health care.
Mississippi has a population of 2,951,996, which has grown by 3.77% over the past decade. Nicknamed the "Magnolia State," Mississippi's capital and biggest city is Jackson. In 2008, there were a total of 1,558,262 jobs in Mississippi. The average annual income was $30,383 in 2008, up from $29,542 the previous year. The unemployment rate in Mississippi was 9.6% in 2009, which has grown by 2.8% since the previous year. About 16.9% of Mississippi residents have college degrees, which is lower than the national average.
The top industries in Mississippi include furniture product manufacturing, household furniture cabinet manufacturing, and household furniture manufacturing. Notable tourist destinations include the Audubon Society Jackson Chapter, the Manship House Museum, and the Glory of Baroque Dresden Exhibition.
CITIES WITH Veterinary OPPORTUNITIES IN Mississippi
Featured Online Colleges
CAREERS WITHIN Veterinary
Veterinary Attendants feed, water, and examine pets and other nonfarm animals for signs of illness, disease, or injury in laboratories and animal hospitals and clinics. Veterinary Attendants need to read and understand what has been read. They also need to listen well to others and take in their information and issues. | <urn:uuid:572f5d82-b3d4-47a9-96c4-909644e7dbab> | CC-MAIN-2016-50 | http://www.careeroverview.com/usa/mississippi/healthcare-support/veterinary/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541170.58/warc/CC-MAIN-20161202170901-00393-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.962496 | 320 | 2.640625 | 3 |
Each February, the American Dental Association sponsors National Children’s Dental Month to promote the benefits of great oral health. While most people understand the importance of a healthy smile, few people know just how big a role orthodontic care plays in children’s overall oral health.
The American Association of Orthodontists recommends that all children have a check-up with a licensed orthodontist around the age of seven. Many initial crowding, crossbite, and jaw discrepancies begin to surface at that time. Early treatment could allow your orthodontist to guide jaw growth, help lower the risk of trauma to protruding front teeth, detect and correct harmful oral habits, improve your child’s bite, and improve the appearance of your child’s smile. Many times, early treatment may prevent more serious issues from developing, make treatment later on shorter and less complicated, or help your child avoid further treatment altogether.
Whether or not the initial assessment reveals any issues, you’ll have peace of mind knowing that you are doing everything you can to help your child develop a healthy, beautiful smile. | <urn:uuid:3fe15b67-cba5-4953-83cb-7e87c5f27fb4> | CC-MAIN-2020-29 | https://www.smilecsra.com/category/general-dentistry/page/2/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00158.warc.gz | en | 0.928594 | 231 | 2.578125 | 3 |
An entrepreneur is someone who organizes and operates a business, invests in innovations and is willing to take on greater than normal risks in order to see his or her vision become a reality. Our one year diploma in entrepreneurship will develop the analytical abilities and strategic competencies necessary for students who wish to become entrepreneurs or are already part of a growing business. Our entrepreneurship courses are short programs, designed to help entrepreneurs grow their business.
To train students to set up their own business, become entrepreneurial managers or join their family business. Focus on ensuring preparation of bankable project reports by students.
Groom students as effective social entrepreneurs and change-agents.
To educate and certify existing entrepreneurs, who may not be formally trained, on entrepreneurial processes, functions and outcomes.
To motivate and equip students with necessary knowledge and skills for arriving at innovative plans for setting up their own enterprises.
To create awareness among students on life skills and inspire them to be on their own.
Introduction to Entrepreneurship
Legal Issues for Entrepreneurs
Introductory to Finance
Cultivating Entrepreneurial Values & Skills | <urn:uuid:7ee41781-9f83-4a0b-a60d-9902a6cc2575> | CC-MAIN-2019-13 | http://imeshlab.com/training/entrepreneurship.php | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202161.73/warc/CC-MAIN-20190319224005-20190320010005-00512.warc.gz | en | 0.932772 | 220 | 3.046875 | 3 |
Scaling (aka: tooth scaling and root planing) is a procedure for professional deep dental cleaning.
When is scaling mandatory?
Some people complain of gum-related symptoms that include:
- Red, painful and inflamed gums.
- Gums that easily bleed.
- Appearance of pockets between your teeth.
- Foul breath.
These symptoms require a quick visit to the dentist who will examine you thoroughly evaluating:
- Your general health.
- Your oral health.
- If you have tooth stains, tooth decay or tooth loss.
- Check for misaligned teeth or improper bites.
- And finally dental X-ray for root and bone diseases.
According to your diagnosis, your dentist will determine the best line of treatment for your case. In most of the cases, tooth scaling and root planing are one of the non-surgical choices!
Steps of scaling
It doesn’t require special preparations, it’s an in-office procedure. But it requires local anesthesia to decrease the pain of the scaling procedure. Your dentist may start with dental cleaning to remove the plaque or tartar. It is not a treatment for the gum disease but it helps improve the condition.
Then your dentist proceeds to the scaling itself, scraping away all plaque and tartar above and below the gum-line and then smoothing the tooth surfaces to remove bacteria and allow the gums to reattach to clean teeth.
Some steps are required after scaling to disinfect periodontal tissues from harmful bacteria. These steps include oral irrigation using chlorhexidine gluconate solution that infiltrates these tissues and remain active for a while (up to 7 days). Sometimes your doctor will prescribe antibiotics to kill all harmful bacteria.
One of the main advices your dentist will give you is to properly brush your teeth twice daily and after every meal using a fluoride-containing toothpaste to decrease plaque and gingivitis.
Your dentist will also advise you to quit smoking, eat healthy foods and visit your dentist every 6 months. And if any underlying cause lies behind your gum disease (such as misaligned teeth or a crowded jaw), you should consider treating that cause, for example with braces. If deep cleaning, tooth scaling and root planing do not resolve the case because of complications (such as extensive bone loss or very deep pockets), your dentist will proceed to gum surgery to repair the damage.
Early diagnosis and treatment of gum diseases, by proper oral hygiene and regular dental visits will ensure you a lifetime of healthy smiles. | <urn:uuid:2011286d-6db1-49b8-9380-62d1066f9aa5> | CC-MAIN-2019-22 | https://www.magrabi.com.sa/blog/27168 | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256764.75/warc/CC-MAIN-20190522063112-20190522085112-00185.warc.gz | en | 0.889444 | 538 | 2.8125 | 3 |
Quick HOWTO : Ch17 : Secure Remote Logins and File Copying
- 1 Introduction
- 2 A Quick Introduction To SSH Encryption
- 3 Starting OpenSSH
- 4 Testing The Status of SSH
- 5 The /etc/ssh/sshd_config File
- 6 Using SSH To Login To A Remote Machine
- 7 What To Expect With Your First Login
- 8 Deactivating Telnet After Installing SSH
- 9 Executing Remote Commands on Demand with SSH
- 10 SSH Tunneling
- 11 SCP: A Secure Alternative to FTP
- 12 SFTP: Another Secure Alternative to FTP
- 13 Using SSH and SCP without a password
- 14 Conclusion
One of the most popular file transfer and remote login Linux applications is OpenSSH, which provides a number of ways to create encrypted remote terminal and file transfer connections between clients and servers. The OpenSSH Secure Copy (SCP) and Secure FTP (SFTP) programs are secure replacements for FTP, and Secure Shell (SSH) is often used as a stealthy alternative to TELNET. OpenSSH isn't limited to Linux; SSH and SCP clients are available for most operating systems including Windows.
A Quick Introduction To SSH Encryption
Data encryption is accomplished by using special mathematical equations to scramble the bits in a data stream to make it unreadable to anyone who does not have access to the corresponding decryption equation. The process is usually made even harder through the use of an encryption key that is used to modify the way the equations do the scrambling. You can recover the original data only if you have access to this key and the corresponding programs. Data encryption helps to prevent unauthorized users from having access to the data.
SSH uses the concept of randomly generated private and public keys to do its encryption. The keys are usually created only once, but you have the option of regenerating them should they become compromised.
A successful exchange of encrypted data requires the receiver to have a copy of the sender's public key beforehand. Here's how it's done with SSH.
When you log into an SSH server, you are prompted as to whether you want to accept the download of the server's public key before you can proceed. The SSH client's key is uploaded to the server at the same time. This creates a situation in which the computers at each end of the SSH connection have each other's keys and are able to decrypt the data sent from the other end of the encrypted link or "tunnel".
All the public keys that an SSH client's Linux user encounters are stored in a file named ~/.ssh/known_hosts along with the IP address that provided it. If a key and IP address no longer match, then SSH knows that something is wrong. For example, reinstalling the operating system or upgrading the SSH application might regenerate the keys. Of course, key changes can also be caused by someone attempting some sort of cyber attack. Always investigate changes to be safe. Your server's own public and private SSH keys are stored in the /etc/ssh/ directory.
Note: The .ssh directory is a hidden directory, as are all files and directories whose names begin with a period. The ls -a command lists all normal and hidden files in a directory. The ~/ notation is a universally accepted way of referring to your home directory and is recognized by all Linux commands.
Linux uses other key files also to provide the capability of password-less logins and file copying to remote servers using SSH and SCP. In this case, the SSH connection is established, then the client automatically sends its public key which the server uses to match against a predefined list in the user's directory. If there is a match then the login is authorized. These files are also stored in your ~/.ssh directory and need to be specially generated. The id_dsa and id_dsa.pub files are your private and public keys respectively, and authorized_keys stores all the authorized public keys from remote hosts that may log into your account without the need for passwords (more on this later).
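The id_dsa and id_dsa.pub files mentioned above are created with the ssh-keygen utility that ships with OpenSSH. The snippet below is a minimal sketch that generates a throwaway RSA key pair in a temporary directory so you can see what the files look like; the file names and key size are illustrative, and you should never overwrite your real ~/.ssh keys this way.

```shell
# Create a scratch directory so we don't touch any real keys.
tmpdir=$(mktemp -d)

# Generate an RSA key pair with an empty passphrase (-N ""), quietly (-q).
ssh-keygen -t rsa -b 2048 -N "" -q -f "$tmpdir/id_rsa"

# Two files result: the private key and its public counterpart.
ls "$tmpdir"                      # id_rsa  id_rsa.pub

# Public key files are one-line entries beginning with the key type.
head -c 7 "$tmpdir/id_rsa.pub"    # ssh-rsa

rm -rf "$tmpdir"
```

The contents of the .pub file are what you would append to a remote account's ~/.ssh/authorized_keys file to permit password-less logins, as discussed later in this chapter.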
Starting OpenSSH

OpenSSH is installed by default during Linux installations, but with Ubuntu / Debian this may not be the case and it will have to be installed after the initial installation. The apt-get install ssh command is sufficient to activate SSH on these latter distributions.
Because SSH and SCP are part of the same application, they share the same configuration file and are governed by the same /etc/init.d/sshd startup script.
You can configure SSH to start at boot by using the chkconfig command when running Fedora / Redhat, or with the sysv-rc-conf command with Debian / Ubuntu.
[root@bigboy tmp]# chkconfig sshd on
You can also start, stop, and restart SSH after booting by running the sshd initialization script.
[root@bigboy tmp]# service sshd start
[root@bigboy tmp]# service sshd stop
[root@bigboy tmp]# service sshd restart
Remember to restart the SSH process every time you make a change to the configuration files for the changes to take effect on the running process.
Testing The Status of SSH
You can test whether the SSH process is running with the pgrep command:
[root@bigboy tmp]# pgrep sshd
You should get a response of plain old process ID numbers.
The /etc/ssh/sshd_config File
The SSH configuration file is called /etc/ssh/sshd_config. By default SSH listens on all your NICs and uses TCP port 22. Take a look at a snippet from configuration:
# The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. #Port 22 #Protocol 2,1 #ListenAddress 0.0.0.0 #ListenAddress ::
SSH Versions 1 and 2
The original encryption scheme of SSH was adequate for its time but was eventually found to have a number of limitations. The answer to these was version 2. Always force your systems to operate exclusively with version 2 by setting the protocol statement in the /etc/ssh/sshd_config file to 2. Remember to restart SSH to make this take effect.
# # File: /etc/ssh/sshd_config # Protocol 2
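While editing /etc/ssh/sshd_config, a few other directives are commonly tightened at the same time. The fragment below is only a sketch of such a policy: every directive shown is a standard sshd_config option, but the values are illustrative and should be adapted to your site.

```
#
# File: /etc/ssh/sshd_config
#
Protocol 2
PermitRootLogin no          # log in as a normal user, then use su or sudo
PermitEmptyPasswords no     # never allow blank passwords
X11Forwarding no            # disable unless you need remote GUI applications
LoginGraceTime 30           # seconds allowed to complete authentication
```

As with all sshd_config changes, restart SSH for the new values to take effect on the running process.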
How To Change The TCP Port On Which SSH Listens
If you are afraid of people trying to hack in on a well known TCP port, then you can change port 22 to a location that won't interfere with other applications on your system, such as port 435. This is a rudimentary precaution only, because good network scanning programs can detect SSH running on alternative ports.
What you need to do is:
1) Use the netstat command to make sure your system isn't listening on port 435, using grep to filter out everything that doesn't have the string "435":
[root@bigboy root]# netstat -an | grep 435
[root@bigboy root]#
2) No response allows us to proceed. Change the Port line in /etc/ssh/sshd_config to mention 435 and remove the # at the beginning of the line. If port 435 is being used, pick another port and try again.
3) Restart SSH:
[root@bigboy tmp]# service sshd restart
4) Check to ensure SSH is running on the new port:
[root@bigboy root]# netstat -an | grep 435
tcp     0     0 192.168.1.100:435     0.0.0.0:*     LISTEN
[root@bigboy root]#
Next you'll discover how to actually login to systems using SSH.
Using SSH To Login To A Remote Machine
Using SSH is similar to Telnet. To login from another Linux box use the ssh command with a -l to specify the username you wish to login as. If you leave out the -l, your username will not change. Here are some examples for a server named smallfry in your /etc/hosts file.
If you are user root and you want to log in to smallfry as yourself, use the command
[root@bigboy tmp]# ssh smallfry
User root can also log in to smallfry as user peter via the default port 22:
[root@bigboy tmp]# ssh -l peter smallfry
or via port 435 using the username@remote_server alternative login format:
[root@bigboy tmp]# ssh -p 435 peter@smallfry
What To Expect With Your First Login
The first time you log in, you get a warning message saying that the remote host doesn't know about your machine and prompting you to store a copy of the remote host's SSH identification keys on your local machine. It will look something like this:
[root@bigboy tmp]# ssh smallfry
The authenticity of host 'smallfry (smallfry)' can't be established.
RSA key fingerprint is 5d:d2:f5:21:fa:07:64:0d:63:1b:3b:ee:a6:58:58:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'smallfry' (RSA) to the list of known hosts.
root@smallfry's password:
Last login: Thu Nov 14 10:18:45 2002 from 192.168.1.98
No mail.
[root@smallfry tmp]#
The key is stored in your ~/.ssh/known_hosts file and you should never be prompted for this again.
SSH Failures Due To Linux Reinstallations
If Linux or SSH is reinstalled on the remote server then the keys are regenerated and your SSH client will detect that this new key doesn't match the saved value in the known_hosts file. The SSH client will fail giving an error like this, erring on the side of caution to alert you to the possibility of a form of hacking attack.
[root@bigboy tmp]# ssh 192.168.1.102
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
5d:d2:f5:21:fa:07:64:0d:63:1b:3b:ee:a6:58:58:bb.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:2
RSA host key for 192.168.1.102 has changed and you have requested strict checking.
Host key verification failed.
[root@bigboy tmp]#
If you are confident that the error is due to a reinstallation, then edit your ~/.ssh/known_hosts text file, removing the entry for the offending remote server. When you try connecting via SSH again, you'll be prompted to add the new key to your ~/.ssh/known_hosts file and the login session should proceed as normal after that.
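The edit to ~/.ssh/known_hosts can be made in any text editor, but it is also easy to script. The sketch below builds a throwaway known_hosts-style file and strips the offending host's line; the host names and key strings here are fake placeholders. Recent versions of OpenSSH can do the same thing against your real file with ssh-keygen -R hostname.

```shell
# Build a stand-in known_hosts file with two fake entries.
kh=$(mktemp)
printf 'oldserver ssh-rsa FAKEKEYDATA1\n'  > "$kh"
printf 'smallfry ssh-rsa FAKEKEYDATA2\n'  >> "$kh"

# Drop every line belonging to the host whose key changed.
grep -v '^oldserver ' "$kh" > "$kh.new" && mv "$kh.new" "$kh"

cat "$kh"     # only the smallfry entry remains
rm -f "$kh"
```

After the stale entry is gone, the next SSH connection to that host prompts you to accept its new key, just as on a first login.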
Deactivating Telnet After Installing SSH
You should always choose SSH over TELNET, because of the inherent data encryption features of SSH and the current widespread availability of SSH clients for both Linux and Windows.
By default, the TELNET server isn't installed with Fedora Linux. If you do decide to deactivate an active TELNET server, then use procedures detailed in Chapter 16, "Telnet, TFTP, and xinetd".
Executing Remote Commands on Demand with SSH
A nice feature of SSH is that it is capable of logging in and executing single commands on a remote system. You just have to place the remote command, enclosed in quotes, at the end of the ssh command line on the local server. In the example below, a user on server smallfry who needs to know the version of the kernel running on server bigboy (192.168.1.100) remotely runs the uname -a command. The command returns the kernel version, 2.6.8-1.521, and the server's name, bigboy.
[root@smallfry tmp]# ssh 192.168.1.100 "uname -a"
root@192.168.1.100's password:
Linux bigboy 2.6.8-1.521 #1 Mon Aug 16 09:01:18 EDT 2004 i686 i686 i386 GNU/Linux
[root@smallfry tmp]#
This feature can be very useful. You can combine it with password-free login, explained later in this chapter, to get the status of a remote server whenever you need it. More comprehensive monitoring may best be left to such purpose-built programs as MRTG, which is covered in Chapter 22, "Monitoring Server Performance".
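The on-demand command feature lends itself to simple status-polling scripts. The sketch below loops over a list of servers and runs one command on each. So that it can be run anywhere, a local shell stands in for the remote session; in real use you would replace the body of run_cmd with ssh as shown in the comment. The host names are illustrative.

```shell
# run_cmd <host> <command> -- in real use the body would be:  ssh "$1" "$2"
# A local shell stands in here so the sketch runs without any remote hosts.
run_cmd() {
    sh -c "$2"
}

# Poll each server and prefix its answer with the host name.
for host in bigboy smallfry; do
    echo "$host: $(run_cmd "$host" 'uname -s')"
done
```

With password-free logins configured, the real ssh version of this loop runs unattended, which is how simple cron-driven checks are often built.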
SSH Tunneling

You already know that SSH creates an encrypted data session between a client and server. With SSH tunneling, the server computer can also receive data from other computers on the client's network over that very same session. The client is configured to listen on a specified TCP port, and all data received on that port is automatically SSH encrypted and relayed to the remote server. For this reason SSH tunneling is also called SSH port forwarding.
There are two types of forwarding:
Local Forwarding: Forwards traffic coming to a local port to a specified remote port. This is also known as outgoing tunneling, as the tunnel is established to the remote server.
Remote Forwarding: Forwards traffic coming to a remote port to a specified local port. This is also known as incoming tunneling, as the tunnel is established from the remote server.
As always it is best to explain these methodologies with some examples.
Local Forwarding

The syntax for local forwarding relies on the -L SSH command line qualifier, which is configured like this:

ssh -L [bind-address:]bind-port:remote-server-address:remote-port user@server

Where the bind-address and bind-port are the IP address and TCP port on which the local computer will listen for connections from its neighbors. If the bind-address isn't listed, then the server will only accept connections from localhost. The remote-server-address and remote-port specify the same options for the remote server.
- Note: Sometimes an intermediary relay host is used for the data. In this case the data passes through an encrypted SSH connection for the part of the journey between the local server and the intermediary; the connection between the intermediary and the remote host is not encrypted. This is not a security issue when forwarding SSH traffic, which is already encrypted, but it can be when forwarding unencrypted data such as POP mail, SMTP mail, or TELNET.
- Intermediaries can be useful especially when the intermediary host is the only host on the local network with access to the remote host.
Here are some explanatory examples:
Example 1: The local computer forwards any connection to its NIC IP address on a specified port to a remote host.
[root@bigboy ~]# ssh -L 192.168.1.100:9999:97.158.253.26:22 root@192.168.1.100
root@192.168.1.100's password:
Last login: Sat Mar 17 22:00:50 2007 from 192.168.1.201
[root@bigboy ~]#
Here server bigboy is configured to forward any connections that its NIC IP address of 192.168.1.100 receives on port 9999 to port 22 on the remote server web-003 (97.158.253.26 in this example).
The outbound connection to 97.158.253.26 is managed by a separate shell process with the login credentials you specify. In this case the connection will be created on bigboy itself, 192.168.1.100, and will run as user root. This can be tricky: after executing this command it will appear as if you have logged into bigboy all over again and nothing appears to be happening. Don't be fooled, the new login shell is the master process for the connection that needs to be created. If you log out, the forwarding will be broken.
This can easily be tested. Using SSH to connect to bigboy on port 9999 actually logs you into the remote server web-003.
[root@smallfry ~]# ssh -p 9999 root@192.168.1.100
root@192.168.1.100's password:
Last login: Sat Mar 17 21:56:03 2007 from home.my-web-site.org
[root@web-003 ~]#
You can then use the netstat and ps commands to verify that the shell process has been created and that the connection has been established.
[root@bigboy ~]# netstat -an | grep 97.158.253.26
tcp     0     0 192.168.1.100:36342     97.158.253.26:22     ESTABLISHED
[root@bigboy ~]#
[root@bigboy ~]# ps -ef | grep 9999
root  22901 21955  0 13:59 pts/0  00:00:00 ssh -L 192.168.1.100:9999:97.158.253.26:22 root@192.168.1.100
[root@bigboy ~]#
Forwarding can be useful! Let's see some more examples.
Example 2: The local computer forwards any connection to localhost on a specified port to a remote host via an intermediary server. Connections to the local computer's NIC on the specified port are not allowed.
[root@smallfry ~]# ssh -L localhost:9999:97.158.253.26:22 \
root@192.168.1.100
root@192.168.1.100's password:
[root@bigboy ~]#
Here server smallfry is configured to locally forward any connections its localhost IP address receives on port 9999 to port 22 on a remote server with an IP address of 97.158.253.26. Server bigboy is used as the intermediary.
[root@smallfry ~]# ssh -p 9999 localhost
root@localhost's password:
Last login: Sat Mar 17 20:11:09 2007 from home.my-web-site.org
[root@web-003 ~]#
You can use the netstat command on smallfry and bigboy to verify that connections have been established between bigboy and web-003, and smallfry and bigboy.
[root@smallfry ~]# netstat -an | grep EST | grep 192.168.1.100
tcp     0     0 192.168.1.50:40679     192.168.1.100:22     ESTABLISHED
[root@smallfry ~]#
[root@bigboy ~]# netstat -an | grep EST | grep 97.158.253.26
tcp     0     0 192.168.1.100:56236     97.158.253.26:22     ESTABLISHED
[root@bigboy ~]#
SSH connections to the NICs of either bigboy or smallfry fail.
[root@smallfry ~]# ssh -p 9999 192.168.1.100
ssh: connect to host 192.168.1.100 port 9999: Connection refused
[root@smallfry ~]# ssh -p 9999 192.168.1.50
ssh: connect to host 192.168.1.50 port 9999: Connection refused
[root@smallfry ~]#
The final example which follows discusses forwarding unencrypted protocols.
Example 3: The local computer forwards any POP mail queries to localhost directly to the remote POP mail server over an SSH tunnel.
[root@smallfry ~]# ssh -L 9999:mailserver:110 root@mailserver
root@mailserver's password:
Last login: Sat Mar 17 20:11:09 2007 from home.my-web-site.org
[root@mailserver ~]#
In this case an SSH connection is created to mailserver using a shell process owned by user root. The server mailserver then creates an unencrypted POP session (TCP port 110) to itself. The advantage of this configuration is that POP data never leaves the POP server unencrypted.
POP mail users on smallfry can then get their mail over the encrypted link by configuring localhost as the POP mail server in their mail client, and not mailserver.
Remote Forwarding

The syntax for remote forwarding relies on the -R SSH command line qualifier, which is configured like this:

ssh -R [bind-address:]bind-port:remote-server-address:remote-port user@server

The syntax is similar to that of the -L option, but in this case it is the remote server that listens on the bind-address and bind-port for connections from its neighbors. If the bind-address isn't listed, then the remote server will only accept connections from localhost. The remote-server-address and remote-port specify the host and port to which the connections are ultimately delivered; they are resolved from the local computer's perspective, so if you specify localhost as the remote-server-address, SSH will interpret it to mean the local computer itself.
This can be useful in a number of scenarios. For example, you cannot connect to your office workstation via VPN due to network maintenance, but during this time your workstation still has access to the Internet. Remote forwarding could provide you with access.
Here's another scenario. You are moving into a new Internet data center, all the network gear has been configured, but the installation of the data circuits has been delayed. This has caused the configuration of the servers to be delayed. If one server wired to your network can get access to a server on the Internet, via a wireless card, or otherwise, then remote access to the data center could be achieved using remote forwarding.
Example 1: The local computer forwards any connection to localhost on a specified port to a remote host. Forwarding occurs over a previously established connection from the remote host. If we revisit our scenario where VPN access will be down due to maintenance, the first thing to be done is to configure your workstation at work to establish a remote forwarding SSH session to your home server.
[[email protected] ~]# ssh -R localhost:9999:localhost:22 [email protected] [email protected]'s password: Last login: Sat Mar 17 21:15:36 2007 from 18.104.22.168 [[email protected] ~]# ping localhost PING bigboy (127.0.0.1) 56(84) bytes of data. 64 bytes from bigboy (127.0.0.1): icmp_seq=1 ttl=64 time=0.091 ms 64 bytes from bigboy (127.0.0.1): icmp_seq=2 ttl=64 time=0.082 ms 64 bytes from bigboy (127.0.0.1): icmp_seq=3 ttl=64 time=0.097 ms 64 bytes from bigboy (127.0.0.1): icmp_seq=4 ttl=64 time=0.078 ms
Here workstation work-001 creates an SSH session to server bigboy at home. It also tells bigboy to use this session to forward data to work-001 when bigboy receives SSH connections to localhost on port 9999. Remember, the remote-server-address of the -R option is from the remote server's perspective (work-001). If you specify localhost as the remote-server-address, SSH will be interpret it to mean the Internet IP address of the remote server.
We have setup a ping session to ensure that there is constant traffic between bigboy and work-001 over the connection so that any intermediary firewall doesn't kill it due to inactivity.
When you arrive home, all you have to do is SSH to localhost on your home system to gain access to your workstation at work.
[[email protected] ~]# ssh -p 9999 [email protected] [email protected]'s password: Last login: Sun Mar 18 15:50:16 2007 from 22.214.171.124 [[email protected] ~]#
As you can see, remote forwarding can be both useful, convenient and productivity enhancing.
Example 2: The local computer forwards any connection to it's NIC on a specified port to a remote host. Forwarding occurs over a previously established connection from the remote host.
This is more fitting for our limited connectivity data center scenario. In this case the local computer can be accessed by anyone on the Internet and it will forward any SSH connections it receives on the specified port to the server in the data center with the wireless access. Here's how it's done:
- Your local computer may be configured to only accept SSH connections for remote forwarding on the loopback localhost interface. Edit your sshd_config file and make sure the GatewayPorts setting is set to yes.
# # File: /etc/ssh/sshd_config # GatewayPorts yes
- Restart the SSH daemon to activate the setting.
[[email protected] ~]# /etc/init.d/sshd restart Stopping sshd: [ OK ] Starting sshd: [ OK ] [[email protected] ~]#
- The next step is to establish the remote port forwarding session. Set up a ping if you need constant activity on the link. In this case Internet server is netserver-001.my-web-site.org.
[[email protected] ~]# ssh -R netserver-001.my-web-site.org:9999:localhost:22 [email protected] [email protected]'s password: Last login: Sat Mar 17 21:15:36 2007 from 126.96.36.199 [[email protected] ~]# ping localhost PING netserver-001 (127.0.0.1) 56(84) bytes of data. 64 bytes from netserver-001 (127.0.0.1): icmp_seq=1 ttl=64 time=0.091 ms 64 bytes from netserver-001 (127.0.0.1): icmp_seq=2 ttl=64 time=0.082 ms 64 bytes from netserver-001 (127.0.0.1): icmp_seq=3 ttl=64 time=0.097 ms 64 bytes from netserver-001 (127.0.0.1): icmp_seq=4 ttl=64 time=0.078 ms
- Here workstation datacenter-001 creates an SSH session to Internet server netserver-001. It also tells netserver-001 to use this session to forward data to datacenter-001 when netserver-001 receives SSH connections on any interface IP address (*) on port 9999.
- Now it's time to test it. From our home server bigboy, we can SSH into server netserver-001 on port 9999 and get access to the data center.
[[email protected]~]# ssh -p 9999 [email protected] [email protected] netserver-001.my-web-site.org's password: Last login: Sun Mar 18 15:50:16 2007 from 188.8.131.52 [[email protected] ~]#
Success! You have saved the day with your ingenuity.
Configuring Forwarding with GUI Clients
You won't always have SSH command line access for the servers at both end of a port forwarding connection. Sometimes a GUI is either easier to use, or is your only option.
Most GUI clients will have SSH forwarding capabilities and it will be configurable on each of your saved connections, not globally. The options to do this should be found under the advanced properties or equivalent tab and with your new Linux command line knowledge, the setup should be relatively easy.
Troubleshooting SSH Port Forwarding
There can be many complications with SSH port forwarding, and they are mostly related to typographical errors. Here are a few symptoms that are easy to overcome:
- If remote forwarding doesn't work from a remote server, but works from localhost, then make sure you have activated the GatewayPorts setting on your computer. If not, change it, restart the SSH daemon and try again.
- If you get a message like this stating that the address is already in use, then you may have another port forwarding session already started on the port or the port you intend to use for forwarding is already in use by another application.
bind: Address already in use channel_setup_fwd_listener: cannot listen to port: 9999 Could not request local forwarding.
- "Connection closed" messages like this one could be caused by typing in an incorrect forwarding address.
ssh_exchange_identification: Connection closed by remote host
- If you are attempting remote forwarding using your server's NIC IP address and get this message, then it could be because the GatewayPorts setting has been disabled. With local forwarding, it could be caused by specifying an incorrect port on which the server should listen.
[[email protected] ~]# ssh -p 9999 192.168.1.100 ssh: connect to host 192.168.1.200 port 9999: Connection refused [[email protected] ~]
SSH port forwarding is a very useful tool that can provide you with a great deal of versatility when administering your servers. It's always a good thing to remember.
SCP: A Secure Alternative to FTP
From a networking perspective, FTP isn't very secure, because usernames, passwords, and data are sent across the network unencrypted. More secure forms such as SFTP (Secure FTP) and SCP (Secure Copy) are available as a part of the OpenSSH package that is normally installed by default on RedHat and Fedora Core. Remember, unlike FTP, SCP doesn't support anonymous downloads like FTP.
The Linux scp command for copying files has a format similar to that of the regular Linux cp command. The first argument is the source file and the second is the destination file. When copying to or from a remote server, SCP logs in to the server to transfer the data and this therefore requires you to supply a remote server name, username, and password to successfully execute the command. The remote filename is therefore preceded by a prefix of the remote username and server name separated by an @ symbol. The remote filename or directory then follows separated by a colon. The format therefore looks like this:
[email protected]:filename [email protected]:directoryname
For example, file /etc/syslog.conf on a server with IP address 192.168.1.100 that needs to be retrieved as user peter would have the format [email protected]:/etc/syslog.conf, the entire /etc directory would be [email protected]:/etc/.
Note: You can download an easy-to-use Windows SCP client called WinSCP from http://winscp.vse.cz/eng/
Copying Files To The Local Linux Box
If you understand how scp represents remote filenames, you can start copying files fairly easily. For example, to copy file /tmp/software.rpm on the remote machine to the local directory /usr/rpm use the commands
[[email protected] tmp]# scp [email protected]:/tmp/software.rpm /usr/rpm [email protected]'s password: software.rpm 100% 1011 27.6KB/s 00:00 [[email protected] tmp]#
To copy the file /tmp/software.rpm on the remote machine to the local directory /usr/rpm using TCP port 435, use the commands
[[email protected] tmp]# scp -P 435 [email protected]:/tmp/software.rpm /usr/rpm [email protected]'s password: software.rpm 100% 1011 27.6KB/s 00:00 [[email protected] tmp]#
Copying Files To The Remote Linux Box
Copying files to the local Linux server now becomes intuitive. For examples, to copy file /etc/hosts on the local machine to directory /tmp on the remote server
[[email protected] tmp]# scp /etc/hosts [email protected]:/tmp [email protected]'s password: hosts 100% 1011 27.6KB/s 00:00 [[email protected] tmp]#
To copy file /etc/hosts on the local machine to directory /tmp on the remote server via TCP port 435, use the commands
[[email protected] tmp]# scp -P 435 /etc/hosts [email protected]:/tmp hosts 100% 1011 27.6KB/s 00:00 [[email protected] tmp]#
SFTP: Another Secure Alternative to FTP
OpenSSH also has the SFTP program, which runs over an encrypted SSH session but whose commands mimic those of FTP. SFTP can be more convenient to use than SCP when you are uncertain of the locations of the files you want to copy, because it has the directory browsing abilities of FTP. Unlike FTP, SFTP doesn't support anonymous logins.
Here is a sample login sequence that logs in, gets help on the available commands and downloads a file to the local server.
[[email protected] tmp]# sftp 192.168.1.200 Connecting to 192.168.1.200... [email protected]'s password: sftp> help Available commands: cd path Change remote directory to 'path' lcd path Change local directory to 'path' chgrp grp path Change group of file 'path' to 'grp' chmod mode path Change permissions of file 'path' to 'mode' ... ... sftp> ls ... ... anaconda-ks.cfg install.log install.log.syslog ... ... sftp> get install.log install.log 100% 17KB 39.4KB/s 00:00 sftp> exit [[email protected] tmp]#
Using SSH and SCP without a password
From time to time you may want to write scripts that will allow you to copy files to a server, or login, without being prompted for passwords. This can make them simpler to write and also prevents you from having to embed the password in your code.
SCP has a feature that allows you to do this. You no longer have to worry about prying eyes seeing your passwords nor worrying about your script breaking when someone changes the password. You can configure SSH to do this by generating and installing data transfer encryption keys that are tied to the IP addresses of the two servers. The servers then use these pre-installed keys to authenticate one another for each file transfer. As you may expect, this feature doesn't work well with computers with IP addresses that periodically change, such as those obtained via DHCP.
There are some security risks though. The feature is automatically applied to SSH as well. Someone could use your account to log in to the target server by entering the username alone. It is therefore best to implement this using unprivileged accounts on both the source and target servers.
The example that follows enables this feature in one direction (from server bigboy to server smallfry) and only uses the unprivileged account called filecopy.
Configuration: Client Side
Here are the steps you need to do on the computer that acts as the SSH client:
1) Generate your SSH encryption key pair for the filecopy account. Press the Enter key each time you are prompted for a password to be associated with the keys. (Do not enter a password.)
[[email protected] filecopy]# ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/filecopy/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /filecopy/.ssh/id_dsa. Your public key has been saved in /filecopy/.ssh/id_dsa.pub. The key fingerprint is: 1e:73:59:96:25:93:3f:8b:50:39:81:9e:e3:4a:a8:aa [email protected] [[email protected] filecopy]#
2) These keyfiles are stored in the.ssh subdirectory of your home directory. View the contents of that directory. The file named id_dsa is your private key, and id_dsa.pub is the public key that you will be sharing with your target server. Versions other than RedHat/Fedora may use different filenames, use the SSH man pages to verify this.
[[email protected] filecopy]# cd ~/.ssh [[email protected] filecopy]# ls id_dsa id_dsa.pub known_hosts [[email protected] .ssh]#
3) Copy only the public key to the home directory of the account to which you will be sending the file.
[[email protected] .ssh]# scp id_dsa.pub [email protected]:public-key.tmp
Now, on to the server side of the operation.
Configuration - Server Side
Here are the steps you need to do on the computer that will act as the SSH server.
1) Log into smallfry as user filecopy. Create an .ssh subdirectory in your home directory and then go to it with cd.
[[email protected] filecopy]# ls public-key.tmp [[email protected] filecopy]# mkdir .ssh [[email protected] filecopy]# chmod 700 .ssh [[email protected] filecopy]# cd .ssh
2) Append the public-key.tmp file to the end of the authorized_keys file using the >> append redirector with the cat command. The authorized_keys file contains a listing of all the public keys from machines that are allowed to connect to your Smallfry account without a password. Versions other than RedHat/Fedora may use different filenames, use the SSH man pages to verify this.
[[email protected] .ssh]# cat ~/public-key.tmp >> authorized_keys [[email protected] .ssh]# rm ~/public-key.tmp
From now on you can use ssh and scp as user filecopy from server bigboy to smallfry without being prompted for a password.
Most Linux security books strongly recommend using SSH and SCP over TELNET and FTP because of their encryption capabilities. Despite this, there is still a place for FTP in the world thanks to its convenience in providing simple global access to files and TELNET, which is much easier to implement in price-sensitive network appliances than SSH. Consider all options when choosing your file transfer and remote login programs and select improved security whenever possible as the long term benefits eventually outweigh the additional cost over time. | <urn:uuid:4718bd43-380e-4eca-8935-f8eb7f47de07> | CC-MAIN-2020-34 | https://wiki.ubuntu.org.cn/Quick_HOWTO_:_Ch17_:_Secure_Remote_Logins_and_File_Copying | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735833.83/warc/CC-MAIN-20200803195435-20200803225435-00041.warc.gz | en | 0.862031 | 8,164 | 3.3125 | 3 |
Proteomics/Protein Identification - Mass Spectrometry
- Mass Spectrometry Overview
Mass spectrometry is a technique in which gas-phase molecules are ionized and their mass-to-charge ratio is measured by observing how the ions accelerate when an electric field is applied. Lighter ions accelerate faster and are detected first. If the mass is measured with sufficient precision, the composition of the molecule can be identified; in the case of proteins, the sequence can be determined. Most samples submitted to mass spectrometry are mixtures of compounds, so a spectrum is acquired to give the mass-to-charge ratios of all compounds in the sample. Mass spectrometry is also known as "mass spec" or simply MS. It throws light on molecular mechanisms within cellular systems: it is used to identify proteins and their functional interactions, to determine protein subunits, and to characterize other cellular molecules such as lipids.
A mass spectrometer is composed of several parts: a source that ionizes the sample, an analyzer that separates the ions based on their mass-to-charge ratio, a detector that "sees" the ions, and a data system to process and analyze the results. Mass spectrometry can also measure the relative abundance of an ion, but different compounds ionize with different efficiencies, so the intensity of an ion is not directly proportional to its concentration.
Mass spectrometry can be a high-throughput analytical method because a mass spectrum can be measured rapidly and with minimal sample handling compared to gel-based methods.
It is an analytical method with a variety of uses outside of proteomics, such as isotope dating, trace gas analysis, atomic location mapping, pollutant detection, and space exploration.
- History of Mass Spectrometry
The history of this technique finds its roots in the first studies of gas excitation in a charged environment, more than 100 years ago. This pioneering work led to the identification of two isotopes of neon (neon-20 and neon-22) through mass-to-charge discrimination by J. J. Thomson in 1913, an early demonstration that the precision of the technique could reveal isotopes. Over the next fifty years the fundamental basis of the technique was further developed. After the coupling of gas chromatography to mass spectrometry in 1959 by researchers at Dow Chemical, the full potential of the technique as a highly accurate, quantitative method for exploring compounds was realized, spurring a wave of developments that continues to the present day.
- Implications of Mass Spectrometry for Proteomics Applications
The technique of mass spectrometry is a valuable tool in the field of proteomics, where variations of the technique can be used to identify proteins. The most common first approach is the bottom-up approach, in which the protein is digested by a protease, such as trypsin, and the resulting peptides are analyzed by peptide mass fingerprinting, collision-induced dissociation, tandem MS, and electron capture dissociation. Once the peptide masses have been determined, the mass list can be sent to a database search engine, such as MASCOT, where the list is compared to the masses of all known peptides. If enough peptides match those of a known protein, the protein is identified. If the masses of the peptides do not match a known protein, the peptide can be sequenced de novo using MS/MS methods: the peptide is isolated and broken along the peptide-bond backbone to form y and b ions, from which the sequence can be read. The advantage of the bottom-up approach is that small tryptic peptide ions are easier to handle biochemically than intact protein ions, and their relatively small masses are easier to determine. The alternative is the top-down approach, in which complete proteins are analyzed directly in the mass spectrometer without prior digestion. Top-down analysis can sometimes provide complete sequence coverage of a protein, but because whole proteins are harder to handle biochemically than small peptides, the approach is more difficult in practice.
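To make the bottom-up workflow above concrete, here is a small, hypothetical Python sketch: it digests a toy protein sequence with trypsin-style cleavage rules (cut after K or R, but not before P), computes monoisotopic peptide masses, and matches observed masses against the candidates within a tolerance. The residue masses are standard monoisotopic values, but the sequence, the observed peak list, and the 0.5 Da tolerance are illustrative assumptions, not values from the text.

```python
# Monoisotopic residue masses (Da) for a few amino acids
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'L': 113.08406, 'K': 128.09496,
    'R': 156.10111, 'F': 147.06841, 'E': 129.04259, 'D': 115.02694,
}
WATER = 18.01056  # mass of H2O added to a free peptide

def tryptic_peptides(seq):
    """Cleave after K or R unless the next residue is P (trypsin rule)."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'KR' and (i + 1 == len(seq) or seq[i + 1] != 'P'):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def peptide_mass(pep):
    """Monoisotopic mass of a free peptide."""
    return sum(RESIDUE_MASS[aa] for aa in pep) + WATER

def match(observed, candidates, tol=0.5):
    """Return candidates whose mass is within tol Da of an observed mass."""
    hits = []
    for pep in candidates:
        m = peptide_mass(pep)
        if any(abs(m - obs) <= tol for obs in observed):
            hits.append((pep, round(m, 3)))
    return hits

protein = 'GASPVKTLRFEDK'
peps = tryptic_peptides(protein)          # ['GASPVK', 'TLR', 'FEDK']
observed = [peptide_mass('TLR'), 999.9]   # one real peak, one noise peak
print(match(observed, peps))              # -> [('TLR', 388.243)]
```

A real search engine such as MASCOT scores matches probabilistically over entire sequence databases, but the core operation is the same kind of tolerance-windowed mass comparison sketched here.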
Another use of mass spectrometry in proteomics is protein quantification. By labeling proteins with stable heavy isotopes, the relative abundance of proteins can be determined. Companies now produce kits, such as iTRAQ (Applied Biosystems), to do this at a high-throughput level.
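The idea of label-based relative quantification can be sketched in a few lines. This is not the iTRAQ algorithm itself; it is a toy illustration in which reporter-channel intensities from a single spectrum are expressed as ratios to a reference channel. The channel names and intensities below are invented.

```python
def relative_abundance(reporter_intensities, reference):
    """Express each channel's intensity as a ratio to a reference channel."""
    ref = reporter_intensities[reference]
    return {ch: inten / ref for ch, inten in reporter_intensities.items()}

# Hypothetical reporter-ion intensities from one MS/MS spectrum,
# one channel per labeled sample
spectrum = {'114': 1.0e5, '115': 2.1e5, '116': 0.9e5, '117': 4.0e5}
print(relative_abundance(spectrum, reference='114'))
# channel 117 is ~4x the reference: the protein is more abundant there
```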
One of the most powerful ways to identify a biological molecule is to determine its molecular mass together with the masses of its component building blocks after fragmentation. There are two dominant ionization methods. The first is electrospray ionization (ESI), in which the ions of interest are formed from solution by applying a high electric field to the tip of a capillary through which the solution passes. The sample is sprayed into the electric field along with a flow of nitrogen to promote desolvation. Droplets form and evaporate in a vacuum region, which increases the charge density on the droplets; the resulting ions are said to be multiply charged and can then enter the analyzer. ESI is a method of choice for several reasons: (1) the "softness" of the phase-conversion process allows very fragile molecules to be ionized intact and even allows some non-covalent interactions to be preserved for MS analysis; (2) fractions eluting from liquid chromatography can be sprayed directly into the mass spectrometer, allowing mixtures to be analyzed; and (3) the production of multiply charged ions allows the measurement of high-mass biopolymers, because multiple charges reduce a molecule's mass-to-charge ratio compared to a singly charged molecule. Multiple charges also improve fragmentation, which in turn allows a better determination of structure. The second method is matrix-assisted laser desorption/ionization (MALDI), in which the molecular ions of interest are formed by pulses of laser light impacting on the sample isolated within an excess of matrix molecules. MALDI enables the determination of masses of large biomolecules and synthetic polymers greater than 200,000 daltons without degradation of the molecule of interest.
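The usefulness of multiply charged ESI ions can be shown with a worked example. Two adjacent peaks in an ESI spectrum typically represent the same molecule carrying z and z+1 protons, so the peak pair alone determines both the charge state and the neutral mass. The sketch below assumes protonation as the charging mechanism (proton mass 1.00728 Da) and uses simulated peak positions for a myoglobin-sized protein of roughly 16,951 Da; it is an illustration, not a full deconvolution algorithm.

```python
PROTON = 1.00728  # mass of a proton in Da

def deconvolute(mz_low, mz_high):
    """Neutral mass from two adjacent charge states of the same molecule.

    mz_low carries one more charge than mz_high (so it sits at lower m/z).
    From m/z = (M + z * PROTON) / z it follows that
    z = (mz_low - PROTON) / (mz_high - mz_low).
    """
    z = round((mz_low - PROTON) / (mz_high - mz_low))  # charge of the higher-m/z peak
    mass = z * (mz_high - PROTON)
    return z, mass

# Simulated adjacent peaks (z = 21 and z = 20) for a ~16951 Da protein
z, M = deconvolute(808.198, 848.557)
print(z, round(M, 1))  # charge 20, neutral mass near 16951 Da
```

This is why ESI makes large biopolymers measurable: a 17 kDa protein appears well below m/z 1000, inside the range of ordinary analyzers.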
The advantages of MALDI are its robustness, high speed, and relative immunity to contaminants and biochemical buffers.
A type of mass analyzer often used with MALDI is TOF, or time-of-flight, mass spectrometry. It enables fast and accurate molar-mass determination, along with sequencing of repeat units and recognition of polymer additives and impurities. The technique is based on an ultraviolet-absorbing matrix: the matrix and polymer are mixed together, with excess matrix and a solvent to prevent aggregation of the polymer. This mixture is placed on the tip of a probe, and the solvent is removed under vacuum, creating co-crystallized polymer molecules dispersed homogeneously within the matrix. A pulsed laser beam set to an appropriate frequency and energy is fired at the matrix, which becomes partially vaporized; the homogeneously dispersed polymer is carried into the vapor phase and becomes charged. Multiple laser shots are accumulated to obtain a good signal-to-noise ratio, which improves the peak shapes and the accuracy of the molar masses determined. Finally, in the TOF analyzer the ions from a sample are all given identical translational kinetic energies by the same electrical potential difference. They travel down an evacuated, field-free tube of fixed length, and the lightest ions arrive first at the detector, which produces a signal for each ion. The cumulative data from multiple laser shots yield a TOF mass spectrum, which expresses the detector signal as a function of arrival time, from which the mass of each ion can be calculated.
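The time-of-flight relation described above can be written down directly: every ion accelerated through the same potential V gains kinetic energy zeV, so zeV = mv^2/2, and the flight time over a drift length d is t = d*sqrt(m/(2zeV)). The sketch below uses assumed instrument parameters (20 kV accelerating voltage, 1 m drift tube) purely for illustration.

```python
import math

E = 1.602176e-19    # elementary charge (C)
AMU = 1.660539e-27  # atomic mass unit (kg)

def flight_time(mass_da, charge, volts=20_000.0, drift_m=1.0):
    """Drift time of an ion: t = d * sqrt(m / (2 * z * e * V))."""
    m = mass_da * AMU
    return drift_m * math.sqrt(m / (2 * charge * E * volts))

# Lighter ions arrive first: a 1000 Da ion beats a 2000 Da ion,
# and doubling the mass stretches the flight time by sqrt(2)
t1 = flight_time(1000, 1)
t2 = flight_time(2000, 1)
print(f"{t1 * 1e6:.1f} us vs {t2 * 1e6:.1f} us")
```

Note that flight time scales with sqrt(m/z), which is why the detector signal versus time can be converted into a mass spectrum by a simple calibration.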
In addition to these ionization techniques, highly powerful mass analyzers have been developed. These analyzers measure the mass-to-charge ratio of intact ionized biomolecules, as well as their fragmentation spectra, with high accuracy and high speed. The measurement of fragmentation spectra is called tandem MS, or MS/MS. In conjunction with single-stage MS of the intact precursor ions, tandem MS can be used to help elucidate a protein, since the problem reduces to assembling the puzzle pieces of the fragmented protein.
Newton’s laws of motion allow the motion of matter (i.e., objects) to be quantified and, by calculation, the movement of objects to be predicted. Because of these laws we can determine the effect of two objects interacting (colliding), as when a golf club hits a golf ball, or what happens when we try to move a large object without applying an adequate external force.
Given that the conception of our system of economics was informed by Newtonian mechanics, it is not surprising to see J. B. Watson’s and B. F. Skinner’s behaviorism, with its stimulus-response model, informing the methods of management. Just as Newton was able to determine the movement of objects precisely, management theorists and practitioners sought a similar result for the behavior of people.
Physics of External Forces
That is, human action is viewed in large part, if not wholly, as a response to external stimuli. The implication is that without the introduction of external forces, no action is expected. It further follows that if people act in response to stimuli, then applying the right stimulus will make people act as desired. Thus, management’s responsibility is to provide the right external force (i.e., extrinsic motivation).
What makes this feasible is the unquestioned belief that people are economically or materially driven. This driving force, the urge to maximize one’s material self-interest, is assumed to act universally on all of us. Effectively, it has been made the equivalent of gravity for human action. The following excerpt from It’s the EconoME, Stupid reveals the limiting effect of this assumption.
S: I am not quite clear on what you are getting at.
Q: Well, we can choose to structure life—bringing (our sense of) order to life—by seeking to satisfy the desires emerging out of our animal nature or by appealing to our higher, uniquely human nature.
S: Are you saying we can either act like animals or we can act like people; that we can be barbarians or we can be civilized?
Q: Not exactly, although I suspect we’ve all met some animals that have acted far more civilized than some people we know.
What I am saying is that we can choose to focus attention on our lower level desires as a way of ordering our world or we can choose to appeal to our higher level nature. We can create societal systems that either cause individuals to structure their lives consistent with the needs and wants we share with all other animals or that cause individuals to orient their lives consistent with realizing higher level capabilities that only human beings have.
Recall that we discussed earlier that if we set up the conditions such that people’s options are limited to choosing between pleasure and pain in the next moment, then it would be a mistake to conclude that the primary driver in people’s behavior is pleasure-seeking and pain-avoiding—these are lower level behavioral drivers we share with all other animals.
S: So, if we order life according to the notion that people are pleasure- seeking and pain-avoiding, then are we guiding people to order life according to their animal nature?
Q: Yes, and as a result we are limiting the human potential that lies in every person. It causes individuals not to recognize the value of being human; often this awakening is only realized when a person faces the prospect of his or her life coming to an end.
S: So we are more than intelligent animals capable of ordering life according to the satisfaction of selfish passions in the moment?
Q: Yes, life need not be structured according to this very limited view of what it means to be human…
Many of the popular methods of management, such as pay-for-performance incentive systems, merit pay systems, rank and yank systems, annual performance appraisal systems, management by exception/results, and management by objective—which continue to be taught in almost every business school—are grounded in the stimulus-response model of human behavior. While the merits of Newton’s laws of motion are remarkable, it does not make them everywhere applicable. Such teachings and practices reflect the application of the science of the external (material) world to the phenomena of the interior (non-material) world.
It is inappropriate to apply the science of objects in motion to the realm of human behavior. In effect, doing so requires relating to subjects as objects so that the physics of objects can apply, which is an inhumane and thus immoral way of treating people. We are not objects; we are subjects, and we have the right to be treated as such.
As subjects we have the capability and the right to choose how we will be-in-the-world. We have free will and the right to exercise it. But, with everyone directed to strive for the same thing—material gain—free will is cancelled out. This gravitational-like force imposed by both the system of economics and management practice is just that, an imposition.
Same But Different
Inherently, people don’t act primarily as a result of forces external to them. Each individual is guided by intangibles (e.g., beliefs, values, attitudes, feelings) that lie within. In each and every moment, an individual has a choice of not only whether to act but also how to act. This choice involves a process guided by the intangible contents of mind. The strength of each intangible to influence one’s choices is associated with the amount of energy it presents in a given situation: the more firmly held a belief, or the stronger an emotion, the greater its energy and thus its influence in the decision. Moreover, each of us has the freedom to change what he or she holds in mind. That is, changing one’s mind is our choice, an act of free will. Human behavior is far too complex to fit neatly into a linearly deterministic stimulus-response (or goal-action) model.
Physics of the Invisible
So if the physics of the external material world is not appropriate to inform methods and practices of management, what is? What science helps us understand the world of the non-material and intangible? Quantum physics, the physics of the subatomic world. In that world, things (subatomic particles) are not objects at all but rather energies (quanta) that show probabilistic tendencies to exist or happen. The manifestation of these interconnections of energy is variation in the observed phenomena.
So how do we know these quanta exist? By their effects! Quite analogously, we’ve never seen, touched, tasted, or smelled a belief, but we have surely observed its influence on people’s choices. At the subatomic level there is indeterminacy: interactions do not take place at a definite place and time but rather exhibit a likelihood of occurring, hence variation is inherent in our world. Moreover, we know choices aren’t deterministic but probabilistic: groups of people holding a particular belief in mind will show a tendency, a probability, to act in a like manner, unless of course they change their minds. Thus certainty about the future is not a possibility, and neither is control of it.
So what must management/leaders do? A good first step is to cease driving for results and begin enabling potential—be a mentor of people, not a mechanic of the machine. This means not treating people as objects but rather relating to them as subjects, core-to-core. To do this, those in authority will have to stop relying on positional authority and begin developing their personhood as a way of influence.
But how do you manage in a complex and chaotic world that is influenced by the energy of intangibles that show a tendency to happen? How do you manage when things are so variable? First, learn how to think statistically and cease managing by exception and by the numbers. An important principle for guiding action is that an understanding of the causes of variation requires first an understanding of the variation caused. That is, learn how to interpret patterns of variation for the purpose of transforming them into knowledge. Greater knowledge means sounder decisions and more effective leadership.
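A process-behaviour (control) chart is one classic way to practice this kind of statistical thinking: it separates routine "common cause" variation from exceptional "special cause" variation, so a manager reacts to signals rather than chasing every fluctuation. The sketch below is a minimal Python illustration; the weekly figures and the three-sigma rule of thumb are assumptions for the example, not anything from the essay.

```python
import statistics

# Hypothetical weekly output figures (all invented for illustration):
# the first 8 weeks establish a baseline; the last 4 are monitored.
baseline = [52, 48, 51, 47, 50, 53, 49, 46]
monitored = [51, 50, 48, 62]

mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)

# Shewhart-style three-sigma limits: variation inside the limits is
# routine "common cause" noise; a point outside them signals a
# "special cause" worth investigating.
lower, upper = mean - 3 * sigma, mean + 3 * sigma

flags = ["investigate" if not (lower <= v <= upper) else "routine"
         for v in monitored]

for week, (value, flag) in enumerate(zip(monitored, flags), start=9):
    print(f"week {week}: {value} ({flag})")
```

A manager "managing by exception" would react to every dip in the numbers; the chart suggests investigating only the genuinely unusual week, which is the point of interpreting variation before acting on it.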
The human world is a highly relational and variable world; it is not an independently acting, deterministic world, so let’s cease managing as if it were!
by Linda Booth Sweeney
Book author: Margaret Bloy Graham
Publisher: Harper & Row, 1967
Format: picture book; fiction
Age range: 4 years old and up
Systems Thinking Concepts:
Simple interrelationships, unintended consequences, reinforcing and balancing feedback, time delays, “fixes that fail”
A Quick Look at the Story
A little boy is moving to a new apartment that doesn’t allow pets. Not having a place for his pet spider “Helen” to stay, he decides to leave her in a box at the front gate of the zoo. Inside the box is a note asking that the zookeeper take care of her. When the zookeeper opens the box, Helen escapes and makes her way into the lions’ cage. Before the arrival of Helen, the lioness and her cubs were miserable, covered in flies from mane to paws. Helen, whose favorite meal is flies, sets up her web in the corner of the lions’ cage and begins to feast. A week later, Helen has eaten all the flies in the lions cage and so moves next door to the elephant house.
This weekly migration of spinning, eating and moving on continues and the zoo becomes a peaceful, fly-free place for all. The harmony is broken when the zookeeper decides the zoo needs to be cleaned up for an upcoming inspection by the mayor. Despite a protest from one of his assistants that “spider webs are supposed to be sort of useful,” the zookeeper decides that all spider webs must go! With that, the balance among flies, animals and Helen is broken. To avoid the cleaners, Helen disappears into a crack in the ceiling of the camel house and remains hidden there for several days. At first, the zoo looks spotless. However, with Helen gone, the flies begin to come back after a few days. Helen spins her web at night in the camel’s cage but does not travel around to other cages for fear of being swept away by the zoo attendants. All the other animals once again begin to look miserable, except for the camel. The zookeeper and his assistant finally realize the role Helen has played in maintaining harmony within the zoo. The “ah-hah!” leads the zookeeper to establish a new rule at the zoo: “Be nice to spiders.”
There are numerous exploratory paths that can lead to a systems-based analysis of this story, such as the connection between humans and nature, a focus on short-term vs. long-term consequences of decisions and actions, and the web of interrelationships within a seemingly straightforward system.
The Intersection of Human and Natural Worlds
In general, any point at which humans and the natural world interact provides a rich source of teaching opportunities about the behaviors of systems. You can begin exploring this story by simply talking about one’s “mental models” or assumptions regarding the natural world. A good way to start is to ask students what comes to mind when they think about spiders. It is almost guaranteed that someone (or two) will cry: “Yuck! Spiders are creepy!” This is a fairly common opinion among children as well as among many adults. Yet what many don’t know is that spiders help humans and animals by trapping bothersome or harmful insects. In fact, studies in England and Wales show that spiders kill 200 trillion insects in those countries annually. Yet the perception of spiders as disgusting, harmful creatures looms large. By not knowing the value of a spider, the zookeeper makes a decision that has noticeably undesirable and unintended consequences.
To explore the intersection of humans and nature in this story, you could also use the iceberg, a systems thinking visual tool. (The event: prepare for inspection; pattern over days: wipe away webs; structure/policy: eliminate webs and Helen, if possible; mental model: spiders and their webs are not a positive/useful part of the zoo’s existence.)[i]
Expanding Time Horizons
We see that the zookeeper focuses on his short-term need: cleaning up the zoo for an upcoming inspection by the mayor. This focus on the short-term vs. long-term provides a good opportunity to look at a behavior over a set period of time. The time horizon in this story is approximately 10 weeks, beginning with Helen’s arrival and ending with the birth of Helen’s baby spiders. What behaviors shall we graph? Students might graph the volume of spiders and spider webs. Another perhaps more pertinent variable is the harmony level of the zoo. A graph of the harmony level in the zoo might look like this:
At week 0, the harmony level is relatively low. When Helen arrives, the harmony level slowly increases as Helen begins the slow process of reducing the number of pesky flies in the zoo. At week five or six, the harmony level begins to decrease as the zoo attendants clear away the spider webs. By week nine, the animals in the zoo are miserable once again. Finally, in week ten, Helen, and ultimately her babies, is welcome at the zoo and harmony begins to be restored.
A Look At the System’s Interrelationships
The next question that one taking a systems perspective might ask is: “What set of interrelationships might be causing this pattern of behavior?” You might begin with a simple feedback loop:
Taking some license with the story, we can say that the more spiders, the more the zoo attendants will remove the spiders and their webs. The more the webs are removed, the fewer the spiders. The fewer the spiders, the less the zoo attendants will remove webs. To more closely capture the dynamics of the “Be Nice to Spiders” story, we can map the interrelationships in a diagram that resembles an archetypal “story” called “Fixes that Fail” or “Fixes that Backfire.” The basic idea behind this archetype is that just about every decision has short- and long-term consequences, and very often the two outcomes are at cross-purposes. [ii]
In this archetypal pattern of behavior, the problem symptom can be thought of as the “unattractiveness of the zoo.” When spider webs make the zoo appear messy, the zookeeper’s suggested “fix” is to remove all spiders and spider webs. (Thus, we can label the fix “removal of spiders/webs.”) Here the zookeeper exhibits what we can think of as “open loop” thinking; he doesn’t detect or anticipate that his decision might negatively influence, over time, the appearance of the zoo. At first, his decision results in a decrease in the number of spider webs and he sees an initial improvement in the appearance and attractiveness of the zoo. Over time, though, the number of pesky flies increases dramatically and the attractiveness and harmony of the zoo begin to decline. We certainly should have a healthy amount of empathy for the zookeeper. Many research studies with middle school students and adults show that people of all ages tend to focus on the short-term impact of decisions and tend not to consider feedback dynamics. (That is, people often consider how A influences B, but not how B eventually comes back around to influence A.)[iii]
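The fixes-that-backfire dynamic described above can also be turned into a toy simulation. Everything below is invented for illustration (the breeding rate, a spider's appetite, the "harmony" formula, the ten-week horizon); the only point is the characteristic shape of the curve: things improve while the balancing loop is intact, then decline after the quick fix removes it.

```python
# Toy simulation of the "fixes that backfire" loop in the zoo story.
# Every number here is invented; only the shape of the curve matters.

flies, spiders = 100.0, 8.0
web_removal_week = 5              # the zookeeper's "fix" starts here
history = []                      # (week, flies, harmony) tuples

for week in range(1, 11):
    if week >= web_removal_week:
        spiders = 0.0             # webs (and Helen) swept away
    flies *= 1.30                 # flies breed 30% per week...
    flies -= min(flies, spiders * 12)  # ...each spider eats up to 12
    flies = max(flies, 5.0)       # a few flies always remain
    harmony = 100.0 / (1.0 + flies / 20.0)  # more flies, less harmony
    history.append((week, round(flies), round(harmony)))

for week, f, h in history:
    print(f"week {week:2d}: flies={f:4d}  harmony={h:3d}")
```

Printed week by week, harmony climbs while Helen is at work and slides back down once the webs are gone, echoing the behavior-over-time graph of the zoo's harmony level.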
After mapping out the causal loops in the story, you can explore with students where they see “fixes that backfire” patterns either in the news or in their everyday lives. Here you might talk about road-building programs (depending on the age of your group). A state highway department decides to build a new highway to ease traffic congestion. Inevitably, the new highway solves the problem in the short term, but the system reacts to compensate (more people drive on the new highway), thereby undermining the impact of the intervention and creating additional pressure. Students may also consider the “fixes that backfire” nature of adding additional ski lifts at popular ski resorts. For example, the more ski lifts, the more crowds, the longer the lines at the ski lifts. Or the impact of increased drug enforcement: as drug enforcement efforts increase, prisons become overcrowded, which then forces some criminals back on the streets sooner than originally intended.
Additional Questions to Consider
In other systems-based reviews, I have suggested questions the teacher or parent may ask the student. In this story, let’s change perspectives and instead ask the students to pose questions to the central figure of the story, that is, the spider herself. This approach is drawn from the African Primary Science program and has been called by educator Eleanor Duckworth and others the “ask the things themselves” principle.[iv] For example, students might ask:
What do you eat, Helen?
Where are the male spiders?
How do you decide where to put your web?
Why are some people afraid of you?
The teacher may also ask these questions of students:
What happened in this story?
Where else do we see decisions that cause unintended consequences or cause “things to happen” that were not intended and aren’t desirable?
If you were a zookeeper who understood the value of spiders but also wanted as attractive a zoo as possible, what rule(s) would you put in place?
We can liken this story to the steady-state (or self-regulating) nature of The Old Ladies Who Liked Cats. Ask students, “What is the state of the system with the spiders?” If “state” is too abstract a term, try swapping state for “condition” or “experience.” What is the state without the spiders? Much of systems thinking involves attempting to understand the overall state or condition of a system. The Lorax is another good example of what occurs when a system in balance is disrupted. (For a systems-based review of The Lorax, see p. 88 of When a Butterfly Sneezes.) For another book based on a true story of ecological disruption, see The Day They Parachuted Cats on Borneo: A Drama of Ecology by Charlotte Pomerantz (Young Scott Books/Addison-Wesley, Massachusetts, 1971, ages 8-12). Written in verse, this book tells how spraying for mosquitoes in Borneo affected the entire ecological system, including cockroaches, rats, cats, geckoes, the river, and eventually the farmers. There is also a short story on the topic for older readers called “Top of the Food Chain” in Without a Hero (and Other Stories) by T. Coraghessan Boyle (Viking, Penguin Books, New York, 1994).
[ii] For more information on the Fixes that Fail/Backfire archetype, see the appendix of Senge’s The Fifth Discipline; the Pegasus Communications, Inc. publication “Systems Thinking Tools” by Daniel Kim, page 20; and Senge et al.’s The Fifth Discipline Fieldbook, pp. 126-129.
[iii] There is ample evidence to suggest that children and adults tend to misperceive feedback when feedback exists. Instead, individuals tend to envision the natural (and social) world in terms of linear cause and effect or other forms of causality. If you are interested in learning more about this research, here are several suggested readings:
Dörner, D. (1980). “On the Difficulties People Have in Dealing with Complexity.” Simulations and Games 11(1): 87-106.
Grotzer, T. and B. Bell Basca (2000). Helping Students to Grasp the Underlying Causal Structures when Learning about Ecosystems: How does it impact understanding? National Association for Research in Science Teaching Annual Conference, Atlanta, Georgia, Understandings of Consequences Project, Harvard Graduate School of Education.
Hogan, K. (2000). “Assessing students’ systems reasoning in ecology.” Journal of Biological Education 35: 22-28.
Sterman, J. (1989). “Misperceptions of Feedback in Dynamic Decision Making.” Organizational Behavior and Human Decision Processes 43(3): 301-335.
[iv] For more on this approach see Duckworth’s “The African Primary Science Program: An Evaluation and Extended Thoughts” (1978, published by the University of North Dakota) as well as “Exploring, Sensibility and Wonder: Science with Young Children and Using the Senses” by Kees Both (1978, in Growing up with Science: Developing Early Understanding of Science, published by Academia Europaea, p. 146) and “Ask the ant lion: the growth of an African primary science unit” by J. Elstgeest (1971, in New Trends in Integrated Science Teaching, published by UNESCO).
In many places of the world, rice is the principal staple crop. Modern agriculture relies on fertilizer, and much of that fertilizer is based on so-called natural gas, which accounts for up to 90% of its costs.
Because of the ongoing Russian-Ukraine war, the price of natural gas is fast rising, leading fertilizer prices to climb in tandem. This, in turn, necessitates further research into methods of growing food with less nitrogen-based fertilizer.
Responding to this dilemma, Chinese researchers created a type of genetically modified rice that grows while requiring less nitrogen. In fact, it increases food production by 40% to 70%.
The researchers began by looking at proteins known as transcription factors, which are known to govern photosynthesis.
To locate the right target, they searched a collection of 118 transcription factors previously discovered to govern photosynthesis in rice and maize for those that were also increased in response to light and low nitrogen levels. When they discovered one, they created transgenic lines that produced an abundance of it.
The genetically engineered plants were planted in fields with varying environmental conditions. Over a three-year period, all of the rice plants improved their photosynthetic capability and nitrogen-use efficiency.
They contained more chlorophyll and bigger chloroplasts than wild-type plants. They also had more effective nitrogen intake in their roots than wild-type plants, as well as more efficient nitrogen transport from their roots to their shoots.
This increased grain output even when the plants received less nitrogen fertilizer.
The researchers also tested their new method on wheat and Arabidopsis, the most extensively used plant biology model organism. Both showed the same production improvements while consuming less nitrogen as the experimental rice plants.
The scientists propose that genome editing, rather than transgenic techniques, should be used in other crops to increase productivity.
References: the journal Science, CleanTechnica, Wikipedia, Statista
Anzac Nations: legacy of Gallipoli in NZ and Australia
The legacy of Anzac: why is it still so significant?, Times Online, 21 April 2022
Why do Kiwis and Australians still view the Gallipoli campaign, fought more than 100 years ago on a distant Turkish peninsula, as an expression of national identity?
In Anzac Nations: The legacy of Gallipoli in New Zealand and Australia, 1965-2015 (Otago University Press, 2022) University of Auckland historian Dr Rowan Light examines the myth-making around Anzac and how the commemoration has evolved.
He looks at what shapes this collective remembrance and how commemorations differ between the two nations, as well as exploring how societies make meaning and express value and beliefs through practices like remembrance and commemoration.
He says in 1965, many assumed the tradition of remembering the Anzacs wouldn’t survive beyond the death of the last Gallipoli veteran.
“Whereas when we came to the Anzac Centenary in 2015 the Australian federal government outspent all other countries, and New Zealand’s centenary programme was the largest commemoration in the country’s history.”
Dr Light is also interested in who has authority over what is – and isn’t – remembered on April 25 and why this national memory focuses so heavily on the place and experience of Gallipoli, rather than on other aspects of past violence at home or abroad.
“The way we commemorate the Anzacs at Gallipoli has changed in ways that would surprise most New Zealanders. Whereas in 1965, Anzac Day was an exclusive practice centered on the image of the citizen-soldier as an imperial and masculine warrior, today we can see the sense of public ownership over April 25, in which the New Zealand public are front and centre of a nationally inclusive day.”
He says part of this shift is that we expect our political leaders to speak on Anzac Day and extol the national values of Anzac in ways that would have been extraordinary to earlier generations.
The book looks at the changing and contested meanings of Anzac from the 1960s to the 1980s, the expanded role of the state in commemoration since 1990 and responses to these shifts by indigenous and non-indigenous communities.
It brings together stories and evidence from both Australia and New Zealand and includes examples of how groups of people, such as writers, filmmakers, protestors and prime ministers have reinvented the story of the Anzacs for public audiences.
Based on archival work done on both sides of the Tasman, Dr Light’s research is unique because it offers, for the first time, a study of Anzac commemoration in both countries, drawing on comparative and transnational approaches.
“A comparative lens allows us to see more starkly the contrast between different national uses of the Anzac story, while transnationalism opens up the possibility that one has influenced the other,” he says.
Dr Light teaches Aotearoa New Zealand histories in the University of Auckland’s Faculty of Arts. He is also project curator at the Auckland War Memorial Museum Tamaki Paenga Hira, assisting with research on the history, remembrance and commemoration of the New Zealand Wars. | <urn:uuid:b6032e7c-115f-40c2-8ac6-bf072b5fa805> | CC-MAIN-2023-06 | https://camd.org.au/anzac-nations-legacy-of-gallipoli-in-nza/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.66/warc/CC-MAIN-20230203091020-20230203121020-00023.warc.gz | en | 0.947789 | 670 | 3.0625 | 3 |
What is a food allergy?
A food allergy is an immune system response to a food that the body mistakenly believes is harmful. Once the immune system decides that a particular food is harmful, it creates specific antibodies to it. The next time the individual eats that food, the immune system releases massive amounts of chemicals, including histamine, in order to protect the body. These chemicals trigger a cascade of allergic symptoms that can affect the respiratory system, gastrointestinal tract, skin, or cardiovascular system. Scientists estimate that approximately 12 million Americans suffer from true food allergies.
Although an individual could be allergic to any food, such as fruits, vegetables, and meats, there are eight foods that account for 90% of all food-allergic reactions. These are: milk, egg, peanut, tree nut (walnut, cashew, etc.), fish, shellfish, soy, and wheat.
What is the difference between food allergy and food intolerance?
Although food intolerances share some of the symptoms of food allergies, they do not involve the immune system. They can cause great discomfort but are not life-threatening. People with food intolerances are not able to digest certain foods because their bodies lack the specific enzyme needed to break down that food. For example, if you are lactose intolerant, you are missing the enzyme lactase, which breaks down lactose, a sugar found in milk and other dairy products. The words “gluten intolerance” are sometimes used to describe Celiac disease. However, Celiac disease does involve the immune system and can cause serious complications if left unchecked.
What is the best treatment for food allergy?
Strict avoidance of the allergy-causing food is the only way to avoid a reaction. Reading ingredient labels for all foods is the key to maintaining control over the allergy. If a product doesn’t have a label, allergic individuals should not eat that food. If a label contains unfamiliar terms, shoppers must call the manufacturer and ask for a definition or avoid eating that food.
What are the common symptoms of a reaction?
Symptoms range from a tingling sensation in the mouth, swelling of the tongue and the throat, difficulty breathing, hives, vomiting, abdominal cramps, diarrhea, drop in blood pressure, and loss of consciousness to death. Symptoms typically appear within minutes to two hours after the person has eaten the food to which he or she is allergic.
What is the best treatment for a food allergy reaction?
Epinephrine, also called “adrenaline,” is the medication of choice for controlling a severe reaction, also called anaphylaxis. It is available by prescription as a self-injectable device (i.e., Auvi-Q® or EpiPen®).
Is there a cure for food allergies?
Currently, there are no medications that cure food allergies. Strict avoidance is the only way to prevent a reaction. Most people outgrow their food allergies, although peanuts, nuts, fish, and shellfish are often considered lifelong allergies. Some research is being done in this area and it looks promising.
Inherited gene variation helps explain drug toxicity in patients of East Asian ancestry
Approximately 10% of young leukemia patients of East Asian ancestry inherit a gene variation associated with reduced tolerance of an indispensable drug for acute lymphoblastic leukemia (ALL), the most common childhood cancer.
Researchers reported that patients who inherited either one or two copies of the newly identified variation in the NUDT15 gene were extremely sensitive to mercaptopurine. Patients with the NUDT15 variant required their mercaptopurine dose reduced by as much as 92%. At standard doses, patients developed side effects that caused treatment delays and threatened their chance for cure.
The study was led by scientists from St. Jude Children's Research Hospital in Memphis, Tennessee, and the findings were published in the Journal of Clinical Oncology (2015; doi:10.1200/JCO.2014.59.4671).
The finding should aid efforts to improve identification and treatment of patients who need reduced doses of mercaptopurine. The drug is the backbone of chemotherapy that has helped transform the outlook for young patients with ALL. At St. Jude, 94% of patients with newly diagnosed ALL now become long-term survivors.
In this study, patients of East Asian and Hispanic background were more likely to inherit the NUDT15 variant than those from other racial and ethnic groups. Among patients of East Asian ancestry, 9.8% carried at least one copy of the NUDT15 variant, compared with 3.9% of Hispanic patients. The variant was more rare among those of European or African ancestry. East Asia includes China, Japan, and Korea.
"Mercaptopurine intolerance has been suspected to be a problem for young ALL patients of East Asian ancestry. Even at very low doses, the patients often develop toxicity that delays treatment, but until now the genetic basis of the problem was unknown," said first and corresponding author Jun J. Yang, PhD, an assistant member of the St. Jude Department of Pharmaceutical Sciences.
St. Jude is a pioneer in the field of pharmacogenetics, which focuses on how inherited differences in the makeup of genes influence patients' drug responses. This study confirmed previous St. Jude research that showed variations in another gene, TPMT, were also associated with an increased risk of mercaptopurine toxicity. St. Jude patients are now routinely tested for the TPMT variants, and the results help determine the mercaptopurine dose patients receive.
The TPMT variants did not completely explain mercaptopurine toxicity. The TPMT variants are less common in persons of East Asian ancestry. TPMT carries instructions for a protein of the same name. Some patients with normal TPMT still cannot tolerate standard doses of the drug. "That suggested there are other inherited genetic risk factors at play," Yang said.
Six percent of the patients in this study who required more than a 50% reduction in mercaptopurine inherited neither the TPMT nor NUDT15 variants. "Other factors, both genetic and nongenetic, are still to be discovered to improve the safety and effectiveness of mercaptopurine treatment for children with ALL," said senior author Mary Relling, PharmD, chair of the St. Jude Department of Pharmaceutical Sciences. | <urn:uuid:dcbad925-1ebe-4183-932f-3d6fe14cfcc5> | CC-MAIN-2017-26 | http://www.oncologynurseadvisor.com/web-exclusives/inherited-gene-variation-helps-explain-drug-toxicity-in-patients-of-east-asian-ancestry/article/398062/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320226.61/warc/CC-MAIN-20170624050312-20170624070312-00455.warc.gz | en | 0.95206 | 667 | 2.671875 | 3 |
Dozens of studies over the years have found that immigration has little or no effect on crime in the U.S., on average. But research forthcoming in Criminology shows those studies offer an incomplete picture.
What we know about the relationship between crime and immigration is based largely on crimes that have been reported to police. But victims are much less likely to report a violent crime in areas that have drawn large numbers of immigrants in recent decades, a new study finds.
In fact, as the proportion of immigrants rises in these “new destination” areas, odds plummet that a victim will go to police to report crimes such as aggravated assault, robbery and rape.
In neighborhoods where 10% of residents were born outside the U.S., the probability of reporting is 48%, researchers estimate. In neighborhoods where 65% of residents are immigrants, there is a 5% chance that a victim will report.
The study’s authors looked at crime reporting in counties with long-established immigrant communities — New York City, Los Angeles and Chicago, for example — and compared it to crime reporting in counties that began attracting immigrant residents after 1990.
They find that crime reporting in counties with long-established immigrant neighborhoods is at about the same level as crime reporting in neighborhoods without large numbers of immigrants. But crime reporting is lower in immigrant neighborhoods located in counties the researchers call “new destinations,” which have “shorter histories of concentrated immigrant settlement.”
The study’s lead author, Min Xie, an associate professor of criminology and criminal justice at the University of Maryland, told Journalist’s Resource that the findings apply to all victims who live in these neighborhoods and are Latino, white or black. She and co-author Eric Baumer, a professor of sociology and criminology at The Pennsylvania State University, did not have information about victims’ immigration status at the time of the study.
Looking for ‘alternative data sources’
To generate these estimates, Xie and Baumer analyzed federal survey data collected between 1996 and 2014 from individuals aged 12 and older who were asked about crimes committed against them. The researchers focused on the 19,225 cases of violent crime that were recorded by the survey, including whether those crimes were reported to police.
Xie says pairing the survey data with census tract data allowed her and Baumer to assess how much crime isn’t reported to police and who’s not coming forward.
“Even though we understand crime statistics from police departments are important data sources, they are limited,” Xie says.
She says that the study’s findings do not contradict prior research, but rather offer additional context to help explain the relationship between immigration and crime.
“It’s important to use alternative data sources so we can understand the relationship better,” she says.
Prior research and its limitations
Last year, the Annual Review of Criminology published a review of 51 studies on crime and immigration dated between 1994 and 2014. Overall, according to the review, the most common outcome reported in these studies “is a null or nonsignificant association between immigration and crime.”
When the authors of the review analyzed data from the 51 studies, they found a slightly negative relationship between crime and immigration, but that the magnitude of the relationship “is so weak it is practically zero.”
The review’s authors also note that official crime data are not sufficient to answer a lot of questions about crime and immigration. For example, the data does not distinguish between documented and undocumented immigrants. Also, crimes such as sexual assault, gang violence and domestic violence are underreported among immigrants for various reasons, including language barriers and a fear of authorities, the authors explain.
“To truly advance research on the immigration-crime nexus, critical data limitations must be overcome, including incorporating information about nationality in official data collection efforts, further distinguishing between documented and undocumented status in the data, and addressing the problem of underreporting, especially with respect to immigrant victims,” write the authors, Graham Ousey of the College of William and Mary and Charis Kubrin of the University of California, Irvine.
Xie says research that relies on self-reported data about immigrants, some of whom do not want to draw attention from authorities, may not be as accurate as researchers would like. She says she’s investigating whether unauthorized immigration affects crime reporting levels.
“We are actually right now trying to do research to incorporate information about the estimated number of unauthorized immigrants — we are hoping to incorporate that data to see whether or not that would affect our findings,” she says.
Xie says she also plans to look at how reporting levels may differ among neighborhoods when it comes to property crimes such as burglary and motor vehicle theft.
Story ideas for journalists
- Research studies of crime often try to present a picture of the nation as a whole, Xie explains. She suggests journalists remind their audiences that researchers generally report their findings as averages. She also urges journalists to look at how local communities differ from national averages in terms of how immigration affects neighborhoods as well as how crime is reported and how police respond to immigrants. “There could be some local variations that are important,” she says.
- Research conducted in other countries offers important insights that could affect how Americans think about immigration and immigration policy, Xie says. “Don’t forget the U.S. is not the only place dealing with immigration,” she says. It’s a good idea for journalists to examine those studies and put them into context.
Writing about immigration? Read our tip sheet on covering immigration from Angilee Shah, a former senior editor at Public Radio International, and our tip sheet on covering Latino immigrant communities specifically from Maria Hinojosa, the anchor and executive producer of the NPR show “Latino USA.” | <urn:uuid:c45f4055-d5cd-452f-9c15-a2af999e3b51> | CC-MAIN-2021-43 | https://journalistsresource.org/politics-and-government/immigration-crime-research-victim/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585380.70/warc/CC-MAIN-20211021005314-20211021035314-00314.warc.gz | en | 0.965678 | 1,212 | 2.578125 | 3 |
© David Kaplan.
The vertical line of letters in the brackets represents particles of the Standard Model. The d stands for down quark, the e for electron, and the Greek letter ν (pronounced "nu") for neutrino. Grand unified theories typically group quarks and leptons together into the same grouping of particles. Note that the strong force carrier (the gluon) changes quarks from one color to another. The weak force carrier (here, the W boson) can change an electron to a neutrino. If all of these particles are in the same grouping (technically, "representation" of the symmetry), then there would also be force carriers, X, which change quarks into leptons. (Unit: 2)
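The column of particles the caption describes can be sketched as a single multiplet. A common textbook example is the 5-dimensional anti-fundamental representation of SU(5); the caption does not name a specific model, and charge-conjugation conventions are glossed over here, so this is an illustrative assumption rather than a statement of the figure's exact content:

```latex
% Schematic multiplet grouping three color copies of the down quark
% (r, g, b) with the electron and neutrino, as in the figure:
\bar{5} \sim
\begin{pmatrix}
  d_r \\ d_g \\ d_b \\ e^- \\ \nu_e
\end{pmatrix}
% Gluons rotate among the first three (color) entries; the W boson
% rotates between the last two (lepton) entries; the hypothetical
% X bosons of the caption connect the quark block to the lepton block.
```

Because the X bosons connect quark and lepton entries of the same multiplet, grand unified theories of this type generically predict proton decay, which is why they are tightly constrained by experiment.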
The myrtle family (Myrtaceae) consists of trees and shrubs that originated in Australia and includes approximately 140 genera and 3,000 species. In general, the family prefers tropical, subtropical and temperate climates, though some species can survive outside these ranges. Myrtle trees are a favorite addition to a garden because they grow fast and produce flowers in various colors that add visual appeal to any type of landscape design.
Of all the myrtle trees, crape myrtles (Lagerstroemia indica) are the most popular because they bloom for a long period. Used in most residential, urban and rural gardens, crape myrtles are drought-tolerant. They grow in any type of soil as long as it drains well.
Crape myrtles vary in size, ranging from small (dwarf) to medium. Dwarf types grow less than 4 feet tall after five years, while medium crape myrtles can grow more than 20 feet tall after 10 years. Choose from a long and ever-growing list of crape myrtle cultivars to add to your garden, with flower colors of white and various shades of pink, purple, lavender and red that bloom for 75 to 110 days in summer and fall.
In the winter, crape myrtles may lose their leaves, but they still look attractive because of their interesting peeling bark, in colors ranging from gray, white and pale cream to rust, bright orange and cinnamon brown underbark.
Some of the popular crape myrtles are 'Natchez' and 'Muskogee'. Both varieties grow up to 20 feet tall with a spread of up to 20 feet. 'Natchez' is known for its white flowers, while 'Muskogee' has lavender flowers. These two varieties thrive best in USDA Hardiness Zones 6 to 9. If you prefer a more compact, semi-dwarf variety, choose 'Tonto'. It thrives best in USDA Hardiness Zones 7 to 9, grows up to 10 feet tall with a spread of 8 feet, and produces large, deep pink (watermelon-red) flower clusters for up to 100 days during summer.
Crape myrtles need little water, moderate fertilization and annual pruning to keep them healthy and looking attractive.
Peppermint tree (Agonis flexuosa) grows up to 40 feet high with a spread of up to 30 feet. An evergreen with delicate, weeping branches and clusters of small, five-petaled, fragrant white flowers, the peppermint tree thrives best in USDA Hardiness Zones 10 to 11. It prefers sandy or clay loam, normal to moist soil and full to partial sun. It has fibrous brown bark and long, narrow, dull green leaves. The peppermint tree looks similar to a weeping willow from a distance and is identifiable by the powerful peppermint smell of its leaves.
Lemon Scented Gum
Lemon scented gum (Eucalyptus citriodora) is an evergreen tree that grows up to 100 feet in nature and forms two leaf types, juvenile and adult. Both produce a strong lemony scent and yield a lemon-scented oil (citronellal) used in perfumes.
Lemon scented gum thrives best in USDA Hardiness Zones 9 to 11 and blooms in winter to early spring with white flowers that are not very distinctive. Woody, urn-shaped capsules appear after the flowers. This tree needs full sun and grows in any type of soil as long as it is well drained. It does not require fertilizing. Avoid exposing it to temperatures below 50 degrees F.