Chapter 9: Network Requirements and Preparation Planning and Installation Guide
ShoreTel 11.1
9.4.3 Latency
Latency is the amount of time it takes for one person’s voice to be sampled, digitized (or
encoded), packetized, sent over the IP network, de-packetized, and replayed to another
person. This one-way delay, from “mouth-to-ear,” must not exceed 100 msecs for toll-
quality voice, or 150 msecs for acceptable-quality voice. If the latency is too high, it
interferes with the natural flow of the conversation, causing the two parties to confuse the
latency for pauses in speech. The resulting conversation is reminiscent of international
calls over satellite facilities.
The latency introduced by the ShoreTel system can be understood as follows. When a
person talks, the voice is sampled by the ShoreGear voice switch, generating a latency of 5
msecs. If the call does not traverse multiple ShoreTel voice switches and is handled
completely within a single switch, the latency comes from the switch's basic internal
pipeline. In this case, the switch samples the voice, processes it, combines it with other
voice streams (switchboard), and then converts it back to audio for output to the phone in
5-msec packets, for a total latency of about 17 msecs.
When the call transfers between voice switches, the voice is packetized in larger packets—
10-msec for LAN and 20-msec for WAN—to reduce network overhead. The larger packets
take more time to accumulate and convert to RTP before being sent out. On the receive
side, the incoming packets are decoded and placed in the queue for the switchboard. For a
10-msec packet, this additional send/receive time is approximately 15 msecs, and for a 20-
msec packet it is about 25 msecs.
For IP phones, the latency is 20 msecs on the LAN and 30 msecs on the WAN.
When the codec is G.729a, the encoding process takes an additional 10 msecs and the
decoding process can take an additional 10 msecs.
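As a rough illustration, the component figures above can be combined into a simple one-way latency estimate. The individual figures come from the text; summing them this way (and the function itself) is our simplification, not an official ShoreTel calculation.

```python
# Illustrative one-way latency model built from the components described
# above.  The individual figures come from the text; summing them this way
# is our simplification, not an official ShoreTel calculation.

INTERNAL_PIPELINE_MS = 17               # call handled entirely within one switch
EXTRA_SEND_RECV_MS = {10: 15, 20: 25}   # 10-msec LAN / 20-msec WAN packets
G729A_ENCODE_MS = 10
G729A_DECODE_MS = 10

def inter_switch_latency(packet_ms: int, codec: str = "G.711") -> int:
    """Estimated one-way latency for a call between two voice switches."""
    latency = INTERNAL_PIPELINE_MS + EXTRA_SEND_RECV_MS[packet_ms]
    if codec == "G.729a":
        latency += G729A_ENCODE_MS + G729A_DECODE_MS
    return latency

# A 20-msec WAN packet with G.729a stays well inside the 100-msec
# toll-quality budget:
print(inter_switch_latency(20, "G.729a"))  # 62
```

Even the worst case sketched here leaves headroom for network transit delay before the 100-msec toll-quality limit is reached.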
See Table 9-6 for specific information about latency on the ShoreTel system.
Table 9-6 Latency

Configuration   Overhead   Encoding   Frame Size   Jitter Buffer (a)   Decoding   Total (+/- 5 msec) (b)
Switch          17         0          0            Varies              0          17

Table 9-5 WAN Bandwidth—Bytes

                                Broadband   Linear     G.711     ADPCM     G.729a   G.729a
Bandwidth for voice only (c)    256 Kbps    128 Kbps   64 Kbps   32 Kbps   8 Kbps   8 Kbps
[row label missing in source]   284 Kbps    146 Kbps   82 Kbps   52 Kbps   26 Kbps  26 Kbps
Bandwidth after ...             260 Kbps    132 Kbps   68 Kbps   37 Kbps   12 Kbps  12 Kbps

a. When ADPCM voice encoding is used, an additional 4 bytes are added to the voice
data for decoding purposes.
b. Voice data bytes per packet = (# bits/sample) x (8 samples/msec) x (20 msecs/packet)
/ (8 bits/byte)
c. Bandwidth = (# bytes/20 msecs) x (8 bits/byte)
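Footnotes (b) and (c) can be sanity-checked against the "voice only" bandwidth figures in a few lines of Python. The x50 packets-per-second factor is our reading of the 20-msec packet interval (1000 / 20 = 50), and the bits-per-sample values are assumptions chosen to reproduce the table (G.729a is a frame-based 8 kbps codec, so "1 bit/sample" is only an effective rate).

```python
# Sanity check of footnotes (b) and (c).  Bits-per-sample values are
# assumptions chosen to match the "voice only" row; the x50 packets/sec
# factor follows from the 20-msec packet interval.

def payload_bytes(bits_per_sample: int, packet_ms: int = 20) -> int:
    # (# bits/sample) x (8 samples/msec) x (msecs/packet) / (8 bits/byte)
    return bits_per_sample * 8 * packet_ms // 8

def bandwidth_kbps(bits_per_sample: int, packet_ms: int = 20) -> float:
    packets_per_sec = 1000 // packet_ms        # 50 packets/sec at 20 msecs
    return payload_bytes(bits_per_sample, packet_ms) * 8 * packets_per_sec / 1000

for codec, bits in [("Linear", 16), ("G.711", 8), ("ADPCM", 4), ("G.729a", 1)]:
    print(f"{codec}: {payload_bytes(bits)} bytes/packet, {bandwidth_kbps(bits):g} Kbps")
```

The results (320/160/80/20 bytes per packet; 128/64/32/8 Kbps) match the "voice only" row of Table 9-5.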
Environmentally friendly labels: the options
When it comes to labels, they are typically thought of as not environmentally friendly.
We are here to challenge that preconception and educate you on the more environmentally friendly options available in 2022 with the invention of bioplastics and other novel materials.
We’ll talk about two different materials that are 100% compostable, making them amazingly sustainable compared to a plastic alternative.
Let’s get started.
What labels are currently made from
There are two categories of material. The first is paper labels, which are reasonably kind to the environment.
Paper labels will degrade over time and can be quite a good option if sourced from sustainable forests.
The second category is plastic labels. Plastic labels are not so sustainable. As we are all aware plastic waste is a huge problem that we need to solve, so the more we can reduce plastic, the better.
The benefit of plastic labels over paper is that they're more durable and waterproof, so for many applications they can't simply be replaced with paper.
So what to do?
Introducing bioplastic labels
Some clever scientists have put their heads together and invented something called bioplastic.
For the application of labels, these bioplastics can then be extruded into thin films that we then print on to make a self-adhesive label as we know it today.
Bioplastics are incredible. They’re made from wood pulp and are completely compostable in just a few months in the right environment. This makes them so much more environmentally friendly than that plastic alternative.
Even better, they behave similarly to a plastic sticker. They are typically waterproof and really quite durable. So they have some clear advantages over a paper-based sticker.
Here’s an example of a clear label made from bioplastic.
An alternative to paper labels
But what happens if you want to use paper-based labels because of the texture or something else about your application?
We have a solution for that too.
Labels made from bagasse, a byproduct of sugarcane, are available. These look and feel like paper but are far more sustainable, making them a great alternative if paper is the look you're after.
This image below shows an example of a sugarcane fibre label, and it looks just like paper!
There’s no excuse not to choose eco-friendly
Now you know about these two materials, we believe you can choose to be environmentally friendly for every option. There’s no excuse!
If you have any questions about these materials that we haven’t answered in this blog, please ask us in the comments below or check out this materials page for more information.
Thank you for reading, and we hope you choose the sustainable option.
If you’ve been sniffing the milk in your fridge, it’s probably not the milk you think it is.
But the scent is there, and it’s not just for dairy products, says Sarah MacLeod, a food scientist at the University of York.
“You have a cow’s milk smell, you have a sheep’s milk scent, you can smell the cheese and the butter smell of a cheese factory,” she says. “In terms of the smell of milk, there are a lot of milk smells that you can get.”
The milk smell is a powerful scent.
“It’s a very powerful smell and you don’t have to go to the supermarket to smell it,” Ms MacLeod says.
The milk’s sweetness and sourness can be detected by sniffing milk and other dairy-based products.
“The smell is very complex and it has a very strong flavour,” says Ms MacLeod.
It can also be found in the body of animals such as cows, sheep, goats and pigs, and in the skin of other animals, including birds, fish and insects.
In a paper published in the journal Environmental Toxicology and Chemistry, researchers from the University of New South Wales examined the chemical composition of the milk smell in different dairy products.
They compared the composition of various dairy products to the milk itself, looking for signs of how the milk smells.
In some cases, the researchers found that the milk had more in common with the urine or saliva of other species, suggesting that the smell was derived from a different organism.
The researchers also looked at how the chemical makeup of milk was related to the age of the cow or the milk’s age.
In older milk, it appeared to have been produced by the female dairy cow.
In younger milk, the milk was produced by either a male or female calf.
“A female cow’s urine and saliva has a much higher concentration of certain compounds than a male’s urine,” says Ms MacLeod.
The study was published in Environmental Toxicology and Chemistry.
What do we smell like?
There’s an important distinction between the milk of a cow and milk produced by other animals.
A cow’s natural milk is a sweet liquid that can be stored in the cow’s bladder.
The cow’s saliva is an oily substance that can reach the bottom of the bladder.
But, like milk, when it’s put in a cup or bottle, it becomes a strong odour.
The taste is what people smell most of all.
“We don’t really know what we smell, because we don’t know what the other animal’s smell is,” Ms MacLeod says.
The smell of human milk is more complex than that of milk.
It includes the aroma of the stomach, which is made up of two parts: the gastric juice, and the digestive tract.
“So we’re talking about two separate things,” says Sarah Ruhr, a lecturer in the University’s Department of Molecular and Cell Biology.
The digestive tract contains the bacteria that make up the milk.
The mucus inside the stomach and intestines contains the other bacteria that produce the milk-like flavour.
The smell of the human digestive tract is also different from that of a milk cow’s, she says, because it’s made up mostly of bacteria.
The reason it’s so unique is because the smell comes from the gut bacteria.
“When a cow eats milk, she has a good gut bacteria profile,” says Ms Ruhr.
How to recognise milk?
There are a few ways to recognise the milk taste, says Ms Ruhr, who is also a member of the Australian Dairy Science Centre.
She suggests that people should look for “brown, creamy” or “creamy” milk.
“They will not taste like a normal cow milk because it has so much less fat than other types of milk,” she explains.
“If you see a milk that has a creaminess to it, that means that the cow has had a very hard day, because she’s had a lot to digest.”
The taste of milk is often described as “ladylike”, but it can be hard to pinpoint exactly what it smells like.
“The smell you get is not the smell you’re looking for,” Ms Ruhr says.
There are other ways to tell if a milk product is dairy-free.
“Most of the dairy products we’re used to smelling in the grocery store are dairy-grade milk products,” Ms Ruhr says.
These include cheese, milk, butter and cream.
“These are really good dairy-like flavours.
They’re more likely to be labelled dairy-finished,” she adds.
You can also look for dairy-milk blends.
“For example, you’re probably not going to smell dairy-laced milk,” Ms MacLeod says, “so if you want a more traditional milk, you may want to look for a dairy-to-
The Science Behind The Eternity Tree
Our environment, our planet, indeed our very way of life, is in danger. As the author of this piece, I am 51 years of age, and I remember as a schoolboy being taught about the greenhouse effect and release of gases into the atmosphere, along with its causal effect. We covered melting of polar ice caps and rising sea levels, the changing air currents and deforestation. Forty years later and has anything really changed? The answer, effectively, is no.
Okay, so a lot of consumables have gone CFC-free, hybrid and electric cars are breaking through and we all drink from cardboard cups using cardboard straws at our favourite take-away restaurant. We recycle plastic bowls and glass bottles and jars at home, and we save energy wherever possible. New houses have solar panels on roofs and our coastal views now also include wind turbines and wave generators.
However, we are closer to ‘Armageddon’ than ever before. Consecutive governments gain power on the back of ‘green’ promises whilst activists and protesters bemoan the lack of progress. In Rio de Janeiro in 1992, the Earth Summit was the very first of its kind. Billed as a game changer, it in fact did very little for progress; if anything, the outcome was regress. The 1990s were catastrophic for the environment, and the earth was bundled into a new millennium in a worse state than ever. Polar ice caps are melting at an alarming rate, rain forests are disappearing faster than ever, and still the world is failing to join forces to tackle the problem as ‘one’.
Turning Back the Years
Planting hundreds of millions of trees across the world is one of the biggest and cheapest ways of taking Carbon Dioxide out of our atmosphere in order to tackle the climate crisis, according to scientists, who have made the first calculation of how many more trees could be planted without encroaching on crop land or urban areas.
As a tree grows, it then absorbs and stores the carbon dioxide emissions that are driving global warming to new, unprecedented levels. New research estimates that a worldwide planting programme could remove two-thirds of all the emissions from human activities that remain in the atmosphere today, a figure that scientists describe as “mind-blowing”.
The analysis found there are 1.7 billion hectares of treeless land on which 1.2 trillion native tree saplings would naturally grow. This area is roughly 11% of all land and equivalent to the size of the United States and China combined. Tropical areas could have 100% tree cover, while others would be more sparsely covered, meaning that on average about half the area would be under tree canopy.
The scientists intentionally excluded all fields which are used to grow crops and urban areas from their analysis. But they did include grazing land, on which the researchers say a few trees can also benefit the native sheep and cattle.
However, this project would require governments, nations and citizens to come together collectively, both fundamentally but also financially, for this to happen. Given our global state politically, the author fears this approach is clever in design but lacking in implementational practicality.
How Do Trees Help?
Effectively, climate change is too big a subject for us mere mortals to wrap our heads around in such a short article. Melting ice caps, rising sea levels, record flooding, record-breaking droughts, record heat – and new records set nearly every year. But climate change becomes a little simpler to understand when your city, town or home is the subject of a local or national news story or weather disaster headline, or your trip home is by canoe and not by car! Indeed, just today I am looking at news stories from South Wales of the impact of Storm Dennis, and Storm Ciara last week, of people trapped in their homes with water levels reaching ground-floor ceilings.
Feeling helpless in the face of mother nature, many cannot come to terms with the contribution just one person can make to the cause of climate change, much less the solution. So, they wait for local and national leaders to make a sweeping, dramatic change to save the planet. But considering that scientists, fossil fuel companies, global governments and world leaders have been well-informed on climate change since the 1970s, don’t count on any of them to ride in on a white horse anytime soon. We need to make decisions in our own households that can effect change.
But all hope is not lost. Even if you own a small property you can make a significant difference to fighting climate change by planting a tree or two or three (or if you don’t have space for trees, woody perennials or shrubs). Multiply that effort by the millions of property owners in the United States and United Kingdom alone, and that will be a great start.
Trees and woody perennial shrubs are key in controlling the level of Carbon Dioxide in the atmosphere (carbon dioxide is just one of the chemicals responsible for climate change). All plants take in Carbon Dioxide and release oxygen (the O in CO2) and the carbon molecule is used for many plant functions. In the case of trees and woody perennials like shrubs, it helps form trunks, stems, and root mass, and is stored as wood for years, decades, or even centuries. This is what’s referred to as “sequestering” carbon dioxide.
This carbon storage capacity makes trees one of the best tools to reduce carbon dioxide in the atmosphere. According to data on climate change collected by the United States Government as of 2012, U.S. forests, grasslands, and other natural sources sequestered 762 million metric tons (MMT) of carbon dioxide which offset around 11 percent of total U.S. greenhouse gas (GHG) emissions. Sequestering is also known as “carbon sink” – the ability of natural resources to capture atmospheric carbon.
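As a quick back-of-envelope check, the two figures quoted above (762 MMT sequestered, offsetting about 11% of emissions) together imply a total US greenhouse gas figure:

```python
# Back-of-envelope check: if 762 MMT of sequestered CO2 offsets about 11%
# of total US greenhouse gas emissions, the implied total is:
sequestered_mmt = 762
offset_share = 0.11
implied_total_mmt = sequestered_mmt / offset_share
print(round(implied_total_mmt))   # ~6,927 MMT CO2-equivalent
```

That implied total of roughly 6,900 MMT is consistent in scale with the two quoted figures, which is all this check is meant to show.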
How many trees have been lost in the US alone?
Since the first European settlers arrived on the shores of the United States, forestland has seen a net reduction (more were cut down than replaced) of roughly 257 million acres. To give you an idea of how many acres that is, it’s three times the amount of land currently managed by the National Park Service: every National Park, National Monument, historic battlefield, trail, seashore, and more combined.
But that’s only half the story. When trees are cut down and burned or used for some other purpose, their stored carbon is released back into the atmosphere. So not only did the loss of that acreage decrease the planet’s ability to absorb carbon, it also increased the amount of carbon in the atmosphere. I bet you didn’t know that, did you?
It is estimated by the United States Mid-Century Strategy for deep decarbonisation that if we are able to expand the acreage of trees and natural grasslands by 40-50 million acres, we could offset 50% of US greenhouse gas emissions by 2050. It is figures like this that set your mind in motion with questions such as “Why aren’t we doing that already?”.
From 1987 to 2012, tree planting efforts helped United States forests expand by roughly 1 million acres per year – with federal land agencies accounting for roughly a third of those figures. To reach the mid-century reforestation goal, this annual expansion rate needs to double on average. However, since it takes time for trees to grow and reach their full carbon sequestration potential, even more planting must now occur in the short term. Independent projections call for 2.7 million acres per year of forest expansion from now through at least 2035 – almost three times the current rate.
— Georgetown Public Policy Review
Yes, the implications of planting that much land with trees seem overwhelming to us mere mortals, as they seem to be for most politicians and business leaders too. But it is twice as tough to achieve unless those responsible for public policy have the will to do it. One may also add: what other choice do we have? Planting trees doesn’t require massive technology development – it is a simple solution to an overwhelming problem that can be done with a minimal amount of equipment, or even by one person at a time. As a member of the population of planet earth, I am left asking myself whether I should trust the powers that be to act on this. Or should I do something about it myself?
Which brings us back to the beginning. How can homeowners and gardeners help fight climate change? Planting millions of trees across the country, including in urban landscapes, will obviously take years. In the meantime, the trees that you plant on your property can begin the carbon sink. Trees also contribute to protecting your home from severe weather and shade it in summer, saving on air conditioning costs (which also fights climate change by reducing energy use). Trees also have the added benefit of attracting wildlife like birds which keep down the pest populations in your landscape.
Taking some action and planting trees can’t start soon enough. If you have a shaded property, congratulations, you are part of the solution. But if your property is a clear-cut, golf-green-like lawn from the street to your front door, then firstly ask yourself why, and secondly, buy a tree or two or six and plant them today.
Every tree is a good tree, but if you buy native species of trees – that is, trees that have evolved in your region – they will be very low maintenance (if any at all) once they are established and then they should thrive. Just be sure you make a note of the mature height and spread of the tree and plant them in a location where the mature size won’t crowd your foundation or other trees.
What are the actual figures?
The cremation rate in the United Kingdom is currently around 90%. Over the past ten years, cremations have surpassed burials as the most popular end-of-life option in the United States, too, according to the National Funeral Directors Association. At the same time, companies have been springing up touting creative things you can do with a loved one’s ashes, such as pressing them into a vinyl record, using them to create a marine reef, or having them compressed into diamonds.
Cremation, along with these creative ways to honour the dead, is often marketed as a more environmentally friendly option than traditional embalming and casket burial. Concern for the environment, in addition to economic considerations, may be driving some of the increase in popularity. But while it’s true that cremation is less harmful than pumping a body full of formaldehyde and burying it on top of concrete, there are still environmental effects to consider. Cremation requires a lot of fuel, and it results in millions of tons of carbon dioxide emissions per year. The average cremation takes up about the same amount of energy and has the same emissions as about two tanks of petrol in an average car.
In the western world, all cremations happen indoors at crematoriums. The big environmental concerns with this type of cremation are the amount of energy it requires, and the amount of carbon dioxide emissions it produces. Regional environmental regulations mean that most U.S. crematoriums have scrubbing or filtering systems, such as after-chambers that burn and neutralise pollutants like mercury emissions from dental fillings. The United Kingdom has similar restrictions.
So, what are the actual emissions from a cremation and how much of those emissions do trees absorb?
The estimate is that an average cremation produces 500 pounds of carbon dioxide, all ending up in the environment. In the UK alone, there are 400,000 cremations performed on an annual basis. That equates to an overall figure of 91,000 tonnes of carbon dioxide into the environment.
A mature tree absorbs roughly 50 pounds of carbon dioxide in a year. So, it takes ten years for a mature tree to absorb the emissions from each cremation.
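The arithmetic behind these two paragraphs can be checked directly; the pounds-to-tonnes conversion factor is the only value below not taken from the text.

```python
# Worked version of the figures above.  Only the pounds-to-tonnes
# conversion factor is ours; every other number comes from the text.
LB_PER_TONNE = 2204.62            # pounds per metric tonne

co2_per_cremation_lb = 500        # CO2 emitted by an average cremation
cremations_per_year = 400_000     # annual cremations in the UK

total_lb = co2_per_cremation_lb * cremations_per_year
total_tonnes = total_lb / LB_PER_TONNE
print(f"{total_tonnes:,.0f} tonnes")   # close to the text's 91,000 tonnes

# Years for one mature tree to absorb one cremation's emissions:
tree_absorbs_lb_per_year = 50
print(co2_per_cremation_lb / tree_absorbs_lb_per_year)  # 10.0
```

Both of the article's figures (roughly 91,000 tonnes per year, and ten tree-years per cremation) check out.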
Say ‘Hello’ to The Eternity Tree!
The Eternity Tree Ltd was founded in the UK and US by two friends who had both recently lost loved ones. Both of these friends are also incredibly passionate about saving the environment. After the grieving process was over, they both had cremated remains in hand and looked into what to do next.
They found two very important issues. Firstly, grieving relatives are often confused as to what to actually do with cremated remains. And, secondly, our planet is dying and needs trees to be planted, NOW. Further, the general public are largely unaware of the toxic effect that scattering human ashes is having on the environment in general, whilst also having a major impact on local wildlife and foliage. Indeed, most National Forests and Parks have now banned the scattering of ashes due to these unwanted effects on the environment.
The cost of memorial parks is becoming increasingly expensive and then there is the added cost of future rents and the burden they could create for the family. The fact still remains that most families want a lasting memorial of their loved one, that can be seen and visited at special times. So, the friends thought what better way to create a lasting memorial than by planting a tree that could absorb the goodness from the ashes, creating a living memorial to their loved ones.
So, they began researching cremated remains. They found that they have an incredibly high pH level but at the same time contain calcium, potassium and phosphorous, essential in the growth of plants and trees. However, the remains do not contain manganese, carbon and zinc, which are important in plant growth.
Therefore, it is necessary to neutralise the high pH level, maximise the impact of the good nutrients and introduce missing nutrients.
Thus, The Eternity Tree developed the world’s first and only bio-neutralising urn, The Eternity Seed. The seed has a similar capacity to other urns, allowing the remains to be deposited inside in full. It then degrades within six months, neutralises the harmful elements of the remains and nourishes the chosen tree. Clients can choose from over 30 Forestry Commission-approved saplings or shrubs, usually planting the urn and tree in their garden. All saplings are native to the chosen country, grown from seed in Forestry Commission-approved facilities, free from disease, and produced with absolutely no plastic in the process whatsoever.
So, now there is the opportunity to change mindsets and alter the way we view death. No longer does death have to mean the end. Now, for the first time, we have the ability to see a loved one’s death as the start of something special…new life for the planet and hope. The Eternity Tree also plants two additional trees when someone purchases a product.
Just think, in ten years, those three mature trees will be providing valuable life to a planet in dire need of our love. Multiply that by 500,000 people, or 1,500,000 trees (thanks to The Eternity Tree) and you suddenly have a true memorial that leads to life for the planet and a reversal of climate change.
Could your gut be to blame for your restless nights?
You’re struggling to focus, can’t stop yawning and the only thing you want to do is crawl back into bed, even though it does you no favours most nights. Sound familiar?
Sleep hygiene is so important for a good night's sleep but for a huge portion of people dealing with chronic insomnia, sleep hygiene is rarely enough.
“Disease is often an overlooked cause of sleeping problems, nevertheless we all know it can cause a sleepless night,” says our Holistic Health Nutritionist Sarah Murphy. “All of us have had a bad cold, or flu that made it impossible to sleep, and those who experience chronic pain know that sleeping can be impossible.”
Yet Sarah suspects another condition could be causing sleepless nights, one which is not yet well known or understood, called endotoxemia. “This is low grade intestinal inflammation which affects every system throughout the whole body, manifesting in a whole host of symptoms—a major one being insomnia.”
Your gut bacteria affects your circadian rhythms and all your biological functions. At night, your cortisol levels should be at their lowest for good sleep, yet inflammation leads to high cortisol levels.
This changes your melatonin, serotonin and hypothalamic function (an area of your brain which releases hormones and regulates body temperature). Anything that raises your cortisol at night will stop you from getting a good night's sleep because it throws off your entire internal clock.
If your sleep hygiene is good, i.e. you’re restricting screen time before bed, dimming lights, sticking to a nightly routine etc, but you’re still unable to have a satisfying night of rest, then it’s worth considering where else this cortisol could be coming from. “Look to your gut inflammation,” suggests Sarah. “Insomnia is associated with diabetes, obesity, cancer, skin diseases, gastrointestinal disorders, depression, anxiety, mood swings and dementia. The common denominator? Inflammation.”
Sarah explains this further: “Overgrowths of yeasts, viral infections, parasites—all these cause inflammation due to compromised gut barrier function. This inflammation damages our cells including the cells in our hypothalamus (the site of your biological clock). The increase in inflammatory cytokines leads to decrease in tryptophan (the precursor to serotonin which is the precursor to melatonin). Melatonin is a powerful antioxidant they even use in cancer therapies— not only does it help us sleep but it also is a powerful anti-inflammatory that helps scavenge free radicals. You don't want to mess with these levels. If you have chronic insomnia and there are no other obvious illnesses, external stressors or glucose deficiencies, I can confidently say your answer lies right there. As long as gut inflammation exists, you will have a hard time getting a good night's sleep.”
One of the best ways to look after your overall health including sleep is to nurture your gut: create a solid foundation by supplementing with a probiotic to ensure that you’re maintaining a balance of good bacteria in the gut to protect against potential inflammation. Better yet, use a PREBIOTIC + PROBIOTIC together to ensure you’re giving the probiotic bacteria the best chance of survival. Who would have thought that the answer to a good night’s sleep was within your body the whole time?
• Li, Yuanyuan, et al. “The Role of Microbiome in Insomnia, Circadian Disturbance and Depression.” Frontiers in Psychiatry, vol. 9, Dec. 2018. PubMed Central, doi:10.3389/fpsyt.2018.00669.
• Nobs, Samuel Philip, et al. “Microbiome Diurnal Rhythmicity and Its Impact on Host Physiology and Disease Risk.” EMBO Reports, vol. 20, no. 4, Apr. 2019. PubMed Central, doi:10.15252/embr.201847129.
• Mullington, Janet M., et al. “Sleep Loss and Inflammation.” Best Practice & Research. Clinical Endocrinology & Metabolism, vol. 24, no. 5, Oct. 2010, pp. 775–84. PubMed Central, doi:10.1016/j.beem.2010.08.014.
• Colten, Harvey R., et al. Extent and Health Consequences of Chronic Sleep Loss and Sleep Disorders. National Academies Press (US), 2006.
• Li, Shi-Bin, et al. “Hypothalamic Circuitry Underlying Stress-Induced Insomnia and Peripheral Immunosuppression.” Science Advances, vol. 6, no. 37, Sept. 2020, p. eabc2590., doi:10.1126/sciadv.abc2590.
• Cutando, Antonio, et al. “Role of Melatonin in Cancer Treatment.” Anticancer Research, vol. 32, no. 7, July 2012, pp. 2747–53.
What is ATEX
ATEX definition
ATEX is short for ATmosphères EXplosibles, French for “explosive atmospheres”. ATEX-certified products are used as a precaution in potentially explosive environments to make the work environment safer.
An explosive environment
Potentially explosive areas are places that contain a mixture of air and flammable substances under atmospheric conditions, in which combustion, once ignited, spreads to the entire unburned mixture. The flammable substances can be in the form of gases, vapours, mists or dusts.
To sum up, three elements need to be present: first, an oxidiser such as air; second, a combustible such as gas, paint thinner or diesel; and third, an ignition source.
ATEX classifications
The classification of the dangerous zones is established according to the following norms: EN 1127-1, EN 60079-10-1 (gas) and EN 60079-10-2 (dust), with the dangerous areas divided into three zones according to the frequency and duration of the presence of the explosive substance.
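The zone division can be summarised in a small lookup. The descriptions follow the usual wording of EN 60079-10-1 (gas, zones 0/1/2) and EN 60079-10-2 (dust, zones 20/21/22); the code structure itself is only an illustrative sketch, not part of the standards.

```python
# Illustrative lookup of the zone definitions.  Descriptions follow the
# usual wording of EN 60079-10-1 (gas, zones 0/1/2) and EN 60079-10-2
# (dust, zones 20/21/22); the structure itself is only a sketch.

GAS_ZONES = {
    0: "explosive atmosphere present continuously or for long periods",
    1: "explosive atmosphere likely to occur in normal operation",
    2: "explosive atmosphere not likely in normal operation and, "
       "if it occurs, persisting only for a short period",
}

# Dust atmospheres use the parallel zones 20/21/22:
DUST_ZONES = {20 + zone: desc.replace("atmosphere", "dust atmosphere")
              for zone, desc in GAS_ZONES.items()}

for zone, desc in sorted({**GAS_ZONES, **DUST_ZONES}.items()):
    print(f"Zone {zone}: {desc}")
```

The lower the zone number within each family, the more frequently the explosive atmosphere is present, and the stricter the equipment category required.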
Renewable Energy
Project overview
Today’s consumer is more environmentally conscious than ever before. Forward-thinking companies, like Ocean Renewable Power Company (ORPC) and US Synthetic, are teaming up to develop new ways to capture and convert hydrokinetic energy from ocean waves and river currents into renewable, emission-free electricity.
ORPC’s RivGen® Power System was built in 2014 and deployed in the Kvichak River near the remote Alaskan village of Igiugig in 2015. Using the latest technology advancements, like diamond bearings from US Synthetic, the RivGen Power System is now supplying one-third of the village’s electrical needs, significantly offsetting diesel fuel use.
To meet the challenges of operating in a wet and dirty environment, renewable energy companies are relying on seal-less, corrosion-resistant diamond bearings to deliver on the promise of local, sustainable, clean continuous power.
PROJECT: RivGen® Power System
CATEGORY: Power Generation, Water Treatment
APPLICATION: Ocean & River Energy - Submerged
LOCATION: Igiugig Village, Alaska
“Engineers [in the renewable energy space] have to design systems that can survive the corrosive effects of seawater and withstand the intensity of waves….” Fast Company Magazine: Could Wave Power Be the Next Boom in Renewable Energy?, April 2019
Diamond bearing underwater.
The dirty water passing through a hydrokinetic system poses significant challenges to rotating equipment. Ocean waves and river currents generally carry significant amounts of damaging sediment (sand, gravel, etc.), corrosive minerals (including salt), high fluid pressures caused by violent waves or powerful currents, and even barnacles and mud, all of which can compromise component life. Frequent mechanical failure is common, caused by eroded and contaminated rotating equipment, especially bearing seals. Traditional ball bearings running in these harsh environments must be repaired or replaced often, leading to critical power outages and costly downtime.
Engineers have tried to overcome these challenges in the past by adding redundant ball-bearing stacks and/or encasing the bearings in large sealed sleeves for protection. These conventional methods work to a degree, but the power of ocean waves and river currents, combined with the eroding sediments in the water, still manages to break down these countermeasures. Protecting traditional bearings in this way also adds complexity and typically increases the overall size of the tool, raising cost, limiting operating life, and significantly reducing the efficiency of this critical system.
ORPC and US Synthetic teamed up to develop a diamond-based bearing solution that completely overcomes the challenges of rotating equipment running in corrosive, abrasive ocean and river environments. By utilizing US Synthetic’s proprietary diamond technology, the ORPC RivGen Power System leverages:
• A seal-less, fluid-cooled diamond bearing that eliminates excess component weight and the need for contaminating lubricants and maintenance;
• A polycrystalline diamond material that can easily resist corrosive and damaging material flowing in the water;
• A sliding-element bearing that delivers a low coefficient of friction (0.01). After approximately 200 hours of use, the diamond “wears in” and its coefficient of friction becomes comparable to a polymer’s. At these low friction levels, diamond improves operating efficiency and eliminates excessive drag on the system’s rotating elements.
“We’re designing for a 20-year service life.” — Cian Marnagh, VP Engineering, ORPC
Should You Buy an Electric Car? Take This Quiz
Should you opt for an all-electric vehicle, hybrid, or stick to a gas-powered vehicle? Learn more about the types of electric cars to discover which may be right for you.
The biggest variable between plug-in hybrid (PHEVs) and all-electric vehicles (EVs) is the driving range.
• PHEVs have both an electric motor and a gasoline engine, giving the vehicle more range.
There are two types of PHEVs:
• Series (also called extended-range) PHEVs can run solely on their electric motor until the battery is depleted, at which point the gasoline engine starts and generates electricity to power the electric motor.
• Parallel (also called blended) PHEVs share the load and run on both the electric motor and gas engine at the same time, but are capable of running on electric power only at low speeds.
• EVs, on the other hand, don’t have a gas engine and typically have less range than PHEVs and gasoline powered vehicles. Range for both will vary based on make and model.
So which type of vehicle is right for you? Take the quiz below to determine if you’re ready for an electric. The answer may surprise you.
What’s your level of commitment to the environment?
• A. You are committed to the environment in many ways, but you aren’t sure an electric will fit your lifestyle.
• B. You make efforts to reduce your carbon footprint as much as possible, but occasionally convenience wins out over your conscience.
• C. You are wholeheartedly committed to reducing your carbon footprint as much as possible.
If you answered C, then you are probably ready for an EV. If you answered B, a PHEV might be the better fit. And if you answered A, you may even want to stick to a gas-powered vehicle. Your habits and lifestyle needs, covered in the next few questions, will help you decide.
What is your current vehicle status?
• A. The car you buy will be your only vehicle and you’ve never owned an EV or PHEV before.
• B. You currently have one vehicle in your family that is either gas powered or a PHEV. You are thinking of adding a second car that is electric.
• C. You have been driving a PHEV, but are thinking of trading that in for an EV.
If you chose B or C, you might be ready for your first EV. If you have a backup car for longer-range needs, or if you’ve been driving a PHEV for a while and rarely need to fill the tank, it might be time to make the EV leap.
What are your needs regarding extended car travel?
• A. You take several road trips — longer than 60 miles — throughout the year.
• B. You rarely travel outside your city or town by car.
• C. You take a few road trips per year and like to make frequent stops.
If you chose B or C, you might be ready for an EV. The key is to decide whether you take enough longer trips where range is an issue or if you’re comfortable with the idea of having to charge frequently.
What is your daily commute like?
• A. Your commute is longer than 40 miles.
• B. Your commute is shorter than 40 miles.
• C. You walk, take public transportation or work from home.
If you chose B or C, you might be ready for an EV. The length of your commute will determine whether an EV makes sense. If you do have a longer commute, you could still opt for a PHEV.
Where do you live?
• A. You live in a rural area or small town with few public places to charge and long ranges between towns.
• B. You live in a midsized city; it’s spread out but has plenty of places for charging.
• C. You live in a larger city and mostly use your car for running quick errands.
Again, if you chose B or C, an EV is a possibility, but if you chose A, you may need to stick to a gas-powered vehicle or buy a PHEV.
No matter which car you choose, your local Farm Bureau agent is here to assist you with any auto insurance needs.
Wise Judgment: A Young Girl’s Decision
A teenage girl is “in love” with her 17-year-old boyfriend. He is encouraging her to have sex with him, saying that he will make sure they only have “protected” sex. This scenario is a very common one among teenagers these days. I can especially relate to it because I was once in the same situation. However, the scenario can be analyzed using the five components of wise judgment to help arrive at an answer, or solution, to this situation.
First, there are four components to emotional intelligence: emotional perception and expression, emotional facilitation of thought, emotional understanding, and emotional management. Emotional perception and expression is the ability to recognize your own emotions as well as other people’s, and to express both positive and negative emotions accurately. As a teenager, it is very hard to control your emotions. For instance, this teen girl thinks that she is in love with her boyfriend; being so young, she could be confusing love for lust, or for a very strong liking, because these feelings are probably new to her. Emotional facilitation of thought, if developed in this teen girl, could help her harness her emotions for more efficient decision-making. However, being a teen, she probably isn’t emotionally mature enough to use this faculty. Emotional understanding involves the ability to label emotions with words, to understand the causes and consequences of the various emotions, and to recognize the relationships between them. Understanding complex and sometimes contradictory feelings and how they change over time is an important dimension of emotional intelligence. But I think this is one of the
hardest components for a teenager to comprehend, because teenagers are often emotional without even knowing why they feel the way they do. Teens are emotional and impulsive because they don’t know where their feelings are coming from. This girl is considering having sex because she thinks she’s in love and believes that is the next step in a relationship where you are in love. However, she doesn’t understand the consequences of this emotion of “love.” Lastly, there is emotional management, which is self-explanatory and, once again, difficult for teens to achieve because of the constant rush of emotions they feel, making those emotions extremely difficult to manage or control. This makes teens more likely to act impulsively, especially when it comes to sex. One thing leads to another, and teens tend to think not about the consequences down the road but about how they feel in the moment.
The next component is successful intelligence. It is believed that to have successful intelligence you must think well in three different ways: analytically, creatively, and practically. Creative thinking, to me, is more of a personal trait; the other two areas, I believe, reflect one’s maturity level. In this scenario of teens having sex, I don’t think they are thinking practically, because being irrational is also a trait of teens: they think with their emotions. This girl is thinking more about her feelings for the boy than about practical matters, like the consequences that can come from having sex. Even if her boyfriend wears a condom, there is still a chance of pregnancy, because the only 100 percent effective form of birth control is abstinence.
Finally, there is wisdom itself. To be a “wise individual,” one must be able to balance a variety of self-interests (intrapersonal) with the interests of other people (interpersonal) and of other aspects of the context in which one lives (extrapersonal), such as one’s environment. I think this aspect of wisdom deals with the aftermath of the decision the teenage girl makes. She will have to make up her mind about her own interest in having sex or not, which concerns her intrapersonal interests. She will also have to factor in how the decision will affect her boyfriend’s interests, and how its consequences could ultimately affect the interests of her family and friends. Finally, depending on the choice the girl makes, it will affect her environment, or her extrapersonal interests. For instance, if she decides not to have sex, she will have to stay away from any environment that might influence her to change her mind, such as being home alone with her boyfriend or going to a party where drugs and alcohol are present. If she does decide to have sex, those same environments would probably be ideal for her.
There are factors to balance when it comes to wisdom: balancing goals and interests, balancing short- and long-term interests, balancing responses to the environmental context, and acquiring and using tacit knowledge. When balancing goals and interests, this teenage girl has to weigh the consequences of each choice in this situation and what effect it will have on her goals, whether short-term or long-term, as well as her interests. For example, if she were to have sex and, even with a condom, became pregnant, she would have to rework all of her goals and interests, deciding how she will finish high school, whether she can still go to college, whether she can keep playing basketball, and so on. Next, there is balancing short- and long-term interests. This goes back to teens acting on impulse. Teenagers are especially known for not thinking beyond the moment: the moment of having sex is the short-term interest, and, say, gaining a reputation for being a slut because her boyfriend couldn’t keep his mouth shut is a long-term interest. Many different variables could be exchanged in this example when dealing with the sex issue. Balancing responses to the environmental context concerns the situation after she makes the decision: if she decides not to have sex, she will need to stay away from the things influencing her to have sex, and if she decides to have sex, she will want to be in those environments. Finally, there is acquiring and using tacit knowledge. If the girl has good tacit knowledge, she may be able to change her boyfriend’s mind by explaining why she wants to wait and the consequences they might face if they had sex. If she doesn’t, it is more likely than not that she will be talked into doing it.
To restate the scenario: a teenage girl’s 17-year-old boyfriend is encouraging her to have sex with him, saying that he will make sure they only have “protected” sex. This situation was mine a few years back; if I could go back and make the decision again, I probably would have chosen a very different path. As for a solution, if the girl is wise, she will take a step back from the situation to really think through what saying yes would mean and what saying no would mean. That would also help her decide the best choice for her and stop her from rushing into a situation without thinking. My hope for her, after working through all these components of wisdom, would be that she wait until she is really in love instead of taking the chance of confusing her emotions and feelings and doing it anyway. My main advice to this young girl would be to take some time to really sort out the feelings she has for this boy and whether they are worth the risks of having sex.
100 Deutsche Jahre (Germany: A Hundred Years)
Thirty-nine segments of a television program covering various aspects of life in Germany in the 20th century.
20th Anniversary of the Berlin Wall: August 12, 1981 (Nightline)
A look at the Berlin Wall from differing perspectives. The West claims it was built to keep unhappy East Germans from escaping. The East Germans claim it was built to protect them from West German invasion. 24 min.
A Trip Through 1930's Germany
Contains films mainly made by amateurs -- though some professional work is included -- showing what the people, landscapes, and culture of Germany were like during the 1930s, before the Second World War changed it all forever. Those familiar with Germany will be astounded at how successfully the postwar rebuilding effort put much back "the way it was." Shows what it was like to live under the dictatorship, but, at the same time, portrays the country in an innocent and entertaining way.
After the Fall (Nach dem Fall)
A haunting documentary about one of the most powerful icons of the 20th century, the Berlin Wall. The program examines the effects of the Wall on the Berlin citizens who lived in its shadow for 28 years. Contemporary witnesses included an American historian, a church minister in Berlin and a Bavarian demolition expert. The common tenor of all their statements is that the Wall and its traces were eradicated too quickly and too radically with the intention of forgetting the past. 1999. 86 min.
Berlin unter den Alliierten, 1945-1949: Hoffnungen und Enttäuschungen
Produced by German filmmakers and presented from a German point of view, this documentary uses archival films to illustrate the history of Berlin from the end of World War II through the division of Germany into East and West in 1949. During reconstruction the city is divided into four sectors, administered respectively by the four victorious Allies. In 1948, Berlin is split by the rivalry between the Western Allies and the Soviet Union, which imposes a blockade on the three Western sectors. The Western Allies counter by supplying the city by air: the Berlin Airlift.
Berlin's Hidden History
Berlin, the capital of reunified Germany, is a city with a glittering present and a dark past. A historian leads the viewer through the new Berlin, revealing the many traces of history in a city that served as Frederick the Great's, Bismarck's, and Hitler's capital before it became the front line of the Cold War and the place where the Berlin Wall was built and destroyed. Learn that there is more than meets the eye at the Brandenburg Gate, Potsdamer Platz, Nazi buildings, Jewish cemeteries, Checkpoint Charlie, and many other places steeped in history.
Berlin, Metropolis of Vice
In the period between the First and Second World Wars, Berlin transformed itself into the Babylon of the world. Degenerate cabarets, bars, and clubs catering to every sexual daydream sprang up like mushrooms. Censorship was all but non-existent as corruption commingled with culture and political liberties. Bonus content: in-depth interviews with key historians and writers; audio commentary track on Berlin by filmmakers Ted Remerowski and Marrin Canell. c2006. 45 min.
Brandenburg Bricks; Brandenburg Heathland, Brandenburg Sand
Contents: Brandenburg Bricks (1988, b&w, 30 min.) -- Brandenburg Heathland, Brandenburg Sand (1990, b&w, 60 min.) The first part of this documentary shows the brickmakers of Zehdenick, a small town in Brandenburg, and their grim living and working conditions. Made less than 2 years before the fall of the Berlin Wall, it was banned by the East German authorities and only allowed to be shown after cuts were made. The second part of the documentary was made in the same town after the wall came down.
Children of Golzow.
Begun in August 1961, the longest-running documentary in the history of film chronicles the lives of nine children growing up in the German Democratic Republic. It follows their lives for almost four decades, beginning in the rural town of Golzow, providing a social panorama of life in the GDR. Central themes in the documentary are the relationships between generations, trust and responsibility, daily work, as well as existential questions about war and peace. c1999. 2 tapes. 256 min.
We welcome an endangered okapi calf
We’re celebrating the rare and safe arrival of an endangered okapi calf – and it’s a girl!
As both mum and calf are doing well, we have reopened the okapi house. But Niari may be hidden away for a few more weeks, as she is still in her ‘nesting’ phase. Until then, you’re welcome to quietly pop in and see if you can spot her!
The precious female calf has been named Niari by her keepers. The name means ‘rare’ and is also an area of the Democratic Republic of the Congo where okapis are found in tropical rainforests.
First-time mum, Daphne, and Niari are both doing well and are currently off show, out of the public eye, to ensure they bond and the calf settles during the nesting period.
Marwell Zoo animal keeper, Phil Robbins, said: “We know guests are desperate to see the pair, but we want to make sure Daphne and Niari enjoy some peace and quiet, as this is essential in the first few weeks of the nesting period.”
Phil explains: “Okapis are very shy animals. As such, we prefer to keep okapi dams and calves in an isolated environment to reduce noise and stress levels.”
Okapis give birth to a single calf after a 14-month gestation period. An okapi calf can be on its feet and suckling within half an hour of being born. In the wild, the mother will leave her calf in a hiding place to nest, returning regularly to allow the calf to nurse.
Uniquely, okapi calves defecate for the first time after 30 to 40 days. A theory behind this adaptation is that it helps keep predators from sniffing out the hidden newborn until the calf has grown and gained strength.
Okapis are incredibly eye-catching animals and relatives of the giraffe. They have thick, reddish brown-black coats and, like giraffes, long necks and long black tongues; males have horn-like ossicones on their heads. Their hindquarters and front legs are black-and-white striped, reminiscent of a zebra’s.
For video footage of the birth and Daphne and Niari’s first venture outside, visit our Facebook page.
How Cornflakes Were Accidentally Invented
Cereal has come a long way since the invention of cornflakes back in 1894 (via Kellogg's official website). Though there are still some healthy options today, the current landscape of colorful cereals packed with sugar would be Dr. John Harvey Kellogg's waking nightmare. Believe it or not, he and his brother didn't get into the cereal game in the 19th century for the money; he wanted to create a health movement.
According to Forbes, the 1850s and the decades after were a time of "national indigestion" when Americans were eating huge breakfasts filled with pastries, fritters, boiled chickens, cold cuts, and steaks. Dr. Kellogg wanted to create what we know today as cornflakes to improve American eating habits as part of his "biological living" health movement that encouraged more exercise, more bathing, and eating more whole grains and less meat.
According to The History Channel, Dr. Kellogg preached this movement to everyone at the church-founded health institute that he took over: Battle Creek Sanitarium — basically a medical spa and resort. Sounds pretty harmless and similar to the health movements happening now, right? Well, a few problematic rumors surrounded the real reason breakfast cereal was invented, but regardless of motive, we now know cornflakes were created by sheer accident.
They were trying to make granola and left it out too long
According to The History Channel, around 1877, Dr. John Harvey Kellogg concocted a twice-baked mixture of flour, oats, and cornmeal using a process called "dextrinization," which involved cooking whole grains at high temperatures to make them more easily digestible and therefore healthier. In the process of making this "granola," according to Kellogg's official website, the wheat mixture was exposed to air overnight and, when it was flattened by a roller, produced the first flaked cereal. After this happy accident, the brothers experimented with several other grains before landing on corn as the best ingredient.
According to Forbes, after the invention of cornflakes, Dr. Kellogg ran into problems when other businessmen recreated his product — even his own brother William wanted to make them taste better by adding sugar, which was against everything Dr. Kellogg believed in. According to The History Channel, William Kellogg bought the rights to the flake cereal recipe from his brother to create the Battle Creek Toasted Corn Flake Company in 1906. He added malt, sugar, and salt to the original dough (making it edible by today's standard), began selling Kellogg's Corn Flakes in mass quantities, and spent a lot on advertising to bring it closer to the boxed version we know today.
December 2017: Kate Thomas
Kate is Pet of the Month
Reason for Nomination: Beating the dangers of Gut Stasis
Kate was presented to us because she was not eating for 2 days and she was not passing faeces. She was also quiet in herself.
When we examined Kate we identified her problem to be that her guts had stopped moving. This is a serious problem in rabbits that is unfortunately not all that uncommon.
To understand why this is a problem, we have to understand the way a rabbit’s guts work. Rabbits have a simple stomach. A part of the large intestine called the caecum is the most important part of a rabbit’s gut. Rabbits must eat food that has a very high fibre content, which is very poorly digestible. The purpose of this high-fibre diet is not to provide nutrition but to keep the guts constantly moving. Most of the fluid and some smaller particles of the food are sent to the caecum, where they undergo fermentation. This process creates soft pellets called caecotrophs that then exit through the rest of the large intestine and collect around the back end of the rabbit. Rabbits then eat the caecotrophs, which are much higher in nutrient content. A very efficient system using food with low nutritional value!
So, a diet that does not have a high fibre content can reduce gut motility, leading to a vicious cycle which can ultimately be fatal. There can be other causes for the guts to slow down or stop. These include pain (such as dental disease) reducing food (fibre) intake; stress; or a blockage of the intestines by, for example, a hairball. Ileus (when the guts stop moving food along) is known to be painful and hence can lead to a deteriorating vicious cycle.
We needed an idea of how serious the problem was with Kate. Measuring blood glucose levels has been shown to be a sensitive indicator of the severity of the disease. It can also indicate the likelihood of an obstruction. There was further concern of an obstruction because she had a palpable distended stomach.
Fortunately, the blood glucose reading was normal. However, she still needed aggressive medical treatment.
For her treatment she received pain relief, drugs to get the guts moving again, antibiotics and intravenous fluid therapy. To administer intravenous fluids and medications in rabbits an intravenous cannula is placed in a vein in their ears because this is easily accessible and also usually large enough! Nutrition is a very important part of the treatment for the reasons mentioned above and so an appropriate liquid diet was syringe fed at regular intervals.
Over two days of hospitalisation she continued to improve slowly. Her stomach distension resolved on the third day at the hospital and she was discharged. We saw her for two more days to administer further fluid therapy. She has continued to do well!
The initiating cause of Kate’s illness is unclear. Her diet is appropriate, and we could not find any other source of pain, such as dental disease. Given the sudden onset of the illness, we suspect she either had an infection or underwent some stress. As prey animals, rabbits are very susceptible to stressors that are not always apparent to us. Her symptoms were recognised quite early on, which made her prognosis much better, and we are very happy that she has made a complete recovery.
Why Do Dogs Howl?
Dogs might not speak the way humans do, but they certainly communicate verbally.
Our furry canine friends use barks, grunts, growls, yips, groans, snorts, and other orally generated noises to express themselves and interact with each other and humans.
Howling is one common form of canine verbal communication.
Some dogs howl a lot, while others rarely express themselves in that way. Some breeds have loud, ear-piercing howls, while others’ howls never quite reach that volume. But every dog can howl and does so occasionally.
But why do dogs howl? What purpose does all the noise serve? Are they just channelling their inner wolves, or is there more going on here?
Why Do Dogs Howl?
There are several reasons a dog might howl. Some breeds, such as hounds, are more prone to howling for every reason listed below, plus more. Other breeds are likely to howl only when presented with certain stimuli on the list.
One way or another, a dog howls for a reason. Here are some of the most likely motives.
I’ve Got Something To Say!
First and foremost, howling is a form of communication. Dogs howl to send a message, a warning, or even instructions.
In the wild, canines live in packs. These packs sometimes need to communicate across long distances. The most effective way to achieve this communication is through loud, extended sounds that can carry over distance. That’s why a dog’s howls can be so long and loud.
I Found Something!
Howls also notify a dog’s pack (canine or human) that they discovered something. This behavior is especially common in hunting breeds.
When your dog trees a squirrel or a cat, they are likely to howl insistently. This isn’t an attempt to scare poor Muffin into submission; it’s actually a way of notifying you that your dog cornered prey.
What Is That Awful Noise?
Another common cause for howling is a loud or high-pitched noise that the dog finds unfamiliar. Many dogs howl in response to sirens, fireworks, storms, and even certain musical instruments.
If you’re practicing your viola and your dog starts to howl, don’t assume he’s just being a harsh music critic. It’s actually natural for canines to respond to unfamiliar or high-pitched sounds with a series of howls.
I’ve Got Your Back!
Sometimes, dogs howl to announce they are alert and standing guard. Again, this motive can be traced back to the canine nature of traveling in packs.
When a dog is standing guard, howling serves two functions. First, it notifies her packmates that she is on duty and will keep an eye out for any encroaching threats. Second, it warns any potential threats in the area to stay away.
Come Here!
Another motive for howling is the need to draw packmates (or human family members) to a dog’s position. In some cases, a dog howls to notify his pack of his location and to get them to gather in that place.
Don’t Come Over Here!
In addition to howling to draw pack members to an area, dogs can also howl to warn them away. If a dog discovers potential dangers in a location, she will often howl to alert her family to stay far away.
If it seems strange that dogs use howling to accomplish these opposite goals, remember that not every howl is the same. In a pack, different types of howls will carry different messages.
I’m Coming!
Another function howling serves is to alert packmates that a dog is approaching. Dogs will often howl in much the same way a patrol car uses its siren. Dogs want to alert their pack that they are approaching so they will not startle them.
Pay Attention To Me!
Another common purpose of howling is to attract attention. Many times, if a dog is bored or feeling neglected, he will howl to capture the attention of humans or packmates. Let’s face it: the loud, high-pitched noise of a howl is difficult to ignore.
I’m Nervous!
Sometimes, dogs howl to indicate they are nervous or uncomfortable. Just like humans, dogs find new and unfamiliar situations stressful. Howling is one way of notifying packmates (including humans) that a dog is anxious and would prefer to escape a situation.
I’m In Pain!
Howling can also be a sign that a dog is in pain. If a dog is injured or sick, she will often howl to let her humans or packmates know something is wrong.
Again, since different-sounding howls can indicate different things, your dog probably has a specific howl that indicates he needs medical attention, so you’ll want to be hyper-aware of the way this particular howl sounds.
Should You Be Worried About Your Dog Howling?
In most cases, howling is a natural behaviour for dogs, especially in hunting breeds. Although howling can be annoying, it’s generally not a health or well-being concern.
There are, however, exceptions.
Some dogs howl because of separation anxiety. They experience extreme distress when apart from their humans and react accordingly. If your dog howls because of this mental anguish, you will want to work with a veterinarian or canine behavioural expert to alleviate the stress brought on by your absence.
As noted above, dogs can also howl because they are in pain or ill. In these cases, you will obviously want to take your dog to the vet to identify and treat the root cause.
Should You Stop Your Dog From Howling?
Whether your dog should be allowed to howl or not is up to you as the dog parent.
Some people have no problem with dogs howling in the yard or at the dog park, but don’t want to deal with it at home. Others prefer their dogs to never howl at all.
You can train your dog to stop howling through several techniques. The most effective rely on positive reinforcement.
The Bottom Line
Dogs howl for various reasons: to announce cornered prey, draw packmates to or away from an area, gain attention, react to unfamiliar or bothersome noises, and communicate discomfort, pain, or illness.
Howling is generally nothing to worry about, although if your dog is trying to communicate pain or separation anxiety, you should consult a vet.
You can train your dog not to howl if you want. Just know that howling is a natural behaviour, and your dog will require consistent training to kick the habit.
Calhoun, John C.
From Federalism in America
As a politician and a political philosopher of the Constitution, federalism, and state sovereignty, John Caldwell Calhoun (1782–1850) was the preeminent spokesman for the antebellum South. Born near Calhoun Mills, Abbeville District (presently Mount Carmel, McCormick County), in the South Carolina upcountry on March 18, 1782, Calhoun graduated from Yale College in 1804. After his education in the North, Calhoun returned home, practiced law, and served as a member of the South Carolina House of Representatives from 1808 to 1809.
At the age of 28, Calhoun entered the national political arena and represented South Carolina in the U.S. House of Representatives from 1811 to 1817, when he resigned to become the secretary of war. An ardent nationalist during this early period of his political career, Calhoun distinguished himself as one of the “War Hawks.” Using his influence as acting chair of the House Committee on Foreign Relations, Calhoun pushed for the War of 1812 with Great Britain to redeem the honor of his country in the face of Britain’s disregard of American neutral rights. After the war, Calhoun proposed several reconstruction measures in support of Representative Henry Clay’s nation-building program known as the American System. He advocated chartering the Second Bank of the United States to stabilize the currency and encouraged federal spending for internal improvements to build a nationwide network of roads and canals. Calhoun also supported the Tariff of 1816 to eliminate the increased national debt from the war and to protect the country’s fledgling industry. Though Calhoun did not know it then, the tariff issue would soon ignite an inflammatory controversy.
John C. Calhoun. National Portrait Gallery, Smithsonian Institution.
Having served as secretary of war in President James Monroe’s administration from 1817 to 1825, Calhoun became vice president of the United States in 1825 under the administration of President John Quincy Adams. It was during his first term as vice president that the tariff issue finally came to the forefront, exposing the emerging economic conflicts between the North and the South. The cotton planters of South Carolina and the rest of the agrarian South, who depended on exporting their staple crops overseas and therefore detested protective tariffs, began to perceive that the series of federal tariff policies was designed only to protect the industrial interests of the North at the expense of the cotton-producing southern states. When the Tariff of 1828, which became widely known as the Tariff of Abominations, was enacted, the tariff was no longer an economic issue alone. It now also encompassed the old debate over the extent of federal power in the young union. Calhoun’s nationalist sentiments were gradually strained, and he turned into a states’ rights advocate and sectionalist.
Responding to his constituents’ rising criticism of the federal tariff policies and also sensing the bleak prospect of disunion advanced in his native state, Calhoun anonymously penned the South Carolina Exposition and Protest for the state legislature during the summer and fall of 1828. While finding himself in the grave dilemma of privately thwarting the tariff measure enacted by the administration that he was a part of as vice president, Calhoun looked for a constitutional alternative to possible revolutionary action. Resurrecting the compact theory embodied in the language of the 1798 Kentucky and Virginia Resolutions fathered by Thomas Jefferson and James Madison respectively, Calhoun advanced the doctrine of nullification. In the South Carolina Exposition, Calhoun argued that the U.S. Constitution was a compact among the states and that each state could not only interpose (that is, block) its authority between the citizens of that state and the laws of the United States, but also nullify (that is, overrule) such laws and actions as being unconstitutional and inoperative in the state. Calhoun’s Exposition thus became a standard of revolt against the American System of protective tariffs and a clarion call for devising constitutional safeguards to protect the South’s impinged minority interests from the abuse of federal power.
In 1828, Calhoun was reelected vice president with President Andrew Jackson. To the dismay of Calhoun and southern cotton planters, Congress failed to reduce the tariff rate under the new administration. When some South Carolinians became restless and threatened to vindicate the doctrine of nullification, Calhoun issued an open letter entitled “On the Relation which the States and General Government Bear to Each Other,” more popularly known as the “Fort Hill Address,” in the summer of 1831. Using the words “interposition,” “veto,” and “nullification” interchangeably, Calhoun, no longer anonymously, warned that unless some constitutional checks were administered, the sectional conflicts would become interminable, leading eventually to “the dissolution of the Union itself.” When the new Tariff of 1832 provided for reduced tariff revenues but retained the approximately 50 percent rate on cotton and woolen clothing, a special convention of the South Carolina legislature finally formulated its “South Carolina Ordinance of Nullification” in November 1832, proclaiming that the tariff was “null” and “void” within the state. In response, President Jackson denounced nullification as unconstitutional and asked Congress in the following month to enact a “Force Bill” to enforce tariff collections in South Carolina by sending federal troops.
In late December 1832, Calhoun resigned as vice president during South Carolina’s Nullification crisis. Immediately after he left the Jackson administration, Calhoun took a seat as a U.S. senator to defend his state’s position and continually served in that capacity until his death in 1850, except for a brief period when he was appointed as secretary of state. By then, the slavery controversy—an entangling issue with the South’s slave-plantation economy—had become a predominant concern for Calhoun. It was in his 1837 address to the Senate entitled “Speech on the Reception of Abolition Petitions” that Calhoun contributed to the development of the South’s proslavery argument, terming slavery as “a positive good.” As the secretary of state under President John Tyler’s administration from 1844 to 1845, Calhoun negotiated the annexation of Texas and justified it as a means to expand the area open to slavery. Back to the Senate, Calhoun denounced the proposed Compromise of 1850 on the ground that the measure would not adequately protect the South’s slavery interests. Calhoun prepared his last address to the Senate in the midst of the debate over this compromise sponsored by Senator Henry Clay. On March 4, 1850, Calhoun, too ill to deliver the address himself, asked Senator James M. Mason from Virginia to read it for him. Sensing that the union was now “lying dangerously ill,” the dying senator made his final plea that the responsibility of saving the union rested on the North. Only four weeks later, on March 31, 1850, Calhoun passed away in Washington, D.C., at the age of 68.
The essence of Calhoun’s last address was his proposition for restoring the sectional equilibrium within the union. To achieve this end, Calhoun propounded that each sectional majority (that is, the North or the South) or each major-interest majority (that is, the manufacturing interests or the agricultural interests) should be given the constitutional power to veto the acts of the federal government, which represented the numerical majority. This doctrine of the concurrent majority had been expounded in his Disquisition on Government, which presented the case for minority rights within the framework of majority rule. The theory also led Calhoun to conclude that the only adequate means to protect the slavery interests of the South would be the invention of a dual presidency, with a northern president and a southern one, each acting concurrently and possessing a definitive veto power. Advancing the doctrine of the concurrent majority, Calhoun’s Disquisition later served as an introduction to his much larger Discourse on the Constitution and Government of the United States.
Though history dictates that he was the foremost intellectual figure who influenced the southern secessionists of 1860–1861 on the verge of the Civil War, Calhoun never sought that solution. While protecting his region’s interests, Calhoun desperately grappled with what Alexander Hamilton, James Madison, and John Jay had repeatedly pondered in penning The Federalist Papers—“the nature of the Union.” But his final tragedy lies in the fact that his theory of nullification and his defense of the South’s morally indefensible institution of human bondage eventually led the way to the destruction of the union that he had so dearly loved.
Yasuhiro Katagiri
SEE ALSO: Civil War; Interposition; Nullification
The Annotated Art of War (Parts 7.30-32: Calm)
VII. Maneuvering
Sun Tzu said: Commentary
30. Disciplined and calm, to await the appearance of disorder and hubbub amongst the enemy:--this is the art of retaining self-possession. There is much in warfare that in civilian situations would cause panic and disorder. Such discomfort is a waste of energy, draining spirit and creating a dangerous state of unreadiness.
A lack of discipline is also a recipe for chaos and panic. Disciplined troops are quietly confident and are in control of themselves. With external discipline they gain internal self-discipline and so can reach a state of calm in the hardest of circumstances.
Calmness does not mean lethargy or a lack of readiness. The calm warrior is always ready. They just do not need to sustain a state of tension to be observant and able to respond at a moment's notice.
The same is true in business. Those who have an assured calm make better leaders and are more successful. They are not lazy. They are just conserving their energies for where true value can be created.
A lack of calm is often shown as stress, which can wear people out and break them without external intervention.
31. To be near the goal while the enemy is still far from it, to wait at ease while the enemy is toiling and struggling, to be well-fed while the enemy is famished:--this is the art of husbanding one's strength. Strength, motivation and spirit are what you need in battle. When you are not fighting or marching, you should conserve the energies you will need later.
If you can cause your enemy to lose their calm, always keeping them on edge, then when you meet you will have a significant advantage.
32. To refrain from intercepting an enemy whose banners are in perfect order, to refrain from attacking an army drawn up in calm and confident array:--this is the art of studying circumstances. Striking at the enemy requires good timing. Panicked troops or those who want only to fight will lash out. Keeping cool and calm lets you wait for the right moment.
It also requires calm to stand firm or retreat in an orderly way in the face of an advancing superior force.
To be faced with a calm army is in itself fearful. In war, those who lose their cool first may consequently lose the fight.
Custom Gasket Cutting
A gasket is a mechanical seal, cut from a chosen material, that fills the intersection between two mating surfaces to prevent leakage. Nowadays, many industries need gaskets for their processes; the semiconductor industry is a good example. As the market becomes increasingly demanding, industries are looking for faster, more powerful, and smaller devices. The result is a higher demand for precision gasket cutting.
What is Custom Gasket Cutting?
Custom gasket cutting is the process that produces laser-cut gaskets with very tight tolerances, as low as ±0.0005". Many machines and technologies can produce precision-cut gaskets. For example, a waterjet can hold tolerances of ±0.005"–0.010". Another example is a CO2 laser, which can hold tolerances as low as ±0.002". The machine that can cut the tightest gaskets is the UV laser, which can hold tolerances as low as ±0.0005".
Gasket Design
When it comes to gasket design, gaskets can come in various sizes and shapes, which makes the laser cutting process an ideal choice. Because UV machines use CAD files designed by engineers, meeting those changes in size and shape is a straightforward task and can be done without a challenge.
Popular Custom Gasket Materials
There are many popular materials to make custom gaskets.
Q-pad is a composite of silicone rubber and fiberglass. What makes this material special, besides its outstanding mechanical and physical attributes, is its flame retardant characteristics. Because of that, this material can be used in many applications in the semiconductor industry; for example to electrically isolate power sources from heat sinks.
Q-pad has attributes that can be perfect for many applications in the semiconductor industry, like:
• Thermal Impedance: 1.13°C-in2/W (@50 psi)
• Thermal Conductivity: 0.9 W/mK
• Maximum Usage Temperature: 356°F (180°C)
Grafoil’s flexible composition is produced by chemically treating natural graphite flake to bond the layers and then heating them to decomposition. Grafoil is popular because it is flame retardant and easy to handle. Grafoil’s unique physical and chemical properties make it ideal for sealing and for high-temperature applications.
Grafoil’s sheets can be laminated together to form gaskets for many uses. Its attributes include:
• Thermal Conductivity: 5 W/mK
• Maximum Usage Temperature: 752°F (400°C)
Why A-Laser for Laser Cut Gaskets?
• A-laser has more than 20 years of experience cutting precision parts and offering laser cutting services.
• A-laser has expert engineers that can evaluate your materials and recommend the best system to process them.
• A-laser has two systems that can produce very precise gaskets: UV lasers and IR lasers. UV machines can be a great choice when cutting gaskets made from Grafoil, Q-pad or Kapton, while the IR machines are better for gaskets made from metal.
• A-laser machine tables are 23 by 23 inches, which allows for laser cut gaskets up to that size.
• A-Laser’s QC department will inspect the gaskets both visually and dimensionally, based on the print’s requirements.
• A-Laser is a quality-driven company, which ensures gaskets ship to the highest standards.
UV lasers and IR lasers are a very attractive choice when designing and producing custom gaskets for your process.
Rosary group meditation
The Rosary can be said either individually or in a group. The following page demonstrates a method of using the Rosary as part of a group meditation. It combines the method used for saying the Rosary individually with readings and reflection / meditation on scripture (or other sacred writings).
The group should allocate a leader and four readers. A scripture passage (or other sacred writing) is divided into four readings.
The leader ensures that the prayer space has been prepared for the group: a quiet area with minimal disturbance. Icons, candles, incense or other liturgical resources may be used but are by no means necessary.
A choice of chairs, cushions, kneelers etc may also be of benefit, if available. The leader should also arrange for copies of any hymns, chants or recorded music that might be used during the meditation. Copies of readings should also be made available to the readers, this should be typed or clearly written so that readers can read in low (candle) light without strain.
The leader ensures that the prayer space has been prepared in sufficient time for the arrival of the group, and copies of the prayers are made available to people if unfamiliar with the group process, again this should be typed or clearly written to be able to be read in low light.
The group comes together in silence, or with some background music. Newcomers ideally have been prepared beforehand. If this has not been possible, brief instructions can be given, followed by a period of silence; quiet music may be used to allow the group to enter into the space and time of the meditation. It is expected that no further instructions or discussion will take place once the prayer has begun.
The Meditation
The recitation of the Rosary in the group meditation is similar to that used in individual recitation; with the following variation.
The group might begin with a hymn or chant. Afterwards the leader will begin the Rosary meditation with the Cross prayer, perhaps written in some responsorial form, or said as a group. Similarly the Invitatory prayer is said either as a group or in responsorial form, as is the first of the Cruciform prayers.
Then the first reader reads their section of the scripture / writing. A period of silence for reflection and meditation on the word is kept; reflective music may be played during all or part of the time of reflection. The leader then begins reciting the Week prayer on each of the Week beads and is joined by the group. The second Cruciform bead is then recited, followed by the second reading and a time of reflection in silence and/or with music. The same pattern is followed for the third and fourth Cruciform prayers and readings, again with a time of reflection after each reading and before the recitation of the Week prayers. After the last group of Week prayers, the group finishes on either the Invitatory bead followed by the Cross prayer again, or simply on the Cross prayer. The leader then completes the group with a Collect.
Individuals should leave in silence, leaving space for those who wish to stay and reflect, meditate or pray longer. If the group desires to meet for further fellowship or Bible study, they should proceed to an area away from the prayer space so as to allow those remaining to continue to pray or meditate in silence.
For a suggested group meditation format please see the exemplars page.
6 Ways Medical AI is Transforming Healthcare
The evolution of healthcare is one of the greatest success stories of modern times. But, as the ongoing coronavirus pandemic and the rising threat of antibiotic-resistant bacteria have shown, there is always room for improvement. As life expectancies increase, healthcare services face constantly growing demand. Rising costs and a need for more healthcare workers are making it harder to meet those needs and improve outcomes.
Despite the unstoppable forces driving demand, new technologies like artificial intelligence (AI) promise to make healthcare more scalable and sustainable. With investors pouring $4 billion into healthcare AI startups in 2019 alone, it is safe to say the industry is undergoing a radical transformation. AI is becoming instrumental in speeding up operations like drug discovery and diagnoses, while robots will assist in complicated surgeries.
Here are some of the most exciting applications for medical AI:
1. Medical diagnosis
Applying AI to medical diagnoses can greatly augment the capabilities of medical teams by automating routine diagnostic operations, minimizing human error, and saving time. The technology has now reached a level where, in applications like cancer detection, it can perform at the same level as trained and licensed radiologists. Because AI works at machine speed, the automated analysis of medical images like MRI scans and X-rays happens much faster, allowing doctors to diagnose diseases in less time.
AI-powered diagnosis is also helping us garner a better understanding of COVID-19. V7 Labs is one company creating an annotated dataset so researchers can better understand how COVID-19 impacts the lungs in the longer term.
2. Robotic surgery
Performing surgical tasks requires an immense degree of care and accuracy. Whether routine or specialized, these often time-consuming operations can quickly become a massive burden, and overworked surgeons can introduce an increased risk of human error, with potentially catastrophic consequences.
While robots probably won’t be fully replacing surgeons any time soon, they can complement surgeons’ capabilities with better imaging and greater precision in factors like speed, depth, and trajectory. Patients benefit too, with quicker healing times and less pain and scarring from robotic-assisted surgeries, which are becoming more common.
However, because surgery is often nuanced and requires adaptability, it cannot always be automated according to pre-programmed behaviors. There is a clear and growing need for AI that uses deep-learning data to supervise the process and augment surgical capabilities.
3. Virtual nursing
AI-powered virtual assistants such as Amazon Alexa and Google Assistant have already been a part of our lives for some years, but recent advancements in technology are enabling a similar concept in healthcare and other sectors. Many nursing routines are repetitive and do not necessarily require a human touch, in which case they can and should be automated to free up time for nurses to focus on things that only they can do.
Virtual nursing assistants and chatbots are important factors in the changing healthcare equation. They can guide patients through various self-care routines, offer medical advice, and automate routine operations like appointment scheduling and patient admission and discharge—especially useful for remote patients.
4. Drug discovery
Pharmaceutical development is incredibly expensive, taking years or even decades to develop new drugs and test them for efficacy and safety. One of the main reasons for these long time spans is that it often requires a monumental effort to analyze vast data sets. On the other hand, thanks to the digitization of health records and other medical documents and archives, AI and automation technologies can be used to comb through the data, uncovering relevant insights at a much faster rate.
For example, global pharmaceuticals company and developer of one of the leading COVID-19 vaccines, Pfizer, uses the IBM Watson machine-learning platform to change drug discovery for the better, making it quicker, cheaper, and more effective.
5. Precision medicine
The goal of precision medicine is to build and optimize diagnostic and treatment pathways and develop more accurate prognoses. Clinicians have the opportunity to act more quickly with improved outcomes, detect the early onset of cancers and other diseases, and develop customized therapies and personalized medicine.
While a revolutionary advance in healthcare, such an approach relies on AI to analyze ever-increasing amounts of unstructured and unlabeled data sets to capture and understand key variables in everything from environmental factors to genetics to a patient’s previous medical history.
6. Administrative tasks
The U.S. spends 17% of its GDP on healthcare, outspending most other developed countries on a per capita basis. Much of this is due to burdensome administrative tasks and red tape, which don’t always directly lead to better healthcare outcomes. For example, in the U.S., almost half of healthcare funds are spent on administration.
By applying intelligent automation to many of these routine operations, costs could be reduced by improving efficiency, leaving more funds available to support affordable healthcare or further advancements in the industry. After all, AI can help overcome the challenges of scale, allowing hospitals to make better use of their resources.
How can a managed workforce help?
Despite these amazing developments and breakthroughs, scaling AI in healthcare remains a big challenge. Data sets need to be curated, cleansed, and annotated accurately. These labor-intensive tasks make clear the need for outsourcing. However, given regulatory concerns, the need for accuracy, and the question about whether only healthcare experts should do the work, these are not the sort of processes healthcare organizations should outsource to just anyone.
In fact, the CloudFactory team worked with one medical AI company to label medical images at scale, so it could develop its predictive advice algorithms to help practitioners garner a better understanding of health issues to aid in preventative care.
This is where the value lies in working with a reputable and experienced managed workforce to scale AI model development without compromising on quality.
Learn more about how our managed workforce can help you meet the challenges of scale in medical data annotation.
A Short History of the Fire Extinguisher
Fire extinguishers provide the first line of defense for home and business owners against fires of all kinds. Over time, fire extinguishers have helped minimize the spread of fires and prevent fire damage, injury and death, but a lot of people don’t know much about the history of the fire extinguisher or how this technology came to be. To learn more about the origins of fire extinguishers, keep reading.
Who created the first fire extinguisher?
Fires have always been a reality for people, but the danger caused by fire escalated as communities began living in more densely populated areas with lots of wooden structures. For a long time, the only option available to people who needed to extinguish a fire was to use water or try to suffocate the fire by stomping on it or covering it with cloth.
It wasn’t until 1723 that the first design for a chemical fire extinguisher was patented. This original patented fire extinguisher was created by Ambrose Godfrey, a chemist. Godfrey’s fire extinguisher featured a chamber filled with fire extinguishing chemicals and an ignition system designed to scatter the solution over a fire.
History of the fire extinguisher
Over time, more modern fire extinguishers were developed, and they had many similarities to the modern fire extinguishers in use today. In 1818, a British captain developed a fire extinguisher made with a copper vessel filled with potassium carbonate and compressed air. A similar fire extinguisher design made with soda acid was patented in the United States in 1881. This design used a chemical reaction to create a stream of pressurized water that could be used to put out a fire.
The first chemical foam fire extinguisher was developed in 1905 by a Russian inventor. The foam fire extinguisher released a thick, fire-extinguishing foam when activated thanks to a chemical reaction between sodium bicarbonate and aluminum sulphate. Later on, more extinguishers were developed, including some gas extinguishers that were designed to offer effective fire extinguishing capabilities in a small and easy-to-use package.
Modern fire extinguishers are modeled on a design patented by the company Dugas in 1928. This dry chemical extinguisher was used for all kinds of fires and was heavily marketed for home fire protection. Over time, technological advancements led to the creation of more sophisticated and effective fire extinguishers for a wide variety of applications. Modern fire extinguishers can put out even large fires, which helps property owners avoid extensive damage and minimize the risk of injury when a fire ignites.
Fire protection services
Nobody knows fire extinguishers quite like the team at Carpet Capital Fire Protection Inc., and we’re here to help answer any questions our customers have about fire protection for their homes and businesses. Since 1977, we have been dedicated to providing our customers with the highest quality fire protection services and equipment available, including fire extinguisher inspection and maintenance. Find out more about what we have to offer by giving our team a call today. |
Knowledge base
Thinking About Data and Analysis
What to Consider When Thinking about Data and Analysis
Define your research questions. Every data collection and analysis effort has to begin with asking the right research questions. Questions should be measurable, clear and stated as simply as possible.
Decide what to measure. Consider what kind of data you need to answer your key questions. Does answering your research questions require quantitative or qualitative data or both? In general, when you measure something and give it a number value, you are creating or working with quantitative data. When you classify or judge something, you are creating or working with qualitative data.
• Quantitative data deals with things you can measure objectively. This data type is measured using numbers and values and may include things like the area of a land parcel, how many people live in a household, ages and dates of birth to name a few.
• Qualitative data deals with characteristics and descriptors that can’t be easily measured, but can be observed subjectively. Qualitative data includes things like measuring perceptions about tenure security or measuring attitudes toward government land policies.
Decide how to measure it. Thinking about how you measure your data is just as important as deciding what you want to measure.
• How frequently do you need to collect and analyze your data: monthly, quarterly, or annually?
• What is the type of data? Are you working with quantitative or qualitative measures?
• What is its unit of measure? Will this data be measuring people’s perceptions with a text or narrative response or will it be measured using numeric values like hectares or acres?
Document your steps. Consider the software you use for analysis, and whether those applications automatically generate information about your data files (metadata) and process steps (such as log files). Keeping track of your data processing and analysis steps can save you time when you want to recreate your work, or share your methodology with others.
Boost your skills. If you’re considering using a new application you are not familiar with, or you just want to learn more about software that you use regularly, look for training opportunities. There are a wide variety of online courses you can take to increase your skill level.
Keep your data safe. Document and describe your data as you capture it, organize your files, and make smart choices about where you store your data. Since some software programs produce files that are proprietary and can only be opened in their applications, consider saving data in formats that can be opened by different software programs.
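As a small illustration of that last point, records captured in any tool can be exported to an open format such as CSV, which nearly every analysis program can read. The sketch below uses only the Python standard library; the file name, field names, and values are hypothetical examples, not a prescribed schema.

```python
# Minimal sketch: save survey records in an open, widely readable format (CSV).
# The field names and records are hypothetical examples.
import csv

records = [
    {"household_id": 1, "members": 4, "parcel_area_ha": 2.5},
    {"household_id": 2, "members": 6, "parcel_area_ha": 1.2},
]

with open("survey.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["household_id", "members", "parcel_area_ha"]
    )
    writer.writeheader()
    writer.writerows(records)

# Reading it back works in any CSV-aware tool; here, with the stdlib:
with open("survey.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print(rows[0]["members"])  # "4" (CSV stores all values as text)
```

Because CSV keeps everything as text, numeric fields must be converted on read, which is the usual trade-off for openness.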
Introduction – Process Costing
Process Costing
Process costing is one of the essential techniques for calculating product cost. Raw materials pass through various stages before they are finally converted into a completed product, and process costing accumulates the cost at each of these stages.
With the help of process costing, a manufacturer can calculate the total cost involved in each process and in the finished product.
In simplified form, it is a procedure used by businesses to determine the cost incurred at each phase of the various processes involved in manufacturing a product.
According to the ICMA definition, process costing is a form of operation costing applicable to the production of standardized goods.
Operation costing is defined as a combination of process costing and job costing, applied as the situation requires.
The output of repetitive or nearly continuous operations consists of standardized goods or services. Costs are charged to these processes and averaged over the units produced before final production is complete. The broad costing category covering this entire approach is operation costing.
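The averaging at the heart of process costing can be sketched with a toy example. The figures, process structure, and function below are hypothetical illustrations, not a standard formula from any costing body.

```python
# Toy process-costing sketch with hypothetical figures.
# Each process accumulates material, labor, and overhead costs;
# the cost per unit is the total process cost averaged over units produced.

def process_cost_per_unit(costs, units_produced):
    """Average the total cost of one process over the units it produced."""
    total = sum(costs.values())
    return total / units_produced

# Hypothetical two-process example: the output of Process 1 feeds Process 2.
p1 = {"materials": 5000.0, "labor": 2000.0, "overhead": 1000.0}
p1_units = 1000
p1_unit_cost = process_cost_per_unit(p1, p1_units)  # 8000 / 1000 = 8.0

# Process 2 receives the units at Process 1's cost, then adds its own costs.
p2 = {
    "transferred_in": p1_unit_cost * p1_units,
    "labor": 1500.0,
    "overhead": 500.0,
}
p2_units = 1000
p2_unit_cost = process_cost_per_unit(p2, p2_units)  # 10000 / 1000 = 10.0

print(p1_unit_cost, p2_unit_cost)
```

The "transferred_in" entry is what makes this process costing rather than job costing: each stage inherits the averaged cost of the previous stage instead of tracing costs to individual jobs.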
Neoliberalism and the Failure of the Arab Spring
The foundations for the Arab uprisings that took place in the wake of the 2008 financial crisis were laid in the years before by the neoliberal restructuring of Middle Eastern and North African economies. After decades of dropping trade barriers, lowering wages, and dismantling industry, Arab governments had stripped their populations of the social protections necessary to cope with the increases in unemployment and commodity prices and the stagnation in wages that were characteristic of the crisis (International Labour Organization, Global Wage Report 2012/13, Dec. 7, 2012). While the gutting of state industry and the opening of trade policy paid dividends for those who were well connected to the state bourgeoisie that had developed during the state-capitalist period in many countries, those changes left many vulnerable to international economic crises and reeling from a deepening sense of social inequality (Hanieh, Middle East Monitor, Mar. 1, 2015).
The Egyptian Opening and the Lost Decade
It is fitting that the process of neoliberal economic restructuring in the Arab world began in Egypt, which from 1952 to 1970 was the beating heart of the Arab nationalist project. The Nasser regime came to epitomize the Bandung Era—the era of independent, non-aligned Third World nations—and its “popular nationalism,” characterized by anti-imperialist foreign policy positions and economic and social modernization through the expansion of the state.1 The destruction of the Arab armies by Israel in 1967, followed by the ascension of Anwar Sadat to Egypt’s presidency after Nasser’s death in 1970, sparked upheavals in that country, particularly from the student movements and labor unions. Under extreme domestic pressure, Sadat, together with Syria, launched a limited war against Israel in 1973, but it “was enough for him to sustain a temporary boost in popularity and drag the carpet from underneath the feet of the student movement” (el-Hamalawy, al-Araby, Jan. 25, 2015). With the pressure of the Israeli occupation of the Sinai and the 1967 defeat relieved enough to ease domestic tensions, Sadat had the political capital to pursue a rightward political shift, abandoning the policy of confrontation with Israel and embracing neoliberal economic restructuring in exchange for an alliance with the United States.
The Egyptian pivot, along with the decline in oil prices and the debt crisis of the 1980s, gave international financial institutions the opportunity to effect change in the direction of an unfettered free market economy. Sudan in 1979/1980, Morocco in 1983, Tunisia and Egypt in 1987, and Jordan in 1989 all turned to the IMF and World Bank for financial and technical assistance. Algeria, Yemen, and Lebanon followed suit during the 1990s.2
Early adopters of neoliberal restructuring were held up by the Bretton Woods Institutions and the Western states as shining examples of economic reform in the region, despite the low growth rates, diminishing living standards, and repression of democratic expression in those countries, including the full nullification of elections in Jordan and Tunisia.
Whereas the neoliberal restructuring of Latin America is often considered the prototypical failure of this strategy, the Arab world experienced worse economic growth performance than did Latin America in its “lost decade.” The non-oil Arab states achieved almost zero growth, compared to the meager 1 percent or so for Latin America. The Middle East also emerged as the second largest indebted region after Latin America.3 Arab governments were compelled by the need to secure aid and favor from the West to transform their governments from “social states” to “regulatory states,” as “fiscal austerity was prioritized over employment generation and inclusive growth.”4 In the period 1970 to 1990, the Middle East and North Africa region as a whole experienced an average annual growth rate of -0.2 percent, compared to the +2.5 percent average for developing countries as a whole, and the 0 percent average for sub-Saharan Africa.5 The “structural deficiencies” that came to be the main economic grievances of the youth that led the uprisings in 2011 were a product of these neoliberal policies.6
Narrowing the Choices
Neoliberal restructuring was not only an economic process; it systematically redefined politics itself. Social welfare policies in the Arab world had been a response to genuine social pressures and movements. Ignoring the history of these policies, the neoliberals replaced them with a very narrow set of prescriptions: trade and financial liberalization; balanced state budgets; undistorted prices; reduction of state intervention in general, including the collection of rents; and the promotion of policies conducive to foreign investment. During the process of neoliberal restructuring, whole social structures were written off and liquidated, as if deviating from the natural state of the market could only be the result of a temporary insanity, disregarding the social forces that had swept the state-led model into existence. Ideologically, deviations from the narrow neoliberal understanding of the market were characterized as dangerous and radical. To facilitate this shift, whole economic histories needed to be rewritten; the pre-eminence of the United States and Great Britain were posited as a triumph of laissez-faire economics over lazy French interventionism, rather than a bloody hundreds-year-long process of redesigning the world by force of arms at the expense of entire civilizations. The Soviet Union was descending into oblivion, and the neoliberals triumphantly believed that they could put an end, not just to history, but to politics itself. As opposition to the international capitalist order became a less and less viable basis for an ideology of social transformation, an already disoriented anti-imperialism, which at its apex “self-consciously” placed itself “within the tradition of the European Enlightenment,” all but disappeared from the Third World (Malik, New York Times, Jan. 3, 2015). 
By systematically circumscribing the scope of political possibilities and stripping governments of their ability to protect workers and develop industry, the neoliberal pivot created a mass of people who were both materially deprived and socially vulnerable and, at the same time, lacked the discursive tools needed to understand and comprehend their positions. Thus, these societies were left only with what Tariq Ali, in his latest work, dubs “the extreme center,” an incontestable set of assumptions that severely hampers the ability of the state to deal with the problems inherent in peripheral economies.
In the Arab world, the modernization project was closely tied to the political legitimacy of the Arab state itself. The disorientation of the Arab world that resulted from the collapse of the Ottoman Empire was built upon the importation, by force of arms and without consideration of existing social structures, of the European nation-state to replace the (nominally) universal state of the Ottoman Empire. This process culminated in the dissolution of the caliphate by Kemal Atatürk, who argued that the nation-state was the only scientific form of social organization, in opposition to the universal Ottoman State.7 Nascent national leaderships, whether elevated by dominant powers or inheriting past divisions created by them, had to scramble to legitimize their positions, because, by the logic of the nation-state, the Arab world, with its common language, history, and culture, should have been a single state. Thus, the promise of social and economic modernization became the main source of popular legitimacy. As the other Arab states retreated from their raison d’être—the economic modernization project—“the implicit social contract struck between many Arab governments and their citizens began to fall apart.”8 Thus, as Samir Amin argues, in the process of abandoning the state-led model, these Arab governments liquidated their ideological content and popular legitimacy:
Thus, while the state-capitalist phase ultimately succumbed to its own economic contradictions, it at least produced coherent systems with positive visions, as opposed to the nihilism of the neoliberal program, in which the masses are seen not as the backbone of society to be elevated through productive and socially coherent employment, but as cannon fodder to be marched into free-trade industrial zones, perpetual casualties in the never-ending war to attract international capital.
The Neoliberal Ascendance
The ascension of neoliberalism to the dominant ideological discourse has stamped alternative understandings of economic history firmly out of the popular consciousness. The cleansed version of economic history, discussed in Western capitals at length and with a teleological certainty that would not have been out of place in Stalinist propaganda, is, however, as politically driven as the Keynesian and socialist nemeses against which neoliberals define themselves. Neoliberalism simply redraws the market-state boundary in a way that is consistent with its own particular ideological components, a combination of political libertarianism and Austrian economics (considered archaic even when this union was cemented during the 1930s) which reflect firmly the social institutions of that environment. As Ha-Joon Chang argues,
The “market rationality” that neoliberals want to rescue from the “corrupting” influences of politics can only be meaningfully defined with reference to the existing institutional structure, which is itself a product of politics. And if this is the case, what neoliberals are really doing when they talk of the depoliticization of the market is to assume that the particular boundary between market and state they wish to draw is the correct one, and that any attempt to contest that boundary is a “politically-minded” one. If there appears to be a fixed boundary between the two in certain circumstances, it is only because those concerned do not even realize that boundary is potentially contestable. … In calling for the depoliticization of the economy, the neoliberals are not only dressing up their own political views as “objective” and “above politics,” but are also undermining the principle of democratic control [emphasis added].9
Effectively, neoliberalism has, using the force of Western capital, dragged the ideology of the far right into a new center. Jackson Lear, in a critique of Hillary Clinton’s autobiography (London Review of Books, Feb. 5, 2015) captures the neoliberal fantasy perfectly, arguing that in practice as well as theory
The centrists tend to be at least as ideologically driven as the zealots they deplore. The core of their ideology is the belief that the U.S. has a uniquely necessary role to play in leading the world towards an inevitably democratic (and implicitly capitalist) future. The process is foreordained but can be helped along through neoliberal policy choices. This muddle of determinism and freedom is a secular residue of providentialist teleology, held with as much religious fervor and as little regard for contrary evidence as other dogmatic faiths derided by self-styled liberal pragmatists. … Clinton’s utopian faith depends on fantasies of a reified technology, unmoored from class and power relations and operating autonomously as a global force for good.
The collapse of the state-led model thrust the neoliberal ideology onto the rest of the world, riding a wave of decrees from international creditors and financial institutions. The institutions that the peoples of the Third World built for themselves in their attempts to transcend the conditions of peripheral integration into the world economy were dismantled with reckless rapidity. National industries—once symbols of progress and national pride—were liquidated and parceled out (Hickel, New Left Project, Apr. 9, 2012) to multinational corporations to close balance-of-payment gaps, and societies heaved from the strain of social dislocation caused by massive unemployment and price hikes (Lewkowicz, Open Democracy, Feb. 9, 2015). As recent events in Greece have demonstrated, this process is not subject to a democratic check, even in a parliamentary democracy (Rankin and Smith, Guardian, Feb. 20, 2015).
The Current Crisis: A Clash of Extremists
In the Middle East and North Africa, the death of politics and the triumph of the neoliberal center have left a vacuum that culturalist ideologies, primarily Islamism, rushed to fill. Islamism is nothing more than an inverted Eurocentrism, and is incapable of dealing with the economic and political problems presented by international capitalism.10 These problems were once the territory of some form of socialism, which understood the profound failure and disarray of the Arab world after World War I as the result of European colonialism and the way in which the Ottoman Empire was integrated into the world market in a peripheral way. The solutions were thought to be political and economic: the creation of strong states to steward the development of industry and the tools through which to confront imperialism. When that collapsed, the region-wide pivot towards neoliberalism was rapid and hard.
In this context, the parallels between what happened to politics in the United States and what has happened in the Middle East as a result of the ascension of neoliberalism to hegemonic dominance are easy to see. The discourse that was once the venue through which real social conflicts were carried out is now hollowed out, leaving only the culturalist husks, perversions of actual political and economic grievances, which effectively “[transfer] struggles from real social contradiction to the world of the so-called cultural imagination, which is transhistorical and absolute.”11 Whereas this can take the form of identity politics, xenophobia, or “the culture wars” at the center, in the periphery—where economic problems are more acute, social structures are in a position of perpetual collapse, and states struggle for legitimacy in the face of political humiliation and economic stagnation—this process is exponentially more extreme.
The Islamists rose to power through elections in Egypt and Tunisia, but their program failed spectacularly and with speed. What came to replace them were ancien régime figures, who also lacked true political ideology; they have no substance outside of their opposition to Islamism and their promise to restore stability.12 Just as Mohammed Morsi had no program to reverse Egypt’s economic stagnation, Abdel Fatah al Sisi’s campaign consisted of nothing more than being the anti-Mohammed Morsi (Achcar, Le Monde Diplomatique, Jun. 2013). He articulated no plan for getting Egypt out of the economic disaster it currently faces. In other words, they were both the perfect neoliberals: empty vessels galvanizing the whipped-up masses behind false understandings of their own history and the scope of choices available, the epitome of those “machine men with machine minds and machine hearts” that Charlie Chaplin’s character in the 1940 film, The Great Dictator, implores the world to reject. Al Sisi led the resurgence of the neoliberal technocrat, repackaged as the anti-Islamist crusader. Riding a wave of anti-Islamism, he has continued the neoliberal restructuring of Egypt at a pace that even Mubarak could not muster. A similar dialectic process occurred in Tunisia, albeit without the extremes of political repression and bloodshed. After winning the largest number of seats in the 2011 elections, the Islamic Renaissance party lost to the newly formed “Call for Tunisia” party in the latest parliamentary elections. Call for Tunisia is led by ex-Ben-Ali-regime apparatchik Mohamed Beji Caid Essebsi and essentially campaigned on the same anti-Islamist platform as al Sisi.
This is the Arab world we find today: a world in which the fiery articulation of deep political and economic grievances, rooted in a history of humiliation and stagnation, that manifested itself in the Arab Spring has been wholly extinguished by the binary of inverted Eurocentrism and farcical Bonapartism, which reinforce each other at every turn (El-Baghdadi, Foreign Policy, Dec. 19, 2014). The great tragedy, of course, is that neither program actually addresses the political and economic problems of the region caused by the demise of state capitalism and neoliberal economic restructuring. It is only in this perverse world, stripped of the discursive space necessary to articulate any opposition to neoliberal ideology, that the Islamic State menace can exist, that King Abdullah of Jordan, whose kingdom could not exist without British imperialism and American largesse, can don a flight suit and pose as some kind of strongman in the face of the Islamic State, or that a Gulf Cooperation Council jet flown by a female pilot can exist as a bulwark against Islamic extremism. It is only within this context that al Sisi can pose in the shadow of Nasser while cooperating fully in the crushing of Gaza (Kilani, al-Araby, Dec. 24, 2014) and the gutting of Egyptian social protections (Ramadan, Middle East Monitor, Jul. 18, 2014), where al Assad can successfully market himself as the civilized man in a battle with brutes from 1000 AD while killing hundreds of thousands of Syrians.
1. Samir Amin, Eurocentrism: Modernity, Religion, and Democracy, 2nd ed. (Monthly Review Press, 2009).
2. Hamed El-Said and Jane Harrigan, “Globalization, International Finance, and Political Islam in the Arab World,” Middle East Journal (Vol. 60, No. 3, Summer 2006), 444-466, 448.
3. El-Said and Harrigan.
4. Richard J. Heydarian, How Capitalism Failed the Arab World (London: Zed, 2014), 64.
5. Gilbert Achcar, The People Want: A Radical Exploration of the Arab Uprising (University of California Press, 2013), 11.
6. Heydarian, 11.
7. S. A. Sayyid, Fundamental Fear: Eurocentrism and the Emergence of Islamism (London: Zed, 1997).
8. Sheri Berman, “Islamism, Revolution, and Civil Society,” Perspectives on Politics (Vol. 1, No. 2, June 2003), 257-272, 263.
9. Ha-Joon Chang, “The Market, the State and Institutions in Economic Development,” Rethinking Development Economics (London: Anthem, 2003).
10. Sayyid.
11. Amin, 82.
12. Sayyid.
About Author
Yousef Khalil is a Master of Arts candidate at the New School University Graduate Program for International Affairs, studying economic development with a particular interest in the Middle East and North Africa region.
A computer science course for the craftsman
How do you make your own exe file?
What is it like to make a game?
Can you make your own Facebook?
The Nand2Tetris Course
Nand2Tetris is a computer science course in which you learn how to build your own computer. It is split into two parts, which have recently been packaged as two courses on Coursera. These are taught by the original authors of the course and the accompanying book, which gives you a more detailed view of the covered concepts. Most of the book’s chapters can be found on their site.
Building your own computer is a huge endeavor. But it will satisfy your hunger for knowledge and it will drive your interest in computer science to a whole new level. Apart from that, do you remember all those computer science subjects you studied in university such as boolean algebra, grammars, automata, etc.?
Even though we studied those subjects, we never really developed an understanding of why we studied them or how we could use them (at least I didn’t). But this course will provide you with answers, because you will not study boolean algebra to complete a set of tasks and get a high grade; you will study it to build your own CPU.
The course comes in two parts – first you build the hardware, and then the software running on top of it.
In the first part, you start from the atoms of a modern computer: the logic gates. You begin with the simplest logic gate found in computers, called NAND (that’s where the course name comes from).
You will use it to build other logic gates such as AND, OR, and XOR, up to more complex components such as a multiplexer.
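The course has you express these compositions in a hardware description language; purely as a sketch, the same constructions look like this in Python, with every gate built from NAND alone (the uppercase function names mimic gate names and are my own convention):

```python
# Sketch of the Nand2Tetris gate constructions in Python instead of HDL.
# Every gate below is composed only from NAND.

def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    # True exactly when the inputs differ: (a OR b) AND NOT (a AND b).
    return AND(OR(a, b), NAND(a, b))

def MUX(a, b, sel):
    # 2-way multiplexer: outputs a when sel == 0, b when sel == 1.
    return OR(AND(a, NOT(sel)), AND(b, sel))

# Quick truth-table check:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))
```

In the actual course you wire chips in HDL and verify them against supplied test scripts, but the composition logic is exactly this.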
You will learn what a register is and use it to create the computer memory we all know today as RAM. Then you build an Arithmetic-Logic unit (ALU) and, finally, your own processor.
The final mini-project of this part is to build an assembler, which translates symbolic notation into binary. Modern computers have their own machine language, and your computer is no different.
The good thing is that for building the hardware of your computer, you don’t need any prerequisite knowledge.
You will use a hardware description language which will be explained during the course and is not hard to use. The only exception is the final project of building an Assembler, as you will need to write a program that actually does the translation for you. However, you will choose the programming language and you don’t need a lot of in-depth knowledge.
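To give a flavor of that final project, here is a minimal sketch of the easiest piece of a Hack assembler: translating `@value` (A-instruction) lines into 16-bit binary words. The real project must also handle C-instructions, predefined symbols, variables, and labels, which this sketch deliberately omits.

```python
# Tiny sketch of one piece of the Nand2Tetris assembler: translating Hack
# A-instructions ("@value") into 16-bit binary words. The full project also
# handles C-instructions, predefined symbols, variables, and labels.

def assemble_a_instruction(line):
    """'@2' -> '0000000000000010': a leading 0 opcode bit plus a 15-bit value."""
    assert line.startswith("@"), "only A-instructions are handled in this sketch"
    value = int(line[1:])
    # format with 16 digits; for values below 2**15 the top (opcode) bit is 0.
    return format(value, "016b")

program = ["@2", "@3", "@0"]
for line in program:
    print(assemble_a_instruction(line))
```

Even this fragment shows the essential idea: an assembler is just a text-to-bits translator, one line of symbolic code per machine word.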
Surprisingly, it turned out that building the hardware of the computer is easier than building the software. In the second part of the course, you will build a virtual machine. It is very similar to the virtual machine of the Java programming language. Next up, you will build a compiler. Here, you will learn what a grammar is and how to implement it in code.
Finally, you build an operating system. Here is the part where you will learn what happens when you click the ON button of your computer. Ever wondered how the Heap memory actually works? Don’t worry, you will learn that too.
For this part of the system, you will need a more in-depth knowledge of a programming language. You will need to understand a bit more complex topics such as recursion, as well as Object-oriented programming. A basic knowledge of data structures and algorithms will be helpful as well, although they are explained in the course when used.
What you need to get started
As I mentioned, for the first part of the course, you don’t need any prerequisite knowledge. For the second part, understanding at least one programming language to a decent extent is necessary.
But most importantly, you will need to put in a LOT of effort for this course. It is not a feel-good course you just go through from start to finish by watching 20 minutes of video per day. You watch the lectures first, then you read the book and then you do the project.
The whole course consists of 12 mini-projects along with the accompanying lectures. Each one of those will take you at least 5 hours for watching the videos, 1 hour for reading the chapter in the book and anywhere from 5 to 20 hours of work for completing the project, depending on the complexity.
As you can see, it will be hard. The first part took me about 2.5 weeks of focused learning to complete, meaning I was studying and coding for 10 hours a day, every day. The second part took me around 3.5 weeks. If work limits you to studying on weekends, it will take you a lot more time.
Now, I’m not saying all this to stop you from taking it. Actually, it’s the other way around. Don’t consider this course a destination; consider it a journey. Every step you take toward completing it will make you at least a bit happier, because there is so much to learn from it. Completing it will feel more like playing a game than working.
But that is only if you have genuine interest in computer science and programming. Which brings me to the next topic:
Why bother with such a course?
Well, as you can see, this course won’t teach you many immediately marketable skills. Companies search for people who know how to use AngularJS, not someone who knows how to build a compiler. It is also not a particularly easy course; it will require a great deal of dedication and time to complete.
It is for those who want to gain a deeper understanding of computer science, not just a deeper understanding of the most popular JavaScript framework.
The knowledge that Nand2Tetris provides, though, is a long-term asset for you as a professional, because modern frameworks change while fundamental knowledge stays the same.
And you might say that this is all obsolete and no one needs this anymore, because someone else has done the job for you.
But even so, would you hire an electrician to model the whole electrical scheme of your house if he doesn’t know Ohm’s law?
Sure, there are devices that give you the metrics out of the box, but an understanding of the basics is still important.
The Nand2Tetris course is a work of art. It’s made by passionate people for passionate people. It covers computer science topics from A to Z and lets you take a breadth-first approach to the nature of computing systems.
So, if you are like me. If you want to understand how things work on the bare metal. If you consider yourself a craftsman, rather than simply a worker, then you will definitely love this course.
Once you complete it, you will feel utterly exhausted, but deeply satisfied.
It is a huge step on your journey to mastery.
How should Muslims respond to offensive freedom of expression?
Muslims should respond to others’ offensive freedom of expression in the light of the following Quranic verse:
لَتُبْلَوُنَّ فِيْٓ اَمْوَالِكُمْ وَاَنْفُسِكُمْ ۣوَلَتَسْمَعُنَّ مِنَ الَّذِيْنَ اُوْتُوا الْكِتٰبَ مِنْ قَبْلِكُمْ وَمِنَ الَّذِيْنَ اَشْرَكُوْٓا اَذًى كَثِيْرًا ۭ وَاِنْ تَصْبِرُوْا وَتَتَّقُوْا فَاِنَّ ذٰلِكَ مِنْ عَزْمِ الْاُمُوْرِ
“You will surely be tried and tested in respect of your property and your lives, and you shall surely hear many hurtful things from those who were given the Book before you and from those who set up partners with Allah, but if you endure with fortitude and restrain yourselves, that indeed is a matter of strong determination.”
[Al-Quran Surah 3: Verse 186]
In the light of the above verse, the two responses required from Muslims in the face of others’ hurtful expressions are ‘Sabr’ and ‘Taqwa’.
‘Sabr’ has many meanings, including:
Patience: Capacity to endure hardship, difficulty, or inconvenience with calmness and self-control
Forbearance: Tolerance and restraint in the face of provocation
Composure: A calm or tranquil state of mind
Equanimity: The quality of being calm and even-tempered
Steadfastness: The quality of being firmly loyal, unswerving, and unchanging
Firmness: Determination and resolution
Self-restraint & Self-control: The fettering and shackling of urges, violent emotions, and bad desires
The term ‘Taqwa’ also has many meanings, including: preservation, restraint from evil, uprightness, integrity, moral soundness, rectitude, incorruptibility of character, caution, and conscientiousness.
Another Quranic verse that is also worth consideration is as follows:
وَلَا تَسُبُّوا الَّذِيْنَ يَدْعُوْنَ مِنْ دُوْنِ اللّٰهِ فَيَسُبُّوا اللّٰهَ عَدْوًۢا بِغَيْرِ عِلْمٍ ۭكَذٰلِكَ زَيَّنَّا لِكُلِّ اُمَّةٍ عَمَلَهُمْ ۠ ثُمَّ اِلٰى رَبِّهِمْ مَّرْجِعُهُمْ فَيُنَبِّئُهُمْ بِمَاكَانُوْا يَعْمَلُوْنَ
Do not revile those [beings] whom they invoke instead of Allah, lest they, in their hostility, revile Allah out of ignorance. Thus to every people We have caused their actions to seem fair. To their Lord they shall all return, and He will declare to them all that they have done.
[Al-Quran Surah 6: Verse 108]
Breakthrough Proof Clears Path for Quantum AI – Overcoming Threat of “Barren Plateaus”
A novel proof that certain quantum convolutional networks can be guaranteed to be trained clears the way for quantum artificial intelligence to aid in materials discovery and many other applications. Credit: LANL
Novel theorem demonstrates convolutional neural networks can always be trained on quantum computers, overcoming threat of ‘barren plateaus’ in optimization problems.
Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental solvability problem known as “barren plateaus” has limited the application of these neural networks for large data sets, new research overcomes that Achilles heel with a rigorous proof that guarantees scalability.
“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, coauthor of the paper titled “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published recently by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”
As an artificial intelligence (AI) methodology, quantum convolutional neural networks are inspired by the visual cortex. As such, they involve a series of convolutional layers, or filters, interleaved with pooling layers that reduce the dimension of the data while keeping important features of a data set.
These neural networks can be used to solve a range of problems, from image recognition to materials discovery. Overcoming barren plateaus is key to extracting the full potential of quantum computers in AI applications and demonstrating their superiority over classical computers.
Until now, Cerezo said, researchers in quantum machine learning analyzed how to mitigate the effects of barren plateaus, but they lacked a theoretical basis for avoiding them altogether. The Los Alamos work shows how some quantum neural networks are, in fact, immune to barren plateaus.
“With this guarantee in hand, researchers will now be able to sift through quantum-computer data about quantum systems and use that information for studying material properties or discovering new materials, among other applications,” said Patrick Coles, a quantum physicist at Los Alamos and a coauthor of the paper.
Many more applications for quantum AI algorithms will emerge, Coles thinks, as researchers use near-term quantum computers more frequently and generate more and more data—all machine learning programs are data-hungry.
Avoiding the vanishing gradient
“All hope of quantum speedup or advantage is lost if you have a barren plateau,” Cerezo said.
The crux of the problem is a “vanishing gradient” in the optimization landscape. The landscape is composed of hills and valleys, and the goal is to train the model’s parameters to find the solution by exploring the geography of the landscape. The solution usually lies at the bottom of the lowest valley, so to speak. But in a flat landscape one cannot train the parameters because it’s difficult to determine which direction to take.
That problem becomes particularly relevant when the number of data features increases. In fact, the landscape becomes exponentially flat with the feature size. Hence, in the presence of a barren plateau, the quantum neural network cannot be scaled up.
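The "exponentially flat" scaling can be illustrated numerically. The toy sketch below is my own illustration, not the paper's quantum convolutional circuits: it estimates the variance of a local observable's expectation value over random n-qubit states, a standard proxy for typical gradient magnitudes, which for Haar-random states falls off roughly as 1/(2^n + 1).

```python
import numpy as np

def z0_expectation(psi):
    """<psi|Z_0|psi>, where Z_0 gives +1 on basis states whose first
    qubit is 0 and -1 otherwise (first vs. second half of the vector)."""
    half = len(psi) // 2
    probs = np.abs(psi) ** 2
    return probs[:half].sum() - probs[half:].sum()

def expectation_variance(n_qubits, n_samples=2000, seed=0):
    """Estimate Var[<Z_0>] over (approximately Haar-)random states.
    Analytically this is ~1/(2^n + 1), i.e. exponentially small in n."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    vals = []
    for _ in range(n_samples):
        # Normalized complex Gaussian vector ~ Haar-random state
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)
        vals.append(z0_expectation(psi))
    return float(np.var(vals))

for n in (2, 4, 6, 8):
    print(n, expectation_variance(n))
```

The variance shrinks by roughly a factor of four per added pair of qubits, which is why gradient-based training fails at scale unless the architecture avoids this concentration, as the Los Alamos proof guarantees for their QCNNs.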
The Los Alamos team developed a novel graphical approach for analyzing the scaling within a quantum neural network and proving its trainability.
For more than 40 years, physicists have thought quantum computers would prove useful in simulating and understanding quantum systems of particles, which choke conventional classical computers. The type of quantum convolutional neural network that the Los Alamos research has proved robust is expected to have useful applications in analyzing data from quantum simulations.
“The field of quantum machine learning is still young,” Coles said. “There’s a famous quote about lasers, when they were first discovered, that said they were a solution in search of a problem. Now lasers are used everywhere. Similarly, a number of us suspect that quantum data will become highly available, and then quantum machine learning will take off.”
For instance, research is focusing on ceramic materials as high-temperature superconductors, Coles said, which could improve frictionless transportation, such as magnetic levitation trains. But analyzing data about the material’s large number of phases, which are influenced by temperature, pressure, and impurities in these materials, and classifying the phases is a huge task that goes beyond the capabilities of classical computers.
Using a scalable quantum neural network, a quantum computer could sift through a vast data set about the various states of a given material and correlate those states with phases to identify the optimal state for high-temperature superconducting.
Reference: “Absence of Barren Plateaus in Quantum Convolutional Neural Networks” by Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger and Patrick J. Coles, 15 October 2021, Physical Review X.
DOI: 10.1103/PhysRevX.11.041011
1 Comment on "Breakthrough Proof Clears Path for Quantum AI – Overcoming Threat of “Barren Plateaus”"
1. Dan'l Danehy-Oakes | December 12, 2021 at 12:31 pm | Reply
Why, in heaven’s name, does anyone think that quantum AI is a good idea?
Arthur C. Clarke (from memory, so the wording may be off) wrote: “The first truly intelligent machine we invent will be the last thing we need to invent.” He later amended this to “…may be the last thing we are allowed to make.”
There is nothing we need QAI for so badly and urgently, that we should rush ahead without studying the possible consequences and working out how to mitigate them.
Griffins are the size of mountain lions, with wings that are used to glide. So a group of griffins, instead of pursuing their prey over ground, could jump off a high rock, glide downwards, and go for the kill. According to Wikipedia, "Cougars are ambush predators, feeding mostly on deer and other mammals". So, supposedly, wings that enable the griffin to glide should improve its hunting capabilities.
But how would deer, elk, mountain sheep/goats be different in response to this? Would there be no difference?
My thinking is that they would perch on the ridges and swoop down into the valleys to get their food rather than ambushing. This would change the evolutionary pressures on their prey, mostly raccoons and ungulates, forcing them to look up more and to hide in wooded areas. For browsers, deer and their kin, this would actually be a boon, but for the grazers not so much. So: faster, more cryptic animals with better upward vision. The swooping down suggests vision-based hunting, so the winged catamounts would either be diurnal or have owl-like night vision.
Its prey would take even more advantage of the Selfish Herd effect, so you should see an increase in herd sizes. Being in a large group, so that your buddy gets eaten instead of you, would work well against swoopers. Alternately, staying in the thick woods and eating browse would be a solution. Running around by one's self in meadows would not.
They would also most likely be extinct in the United States, because they would be a threat to livestock and settlers and be fairly easy to hit with a rifle. Flying pumas, being non-ambushing apex predators, wouldn't be good at hiding. Historically there were bounties on non-flying pumas; griffins would definitely have them. Also, the beautiful plumage.
The joke about flying cougars in DodgeBall: A True Underdog Story would not be a joke.
• $\begingroup$ I don't get it (the joke). $\endgroup$ Feb 3 '16 at 18:51
There are two options here, the gryphons can fly (like an eagle) or the gryphon can glide (like a flying squirrel).
In the former case you're looking at herd animals of larger sizes than normal keeping a sky watch. Large eagles can take young lambs, fawns and the like. However, as they get older they get too big to remain prey from the sky. If there were much larger avian predators, larger creatures would remain alert to threats from the air. There might also be a tendency to become larger. Exceeding the maximum size the predator can handle being a valid defence from aerial predation. (I'm ignoring minor details like the things being too heavy to fly.)
In the latter case I can't see it being an especially practical hunter if it can only glide from a perch. Animals would largely be safe on the open plains as long as they kept clear of high cliffs. As an example of this sort of learned behaviour, there are herds of deer that still won't approach where the cold war borders used to be.
• $\begingroup$ "Exceeding the maximum size the predator can handle being a valid defence from aerial predation." This is exactly why we don't see Rocs anymore: elephants got too big. $\endgroup$
– King-Ink
Feb 3 '16 at 18:59
If griffins became an apex predator, no other creatures could harm them (except maybe a large moose). The ability to fly would not only make ambushing prey a lot easier; chasing down and overpowering prey would also become easier. The usual tricks that prey animals use to get away from a mountain lion wouldn't work on a griffin.
Because of this, the populations of other animals would be devastated as the griffins hunted them toward extinction. This would eventually backfire: with little food left for the griffins to eat, their population would drop quickly. As the griffin population went down, the prey populations would eventually recover. The rise in available food would then cause the griffin population to rise again. This would repeat over and over until the system eventually reached an equilibrium.
To summarize, you would end up with a world very much like today's, except with griffins as the apex predator (other than humans) and some species of prey animals extinct.
Prey animals that didn't go extinct would probably do most of their grazing near a cave, a thickly wooded forest, or some other place where they could cancel out a griffin's flight advantage. If they managed to detect the griffin before it ambushed them, they could retreat into the cave.
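The boom-and-bust cycle described in this answer is the classic Lotka-Volterra predator-prey dynamic. A minimal sketch (with made-up, purely illustrative parameters, not data about any real population) shows the two populations oscillating in a recurring cycle rather than crashing to extinction:

```python
# Lotka-Volterra predator-prey model, integrated with simple Euler steps.
# All parameters and starting populations are illustrative only.
def simulate(prey=10.0, predators=5.0, steps=20000, dt=0.001,
             alpha=1.1,   # prey birth rate
             beta=0.4,    # predation rate
             delta=0.1,   # predator growth per prey eaten
             gamma=0.4):  # predator death rate
    history = []
    for _ in range(steps):
        d_prey = (alpha - beta * predators) * prey
        d_pred = (delta * prey - gamma) * predators
        prey += d_prey * dt
        predators += d_pred * dt
        history.append((prey, predators))
    return history

hist = simulate()
prey_vals = [p for p, _ in hist]
pred_vals = [q for _, q in hist]
# Both populations rise and fall, but neither hits zero.
print(min(prey_vals), max(prey_vals), min(pred_vals), max(pred_vals))
```

With these parameters the two populations chase each other around a closed cycle, matching the intuition above: prey numbers recover once predators decline for lack of food, and vice versa.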
• $\begingroup$ I think you've got a decent answer, but it's not clear enough. Try reviewing real life predator/prey relationships, and rewrite your answer to clarify what you're trying to say. $\endgroup$ Feb 3 '16 at 3:15
• $\begingroup$ First, let me just say all I have done is replaced mountain lions with griffins. Mountain Lions are apex predators, and they don't hunt everything else to extinction. I was wondering if nature would push for faster sheep, animals with increased jumping ability, that kind of thing. How would animas evolve to not be munched by a gliding mountain lion. $\endgroup$ Feb 3 '16 at 3:15
• $\begingroup$ @XanderTheZenon A griffin with the ability to fly would be harder to get away from, and ambushing animals would be much easier. The prey animals would not be able to compete; the tricks they use to get away from a regular mountain lion would not apply. I can't see how jumping higher would affect the situation much: a flying creature could easily dodge or otherwise outmaneuver a jumping one in the air. Jumping toward a griffin would only save it some time, because the griffin wouldn't have to fly all the way down to it. $\endgroup$ Feb 3 '16 at 3:25
• 1
$\begingroup$ I doubt prey animals would be hunted to extinction. There's a supply:demand ratio in effect: once it becomes too hard for a population of predators to find food, their numbers decline (for a variety of factors). With fewer hunters, the prey populations recover. globalchange.umich.edu/globalchange1/current/lectures/predation/… $\endgroup$ Feb 3 '16 at 18:04
• $\begingroup$ I didn't say all the prey would be hunted to extinction; I said some of them would. I agree with you that most would survive: as the food supply began to decrease, so would the number of griffins, until you eventually reach an equilibrium. $\endgroup$ Feb 3 '16 at 18:35
Child Psychiatry
Unlike traditional psychiatry, which rarely looks at the brain, Amen Clinics uses brain imaging technology to identify brain patterns associated with conditions that affect children and adolescents.
What is Child Psychiatry?
If your child or adolescent is struggling in school, acting out, rebellious, inattentive, sad and lonely, forgetful, disorganized, or hanging out with the wrong crowd, you may think it’s just normal growing pains and hope they will grow out of it. Many kids and teens do grow out of troublesome behavior and emotional problems, but not all of them do. When your child or adolescent’s troubling symptoms and behaviors persist and negatively impact their schoolwork, friendships, and home life, it’s time to seek help. Allowing a child’s symptoms to go unchecked can increase the chances of school failure, limit their opportunities to go to college, hamper their ability to get the job they want, and set them up for relationships filled with strife. In the worst case, it can lead to suicidal thoughts and behavior.
Who Suffers?
The most commonly diagnosed conditions among kids and teens are ADD/ADHD, behavioral problems, anxiety, and depression. Other conditions that affect children and teens include bipolar disorder, obsessive compulsive disorder, autism spectrum disorder (ASD), eating disorders, PANDAS, PTSD, Tourette syndrome, schizophrenia, aggression, and traumatic brain injury (TBI). According to the latest statistics from the CDC, the number of youngsters diagnosed with these conditions, as well as the number who took their own lives are:
• 6.1 million ages 2-17 have ADD/ADHD
• 4.5 million ages 3-17 have behavior issues
• 4.4 million ages 3-17 have anxiety
• 1.9 million ages 3-17 have depression
• 14,717 young people ages 10-24 died by suicide in 2017
What are the Symptoms?
Be aware that it is not unusual for children to have more than one condition. For example, nearly 2 out of 3 youngsters diagnosed with ADD/ADHD also have one or more other mental health condition. About half of all kids with ADD/ADHD also have a behavior problem or conduct disorder, and about 1 in 3 have anxiety. If your child shows multiple signs and symptoms, it could mean that co-occurring disorders are at work. See below for a list of warning signs—from school struggles to anger issues—that your child may need help.
The most commonly diagnosed conditions among kids and teens are:
• ADD/ADHD (6.1 million)
• Behavior issues (4.5 million)
• Anxiety (4.4 million)
• Depression (1.9 million)
• Concussions (up 71% from 2010-2015)
Why Choose Amen Clinics for Treating Your Child’s Issues?
Our brain imaging work at Amen Clinics shows that “mental health” conditions in children and teens are actually “brain health” conditions. One of the most exciting neuroimaging findings is that conditions like ADD/ADHD, depression, and anxiety aren’t simple or single disorders. In addition, brain imaging shows that head trauma is a major cause of psychiatric illness in children and adolescents, but few people realize it. For young people, getting accurately diagnosed and starting treatment early can be more effective and can help prevent long-term issues.
Children’s Brains Work Differently
Because young brains are still developing until a person’s mid-20s, untreated problems can alter brain development and lead to lasting problems in how the brain functions. For example, untreated ADD/ADHD increases the risk of depression, drug abuse, obesity, smoking, type 2 diabetes, and Alzheimer’s disease.
Healthy Brain Scan
ADD Brain Scan
SPECT (single photon emission computed tomography) is a nuclear medicine study that evaluates blood flow and activity in the brain. Basically, it shows three things: healthy activity, too little activity, or too much activity. The healthy surface brain SPECT scan on the left shows full, even symmetrical activity. The scan on the right, taken during concentration, is from a child with ADD and reveals decreased blood flow and activity (the areas that look like “holes”) in the brain’s prefrontal cortex, one of 7 brain patterns associated with ADD.
Warning Signs Your Child May Need Help
It can be difficult for parents to know whether the problems their children are having are just a phase or signs of something more serious. Some of the many red flags that could indicate your child is suffering from a mental health condition include:
Sudden Anger or Behavior Changes
This can include consistently low moods, crankiness, and a lack of interest in pleasurable activities or extreme mood swings. Be aware of any inexplicable deviation from your child’s normal routine, moods, energy levels, behaviors, or school performance. Frequent temper tantrums, aggression, or defiance are red flags.
School and Academic Struggles
Ongoing academic troubles can be due to behavioral issues or attention problems.
Fears and Worries
Children who have overwhelming concerns that interfere with their ability to do things or that undermine their performance at school may have anxiety.
Issues with Friends or Classmates
Difficulty making friends or trouble connecting with others may be a warning sign of a deeper problem.
Repetitive Actions
Some kids have physical tics or vocalizations, or they repetitively check things and get upset if they aren’t in the right order.
Physical Pain
Frequent complaints about body aches, headaches, or an upset stomach with no known cause.
Sleep Issues
Sleeping too much or having trouble staying asleep through the night is associated with several mental health conditions.
Weight Loss / Weight Gain
Avoiding eating, using laxatives, or vomiting may be signs of an eating disorder. Overeating can be a sign of using food to self-medicate bad feelings.
Substance Use
Some adolescents smoke, drink alcohol, or use drugs as a way to self-medicate.
Hallucinations
Hearing or seeing things others can’t see or hear is a red flag behavior that needs to be investigated.
Self-Harm and Risky Behavior
Children and teens who injure themselves or engage in risky behavior need help.
Suicidal Thoughts and Behavior
Suicide is the second leading cause of death among children and young adults ages 10-24, so if a child or teen is talking about suicide, it needs to be taken very seriously.
“When Your Brain Works Right, You Work Right”
– Daniel G. Amen, M.D.
10 Lost Cities Of The World You’ve Probably Never Heard Of
Angamuco—a lost, undocumented city
Angamuco is a long-lost city in Mexico that is believed to have had as many buildings as Manhattan. The city was uncovered using laser surveying technology in Western Mexico.
It is believed that at its prime, the ancient city of Angamuco had around 40,000 structures spread over an area of around 25 sq km, meaning it had roughly the same number of buildings as Manhattan, but on a much smaller plot of land.
Termessos—The Ancient City That Alexander the Great didn’t conquer
A city that Alexander the Great badly wanted to conquer, but never did.
Termessos was founded on a natural platform on top of Güllük Dağı, near Antalya in modern-day Turkey, soaring to a height of 1,665 meters.
The city, hidden within the mountains, was besieged by Alexander the Great in 333 BC. However, he never managed to conquer it.
The city features a Roman-style theater built to house up to five thousand spectators.
The city of Termessos was abandoned in 200 AD after an earthquake destroyed important parts of the city, including its primary aqueduct.
Baia—The Lost Las Vegas of Rome
Baia was an ancient city dubbed by experts as the Las Vegas of the ancient Roman Empire. After having been sacked and abandoned, eventually, it sank in a bay near Naples. Today, the city is visited by divers who enjoy incredibly well-preserved buildings and statues.
Ruins of Gedi
Located near the Indian Ocean in eastern Kenya, the ancient city of Gedi included a massive wall that protected the town. The Gedi ruins were first documented by colonialists in 1884 after Sir John Kirk, a British resident of Zanzibar, visited the site. Most archaeologists agree that the city was of great commercial value and one of the most advanced ancient sites in the region. Guess what? Its inhabitants had flush toilets hundreds of years ago!
The Legendary city of Troy
Thought to be a mere myth, the ancient city of Troy was actually found in modern day Turkey. According to Homer’s epic poem the Iliad, it was here where the Trojan War took place.
Today, the archaeological site of Troy is home to a treasure trove of historical artifacts. The site contains several layers of ruins. The present-day location is known as Hisarlik.
Timgad—The Lost city of the Roman Empire
Founded by Emperor Trajan around 100 AD, Timgad was a Roman military colonial town located in modern-day Algeria.
The ancient city is famous for representing one of the best-preserved examples of the grid plan as used in Roman town planning.
Chan Chan—Peru’s little-known diamond
Peru is home to countless incredible archaeological sites. One of them is Chan Chan. This ancient city is considered by archaeologists to be the largest city to exist in pre-Columbian America.
The city is home to a number of walled citadels which are believed to have housed ceremonial rooms, burial chambers as well as temples.
Around 30,000 people called the city of Chan Chan their home.
The buildings of Chan Chan were built using adobe brick and were finished with mud that was adorned with patterned relief arabesques.
Vijayanagara—one of the largest ancient cities in the world
The ancient city of Vijayanagara was one of the largest cities in the world, home to more than 500,000 inhabitants. The ancient Hindu city flourished between the 14th century and 16th century. The city was the capital of the Vijayanagara Empire. The name translates as “City of Victory”.
Ctesiphon—one of the greatest cities in Mesopotamia
The ancient city of Ctesiphon was one of the largest cities on the surface of the planet as well as one of the most imposing cities in ancient Mesopotamia.
Ctesiphon was captured by Rome and the Byzantine Empire five times. The city is located on the eastern bank of the Tigris. Ctesiphon is believed to have been founded sometime in the late 120’s BC. The most conspicuous structure remaining today is the Taq Kasra, sometimes called the Archway of Ctesiphon.
Ciudad Perdida—The Lost City of Colombia
Located in Colombia's Sierra Nevada lie the ruins of a lost city. Believed to have been founded around 800 AD, the city is home to countless terraces carved in ancient times into the Colombian mountainside.
Insulin - Melatonin - Glucagon axis genetic risk profiles
Circadian rhythms - the pancreas releases less insulin while we are asleep, presumably because the organism evolved not needing to digest food as much during sleep. The drop in insulin is mediated by melatonin (melatonin causes the pancreas to release less insulin), since there is a natural circadian cycle to the peaks and valleys of melatonin released during day and night. Generally, we don't want too much insulin circulating for long (as it's strongly connected with cancer), but we do want it timed so there's enough to break down the sugars post-meal, so we don't have elevated BG circulating too long.

However, it's easy to see how the melatonin release cycle might be mismatched with food/blood sugar for people with atypical sleeping patterns (maybe one reason for elevated cancer in night-shift workers?) - and that includes those atypical sleepers who, say, go to bed around 8 pm and get up around 2-3 am. Now it transpires that different people are affected by melatonin's signalling differently: "up to 30 percent of the population may be predisposed to have a pancreas that's more sensitive to the insulin-inhibiting effects of melatonin. People with this increased sensitivity carry a slightly altered melatonin receptor gene that is a known risk factor for type 2 diabetes." We are talking about rs10830963, and the risk allele is G (fwiw, mine is CG according to 23andme).

In any case, during the night another hormone is released - glucagon - which elevates BG in the absence of food; combine that with now-insufficient insulin and you get higher BG upon awakening. And if you have a fasting BG test first thing in the morning, you as a G carrier might then show elevated BG levels. The effect is so strong that it might lead to type 2 diabetes. Incidentally, some CR'd folks have odd BG levels - I wonder if it's due to being G carriers (30% of the population makes this a very common case).
The other interesting thing is that given how popular melatonin supplements are, researchers caution that regular melatonin supplementation can cause some serious diabetes problems down the road - and they actually tested that hypothesis and confirmed that part of it (i.e. indicating caution wrt. melatonin supplementation). If you know your status (through 23andme or otherwise), you can ponder the insulin-cancer-glucagon-diabetes-melatonin axis and adjust your food and sleeping patterns if so inclined. Here in Cell Metabolism:
rs10830963 is an eQTL in human islets conferring increased MTNR1B mRNA expression
Melatonin inhibits cAMP rises in mouse islets and clonal insulin-secreting cells
Melatonin blocks insulin release in mouse islets and clonal insulin-secreting cells
Melatonin’s inhibition of insulin release is stronger in risk allele carriers
There's also a pop writeup:
PMID: 19060908 (full free text available)
Genome-wide association studies have shown that variation in MTNR1B (melatonin receptor 1B) is associated with insulin and glucose concentrations. Here we show that the risk genotype of this SNP predicts future type 2 diabetes (T2D) in two large prospective studies. Specifically, the risk genotype was associated with impairment of early insulin response to both oral and intravenous glucose and with faster deterioration of insulin secretion over time. We also show that the MTNR1B mRNA is expressed in human islets, and immunocytochemistry confirms that it is primarily localized in beta cells in islets. Nondiabetic individuals carrying the risk allele and individuals with T2D showed increased expression of the receptor in islets. Insulin release from clonal beta cells in response to glucose was inhibited in the presence of melatonin. These data suggest that the circulating hormone melatonin, which is predominantly released from the pineal gland in the brain, is involved in the pathogenesis of T2D. Given the increased expression of MTNR1B in individuals at risk of T2D, the pathogenic effects are likely exerted via a direct inhibitory effect on beta cells. In view of these results, blocking the melatonin ligand-receptor system could be a therapeutic avenue in T2D.
The key point I'm citing is:
"A variant in the MTNR1B gene increases future risk of T2D and is associated with increased fasting glucose levels
First, we studied whether the MTNR1B rs10830963 SNP predicts future T2D in 16,061 Swedish (from the Malmoe Preventive Project, MPP) and 2,770 Finnish (from the Botnia study) subjects, 2,201 (2063/138) of whom developed diabetes during 400,000 follow-up years (Table 1). The frequency of the risk G-allele of SNP rs10830963 was higher in individuals from the MPP study who converted to T2D compared to non-converters (30.2% vs 28.0%, P=0.002). This yielded a modestly increased risk of 1.12 (95%CI 1.04–1.20, P=0.002). There was no significant difference between converters and non-converters in the Botnia study, but here only 138 individuals developed T2D during a 7 year follow-up period (31.0% vs 29.3%; OR 1.09, 95%CI 0.82–1.43, P=0.56). In the combined analysis of the two cohorts, the risk allele was associated with a 1.11-fold increased risk of future T2D (95% CI 1.03–1.18, P=0.004). This relatively modest risk for future T2D probably explains why this SNP was not identified as being associated with T2D in previous GWAS (OR 1.12 (95% CI 1.04– 1.20), P=0.003 in DIAGRAM). However, the effect on glucose levels seems much stronger; in non-diabetic individuals from the MPP study, risk G-allele carriers displayed a higher fasting plasma glucose concentration at baseline (CC: 5.38±0.54 mmol/l, CG: 5.44±0.55 mmol/l, GG 5.50±0.55 mmol/l, P=3×10−19), which remained elevated throughout the 25-year follow-up period (CC: 5.41±0.54 mmol/l, CG: 5.49±0.54 mmol/l, GG 5.55±0.54 mmol/l, P=2×10−31) (Figure 1E)."
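As a quick sanity check on the frequencies quoted above (my own back-of-the-envelope arithmetic, not from the paper), the crude odds ratio implied by 30.2% vs 28.0% G-allele carriage lands near the published estimate of 1.12; the small difference is expected, since the published figure is a model-adjusted estimate:

```python
def odds_ratio(p_cases, p_controls):
    """Crude odds ratio from two proportions."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

# G-allele frequency: 30.2% in converters to T2D, 28.0% in non-converters
or_crude = odds_ratio(0.302, 0.280)
print(round(or_crude, 2))  # ~1.11, close to the reported OR of 1.12
```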
The other interesting study is this:
PMID: 21195351 (full free text available)
Type 2 diabetes (T2D) evolves when insulin secretion fails. Insulin release from the pancreatic β cell is controlled by mitochondrial metabolism, which translates fluctuations in blood glucose into metabolic coupling signals. We identified a common variant (rs950994) in the human transcription factor B1 mitochondrial (TFB1M) gene associated with reduced insulin secretion, elevated postprandial glucose levels, and future risk of T2D. Because islet TFB1M mRNA levels were lower in carriers of the risk allele and correlated with insulin secretion, we examined mice heterozygous for Tfb1m deficiency. These mice displayed lower expression of TFB1M in islets and impaired mitochondrial function and released less insulin in response to glucose in vivo and in vitro. Reducing TFB1M mRNA and protein in clonal β cells by RNA interference impaired complexes of the mitochondrial oxidative phosphorylation system. Consequently, nutrient-stimulated ATP generation was reduced, leading to perturbed insulin secretion. We conclude that a deficiency in TFB1M and impaired mitochondrial function contribute to the pathogenesis of T2D.
Here we are talking about rs950994, and the risk allele is A (available to check through 23andme; fwiw, mine is GG).
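For anyone who has downloaded their raw genotype data, checking these two SNPs is a few lines of code. The sketch below assumes the usual 23andMe-style tab-separated export (rsid, chromosome, position, genotype, with '#' comment lines); the file name and the helper function are hypothetical, and the chromosome/position fields aren't needed for the lookup.

```python
# Look up the MTNR1B and TFB1M SNPs discussed above in a 23andMe-style
# raw-data export, counting copies of the risk allele per SNP.
RISK_ALLELES = {
    "rs10830963": "G",  # MTNR1B (melatonin receptor 1B)
    "rs950994": "A",    # TFB1M
}

def check_risk_alleles(path, risk_alleles=RISK_ALLELES):
    results = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip header/comment lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 4:
                continue
            rsid, genotype = fields[0], fields[3]
            if rsid in risk_alleles:
                results[rsid] = (genotype,
                                 genotype.count(risk_alleles[rsid]))
    return results

# e.g. check_risk_alleles("genome_export.txt") for a file containing the
# genotypes discussed above would give:
# {"rs10830963": ("CG", 1), "rs950994": ("GG", 0)}
```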
Thanks Tom - very interesting group of papers!
Boy, I wonder who you might be referring to in that last statement...☺
I agree with you that an odd sleep pattern (e.g. shift workers) coupled with eating at times that don't align well with the melatonin / insulin circadian rhythm could be bad news for glucose control and risk of diabetes. That's why I'm a strong advocate for eating one's calories early in the day (when insulin sensitivity is highest [1]), and leaving a long period between one's last meal and bedtime.
Despite the fact that I go to bed at 8pm, by that time of night (or day - depending on your perspective...), I've been fasting for 12+ hours. So while I totally agree with the first part of your statement (i.e. "we do want it [insulin production capacity] timed to have enough to break down the sugars post-meal, so we don't have elevated BG circulating too long."), the problem of such timing seems irrelevant to my situation, since I've aligned my eating pattern with my sleep pattern to a greater degree than virtually anyone else on the planet.
BTW, 23andMe says I've got the same profile as you for the two SNPs involved - I'm CG for rs10830963 and GG for rs950994.
Regarding the particular study of melatonin, insulin and glucose you posted, if you read the full text the results look rather equivocal. In fact I'm really confused where the authors came up with the title of their paper "Increased Melatonin Signaling Is a Risk Factor for Type 2 Diabetes" based on their own data reported in this paper. Perhaps they wanted headlines...
In particular, in the human part of their study, they gave rs10830963 purebred folks (i.e. CC or GG, not mutts like you and me with CG) 4mg/day of melatonin for 3 months and then tested their post-meal glucose, insulin and insulin sensitivity. The paper's graphs show the results before (left) and after (right) the melatonin treatment:
As you can see from A, at baseline the risk allele folks (GG) had higher postprandial glucose spikes than the CC folks. But 3 months of melatonin didn't make much difference in this relationship or the post-meal glucose levels of either group (graph B). From graphs C and E, it appears the reason for the higher post-meal glucose spike in the GG folks at baseline was lesser & later release of glucose-clearing insulin. After three months of melatonin supplements, the early post-meal insulin release in both the CC and GG folks went down (graph E), but their insulin sensitivity went up (graph F) to compensate. As a result, the post-meal glucose spike and amount of insulin released was virtually unchanged for both CC and GG folks after 3 months of melatonin (A vs B and C vs D).
So what exactly is the concern Tom?
Sure, it appears to be a mixed blessing to be a carrier of the G allele for this SNP - reducing insulin exposure on the one hand (a good thing) but increasing the postprandial glucose spike on the other (a bad thing). But regarding melatonin supplements, if anything it appears from the human data that both the CC and GG folks people achieved the same glucose control using less insulin (i.e. exhibited greater insulin sensitivity) after three months on melatonin. As you mentioned, keeping insulin levels low is good for cancer and a host of other health-related effects, as long as it doesn't result in increased glucose levels, which appears to be the case with melatonin supplements at least based on this study.
The second study you posted (PMID 19060908) followed 16K Swedes for 25 years to see how diabetes risk varied with whether or not they were carriers of the risky G allele for this same SNP (rs10830963). It appears the G folks were a bit more likely to become diabetic, but only very modestly so (~12% greater risk in this population, but no statistically significant increased risk in two other studies the authors cite). And this is in people who were likely eating a pretty crappy diet, so its relevance and significance for us is even more dubious. Yes, as you point out, the G carriers had higher fasting glucose, but likely as a result of having lower fasting insulin. So pick your poison.
In short, the fact that the G carriers had only a very modestly higher risk of diabetes in this one study (and no higher risk in several others) makes it appear the so-called "insulin-cancer-glucagon-diabetes-melatonin axis" is pretty tenuous, at least the part of it involving melatonin signalling.
I personally don't take melatonin myself - I'm sleeping like a baby lately. But if I needed to, I wouldn't lose sleep over taking melatonin to avoid losing sleep based on this data.
Am I missing something?
[1] C. Malherbe, M. de Gasparo, R. de Hertogh, J. J. Hoet. Circadian variations of blood sugar and plasma insulin levels in man. Diabetologia, December 1969, Volume 5, Issue 6, pp 397-404.
Blood sugar, plasma insulin, non-esterified fatty acids (NEFA), plasma cortisol, and urinary catecholamines were measured for 24 h in seven normal subjects receiving a standard diet. During the night, blood sugar and plasma insulin remained low, NEFA decreased progressively, and the excretion of catecholamines diminished. During the day, the insulin response appeared particularly important after the morning meal. This last observation was also made when normal subjects were given three identical meals at intervals of four and a half hours. Under these conditions, the postprandial elevations of blood sugar were not statistically different, but the plasma insulin rose significantly higher after the morning meal. These observations may be explained by the existence of a periodicity which would regulate the insulin secretion. It is also possible that the insulin liberated postprandially conserves a certain activity at the moment of the next meal, and still intervenes in the maintaining of blood sugar homeostasis. Later in the day, however, blood sugar homeostasis would necessitate a new synthesis of insulin, which would explain the delayed plasma insulin response to the evening meal.
Keywords: plasma insulin, blood sugar, non-esterified fatty acids, urinary catecholamines, circadian variations, meals
First of all, yes, if you are a risk allele carrier, indeed, you would benefit from keeping your meals well away from peak melatonin - in practical terms (unless you're a shift worker), this means "don't have late dinners", and don't snack in the evening. So you are in the clear :) - but this might be good info for those who have a different pattern (say, the Spanish, who love late dinners that start at 10:00 pm or later).
Re: differences - I do see some difference in those graphs. As to how meaningful those differences are, I can't tell; maybe someone studying diabetes could say "this amount of extra time BG is circulating will eventually lead to problems" - but as the authors hinted in the paper, this is just one of many things that go wrong when you develop DMT2. It is one factor, not the only factor, and perhaps not even the most important.
I just thought this might be interesting info here, because generally we're obsessed by insulin-glucose, and some of us have considered melatonin. I personally have considered it too, but there is still too little we know for me to take melatonin regularly.
I'm CC for rs10830963 and my Hemoglobin A1c is 4.1. But I attribute that low number (if it's even a relevant number to this conversation) to a vegan diet, a silly crazy active lifestyle, and to not one simple gene. I've been flirting with the idea of eating my one and only daily meal in the evening, say, a few hours before zzzz. But now I don't know. My reasoning for eating in the pm might be because anecdotally I feel sleepy after feasting like nutrition royalty. So perhaps giving my body the time and space it wants to digest the feast is smart. By resting after pigging out, do I donate more blood to the digestion process, as we hippies might say? Spend resources digesting all those greens, onions, beans, broccoli sprouts and all of the feast (rather than devoting full bellied energy to running around leaping during daylight hours) is a good idea? Eat, rest, dream?
Since we've no playbooks to guide our choreographies through the mess of life -- how best to eat, what best to eat, when best to move, when best to stop -- it seems intuitive to eat, then rest. Follow the body: listen with inward wisdom that borders on sounding woo. And although I've read -- no no no this is wrong, Sthira, you should eat, then go burn off that feast in order to keep BG down low -- this advice doesn't feel maternally inspired to me. No swimming until an hour after lunch, my mom wags her sunburned hand. Is mom wrong? My hba1c hints nope mom might be right: eat, then relax, take rest, go siesta. But eat one big meal, then go to nightly sleep a few hours later? I dunno. What's Watson have to tell us?
Edited by Sthira
Not sure where to post this, but since Time Restricted Feeding was discussed here this is probably ok. A new Dr. Rhonda Patrick interview, this time with Dr. Satchin Panda, an expert on TRF:
The first half hour or so is kind of intro fluff. Around 56 minutes he talks about a very recent finding from the past few weeks that melatonin receptors have been found on pancreatic beta islet cells, and when these receptors are triggered they inhibit insulin secretion... leading to the well-known differences in day/night insulin sensitivity.
There was a very brief BAT mention in the video, but no significant time spent on that during this interview.
Dr. Panda is using an app to gather crowdsourced TRF data; you can participate at www.mycircadianclock.org
In the last 10 minutes he talks a bit about another TRF finding which is that the bacteria types/quantities in your gut microbiome actually change through the day on their own daily cycle. And a brief mention of how TRF modifies uptake of simple carbs, and bile acids.
I finally got around to listening to the Rhonda Patrick interview with Dr. Panda on the benefits of time restricted feeding (TRF). Fascinating stuff. I too especially liked the apparently recently uncovered link between melatonin and insulin secretion (i.e. melatonin blocks insulin production by the pancreas).
Dr. Panda's website (http://www.mycircadianclock.org/) and its iPhone/Android app, by which random people like us can participate in a "citizen science" study of TRF by simply taking a picture of what we eat (with a timestamp for when), are really cool!
Thanks for bringing it to our attention. I've embedded the interview below.
I used the MyCircadianClock app for several months and even had some interaction with its developers. It's a nice way to track sleep and TRF patterns as well as BG and BMI. The Android version had quite a few bugs; they fixed many of them, but eventually I had to remove the app because it never freed up its internal memory: every time you took a picture of the food you were eating, the app used more memory. Maybe they have since fixed that, I don't know. Reinstalling the app temporarily fixed the problem, but that got annoying after a while. There are a bunch of popular press articles explaining these researchers' findings and how the app has been used to help some typical Americans who were eating pretty much around the clock.
Covid-19 in jails: Why tracking infected prisoners became necessary
This crisis in prisons points towards the need for prison reforms to ensure the right to life, healthcare and dignity.
Updated 22 May, 2020 12:42pm
Lockdown. The term recently hit its peak popularity on Google trends in late March as Covid-19 began to wreak havoc around the world. Self-isolation became synonymous with imprisonment. Freedom was lost.
But for nearly 11 million people across the world, lockdown has been a part of their everyday lives. For some, it has been that way for decades. We know them as prisoners, inmates, detainees, the incarcerated. Those deprived of liberty, with many of them not even convicted of a crime.
Covid-19 has brought to the fore how states have failed to keep prison systems safe across the globe, even in the case of some of the most developed countries. There are at least 115 countries with over-crowded prisons at this point, rendering them even more dangerous by making physical distancing impossible behind bars and leaving inmates scrambling for sanitation and healthcare.
According to the live global tracker put together by Justice Project Pakistan (JPP), Prison Insider, and independent investigator Melanie Carr, as of May 21, more than 46,600 prisoners in 67 countries have been infected with Covid-19 and out of these 839 have lost their lives.
When JPP first started tracking global figures of infected prisoners on April 6, we only came across numbers from 16 countries. At that time, the United States was reporting at least 1,000 infected prisoners, the UK was reporting 88, Pakistan had 49, and India had none. In just over a month and a half, cases among prisoners in the US alone have crossed the 37,000 mark, making the country’s prison system the worst impacted. The UK stands at 435, Pakistan at 384 and India at 400. These are just the numbers reported from official and unofficial sources and it is estimated that the actual numbers inside prisons are much higher, with many countries not divulging any data at all.
These numbers may not be accurate either. But even with the under-reported figures, we know for a fact that the infection rate inside prisons is far higher than that of cities. According to calculations made by JPP, in Lahore's Camp Jail, where Pakistan's first prisoner tested positive in March, the infection rate was 19.33/1000, compared with Lahore's 0.09/1000 (as of April 15). For Karachi Central Jail, where cases went from 40 to 252 within 24 hours, the infection rate was 71.84/1000, whereas the city's rate was 0.323/1000.
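For readers who want to sanity-check these comparisons, a per-1,000 infection rate is simply the case count scaled by population. A minimal sketch follows; the jail population figure below is an assumption back-calculated to match the reported rate, not a number from the article:

```python
def infection_rate_per_1000(cases, population):
    """Infected individuals per 1,000 people."""
    return 1000 * cases / population

# The article reports 252 cases and a rate of 71.84/1000 for Karachi
# Central Jail; a population of ~3,508 inmates is assumed here purely
# for illustration.
jail_rate = infection_rate_per_1000(252, 3508)
city_rate = 0.323  # Karachi's reported city-wide rate per 1,000

print(round(jail_rate, 2))    # 71.84
print(jail_rate / city_rate)  # the jail's rate is over 200x the city's
```

The ratio, not the raw counts, is what makes the case for mass testing behind bars: even a small absolute number of jail cases represents an enormous rate relative to the surrounding city.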
Reports suggest that eight of every 10 patients testing positive in Pakistan in the first week of May were asymptomatic and were only known to have contracted the virus after they were tested. Most of the prisoners who tested positive in Karachi Central Jail also did not exhibit any symptoms, but were detected when mass tests were carried out there. The jail is now conducting 300 tests daily.
By contrast, Punjab appears to have conducted only about 520 tests in Lahore's Camp Jail after the first prisoner tested positive, resulting in a total of 59 cases on its premises. There are no reports of any further mass testing in jails in Lahore, Sialkot, Dera Ghazi Khan, Gujranwala, Jhelum, Bhakkar, Faisalabad, Kasur and Hafizabad. All these cities have jails with prisoners that tested positive. There are also reports of one death of a Rawalpindi jail inmate, but with no official acknowledgement or confirmation. The numbers from Punjab jails haven't changed in more than three weeks, leaving activists wondering about the full scale of the outbreak in prisons.
Without acknowledgement and transparency, there cannot be an adequate response. And without mass testing, we cannot be sure about the extent to which the virus has spread behind bars.
Who makes up the prison population?
When we talk about prisoners, we often imagine a psychopathic villain straight out of a Hollywood movie. Probably in jail for a terrible crime who will go on a rampage as soon as he’s out. But the reality of prisons is quite different. Nearly 70% of Pakistan’s prison population consists of under-trial prisoners who have not been convicted of a crime. Worse, they are imprisoned along with convicted inmates and have to spend years in prison before a verdict in their case is announced. We also have a considerable number of sick, mentally ill and disabled prisoners, with many suffering from contagious diseases like HIV/Aids and tuberculosis, thus having extremely compromised immune systems. Then, there are juveniles, the elderly, and mothers with children. None of these people are a threat to society, and none of them deserve to be rotting inside over-crowded jails and living in unhygienic conditions during such a crisis.
The increased vulnerability of these prisoners during Covid-19 is compounded by the fact that family visits were banned as soon as the virus spread across the country. These visits were the only way for the prisoners to have some contact with their loved ones. This was also the only way for them to easily acquire medicine, soap, clothes, and other necessary supplies needed to survive in their cells. With the visits cancelled, many of these prisoners, particularly ailing ones who require regular medication and care, have been left neglected.
Although a necessary step, the lockdown of prisons has left inmates more frustrated worldwide, with many now rioting and demanding better security measures and alternative solutions. Prisoners are also reportedly inflicting self-harm or are hurting others so they can be put into the usually dreaded solitary confinement. That’s the only way for them to avoid being packed like sardines.
Tracking global numbers
What is the need for collating data on infected prisoners? Advocating for the rights of prisoners, especially vulnerable ones, those under trial, the young, the elderly, and the sick, during a pandemic has been quite a task for activists. Steps taken for their well-being have been met with much resistance, even by the top court of Pakistan, which earlier barred provincial governments from releasing prisoners.
However, it is critical and in the interest of everyone to protect the prisoners, because when one prisoner contracts the virus, they are not only likely to spread it to their community outside the jail through family visits and contact with prison staff, but can also pass it on to paramedics who treat them. Thus, neglecting prisoners at this time will be an added burden on an already collapsing healthcare system battling a virus that can only be eliminated when controlled from spreading.
The global tracker has hence been put together to show that the problem inside prisons is much more serious than what authorities had imagined or anticipated. Looking at the spread of the virus in other jails around the world (and how it was controlled in some places) can help advocates, activists and even governments make informed decisions about protecting prisoners.
When a virus spreads in prisons, it does not discriminate. We know of a Covid-19 positive pregnant prisoner in the US who died after giving birth while on a ventilator. We know of a man who spent 44 years in a US prison but died only days before his release. And an 85-year-old prisoner with breathing difficulties who recently died in India. There have also been outbreaks in jails for women and juvenile offenders in multiple countries.
These cases all point towards one thing: the need for prison reforms that must be carried out across the board to ensure that no individual is deprived of their right to healthcare, life and dignity.
As for our prison system, it is simply not equipped to deal with such a crisis. When Covid-19 was declared a pandemic, many assumed that prisoners would remain safe because of their lack of access to the outside world. But many enter and exit prisons on a daily basis, including family members of inmates and prison staff. And all the virus needed was one infected person to be anywhere inside a prison.
This is the time to remember that with no access to proper bedding and washrooms, let alone soap and clean water, the lockdown for prisoners has never been the same as ours.
The tracker's methodology
For the live global tracker, all data for infected prisoners and deaths around the world is first collated from official government sources, such as the ones provided by Canada, Brazil, Chile and South Africa, and then through news reports. In the case of the United States, official numbers are being provided on some states’ official Department of Corrections websites and for federal prisons the Bureau of Prisons website is sharing the information. Figures from other states, a majority of county jails and juvenile detention centres are sourced from news reports.
Official authorities in several countries, such as Iran, China and Syria, have not released information on infected prisoners, and the tracker relies on leaked reports for their numbers. Even for official numbers, human rights organisations across the world have raised concerns of under-reporting. The plus (+) sign in front of some countries' numbers indicates that the actual number might be much higher than what was last reported.
All sources for the figures can be accessed on this public spreadsheet here.
For more information on Covid-19 and prisoners, click here.
When it comes to English, mastering the language and understanding its fine details can be a daunting challenge for any foreigner. The language has many grammar rules. Although there is no denying that the rules are necessary, remembering all of them is difficult. Don't worry, though; you're not alone! Even people who are born and raised in the UK and the United States can struggle to master English, so it's not surprising that foreigners find it even harder to learn.
Getting to a level where you are fluent in both spoken and written English takes a lot of practice, but the outcome of learning it properly will be more than worth it. It will open up many doors for you in the future, in your personal life and the business world.
Learning a new language is always a worthy challenge to undertake, but you need to know what you’re getting yourself into in advance before you embark on learning the English language. As long as you’re aware of the journey ahead when it comes to such challenges, you won’t be surprised when you encounter some of the trickier parts of the English language.
English Language Words Specific To The United States Of America
When learning about the English language, you'll realize that many words are used exclusively in the United States. These words aren't made up; they're real words, but they are only used in American English. There's no real reason behind this; it's just how the language has evolved in the United States.
Such words and phrases can be complicated for non-Americans to understand, so let's start with one to get a basic idea of what we're talking about here.
An Example Of An American English Word
The word we’re going to be focusing on today is “mainland”.
“A large contiguous land mass that includes the greater part of a country or territory, as opposed to offshore islands and detached territories.”
What does mainland mean?
That's the dictionary definition, but what does mainland mean when it is used in the United States? You've probably heard this word before, and might even be wondering where mainland USA is. Well, the United States is broken up into a few sections: the main section of its land is home to 48 states, Alaska is a separate piece of land, and so is Hawaii. So, when we refer to the mainland, we're referring to that large chunk of land that houses the bulk of the country's states.
So, if you hear someone say that they are from the mainland in the USA, all they mean is that they live in a state that isn’t Alaska or Hawaii. It is that simple, but it can also easily lead to a misunderstanding if you aren’t entirely familiar with the English language or the United States as a whole. Mainland can, of course, also be used the way it is formally defined, but when using the word in the United States, you will be referring to the section of land with the 48 states.
However, the use of mainland is not limited to the 48 contiguous states of the USA. In Western countries, the term is also commonly applied to China: "mainland China" refers to the part of China that excludes Hong Kong, Macau and Taiwan, and "mainland Chinese" refers to people from that region. In other words, the appropriate use of the word mainland will vary depending on the place and region you are using it for.
The English Language
The English language is a colossal task to learn. With all of its subtle rules here and there, it can often take years to get a truly solid understanding of it.
The topic of when to use the term mainland comes up quite a bit, so hopefully today we were able to clear that up for you. Although this is just one of thousands of words you will need to grasp fully in the English language, we hope that you are at least more comfortable with what mainland means and how/when you should use it.
While the ‘one word at a time’ approach can be time-consuming, it can be extremely useful when trying to master words.
Happy learning, and if you know anybody who has been as confused as you about words like mainland, please share this article with them!
The Weight of Knowledge - Schoolbooks and Your Kid's Back
This is the time of year when mothers and fathers across the land begin to worry about what gear their students need for school, and questions about school backpacks come up time and again.
What’s the heaviest backpack my child should wear? What is too heavy – is there a maximum weight or is it based on a percentage of body weight? Is there a maximum time per day a heavy backpack should be carried? How do I choose the right backpack: cool looks-form-and-design or fit-and-function? One strap or two, multiple compartments or one, padded or unpadded, waist strap or not? What’s the best way to wear a backpack, high on the back, low on the back, one strap or two?
If my child has back pain, is it from the backpack design, improper wear, improper loading, excessive time, my child's deconditioning or obesity? If the backpack is too heavy in childhood and adolescence and causes pain, will my child have pain later in life as a result? If my child complains of neck and back pain, is it always the fault of the backpack? How do the teachers and medical societies weigh in? Since 2011 alone, more than 6,000 articles have appeared in the medical literature addressing these issues.
So, what’s the background information, and how common is back pain in kids? And is it always the fault of the backpack?
Among children between the ages of 11 and 14 years, almost 40% complain of neck and lower back pain. Of those in pain, 80% attributed their pain to backpack use. Several ergonomic studies show immediate deleterious effects of children standing, walking, climbing stairs, and balancing with excessively heavy backpacks. Heavy backpacks can cause neck, shoulder, and back muscular problems such as postural compensation and strain, and make kids more prone to injury and falling, especially if the loads are unbalanced. Girls complained of neck and back pain more than boys, especially if they wore the pack with one strap. Adolescents complain more of back pain when the packs are heavier and are worn for longer periods of time, such as more than the 10-minute average from bus to class.
But that’s not the whole story – it’s not always the pack.
Sedentary lifestyle is possibly the most important factor determining back pain among schoolchildren. Lack of physical activity contributes to loss of muscle strength and tone in the lower back. Students who complain of back pain after carrying their packs often complained of pain before carrying packs. Studies report that back pain in children is often related more to psychosomatic factors and daily experience with back pain rather than use of a backpack. Children who were deconditioned or referred to themselves as “sedentary” or felt fatigued while carrying their backpacks during the usual 10 minute walk from the bus to class, had more back pain than the children who described themselves as “fit or active”.
Given that we can’t be 100% sure where the pain is coming from, what do we recommend is the heaviest the pack should weigh?
I recommend limiting backpack weight to 10% of body weight. (A 100-pound student should carry 10 pounds.) Up to 20% of body weight is the maximum load that can be carried safely, according to several medical societies (pediatrics, physical therapy and orthopedics), although there is no absolute consensus. We all know that these recommendations are ignored routinely. In the United States, more than 9 out of 10 children carry backpacks that weigh more than 10% and up to 22% of their body weight. Those carrying the heaviest backpacks complained of pain 50% more than the others.
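The 10%/20% guideline is simple arithmetic; as a minimal sketch (the helper function below is my own illustration, not from any medical society's materials):

```python
def backpack_limits(body_weight):
    """Return (recommended, maximum) backpack weight under the 10%/20% guideline.

    Unit-agnostic: works for pounds or kilograms alike, since both
    limits are simple fractions of body weight.
    """
    return 0.10 * body_weight, 0.20 * body_weight

recommended, maximum = backpack_limits(100)  # the 100-pound student above
print(recommended, maximum)  # 10.0 20.0
```

A loaded pack can then be weighed on a bathroom scale and compared against these two numbers before the school year starts.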
The good news is, no study has ever shown that carrying a backpack that's too heavy leads to more problems later in life or to the development of problems like disk herniations or scoliosis. It just hurts now. The bad news is, children who have headaches, back pain and anxiety now often complain of the same symptoms later in adulthood.
But what about the backpack? There is no perfect backpack. Backpacks vary in design depending on their intended use, such as long trips, marching, camping, or daily use for school. My best recommendation for a backpack, based on the research, is as follows:
1. Two shoulder straps with a chest compression strap.
2. Plenty of lower back padding, and a waist strap.
3. Skip the water bottle or carry it empty to school.
4. Two compartments in the pack, and load the heaviest objects in the compartment closest to the spine.
5. Wear high and tight on the back rather than low and loose.
6. Try the backpack on as if you were trying on a pair of shoes. Bring some books with you, load the pack, and have your student walk around the store with that. Quickly, function and fit will supersede looks and coolness.
To all parents, here is the obvious conclusion: until all books are on CD-ROM or available 100% online, students will be carrying heavy packs to and from school. So remember, a max pack weight of 10% of body weight is best, never more than 20%. And kids, stay fit and get off the couch.
How People Make Their Wishes Come True
Our wishes can offer extraordinary insights into ourselves.
Key points
• People's wishes are often a reflection of their needs.
• Fantasizing about fulfilling a wish can sometimes hinder the pursuit of the wish.
• Contrasting positive fantasies with relevant internal obstacles can motivate people to devise plans to overcome hurdles and pursue their wishes.
One of Chekhov’s fictional heroes—a venerable professor of medicine—had a peculiar method for gaining insight into people. He considered their wishes. “Tell me what you want,” he would bid, “and I will tell you who you are.”
Much like the masters of arts and letters, psychologists have long been captivated with the why and how of our dreams. For over two decades, NYU psychologist Gabriele Oettingen has been researching the curious mechanisms of wish fulfillment, from inception to realization. The tale of every wish, it appears, revolves around the journey of four main protagonists: the dreamer, the dream, the fantasy, and the obstacle. The backdrop against which these protagonists are traveling is a tapestry of countless constellations of outer and inner circumstances.
Having studied thousands of participants, Oettingen and her colleagues discovered that upon making a wish, people typically followed one of these self-regulatory cognitive patterns:
1. Spend many pleasant hours in positive fantasies by imagining themselves fulfilling their dreams (indulging).
2. Brood over all possible obstacles standing between them and their dreams (dwelling).
3. First fantasize about their desired future, then explore the obstacles (mental contrasting).
4. First explore the obstacles, then fantasize about the desired future (reverse contrasting).
Positive Fantasies as a Double-Edged Sword
All of us dreamers visualize our dreams coming true. Perhaps that’s why one of Oettingen’s biggest surprises was finding what exactly these positive fantasies were doing to people who were indulging in them. The data was clear: the very images that allowed us to virtually live through our hearts’ desires were having hampering effects on their realization.
“In the beginning, we were so surprised to see this tendency, that I thought I must have made a mistake,” says Oettingen. Only after continuous replication of the studies with similar results did Oettingen and her colleagues recognize that they had come across a real phenomenon.
Why would fantasizing about fulfilling a wish hinder the pursuit of the wish?
Let's say that your long-standing wish is to publish a book. When you picture yourself with your bestseller already in your hands, standing in front of an applauding audience as you accept an award for your literary success while reporters queue up to ask you questions, you experience a "mental attainment" of your wish. In your mind's eye, you have already experienced the rewards of having achieved your dream. This virtual simulation of fulfillment can have a relaxing effect, leading people to exert less energy and effort than is actually required to turn their wishes into reality.
Oettingen and her colleagues tested this hypothesis. Indeed, participants who indulged in positive fantasies about winning a lot of money behaved like they were financially satiated and chose to forego an immediate monetary reward for a larger one in the future (Sciarappo, Norton, Oettingen, & Gollwitzer, 2015). Apparently, after having tasted winning the monetary prize in their fantasies and mentally attaining their desired future, they were not as concerned about receiving the money from the experimenters right then and there.
Why Mental Contrasting Can Help You Achieve Your Dreams
How can we offset the counterintuitive “problem” with positive fantasies and their tendency to dampen our drive to go after our dreams?
That’s where mental contrasting comes in.
“To get people out of their fantasies and give them the energy to pursue their dreams, we realized that we needed to give them a healthy dose of reality,” says Oettingen.
This came in the form of an obstacle that people could identify in themselves that stood in the way of their wishes. When a positive fantasy is contrasted with relevant internal obstacles, a connection is established, linking the desired future with the reality, and the reality (obstacle) with appropriate behavior to overcome it. This non-conscious process, according to Oettingen, can help generate energy by motivating people to devise binding goals, intentions, and appropriate plans to move past hurdles and pursue their wishes. If the expectations of success are high, people commit to the path of attaining their wishes; if they are low, the wishes will be abandoned or postponed.
Wish Outcome Obstacle Plan
One of the most widely studied wish-fulfillment strategies born from Oettingen’s research is WOOP (Wish Outcome Obstacle Plan). WOOP is mental contrasting in action. It’s the space where the four protagonists come together to get to know one another before embarking on their journey.
Oettingen calls WOOP a “change agent.”
“If you see something in the world that you want, something that is challenging yet feasible because you have some agency over it, WOOP gives you a framework for turning your positive fantasies into reality,” says Oettingen, who has used WOOP to accomplish many of her own wishes.
Get to Know Your Wishes—and Your Obstacles
An intimate exploration of our wish inventory through tools such as WOOP can offer extraordinary insights into ourselves.
For example, peek behind your dreams and you are likely to stumble upon some psychological need.
Oettingen and her colleagues observed this phenomenon in a series of studies. In one experiment, they asked participants not to drink any liquids for four hours before visiting their lab, where they were offered salty pretzels. Half of the participants got water, and the other half were kept thirsty. Results showed that the positive fantasies of those who were kept thirsty revolved around quenching their thirst. Those who had been allowed to drink, on the other hand, fantasized about events unrelated to water.
Similar outcomes were obtained with experiments conducted with psychological needs.
“When we instilled a need for meaning, people started positively fantasizing about getting a more meaningful job. When we instilled a need for relatedness, people fantasized about getting together with friends and family,” says Oettingen. “Thus, the fantasies are often an expression of what we don’t have.”
Other than understanding our deepest needs, exploring our wishes can also make us face the inner resistance that’s preventing us from realizing them.
Peeking behind those, too, might be an eye-opening endeavor.
“Behind the obstacles are often emotions, irrational beliefs, bad habits, or old hang-ups which we have been carrying with us for years,” explains Oettingen.
Thus, Oettingen’s advice: spend some time in the company of your wishes. Give them your full attention and listen carefully—with an open heart.
“Take 5-10 minutes of quiet to ask yourself one question: What do I really want? Find a wish that’s dear to your heart, in whatever life domain it may be. The key is to ask this question and patiently feel out the answer. Often, people don’t know what they want, or are told what they want. Remember, you are the expert of your life. You have your own needs and desires. Pay attention to your positive fantasies. They are vitally important because they give the action a direction. They represent a desired future, where you want to go and where you want to be.
"Once you have a wish, explore how you would feel already being there. Happy? Relieved? Pinpoint the best outcome and vividly imagine it by experiencing it in your mind. Only then, switch to the obstacle—what is it in you that stands in the way of realizing your wish? Try scratching at the surface and go deeper to understand what is behind your obstacle. You might discover that what you thought was keeping you from fulfilling your wish was not external as you always thought, but an internal resistance. It’s very helpful to gain clarity about what it is in yourself that’s truly standing between you and your dreams, because it will help you assess ways to overcome your obstacle. It could also help you recognize that surmounting the obstacle is too costly for now, or even impossible. But if you feel like the obstacle is surmountable, then it’ll give you the drive to pack away the excuses, step out of the fantasy, and go after your dreams.”
As much joy and fulfillment as we anticipate at the end of our wish-fulfillment journey, we would be remiss to ignore the jewels we encounter along the way: bravery and patience, creativity and flow, good intentions, and self-compassion. Perhaps, then, the biggest reward of pursuing our dreams lies in the opportunity to nurture our relationship with ourselves.
Many thanks to Dr. Gabriele Oettingen for her time and insights. Dr. Oettingen is a professor of psychology at NYU. She is the author of Rethinking Positive Thinking: Inside the New Science of Motivation (2014).
Oettingen, G. & Sevincer, A. T. (2018). Fantasy about the future as friend and foe. In G. Oettingen, A. T. Sevincer, & P. M. Gollwitzer (Eds.), The psychology of thinking about the future (pp. 127–149). New York, NY: Guilford.
Sciarappo, Norton, Oettingen, & Gollwitzer (2015), as cited in Oettingen, G., & Reininger, K. M. (2016). The power of prospection: Mental contrasting and behavior change. Social and Personality Psychology Compass, 10(11), 591–604.
Meanings of Personality Development
In this world there are many people, and each person has a very different definition of what personality is. Human beings are unique organisms because they possess a very powerful element called the mind. They are the only creatures with this tool; no other animal on the whole planet has it. Other animals have brains just as human beings do, but the ability of their brains is limited by comparison. Because of our powerful minds, we are able to shape much of the world around us. This is important to note because the mind plays a central part in personality: it is what develops, enriches, and boosts our personality, in both internal and external beauty.
There are several meanings of the word personality from different sources, and the dictionary offers numerous definitions. One of the main meanings is that personality is a set of a person’s characteristics. This set is made up of a person’s attitudes, interests, ways of behaving, responses to emotions, the role he or she plays in society, and many other personal qualities that last for a very long time. These characteristics are what make a person attractive, noticeable, and socially interesting. Personality can also mean a very well-known or famous person, such as an entertainer or an athlete, or an extraordinary, distinguished person. It can also mean the quality of being a person, which makes you exist as one.
These meanings look only at the outside of a person, but they are true nonetheless. The only way we can know a stranger is from the outside, that is, from appearance, because we cannot know a person’s character or behavior just by looking at them. Many people, however, judge one’s personality from the external, that is, the looks. This should not be the case. To know the personality of a person, you have to spend some time with them.
You should also note that, on this view, we do not inherit personality traits, and neither are they inborn. They develop in us from the minute we are born. We are the developers of our own personality. People around us can help us develop, but the truth is that we are the main developers. So, to build our personality, we have to learn through both academic and extracurricular activities.
Giant planet, moon detected 5K light-years away
A massive planet with a giant moon has been detected more than 5,000 light-years from Earth, according to findings published in Nature Astronomy. Astronomers say the planet is similar to Jupiter and the size of its exomoon is somewhere between Neptune and Earth.
Sigma Xi SmartBrief
Big Data Only Tells Half the Story, If You’re Lucky.
Big Data only points to symptoms.
Big data tells us what people did, but it rarely ever tells us WHY they did it. This quantitative data can indeed show us patterns of behavior, but it almost never uncovers the underlying cause of that behavior.
Consider this crazy scenario (ignore the obvious ethical implications and play along for the moment): a doctor opens the doors to the ER waiting room and finds a thousand patients waiting, all doubled over and clutching their bellies. This behavior is symptomatic of severe abdominal distress, but alone it is not enough for the doctor to safely prescribe a course of action. He’ll need at least 2 data points to even begin to uncover a pattern and consider safe treatment options.
So our dutiful doctor looks around the room, visually examines the patients, and notices that none of them are showing signs of external bleeding. This is good because he can now, with reasonable certainty, rule out gunshot and stab wounds. With 2 data points he can focus on internal ailments and begin to consider treatments.
If data alone were enough, our doctor might begin by offering some sort of antacid or other drug designed to soothe and calm an upset stomach. But that would be crazy, and possibly fatal to the patients. What options does our doctor have?
A/B testing will help but will only get us so far
Imagine now that our doctor takes an intuitive leap and assumes that not everyone is ailing from the same cause. He could, like any good CRO professional might do, treat half the patients with one remedy and the other half with another, then wait to see which group responds most favorably. The problem is that some people could die by the time he recognizes the effects.
What if neither group responds well? What now? Does our doctor split the groups again and try two new treatments? What if some of the people in one group respond and some in the other do as well? Then what? There are so many possibilities and variables. How can our doctor narrow this down?
To get to the cause you need to talk to the patient
Clearly our doctor can’t single-handedly interview each and every one of the 1,000 patients to gather a complete medical history and still help everyone. What if this is something more serious than a case of severe indigestion? Some patients would die before he got to all of them. What’s a good doctor to do?
He could begin with a small random sampling of patients and a short list of insightful questions. What was the last thing you ate, and where and when was that? When did you begin to feel the pain? Where do you feel pain and how intense is it?
Pulling aside as few as 10 patients could begin to uncover a pattern that sheds light on the cause of the ailment. Let’s say that after interviewing 10 patients, our doctor finds that 7 out of the 10 had attended the same social function and eaten the same food. Now he can get on the ER intercom, ask which of the other 990 patients also attended the function and had the same food, pull them aside, and treat them accordingly.
With this small amount of additional qualitative data, our doctor can now begin to treat all his patients more quickly and with greater effectiveness than he could by relying solely on big data. Why do I bring this up?
Treat the cause, not the symptom, for maximum results.
This is how we go about making many decisions these days with web design, minus the possible fatalities and questionable ethics, of course. Conversion Rate Optimization (CRO) is based almost entirely on the use of big data. Don’t get me wrong: CRO can be a highly effective way to improve the performance of a given page, albeit usually in small incremental percentages.
For some companies, a shift as little as a quarter of a percent can equate to hundreds of thousands, if not millions, of dollars lost or gained. But radical shifts come from gaining deeper insights into user motives and pain points than big data will usually ever uncover.
To fully understand how powerful the combination of big data and just enough qualitative data can be, look no further than Jared Spool’s story of the 300 Million Dollar Button.
The gist of Jared’s real-life story is that big data pointed to a symptom for one major online retailer: a high cart abandonment rate. They could have gone the CRO-only route of A/B testing the hell out of the existing design, but they would never have uncovered the real cause of abandonment. With a little in-person usability testing, Jared’s team was able to discover the real source of pain.
“I’m not here to be in a relationship” was the response from one test subject, and he was not alone in this feeling. It turned out that asking people to set up accounts at the start of the checkout process was putting them off. This kind of insight would likely never have surfaced through typical CRO practices. So what was the result of the changes they made based on that insight? A 45% reduction in cart abandonment and about 300 million dollars in gained revenue.
I encourage you to read Jared’s article to get the full story. The reduction in cart abandonment and the increased revenue were only two of many benefits the online retailer gained from doing just enough qualitative research.
Quantitative data + Qualitative data = Maximum ROI
Big data (quantitative data) will help you spot symptoms and optimize your experiences, so you need to utilize what’s available. Qualitative data, on the other hand, can transform the experience entirely by identifying the causes of pain. Used together, as illustrated by Jared’s true story, this is a combination that will yield the greatest results for both your business and your users.
To paraphrase a bumper sticker: Know users, know success. No users, no success. It’s not enough to know what the user did; you need to know why they did it. While quantitative data requires a fairly large sample to be useful, qualitative data can uncover impactful insights from a relatively small sample.
If you think you can’t afford to do that kind of research I’d say, you can’t afford not to.
Published by
Mike Donahue
I've developed a radar that I'm using to determine the distance to remote objects. It uses a custom PCB with an onboard FPGA that performs the DSP algorithms. The data from it is then plotted on a host PC. This appears as a 2D histogram where the y-axis denotes the FFT frequency bins (due to the nature of the radar this is proportional to distance) and the x-axis denotes time. The plot (shown below) gives a very strong signal at a distance halfway between the antenna and max range, which I'm unable to explain.
[Plot: 2D histogram of FFT frequency bin (proportional to distance) versus time, showing a strong signal at roughly half the maximum range]
The actual algorithms performed are: an FIR polyphase decimation filter (downsamples from 40MHz to 2MHz) which produces an output 1024 samples in length. Then I run it through a Kaiser window function with a beta of 6, followed by a 1024-point FFT, the result of which is transmitted to the host PC. For each value of t in the plot (the x-axis) the host PC averages over 30 1024-length sequences (averaged element-wise). Since all inputs to the FFT are real, the output is Hermitian symmetric and so I only plot the first 512 values of each output sequence. The strong signal you see above occurs at bins 257 and 258 (indexed from 0). I've tested the radar in an open space where it shouldn't generate any strong signals. I've simulated all of the FPGA logic, so while I can't be sure it's right (I've only formally verified parts of it), I'd be surprised if it wasn't.
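To make the bin bookkeeping concrete, here is a minimal NumPy model of the chain described above. This is a sketch only, not the FPGA code itself (the window and FFT actually run in hardware), and it models just the window/FFT/averaging steps after decimation:

```python
import numpy as np

FS = 2e6   # sample rate after the polyphase decimator (Hz)
N = 1024   # FFT length

def one_spectrum(samples):
    # Per-ramp model: Kaiser window (beta = 6), then a real-input FFT.
    # Real input gives a Hermitian-symmetric result, so keep bins 0..511 only.
    win = np.kaiser(N, 6.0)
    return np.abs(np.fft.rfft(samples * win))[: N // 2]

def plot_column(acquisitions):
    # Host-side step: element-wise average over 30 magnitude spectra
    # to produce one column of the 2D histogram.
    return np.mean([one_spectrum(a) for a in acquisitions], axis=0)

# Bin spacing is FS / N, about 1.953 kHz, so the reported spur at bins
# 257-258 sits near 257 * FS / N, roughly 502 kHz: essentially a 500 kHz tone.
```

With these parameters a pure 500 kHz tone lands at bin 256, immediately adjacent to the reported bins 257 and 258.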
What could be the cause of this? Is there some obvious aspect I'm missing? If any of this is unclear or some part of the information I've omitted is important for answering this question (e.g. the equation relating frequency to distance), please let me know and I'll include it.
Edit: more details on acquired signal
This is an FMCW radar. A frequency synthesizer generates sawtooth ramps from 5.3GHz to 5.9GHz over a duration of 1ms. This signal is simultaneously transmitted and mixed back in with the reflected signal. We then measure the difference frequency to back out the distance.
The FPGA modules are timed such that data is only acquired during the synthesizer's ramp period. First, I enable the ramp and power amplifier and (once enabled) begin acquiring data. The data is processed by the FIR filter and then passed through the Kaiser window. Once the last sample passes through the FIR filter the ramp and power amplifier are disabled. The processed data (which were stored in a FIFO) are now run through the FFT and then the resulting output is dispatched in packets to the host PC via USB. I use a header sequence, tail sequence, and duplicated packets to try to avoid data corruption/loss. Once the FFT is finished, the process starts again (ramp and power amp enable, etc.).
The FIR filter should take just longer than 0.5ms to acquire all samples, so it should fall within the frequency ramp period.
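For reference, the beat-frequency-to-range mapping implied by the sweep parameters above can be sketched as follows. The only constant added beyond the numbers already stated is the speed of light:

```python
C = 3.0e8    # speed of light (m/s)
B = 0.6e9    # sweep bandwidth: 5.9 GHz - 5.3 GHz
T = 1.0e-3   # ramp duration (s)

def beat_to_range(f_beat):
    # FMCW: the beat frequency is f_b = (B / T) * (2 * R / c),
    # so the range is R = f_b * c * T / (2 * B).
    return f_beat * C * T / (2 * B)

max_range = beat_to_range(1.0e6)    # 1 MHz Nyquist after decimation -> 250 m
spur_range = beat_to_range(0.5e6)   # a 500 kHz tone -> 125 m, i.e. exactly
                                    # halfway to max range, matching the plot
```

This is consistent with the observation that the unexplained signal sits halfway between the antenna and maximum range: a fixed 500 kHz tone in the baseband maps to exactly half the 1 MHz Nyquist-limited range.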
• Interesting, so if I follow, this is suggesting the presence of a strong 500 KHz tone. I don't see from your processing what would create this, but it suggests an aliasing or imaging artifact in the process. Or it occurs naturally in the radar processing (are you doing FMCW, and can you provide more specifics on that?). Without other obvious answers I would suggest capturing your signal at various stages in the process to narrow down where this is being introduced. Are you able to easily do that? (Capture the raw 40 MHz signal and do your own FFT on that waveform, then the 2 MHz output.) (Nov 16 '19 at 13:39)
• What is the repetition rate of your FM chirps? Is this an expected high-frequency component that you are supposed to filter out? (Nov 16 '19 at 13:40)
• @DanBoschen I've added some information about the signal generation/processing. Let me know if there's anything else that would be useful to include. And thanks for the suggestion, that seems like a logical way to go about this. I'll update the post with any information I find doing that. (MattHusz, Nov 16 '19 at 17:04)
• @Envidia is that relevant for FMCW (genuine question, I'm new to radar)? I measure the received signals at the same time I'm sending the transmit signal, i.e. it's not a send-pulse, turn-off-transmitter, measure-received-signal setup. The PRF ends up being around 1-2 KHz, but it's based on the amount of time the FFT and other algorithms take to run, not some deliberate decision. This is not synchronized to the switching frequency of my power supply. The switching frequency of the buck converter upstream of the mixer is 500 KHz, which is the frequency of the tone. You think that could be the issue? (MattHusz, Nov 19 '19 at 1:45)
• @MattHusz You're right, FMCW systems don't have a "PRF-proper", but they do have an equivalent as you mentioned. The power supply at 500 KHz might explain why you're seeing that tone in your histogram. If you can, try to get a different power supply that switches at a different frequency and check the results. Also as a tidbit, this is one of the weaknesses of FMCW systems: they are vulnerable to electronic attack. (Envidia, Nov 19 '19 at 2:02)
If your system is powered by a switching power supply that is not synchronized to the PRF (or multiples of it), then you may get consistent spurs over time, as seen in your histogram.
In your case, it's been found that this was indeed the problem! The 500 KHz switching frequency fell nicely in the middle of your 1 MHz histogram. Hopefully this will help hunt down similar issues in the future.
Capital Structure
What is capital structure?
The capital structure of a Company tells us about the blend of debt and equity used to fund the business’s complete operations and growth. Companies use different combinations of capital-raising methods to finance their operations, capital expenses, acquisitions, and other investments.
Debt is the money a Company borrows. In return, the Company has to pay interest to the lender within a defined timeline. The Company borrows capital via bonds or through bank loans.
On the other hand, equity comprises ownership rights in a Company. The equity holder can claim a share in future cash flows and profits. Equity comes in the form of preferred stock, common stock, or retained earnings.
While considering the capital structure of a Company, we look into short-term debt, long-term debt, preferred stock, and common stock.
When an analyst studies a Company's capital structure, he/she looks into its debt-to-equity ratio. The ratio helps the analyst understand the risks associated with the Company's borrowing practices. If a Company has a debt-to-equity ratio of more than 1, it is funded more through debt than equity. The higher the value of the ratio, the higher the risk exposure.
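The debt-to-equity check described above can be sketched in a few lines; the balance-sheet figures below are hypothetical, chosen only to illustrate the threshold:

```python
def debt_to_equity(total_debt, total_equity):
    # D/E above 1 means the Company is funded more through debt than equity;
    # the higher the ratio, the higher the financial-risk exposure.
    return total_debt / total_equity

# Hypothetical balance-sheet figures, for illustration only:
ratio = debt_to_equity(total_debt=1_500_000, total_equity=1_000_000)
# ratio is 1.5, i.e. greater than 1: funding leans on debt
```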
What are the different types of capital structure?
There are four kinds of capital structure:
• Horizontal capital structure
• Vertical capital structure
• Pyramid-shaped capital structure
• Inverted capital structure
Horizontal capital structure
In a horizontal capital structure, the firm’s capital structure has no debt component. This is one of the most balanced forms of capital structure. Firms with a horizontal capital structure expand using capital raised via equity or retained earnings. Besides, there is limited possibility of any disruption to the structure.
Vertical capital structure
In a vertical capital structure, the base is formed from a small portion of equity share capital. The base is the foundation on which the superstructure of preference share capital and debt is developed. Any increase in capital comes mainly through debt.
Pyramid-shaped capital structure
In a pyramid-shaped capital structure, the bottom-most layer comprises common equity. Above that is preferred equity. The top two layers comprise senior debt and mezzanine debt.
Pyramid-shaped capital structure (Copyright © 2021 Kalkine Media Pty Ltd.)
Risk-averse conservative companies follow this structure.
Inverted capital structure
An inverted capital structure is the opposite of a pyramid-shaped capital structure. In this case, there is a tiny portion of the equity capital, an adequate level of retained earnings and an escalating debt element.
What factors influence a Company’s capital structure decision?
The capital structure decision of a Company depends on both external and internal factors.
Internal factors include financial leverage, risk, growth, and stability, cost of capital, retaining control, flexibility, cash flows, the purpose of finance and asset structure.
External factors influencing the capital structure decision include the Company's size, nature of the industry, investors, cost of floatation, legal requirements, duration of finance, level of interest rate, level of business activity, accessibility of funds, tax policy and level of stock prices.
Additional considerations include management’s control over the firm, trading on equity, investors’ attitudes, timing, profitability, growth rate, government policy, and marketability.
How do companies decide which financial principle to select?
As companies grow with time, they keep track of how to fund their projects and operations, pay their employees, and keep the business ongoing. For this, companies look for the best mix of equity sold to investors and bonds sold to creditors.
Generally, companies look at three approaches to decide their capital structure. These include the net income approach, static trade-off theory and pecking order theory. Let’s understand these approaches.
Net Income Approach
Under the net income approach, the cost of capital is a function of the capital structure. The approach assumes an optimum capital structure exists, indicating that at a certain debt-equity ratio, the cost of capital is at its lowest and the value of the firm is at its highest.
Static Trade-Off Theory
Static trade-off theory builds on the work of economists Modigliani and Miller, who studied capital structure theory and developed the capital structure irrelevance proposition.
The theory starts from the capital structure irrelevance proposition but eliminates the assumption that there are no costs to financial distress when businesses borrow additional money. Once that assumption is dropped, taking on more debt no longer keeps reducing the Weighted Average Cost of Capital (WACC) indefinitely. Instead, there is a specific point at which the value-reducing cost of financial distress exceeds the additional value added by taking on more debt.
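The mechanics of the trade-off can be sketched numerically. The after-tax WACC formula below is the standard textbook form, and all rates and weights are illustrative assumptions rather than figures from the theory itself:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    # After-tax weighted average cost of capital:
    # WACC = (E / V) * r_e + (D / V) * r_d * (1 - t), where V = E + D.
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

# Illustrative inputs only: tax-shielded debt lowers WACC at moderate leverage.
moderate = wacc(equity=80, debt=20, cost_equity=0.10, cost_debt=0.05, tax_rate=0.30)
# Note these inputs hold r_d and r_e fixed. Under the static trade-off theory,
# distress costs push both upward at high leverage, so WACC eventually rises
# again; that turning point defines the optimal debt level.
high = wacc(equity=40, debt=60, cost_equity=0.10, cost_debt=0.05, tax_rate=0.30)
```

The sketch shows why the tax shield alone would favor ever more debt; it is the rising costs of financial distress, omitted from these fixed inputs, that cap the benefit.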
Pecking Order Theory
As per the Pecking Order Theory, a Company initially tries to fund itself through retained earnings. If the Company does not have retained earnings to finance itself, it should go for debt. Finally, if nothing else works, the Company should fund itself by issuing new shares.
The Pecking Order Theory is essential as it highlights a Company's financial position and performance. If the Company is financing itself internally, it means it is doing well and is a strong company. Further, if the Company is financing through debt, it shows its confidence in repaying the loan within the stipulated time. However, if the financing is done through equity, it sends a negative signal: the Company may believe its stock is overvalued and aims to raise money before any drop in its share price.
Declaration of Independence from ED
Independence Day is once again upon us. We recognize that the day is a demarcation of our country’s separation from British rule and is an important part of our national heritage, but few have really looked closely at the actual document. We learn about the Declaration of Independence from an early age, but most associate the July 4th Independence Day celebration with the peak of summer, fireworks, American flags and red-white-&-blue themed decorations.
The Declaration of Independence was adopted on July 4, 1776 in Philadelphia when the Second Continental Congress led the then 13 American colonies in obtaining independence from Great Britain. The term “Declaration of Independence” is actually not used in the original text itself, but the document was clearly developed to announce and explain the intent to separate from Britain. There were multiple drafts from the influential thinkers of that time, leading to a carefully crafted outline of grievances against King George III and justification for the need for seeking independence. The document has become known as a support of human rights and of Americans’ right to “Life, Liberty and the pursuit of Happiness.” The official document is on display at the National Archives in Washington DC, but versions are more easily viewed via websites, such as:
For those struggling with an Eating Disorder (ED), there is similarly the need for creating a declaration of independence somewhere along the recovery road. Like the colonists, individuals with an ED initially had good reason to rely on the more powerful, seemingly all-knowing, influential source of direction. An ED can appear to be part of a solution, a distraction, even a seeming savior from other life issues; however, there comes a time when its guidance no longer works effectively. The imbalance and detriment eventually become increasingly clear. It may be easier to keep the status quo, yet the status quo is increasingly problematic. The colonists spent many years attempting to work within the confines of British rule before they chose to fight for separation.
Choosing to declare independence from ED can create an internal revolution. There are parts of one’s self that feel ready to fight and rebel, and there are other parts that would rather choose to submit and surrender. The dependence and reliance on the ED has often been long term and complex, and so there is naturally much ambivalence about letting go. Stating one’s intention and justification boldly and confidently in writing can, however, be a start.
Using some of the actual text of the 1776 Declaration of Independence, a Declaration of Independence from ED might be proposed. (Fittingly, we might short-hand this to the acronym of DIED.) Exploring a holiday-themed assignment might inspire a new angle on recuperation motivation. Utilizing a Mad Libs format, individuals might be encouraged to consider personalizing the 5-section document for their particular recovery purposes.
The introductory section of the Declaration of Independence asserted how it is part of Natural Law for people to seek independence and that the grounds for such independence be reasonable. The introductory section suggested that dissolving political bands with Britain required a declaration of the causes that impelled the separation. Similarly, in case of ED, we might declare boldly:
On _______ (date), _______ (name) makes a unanimous declaration of independence from ED. As it has become necessary to dissolve the bonds that have connected _______ (name) with ED, it is reasonable that the causes of the separation be clearly outlined.
The Preamble of the Declaration outlined the general philosophy that the colonists were upholding and explained why they felt justified in continuing the revolution against a government that was infringing on their natural rights. For individuals with EDs, a similar forward could be introduced:
I hold these truths to be self-evident, that not all body shapes are supposed to be equal, that I deserve to be treated fairly and respectfully and that I have certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
I also deserve to have more
(behaviors/thoughts/pursuits to increase)
and to experience less
(behaviors/thoughts/pursuits to decrease)
Because ED has become destructive of these ends, it is my right to institute a new leadership and lay the foundation for organizing more Safety and Happiness. Because there has been a long line of mistreatments by the ED, it is my right and my duty to throw off such governance and provide new Guards for my future security.
This lengthy section of the Declaration of Independence outlined the particulars of the King’s “injuries and usurpations” of American rights and liberties. For an individual recovering from an ED, this would be the section that might be most customized and focused on the particulars for an individual declaring independence from ED:
The ED has a history of repeated injuries and mistreatments, making it now necessary to seek independence from its rule. To highlight this, let Facts be submitted about how the ED has been oppressive, harsh, unjust, and overbearing…
(This might contain a listing of 20-30 items of concern that have unfolded as a result of the ED; in the Declaration of Independence, there are 27 separate items listed as wrongdoing by the King.)
This section completed the case for independence and the justification for the revolution. There are many variations to announcing this in regards to the ED, such as:
ED is unfit to be ruler of me any longer. I have warned the ED and have attempted to appeal to its benevolence, but it has been deaf to the voice of justice. I must announce my intent to separate.
This final section of the Declaration summarized how the colonies needed to cut off political ties with the British Crown and become independent because of the conditions of the British rule as outlined earlier. Within the context of an ED Declaration, a closing statement might serve as a mantra or guiding principle to be considered throughout recovery, such as:
I therefore solemnly declare that I have the right to be free and independent of allegiance to ED. My connections with ED will be dissolved, and I will have full power to decide how to live my life peacefully and powerfully. I pledge my intention and my sacred honor to this Declaration from here on forward.
The original document included 56 signatures from all 13 states and included well-known leaders of the time such as John Hancock, Thomas Jefferson, John Adams and Benjamin Franklin. A Declaration of Independence from ED requires only 1 authentic, valid, well-intentioned signature.
Let the Negotiations Begin
Creating a Declaration document this summer might honor the national freedom announced over 240 years ago but might also serve as a way of inspiring individuals’ inner independence and healthy nonconformity. There must be appreciation of the fact that most clients with EDs aren’t generally ready to declare independence when they are starting the recovery journey, just as the 1776 Declaration of Independence was drafted years into the American Revolution (which is dated as officially starting in 1765). For our ED clients, years may be spent in the back and forth of internal power battles before the fuller, more confident position can be taken. A personalized Declaration of Independence from ED can help with outlining the justification for the separation and with courageously crafting the intent to seek freedom. There may be many drafts and much fluctuation in commitment, but beginning negotiations can be fruitful.
And then let the celebratory fireworks begin.
Neuro-ophthalmology Question of the Week: Computed Tomography
Question: On CT which of the following are isodense, hypodense, hyperdense, or enhance on contrast?
1 Acute clot in a large vessel
2 Blood vessels
3 Bone
4 Breakdown of the normal blood-brain barrier
5 Calcium
6 Edema
7 Fat
8 Fresh blood
9 Infarction
10 Inflammatory lesions
11 Necrosis
12 Neoplasms
13 Normal brain
Question with answers: On CT which of the following are isodense, hypodense, hyperdense, or enhance on contrast?
1 Acute clot in a large vessel = Hyperdense
2 Blood vessels = Enhance on contrast
3 Bone = Hyperdense
4 Breakdown of the normal blood-brain barrier = Enhance on contrast
5 Calcium = Hyperdense
6 Edema = Hypodense
7 Fat = Hypodense
8 Fresh blood = Hyperdense
9 Infarction = Hypodense
10 Inflammatory lesions = Enhance on contrast
11 Necrosis = Hypodense
12 Neoplasms = Enhance on contrast
13 Normal brain = Isodense
Explanation [1]: “Computed Tomography
Because bone, calcification, fat, and blood all have unique X-ray absorption patterns, CT is a very effective technique for orbital imaging (▶Table 4.2).
Specific absorption patterns can be highlighted on CT to emphasize bone, soft tissues, or blood.
CT images are classically obtained in the axial planes. It is possible to also request images in the coronal plane by repositioning the patient. Sagittal images may be obtained by computer reformatting.
Routine studies are done at 3 or 5 mm slice intervals, but it is possible to obtain 1 mm slice intervals (better resolution).
A head CT without contrast takes only a few minutes and is readily available. It is commonly performed in the emergency room and is extremely valuable in trauma patients (who may have a bone fracture or an orbital foreign body), in stroke patients (▶Fig. 4.19 and ▶Fig. 4.20), and when an acute intracranial or intraorbital hemorrhage is suspected (e.g., to detect a subarachnoid hemorrhage in a patient with an explosive headache). However, a normal head CT without contrast is insufficient in almost all other situations. It is falsely reassuring and often misses serious disorders.
The following are good indications for orbital CT:
• Orbital trauma (suspected fractures or foreign body)
• Ocular trauma to rule out a foreign body (ruptured globe)
• Infectious or noninfectious orbital inflammation (▶Fig. 4.21 and ▶Fig. 4.22)
• Bone lesions (osteoma, fibrous dysplasia, suspected metastatic disease, etc.)
• Preoperative imaging for orbital disease (when imaging of the facial sinuses is very important)
• Lesions that may contain calcium (retinoblastoma, optic nerve drusen, orbital varix, meningioma, etc.)
• Lacrimal gland lesions
1. Neuro-ophthalmology Illustrated, 2nd Edition. Biousse V and Newman NJ. 2012. Thieme
More than 600 additional neuro-ophthalmology questions are freely available at
Questions prior to September 2016 are archived at
After that, questions are archived at
Please send feedback, questions and corrections to
From Earth to Moon and Back
Source: NASA
Here’s to all the unsung engineers who made the Apollo 11 moon landing possible. I say this with particular pride because my husband was one of them. A systems engineer for a NASA sub-contractor, he was a card-carrying member of the IEE (from his early days in the UK) and IEEE (US). His expertise was in military/aerospace radar in applications such as tracking and guidance. This technology was used to land the lunar module, dock it back with the command module after the moon landing, and return the spaceship to Earth. (For more details, go to the link at the bottom of this post.)
My husband with a rooftop microwave dish, Ilford, Essex, England (c.1947)
Early wireless communication for the Apollo mission (RCA)
Below is a link to a press release. Read all about space radar, wireless lunar radios, HD-to-TV signals, digital rocket launch technology, and other precursors to the tiny cellular computers we now carry around every day.
My Father the Tailor
Above is quite an old photo of my father, Harry Silberman, who was a tailor from Romania.
The photo above shows Poppa (left) with one of his first bosses when he worked as a tailor in a factory loft in Brooklyn. Poppa later opened his own shop and dry cleaners in Bay Ridge.
But few people wear bespoke clothing anymore. Most people, well…
Keeping Your Hive Healthy
After repeated use, old brood combs can become very dark—nearly black. The inside diameter of each cell also becomes smaller because the cocoons of each succeeding generation are glued to the cell walls. Even though the cells are polished by nurse bees before new eggs are laid, some of this cocoon material remains.
Pesticides and disease organisms can reside in both the wax cells and the cocoon layers. The darker the cells get, the higher the probability of contamination. It is recommended that very dark combs be cut away and discarded. In the past beekeepers could keep combs in use ten or twelve years and it was a point of pride to do so. With the use of pesticides and the ever-widening array of honey bee diseases, that philosophy has changed.
One of the easiest ways to rotate old comb out of your supply is to decide on an annual schedule of replacement. If you replace the worst 20% of your combs every year, you will rotate your entire stock once every five years. Some beekeepers prefer to replace 25% every year for a four-year rotation.
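The replacement arithmetic above is simple enough to sketch as a tiny helper. The 40-frame figure below is only a hypothetical example (say, four ten-frame boxes), not from the text:

```python
import math

def rotation_plan(total_frames, fraction_replaced):
    """Frames to pull each year, and years to cycle the whole stock."""
    pulled_per_year = math.ceil(total_frames * fraction_replaced)
    years_to_full_rotation = math.ceil(total_frames / pulled_per_year)
    return pulled_per_year, years_to_full_rotation

print(rotation_plan(40, 0.20))  # 20% of 40 frames -> (8, 5): five-year rotation
print(rotation_plan(40, 0.25))  # 25% -> (10, 4): four-year rotation
```

Replacing a fixed fraction each year cycles the entire stock in roughly 1/fraction years, which is where the five-year and four-year schedules come from.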
When doing a hive inspection, if you notice a particularly bad comb, mark the top bar with a felt-tip pen so it can be found later. Then, before spring build-up, when both stores and brood nests are small, go through the hives and pull out the 20% to be discarded. Since the brood nests are small, it is easy to equalize the boxes so that each box has eight frames remaining.
The empty slots can be replaced in several different ways. You can use new frames or you can cut out the old comb and reuse the frames if they are not too bad. You can use foundation—or not—just as you normally do. Some beekeepers prefer to have all new frames made in advance and then just drop one in and pull an old one out.
The system is not perfect. You will always find a hive where all the brood for the entire colony is on the one worst comb. Don’t worry about it—just leave that one there and remove the worst frames that don’t contain any brood. Even with those few exceptions, you will still be providing a healthier environment for your baby bees.
Used with permission from
Ryan Fiorenzi
August 1, 2021
Is Daylight Saving Time Good or Bad For Sleep?
Daylight saving time (DST) is when clocks are moved one hour later on the second Sunday in March at 2 am so that the sun rises later in the morning and sets later in the evening. This is referred to as "springing forward." On the first Sunday in November at 2 am it switches back, which is referred to as "falling back."
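The rule just described (clocks forward on the second Sunday in March, back on the first Sunday in November) can be computed directly. This is a minimal sketch of the post-2007 US transition dates with function names of my own choosing, not a substitute for a real timezone library:

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) in a month."""
    d = date(year, month, 1)
    # advance to the first occurrence of that weekday in the month
    d += timedelta(days=(weekday - d.weekday()) % 7)
    return d + timedelta(weeks=n - 1)

def us_dst_dates(year):
    """US rule since 2007: spring forward on the 2nd Sunday in March,
    fall back on the 1st Sunday in November (both at 2 am local time)."""
    return nth_weekday(year, 3, 6, 2), nth_weekday(year, 11, 6, 1)

spring, fall = us_dst_dates(2021)
print(spring, fall)  # 2021-03-14 2021-11-07
```

For production code, Python's `zoneinfo` module handles these transitions automatically from the IANA time zone database.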
There are many different arguments for and against DST, but sleep scientists don't support changing the time. Dr Christopher Barnes, Associate Professor of Management at the University of Washington, who researches the impact of sleep deprivation, says, "When we change the time by one hour, it throws a monkey wrench into our circadian process. The following Monday, we've discovered that people have about 40 minutes less sleep. Because we're already short on sleep to begin with, the effects of even 40 minutes are noticeable."
One of the pillars of good sleep hygiene is to fall asleep and wake up at the same time every night (including weekends). Your brain's internal clock is set by its exposure to sunlight, which is your circadian rhythm. When you change the time when you regularly go to sleep and wake up, your brain's expectation for when to fall asleep and wake up is altered. Research has confirmed that DST causes people to be sleep deprived.
A study published by the National Institutes of Health combined surveys from 55,000 people in Europe on their sleeping and wakefulness for 8 weeks around DST in the spring and reverting back in the fall. The research showed that people never fully adjust their circadian rhythm to the time change, and the time change is more difficult for night owls (people who go to bed late). A smaller study involving 9 volunteers referred to by LiveScience concluded that the fall transition was more difficult for larks (early risers).
Increased Risk of Stroke
A 2016 study published by the American Academy of Neurology found that the rates for stroke were 8% higher in the 2 days after DST. Cancer patients were 25% more likely to have a stroke after daylight saving time than during another period. The risk was also higher for those over age 65, who were 20% more likely to have a stroke right after the transition.
Increased Risk of Heart Attack
According to the University of Alabama at Birmingham, “The Monday and Tuesday after moving the clocks ahead one hour in March is associated with a 10% increase in the risk of having a heart attack,” says UAB Associate Professor Martin Young, Ph.D., in the Division of Cardiovascular Disease. “The opposite is true when falling back in October. This risk decreases by about 10%.”
Increased Accidents
A study published in Sleep Medicine found that fatal car accidents rise significantly on the Monday after the spring shift and the Sunday of the fall shift. The researchers claim that, "The behavioral adaptation anticipating the longer day on Sunday of the shift from DST in the fall leads to an increased number of accidents suggesting an increase in late-night (early Sunday morning) driving when traffic-related fatalities are high, possibly related to alcohol consumption and driving while sleepy." They further explain, "Public health educators should probably consider issuing warnings both about the effects of sleep loss in the spring shift and possible behaviors such as staying out later, particularly when consuming alcohol in the fall shift. Sleep clinicians should be aware that health consequences from forced changes in the circadian patterns resulting from DST come not only from physiological adjustments but also from behavioral responses to forced circadian changes."
According to the American Psychological Association, mine workers who have slept on average 40 minutes less due to the time change experience 5.7% more workplace injuries in the week after DST than any other week of the year. This is because moderate sleep deprivation can cause cognitive and coordination impairment that is worse than being legally drunk.
Additional Risks
Dr Kyoungmin Cho of the University of Washington, along with Dr Barnes, published a study, cited on the Association for Psychological Science site, showing that judges give harsher sentences on the Monday after the DST switch. This is consistent with what research has already told us about being sleep-deprived: people have less control over their emotions, are more impulsive, and have a harder time making decisions.
History of DST
Benjamin Franklin is often credited with originating this practice; he thought it would conserve energy, since people could use sunlight instead of lighting their homes.
However, the history of its use is spotty and inconsistent. The United States used it during WWI to conserve fuel, then abolished it, before using it again in WWII. After WWII, it was used chaotically in some places and not in others; St. Paul and Minneapolis, Minnesota, were at one point on different times. In 1966, Congress enacted the Uniform Time Act, which let states decide whether to observe DST but required that the entire state be on the same clock.
Currently, in the United States every state observes DST except for Hawaii and Arizona (although the Navajo Nation in Arizona does observe it).
Most of Europe observes DST, including the United Kingdom. In the Southern Hemisphere, where the seasons are reversed, Australia, New Zealand, and parts of South America set their clocks forward in the Northern Hemisphere's fall and move them back in its spring. Russia doesn't observe it.
How to Adjust to Time Changes
There are several things you can do to prepare yourself for an upcoming DST switch or the reversion back to non-DST time:
1. If you're moving the clocks forward, take a 15-20 minute nap in the afternoon on the day before, and on the day after if you feel you could use it.
2. The night that you're moving the clocks forward, use a natural sleep aid or eat foods that help facilitate an earlier onset of sleep and deeper sleep.
3. For a week or two leading up to a time change, gradually shift your bedtime by a few minutes each day to make the acclimation less difficult on the day of the change.
4. The morning after "springing forward," expose yourself to sunlight in the morning and, if possible, exercise early (outside is better).
Deciphering the Four Noble Truths
Amitabha Buddha: This figure, located in Japan, is Amitabha Buddha, the Buddha of infinite light. Buddhism is the fourth largest religion in the world and is widely practiced in China, Japan and other parts of southeast Asia.
Nirvana was a grunge band in the 90s. It is also the name for when a Buddhist reaches Enlightenment. The fourth largest religion in the world, Buddhism, originated in India but is now mostly practiced in China, Japan and other parts of southeast Asia.
Professor Barry Crawford, a professor of religion and philosophy, explained the story of Buddhism’s founder Siddhartha Gautama, an Indian prince of the Sakya clan who lived from 563 to 483 B.C. His father kept him locked in the palace because he wanted his son to be a great political leader, not a great religious leader; it had been prophesied that he could become either. Siddhartha escaped and witnessed the four passing sights: an elderly person, a sick person, a corpse and a monk meditating.
“How much of this story is true we don’t know,” said Crawford.
Disturbed by this, Siddhartha ran away to find inner peace. He practiced every technique he could think of to find peace. He became the Buddha and became enlightened.
He also discovered the four noble truths. The first noble truth is that all of life is suffering. Next, we suffer because we thirst to have things we cannot get. The next noble truth is that we can stop this suffering. The fourth is that by following the Eightfold Path you can stop this suffering. The Eightfold Path includes, for example, right thinking, right attitude, right effort and right livelihood.
“In Buddhism, you have to know and internalize the four noble truths,” said Crawford. “You have to realize everything is impermanent and nothing lasts. For example, you want to be young forever and that can’t be.”
Crawford explained that in Buddhism thirsting for yourself causes the most suffering. The belief is there is no self, there is no soul.
“This is a hard concept for Westerners and non-Buddhists to understand,” said Crawford. “Think of it like a mirage. You are conscious of it and perceive it, but it’s not there. The same idea applies to the self. You want continuity but it’s not there.”
The goal in Buddhism is to reach enlightenment or Nirvana. Nirvana translates to “no wind,” like blowing out a candle, explained Crawford.
“When you attain enlightenment your consciousness of yourself disappears,” said Crawford. “It’s an ultimate no-thingness.”
There are three main types of Buddhism: Theravada, Mahayana and Vajrayana. Crawford explained the differences between the three.
Theravada is “the way of the monk,” and its followers hold closest to the teachings of Buddha. Mahayana Buddhism focuses on the model of love and compassion exhibited by Buddha the person. Finally, Vajrayana Buddhism is also known as Tibetan Buddhism. This is the Buddhism practiced by the Dalai Lama.
“Vajrayana is a mixture of Buddhist beliefs and a Tibetan religion called Bon,” said Crawford. “Bon has to do with strategies for survival in a hostile environment by placating hostile spirits. It also involves chants, mantras and charms.”
Mahayana Buddhism is very popular in Japan, where they have a 40-foot statue of the Amitabha Buddha, or the Amida as they call him. The Amida is the Buddha of infinite light.
“Amida reigns over paradise in the west,” said Crawford. “He is a cosmic Buddha and people pray so that he will have pity and share his grace and merit. The hope is that after they die the Amida will admit them into paradise.”
The Amitabha Buddha is often sold in stores as a figurine. Another figure that many would recognize is the laughing Buddha, or Ho Thai. He is a fat and happy Buddha, a Chinese folklore figure, and his name Ho Thai means “cloth sack.”
Buddhists believe in reincarnation. According to Crawford, it is called Samsara which means rebirth.
“If you don’t reach enlightenment in this life, you will be reborn until you see the truth and see what Siddhartha himself saw,” said Crawford.
Although Buddhism began in India, it is not a popular religion there today. A large portion of India practices Hinduism. Crawford explained the connection between Buddhism and Hinduism.
“Like Christianity began as a movement within Judaism, Buddhism started within the bosom of Hinduism,” said Crawford. “Buddhism became so different that eventually it became its own religion.”
There is debate about whether Buddhism is actually a religion or a philosophy. Some say Buddhism is a philosophy, not a religion, because there is no talk of a god or deity. Mahayana Buddhism treats its many Buddha figures as cosmic figures like gods; however, Siddhartha never addressed any god.
“Buddha didn’t pay attention to gods,” said Crawford. “Still, it’s not quite accurate to say he was an atheist.”
Everything You Need to Know About Woodpeckers in Oklahoma
red-bellied woodpecker
Fourteen species of woodpecker have been spotted in Oklahoma. Of these, 12 species are recognized on state checklists as regularly occurring, one additional species is considered rare or accidental, and one species is regarded as near threatened.
Going out birding in the woods and forest is the best way of seeing woodpeckers in Oklahoma. However, some, such as Red-bellied Woodpeckers, Hairy Woodpeckers, Downy Woodpeckers, and Northern Flickers, can regularly be seen on backyard feeders.
This guide will help you identify the woodpecker species in Oklahoma according to avibase.
You can print out a free bird identification photo guide for Oklahoma to help you identify all birds that visit your backyard.
The most common woodpecker in both summer and winter in Oklahoma is the Red-bellied Woodpecker. The Red-headed Woodpecker is more commonly spotted in summer, while the Northern Flicker and several other species are more commonly spotted in winter in Oklahoma.
Read on to find out all about the woodpeckers in Oklahoma, with pictures, videos, and what sounds they make.
There are 14 species of woodpecker in Oklahoma:
1. Red-bellied Woodpecker
2. Downy Woodpecker
3. Northern Flicker
4. Red-headed Woodpecker
5. Pileated Woodpecker
6. Hairy Woodpecker
7. Yellow-bellied Sapsucker
8. Ladder-backed Woodpecker
9. Golden-fronted Woodpecker
10. Lewis’s Woodpecker
11. Red-cockaded Woodpecker
12. Williamson’s Sapsucker
13. Acorn Woodpecker
14. Ivory-billed Woodpecker
The 14 Types of Woodpecker in Oklahoma
1. Red-bellied Woodpecker
red-bellied woodpecker
Red-bellied woodpecker female
Red-bellied Woodpeckers are the most frequently spotted woodpecker in Oklahoma all year. They are recorded by bird watchers in 24% of state checklists in summer and 25% in winter.
• Length: 9.4 in (24 cm)
Red-bellied Woodpeckers can be found in the Eastern US, and they do not migrate.
Red-bellied Woodpecker call and drumming
Credit: https://www.xeno-canto.org/ Paul Marvin
Where to spot Red-bellied Woodpeckers:
2. Downy Woodpecker
Downy Woodpecker for identification in Massachusetts MA
Downy woodpecker female
Downy Woodpeckers are the second most commonly spotted woodpecker in Oklahoma in summer and winter. They are common here and are recorded in 19% of checklists in summer and 25% in winter for the state.
Downy Woodpecker sound
Credit: www.xeno-canto.org Aiden Place
Where to spot Downy Woodpeckers:
How to attract more Downy Woodpeckers to your backyard:
3. Northern Flicker
Male Yellow-Shafted
Northern flicker female yellow shafted
Female Yellow-shafted
Northern Flickers migrate and spend the winter in Oklahoma between September and April. This is the southern edge of their summer breeding range, so some stay around during summer.
In checklists recorded by bird watchers, Northern Flickers appear in 22% of checklists in winter here, but in summer they are recorded in only 4.7%.
Northern Flickers call and drumming:
https://www.xeno-canto.org/ Ron Overholtz
Where to spot Northern Flickers:
How to attract more Northern Flickers to your backyard feeders:
4. Red-headed Woodpecker
Red Headed Woodpecker
Red-headed Woodpeckers are seen in Oklahoma all year, but they are mostly found in the eastern part of the state during winter.
In summer, they are the third most commonly spotted woodpecker in Oklahoma, recorded in nearly 7% of checklists; in winter, 5%.
• Wingspan: 16.5 in (42 cm)
Where to spot Red-headed Woodpeckers:
How to attract Red-headed Woodpeckers to your backyard feeder:
Red-headed Woodpeckers visit backyards for suet.
5. Pileated Woodpecker
Pileated Woodpecker for identification in west virginia
Although not common in Oklahoma, Pileated Woodpeckers can be spotted in the east of the state all year.
Pileated Woodpecker call and drumming:
Where to spot Pileated Woodpeckers:
How to attract more Pileated Woodpeckers to your backyard:
6. Hairy Woodpecker
Hairy woodpecker
Hairy woodpecker female
Hairy Woodpeckers are not very common in Oklahoma, but they can be found all year. They are spotted in 3% – 4% of checklists.
They can be found across all US states and most of Canada and into Mexico. They can be seen on backyard feeders and are powerful small birds that make a whinnying sound or explosive peak calls.
Hairy Woodpecker sounds
Credit: https://www.xeno-canto.org/ Richard Webster
Where to spot Hairy Woodpeckers:
How to attract more Hairy Woodpeckers to your backyard
7. Yellow-bellied Sapsucker
yellow bellied sapsucker
Yellow-bellied Sapsucker female
Yellow-bellied Sapsuckers can be spotted in Oklahoma during winter between October and April. However, they start to migrate north as early as late February.
In winter, they are recorded in 7% of checklists for the state.
Yellow-bellied Sapsuckers are relatively small and are about the size of a robin. They are mostly black with red foreheads, and the male has a red throat.
They migrate from Canada and Northeastern US states after breeding in the summer and spend the winter in the Southern US and Mexico.
Where to spot Yellow-bellied Sapsuckers:
How to attract more Yellow-bellied Sapsuckers to your backyard:
8. Ladder-backed Woodpecker
Ladder-backed Woodpeckers are quite rare in Oklahoma and can only be spotted in the west of the state.
Ladder-backed Woodpeckers are small with a black and white ladder pattern on their backs and a checkered pattern on their wings. They are whiteish-gray underneath with faint black markings. Males have a red crown, and females have a black crown.
• Weight: 0.7-1.7 oz (21-48 g)
• Wingspan: 13.0 in (33 cm)
Deserts and thorn forests, across dry southern states from California to Texas, up to southeastern Colorado, and down through Mexico, are the usual habitats of Ladder-backed Woodpeckers. Ladder-backed Woodpeckers do not migrate.
Ladder-backed Woodpeckers feed mainly on insect larvae and some adult insects, such as ants and caterpillars; occasionally they will also eat cactus fruit.
Where to spot Ladder-backed Woodpeckers:
Early morning in February and March is the best time to spot Ladder-backed Woodpeckers as they are out defending their territories in preparation for breeding. Look for them in dry areas with Joshua trees, Juniper, willow, or honey mesquite.
How to attract more Ladder-backed Woodpeckers to your yard:
Ladder-backed Woodpeckers love mealworms, and they will also visit black oil sunflower seed feeders and eat peanut butter.
9. Golden-fronted Woodpecker
Male Golden-fronted Woodpecker - Texas
Golden-fronted Woodpeckers are rare woodpeckers in Oklahoma that have only been spotted in the southwest of the state.
Golden-fronted Woodpeckers have black-and-white bars on their backs, tan breasts, yellow napes, and a yellow patch in front of their eyes. Females also have a pale yellow lower belly, and males have a small red patch on the crown.
• Length: 8.7-10.2 in (22-26 cm)
• Weight: 2.6-3.5 oz (73-99 g)
• Wingspan: 16.5-17.3 in (42-44 cm)
Golden-fronted Woodpeckers look similar to Red-bellied Woodpeckers and fight to defend their territories against each other where their ranges cross in parts of Texas. Birds further south in Mexico have more red coloring on the head and yellower bellies.
Nicaragua is the most southerly range of the Golden-fronted Woodpecker and up through Mexico into Texas and Oklahoma. They do not migrate.
Texas is the only US state where Golden-fronted Woodpeckers are common all year. In Texas, they range west of Dallas from north to south and are most common in the southeast of the state, south of San Antonio.
Fruit and nuts make up half of the diet of the Golden-fronted Woodpecker, and the rest is insects. They especially love prickly pear cactus and will have purple-stained faces from eating them.
Where to Spot Golden-fronted Woodpeckers:
Golden-fronted Woodpeckers like open woodland and arid scrub, and they are common in backyards.
How to Attract More Golden-fronted Woodpeckers to Your Yard:
Golden-fronted Woodpeckers are common around backyards and love fruit and jelly, especially oranges.
10. Lewis’s Woodpecker
Credit: Mike Bird
Lewis’s Woodpeckers are very rare in Oklahoma but they have been spotted in the Wichita Mountains Wildlife Refuge and Lake Thunderbird State Park.
Lewis’s Woodpeckers look like a completely different bird species, catching insects on the wing rather than hammering on trees. Then there is the pink belly, gray collar, and dark back with a dark red face to set it apart from its family.
• Length: 10.2-11.0 in (26-28 cm)
• Weight: 3.1-4.9 oz (88-138 g)
• Wingspan: 19.3-20.5 in (49-52 cm)
Lewis’s Woodpeckers can be found from as far north as British Columbia and down to California and Texas. They tend to breed further north in British Columbia, east to Wyoming, and south to Nevada before migrating south to southwestern states. Those on the Pacific Coast tend to remain all year, as do those in the southeast of their range.
As well as catching flying insects, Lewis’s Woodpeckers also eat nuts and fruit, which they will store in crevices of cottonwood trees in winter.
Lewis’s Woodpeckers do not make their own nests, preferring instead to use those created by other woodpeckers, and they lay 5 – 9 eggs.
11. Red-cockaded Woodpecker
Red Cockaded Woodpecker
Red-cockaded Woodpeckers are extremely rare in Oklahoma. In fact, they are regarded as near threatened on the red list of endangered animals.
• Wingspan: 14.2 in (36 cm)
How to attract more Red-cockaded Woodpeckers to your backyard:
Red-cockaded Woodpeckers may be attracted to your backyard with fruit such as berries if you live near pine forests. Try planting native berry-producing plants such as grape, bayberries, hackberries, or elderberries.
12. Williamson’s Sapsucker
Male Williamson's Sapsucker
Williamson’s Sapsuckers are very rare in Oklahoma; according to records, they were last spotted in Kenton in 2020.
Williamson’s Sapsucker males are more black than many woodpeckers with a glossy black back, vertical wing patches, red throat, and yellow belly.
Females have the more common black and white pattern on their back, and they have a brown head and black breast patch.
• Length: 8.3-9.8 in (21-25 cm)
• Weight: 1.6-1.9 oz (44-55 g)
Williamson’s Sapsuckers are migratory and spend the summer breeding in the mountainous west and the winter in southern states and Mexico.
They feed mainly on sap from conifer trees, especially in spring, and then on more insects, such as ants, beetles, and flies, in summer. Winter food is often fruit and seeds.
13. Acorn Woodpecker
acorn woodpecker
Acorn Woodpeckers are extremely rare in Oklahoma and according to records, there have only been a couple of sightings in Wichita Mountains Wildlife Refuge.
Acorn Woodpeckers have distinctive clown-like faces with red caps, white faces, a black patch around the beak, and black over the back of their heads and back. Their bellies are white with black markings. Female Acorn Woodpeckers have less red on the crowns than males.
• Weight: 2.3-3.2 oz (65-90 g)
• Wingspan: 13.8-16.9 in (35-43 cm)
Acorn Woodpeckers are quite different than most woodpeckers in that they live in large groups and hoard acorns. They live in oak forests in western Oregon, California, and across to Texas, and down through Mexico to Central America.
They may look like clowns, but it’s no laughing matter when it comes to eating: the gruesome Acorn Woodpecker stores dead bugs in a ‘pantry’ and even eats the eggs of its own species!
Masses of holes drilled in winter in dead trees provide the perfect pantry, known as a granary tree, for acorns and other nuts collected by the Acorn Woodpecker. They will even check stored acorns and move them to smaller holes once they dry out and shrink.
Insects are not left out when it comes to storage, but this gruesome pantry of dead bugs is often left in cracks or crevices. Fruit and sap provide other food sources, as do eggs, including eggs of their own species.
Where to Spot Acorn Woodpeckers:
Oak forests are the best place to spot them, as looking out for their guarded pantry stash and listening for their parrot-like squawks is an easy way to find these sociable birds.
How to Attract Acorn Woodpeckers to Your Backyard:
You may find Acorn Woodpeckers unwelcome visitors, as they are known to drill holes in wood siding and utility poles, which they treat as deadwood. You may still get them visiting if you live near oak forests.
14. Ivory-billed Woodpecker
Credit: James St.John
Ivory-billed Woodpeckers are an extremely rare species in Oklahoma, regarded as critically endangered or possibly extinct.
Ivory-billed Woodpeckers do not migrate and were thought to have been seen along the Gulf Coast in Arkansas, Louisiana, and Florida.
Unfortunately, the process to declare them extinct has started, as no verified recordings have been found in many years.
How Frequently Woodpeckers are Spotted in Oklahoma in Summer and Winter
Checklists for the state are a great resource to find out which birds are commonly spotted. These lists show which woodpeckers are most commonly recorded on checklists for Oklahoma on ebird in summer and winter.
Woodpeckers in Oklahoma in Summer:
Red-bellied Woodpecker 23.8%
Downy Woodpecker 19.3%
Red-headed Woodpecker 6.8%
Pileated Woodpecker 5.2%
Northern Flicker 4.7%
Hairy Woodpecker 3.3%
Ladder-backed Woodpecker 0.8%
Golden-fronted Woodpecker 0.3%
Red-cockaded Woodpecker 0.1%
Lewis’s Woodpecker <0.1%
Yellow-bellied Sapsucker <0.1%
Woodpeckers in Oklahoma in Winter:
Red-bellied Woodpecker 25.3%
Downy Woodpecker 24.5%
Northern Flicker 22.3%
Yellow-bellied Sapsucker 7.1%
Red-headed Woodpecker 4.7%
Pileated Woodpecker 4.4%
Hairy Woodpecker 4.1%
Ladder-backed Woodpecker 0.5%
Golden-fronted Woodpecker 0.3%
Lewis’s Woodpecker 0.2%
Acorn Woodpecker <0.1%
Red-cockaded Woodpecker <0.1%
To help you prepare for the rigors of exam day, Wiley has prepared a 6-page cheat sheet with step-by-step instructions for answering ten typical Level II exam questions including foreign exchange, investments in financial assets, binomial interest rate trees, full and partial goodwill methods, periodic pension cost and more!
Which of the following is least likely if a parent uses the full goodwill method as opposed to the partial goodwill method to account for an acquisition?
A. Return on assets and return on equity will be lower under the full goodwill method.
B. Net income and shareholders’ equity are the same under both methods.
C. The net profit margin will be the same under both methods.
Answer Rationale:
(B) is the correct answer. Total assets and total equity are higher under the full goodwill method; hence, ROE and ROA are lower. Net income and retained earnings are the same under both methods, but total shareholders’ equity is higher under the full goodwill method. The income statement is the same under both.
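A toy calculation makes the rationale concrete: total equity differs between the two methods (so choice B is least likely to be true), while net income is unchanged, which drives ROE lower under full goodwill (choice A). All figures below are invented for illustration.

```python
# Hypothetical acquisition: parent buys 80% of a subsidiary for 1,000.
# Fair value of the subsidiary's identifiable net assets is 800.
# All figures are invented for illustration only.

def goodwill_full(price, pct, net_assets_fv):
    """Full goodwill: implied value of the WHOLE subsidiary minus net assets."""
    implied_total_value = price / pct        # 1,000 / 0.80 = 1,250
    return implied_total_value - net_assets_fv

def goodwill_partial(price, pct, net_assets_fv):
    """Partial goodwill: price paid minus the parent's SHARE of net assets."""
    return price - pct * net_assets_fv

full = goodwill_full(1000, 0.80, 800)        # 450
partial = goodwill_partial(1000, 0.80, 800)  # 360

# Net income is identical under both methods, but total equity is higher
# under full goodwill (larger non-controlling interest), so ROE is lower.
net_income = 100
equity_partial = 2000                        # hypothetical consolidated equity
equity_full = equity_partial + (full - partial)

roe_partial = net_income / equity_partial    # 5.0%
roe_full = net_income / equity_full          # lower, consistent with choice (A)
```

The goodwill difference (450 vs. 360) flows straight into total assets and non-controlling interest, which is why ROA and ROE fall while the income statement is untouched.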
TIP: Do not read the question too fast; pay attention to what the question is specifically asking (least vs. most likely, etc.) and watch out for tricky distractors. FRA also includes employee compensation, multinational operations and builds upon Level I financial reporting quality and financial statement techniques. The following questions highlight some “must know” concepts from employee compensation and multinational operations.
To download the full cheat sheet, click here!
State Legislative Passage Rates
April 20, 2020
Why Some State Legislatures Pass More Bills Than Others—and Why It Matters
Some state legislatures pass far more of the bills they introduce than others. The reasons for that disparity aren’t easy to discern, but understanding how those factors shape passage rate patterns can be critical for organizations seeking to comply with the laws or engage in the lawmaking process of any state. That knowledge is the foundation of the industry-leading State Net analytics.
Bill Passage Rates All Over the Map
On average, the nation’s 50 state legislatures pass about 20% of the bills they introduce each biennium. But individual state passage rates actually range from less than 5%, as in Minnesota and Missouri, to over 60%, as in Utah and Idaho. The variation extends to key decision points within each state’s legislative process, such as when a committee in a bill’s initiating chamber votes on whether to approve the bill or when the bill comes up for a floor vote in the opposite chamber.
Diversity Built Into State Legislative Processes
The short answer to why some states’ bill passage rates are so much higher than others is that the differences are built into the states’ legislative processes. To some extent it’s the result of state constitutions, statutes and chamber rules that dictate everything from the size of each state’s legislature, to the number of committees used to evaluate legislation, to the length of the legislative session.
But the impacts of those formal codes aren’t entirely straightforward. For instance, passage rates are generally higher in states with shorter legislative sessions, like Utah, where the regular session lasts just 45 days. But California’s Legislature also has a high passage rate and meets virtually year-round.
What’s more, passage rate patterns appear to be significantly influenced by unwritten behavioral norms and traditions that are passed along from one generation of lawmakers to the next, some of which seem to defy reason. In Massachusetts, for example, nearly all bills are passed out of committee in their originating chamber, even some that receive “do not pass” recommendations. But only a small percentage of those bills go on to obtain favorable votes on the floor.
Passage Rates Key to Forecasting Legislative Action
Despite their diversity and complexity, passage rate patterns remain highly consistent from session to session. Over the last four bienniums they’ve varied less than 2%. “We were genuinely surprised to find how rigidly each chamber adhered to its own particular passage rate pattern,” said State Net Data Scientist W. Mark Crain. “The deviation over the four biennial session cycles is incredibly small.”
Crain notes that because of that consistency, “historical passage rate patterns provide a strong foundation for predicting future bill outcomes.”
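As a rough illustration of how stable stage-level passage rates can be chained into a forecast, consider a toy model in which a bill's overall probability of passage is the product of historical survival rates at each decision point. The stage rates below are invented for illustration; they are not State Net's actual figures or model.

```python
# Toy forecast: probability a bill passes = product of historical
# survival rates at each legislative stage. All rates are hypothetical.

def passage_probability(stage_rates):
    """Multiply per-stage historical survival rates into an overall estimate."""
    prob = 1.0
    for rate in stage_rates:
        prob *= rate
    return prob

# Hypothetical chamber: committee approval, origin-chamber floor vote,
# opposite-chamber committee, opposite-chamber floor vote, governor signature.
stages = [0.55, 0.70, 0.60, 0.75, 0.95]
print(f"Estimated passage probability: {passage_probability(stages):.1%}")
```

Real models would condition these rates on bill attributes and timing, but the compounding effect shown here is why most introduced bills never become law even when individual stages look survivable.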
Key takeaway: The knowledge required to determine which bills are more or less likely to pass—and consequently, to better prioritize the allocation of legal and government affairs resources—isn’t easy to come by.
Legislative Analytics Driven by Knowledge
State Net has built predictive models for every state legislative chamber that take into account the unique aggregation of formal rules and informal practices that shape the passage rate pattern in each of them. Drawing on decades of legislative activity and employing machine learning to prioritize the factors that carry the most weight at any given point in a state’s legislative process, the models power analytics that provide critical insights about pending legislation, such as the likelihood of a bill passing its current legislative stage or whether it is moving faster or slower than usual. By simply accessing these tools, users gain the benefit of the extensive State Net knowledge of state legislative processes, helping them make more informed, data-supported decisions.
Learn how State Net can help you stay on top of this issue.
News & Views from the 50 States
Free subscription to the Capitol Journal keeps you current on legislative and regulatory news.
Suicide prevention
Self-harm deaths are preventable
Friday, August 17 2018
It's difficult to understand what drives people to take their own lives. But a suicidal person may be in so much pain that they can see no other option. A number of high-profile suicides have been in the headlines recently and it is being reported that rates are rising across the country. Self-harm deaths are preventable, but it starts with knowing what to look for and what to do.
“Statistics reflect that the current suicide rate in South Africa is 10.7 per 100 000 people,” says Sarah Lamont, occupational therapist at Akeso Randburg, Crescent Clinic. “This is reportedly higher than some of our neighbouring African countries and is 62nd when we compare it to statistics globally. Reasons that have been attributed to the increase in suicide rates are a rising sense of helplessness and desperation. These feelings have been exacerbated by the rise in unemployment and economic hardship and poverty. Domestic violence and substance abuse are other factors that are indicated in increasing a sense of desperation that may have a direct impact on the rise of suicide in all countries.”
“Shame is attached to mental illness and that stops people seeking help,” adds Sandy Lewis, Head of Psychological Services at Akeso Clinics. “They are sometimes overly worried about others’ perceptions of how well they are coping. This is especially relevant for professionals like doctors who perceive depression as a personal weakness. They don’t want treatment to be visible to others so they often won’t agree to hospitalisation or other treatments that might impact on work or social perceptions. Suicide feels less shameful than visible treatment, and they believe it enables them to keep their pride intact. The real work lies in changing these perceptions.”
Why men might be at greater risk
The vast majority of suicide victims in South Africa are male, according to research conducted by Africa Check.1 In 2012, 5 095 men of all ages died due to suicide – equalling nearly 14 each day. The male death rate for suicide was 21 per 100 000 people, over 5 times higher than the female death rate by suicide of 4.1 Given the factors that lead to suicide, as well as an understanding of how most cultures have groomed men to believe they need to be strong and act as the provider for the family, Lamont says it’s easy to understand why suicide rates in men are higher than in women.
“Men in general place increased expectations on themselves to perform and succeed financially. In addition, men may feel uncomfortable about reaching out or expressing the negative feelings that they may be experiencing. They therefore prevent themselves from gaining healthy perspectives and healthier solutions to their problems. That may be when hopelessness, helplessness and desperation take over.”
Lamont notes that society perceives it to be more acceptable for women to discuss their problems and express their thoughts and feelings than for men. It is for this reason that more women access care and get the help they need in order to eliminate suicide as the first option. People need to be made more aware that suicide is a thought many of us experience at some point, as we have all faced desperate times for different reasons, she says. For some of us, the idea of suicide happens for a brief moment when we find ourselves wishing we could just end a current suffering and wake up to find ‘it’s all over’.
“If you maintain that way of thinking long enough, however, you may begin researching and making plans on how to end your life. This is a dangerous place to be in, as it shows a strong intent to follow through with the act. The fact that the internet enables access to sites that support suicide and offer tips on how to successfully end your life are also problematic; they offer support and a sense of universality to someone who is desperate. It is far more beneficial to seek out services that offer constructive help.”
Recognising the signs of depression
It is important to be aware of the signs and symptoms of depression. Depression is a clinical illness created by an imbalance in the neurochemicals that control our moods, Lamont explains. When someone is depressed, their mood is not only low but their thoughts are also affected, and they are temporarily incapable of seeing circumstances realistically and thus of generating realistic solutions. Their low mood also impacts their general level of functioning in all areas of life, so a change or deterioration in any aspect of someone’s life could be a red flag.
Lamont says depression manifests the following signs and symptoms:
• A change in personal hygiene that results in a more unkempt appearance.
• Changes in appetite that can often lead to unusual changes in weight.
• Changes in sleep routine, with the individual often feeling more exhausted and needing to sleep for extended periods of time, staying in bed all day or for an entire weekend.
• Avoidance of social interaction and remaining withdrawn or isolated. They may be less active on social media, for example, and their posts might reflect less energy or positivity than previously.
• People who are depressed develop poor coping strategies such as an increase in smoking, drinking, and substance abuse. They may also begin gambling as an attempt to find a quick fix to financial pressures. These only have further negative impacts on their levels of desperation and their inability to generate healthier solutions.
• Work performance may deteriorate and attendance may become problematic. Low energy levels and poor level of motivation can make it difficult to attend to daily tasks, which is particularly evident in the work place and at home.
• People who are depressed will also avoid their usual leisure-time activities, and there may be an increase in activities that allow them to isolate or engage in the maladaptive/addictive behaviours mentioned above.
• Their thoughts may also reflect a general sense of hopelessness that can leave those who engage with them feeling negative and hopeless too. They may avoid conversations and may give you the misleading answer that everything is ‘fine’.
“As a family member or close friend, if you witness any of these behaviours, you need to trust your feelings of discomfort, recognise what you are seeing, continue to be supportive and encourage the person to seek help,” Lamont stresses.
Both inpatient and outpatient treatment are available. What is most important is that the person concerned gets help and finds a connection with a professional who will ensure they get the assistance they need. Different layers of treatment exist, from medication and psychotherapy to ECT (electro-convulsive therapy) and TMS (transcranial magnetic stimulation), and ketamine drips. “It is important for the individual not to be ashamed to try these,” Lewis says.
If you are concerned about yourself or someone who you care about you can contact the following:
• Your local clinic
• South African Depression and Anxiety Group (SADAG): 0800212223
• Lifeline: 0861322322
• Your GP – they may refer you to an appropriate service
• Religious, spiritual or community centres
• Akeso Clinic Group: 0861 HELP US (4357 87) or Akeso Randburg: 087 098 0457
• Befrienders South Africa: 051 444 5691
• Any emergency medical service or your closest emergency room
Communities play a critical role in suicide prevention. They can provide social support to vulnerable individuals and engage in follow-up care, fight stigma and support those bereaved by suicide. We can all assist in reducing the suicide rate in our country by being more open and understanding with people who are struggling in the difficult times we are facing. Encourage anyone who you feel is struggling to access help.
Facts and Figures
• Globally, more than 800 000 people die due to suicide every year.2
• Suicide is the second leading cause of death in 15-29-year-olds.2
• Suicide was the fourth leading cause of death for young people aged 15-24 in South Africa in 2012. That year, 1,665 young people died as a result of suicide.2
• 78% of suicides occurred in low- and middle-income countries in 2015.2
Binge drinking: Young women three times more likely to blackout compared to men - study
Photo credit: Getty Images
Adolescent women are three times more likely than men of the same age to black out from binge drinking alcohol, a new study has found.
Additionally, young women are almost twice as likely as men to black out after drinking the same amount of alcohol, possibly due to differences in metabolism.
Australia's National Drug and Alcohol Research Centre found about 10 percent of 14-year-olds who drank alcohol had experienced a blackout, and the rate of blackouts rose throughout high school. By the age of 19, nearly half of all young people who drank alcohol had blacked out, and of these people, approximately 14 percent had experienced five or more blackouts.
The eight-year study tracked the drinking behaviour of 1821 people in New South Wales, Western Australia, and Tasmania, starting from when they were 13 years old. Participants were asked when they started drinking alcohol, if they had any alcohol-related blackouts, and if they had abused alcohol or become addicted to it.
Wing See Yuen, the lead author for the study, says young people tended to know about the behavioural risk of blackouts but didn't know there was a difference in risk between men and women.
The Ministry of Health describes binge drinking as drinking alcohol heavily over a short period with the intention of getting drunk, and gives a warning about the "serious health effects".
"Drinking large amounts of alcohol can result in confusion, blurred vision, poor muscle control, nausea, vomiting, sleep, coma or even death," it says.
"It can also impair a person's judgement and decision-making ability, which can increase the risk that they may do silly things and put themselves in dangerous situations."
A supermoon which will turn blood red
The total lunar eclipse on Wednesday, May 26, will make the moon turn blood red. This moon is also a supermoon and the closest full moon of the year, so it will appear slightly larger to skywatchers on Earth. On May 26, the moon will reach its fullest at 7:14 a.m. EDT (1114 UTC). However, the full moon reaches perigee, or its closest distance to Earth, the day before at 9:21 p.m. EDT on Tuesday, May 25 (0121 May 26 UTC). Usually, the moon is an average of 240,000 miles (384,500 km) from Earth, but at that moment, the full moon will be 222,022 miles (357,311 km) from Earth. Some areas of the world and all of the United States will be able to see at least parts of the lunar eclipse, including its partial and penumbral phases.
During a lunar eclipse, the Earth passes between the moon and the sun, meaning that Earth’s shadow falls on the moon. The entire eclipse will last for about five hours. It’s perfectly fine to look directly at the moon during a lunar eclipse. Binoculars can help you see the moon’s rough terrain, while a telescope will help you zero in on distinct features, such as cracks in the moon’s surface known as rilles, which formed when ancient lava on the moon filled basins before cooling and contracting. The next big lunar event, a partial lunar eclipse, will happen on Nov. 19, 2021.
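Because apparent (angular) size scales inversely with distance, the distances quoted above determine how much larger the perigee full moon looks. A quick sanity check using the article's kilometre figures:

```python
# How much larger does a perigee full moon look than an average one?
# Apparent angular size is inversely proportional to distance.

average_distance_km = 384_500   # average Earth-moon distance (from the article)
perigee_distance_km = 357_311   # distance at this supermoon's perigee

size_increase = average_distance_km / perigee_distance_km - 1
print(f"Apparent diameter is about {size_increase:.1%} larger than average")
```

The result, roughly 7 to 8 percent, is noticeable mainly in side-by-side photographs rather than to the naked eye.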
Studying Carbon Uptake in the Southern Ocean with Unmanned Surface Vehicles
Just how variable is CO2 uptake in the Southern Ocean in winter?
February 25, 2019
The 2019 Saildrone Antarctic Circumnavigation, the first autonomous circumnavigation of the Southern Ocean, endeavors to accomplish a significant list of science objectives, in collaboration with leading research agencies in the US, Europe, and Australia. The Southern Ocean accounts for approximately 40% of the total ocean carbon uptake, but only 20% of the surface area. Vast areas of the Southern Ocean remain unsampled, especially during the stormy autumn and winter seasons when ship-based observations are particularly difficult. Shifts in winds and circulation around Antarctica have already been shown to alter the amount of carbon dioxide uptake from the atmosphere. A full year of observations made with Saildrone unmanned surface vehicles (USVs) could provide critical data about how the region is changing, as well as the biological and physical processes driving those changes.
Scientists from the National Oceanic and Atmospheric Administration (NOAA) and Commonwealth Scientific and Industrial Research Organisation (CSIRO), among others, will use the data collected by Saildrone to study carbon uptake in the Southern Ocean.
Wind- and solar-powered saildrones are equipped with GPS and navigational instruments that make them capable of autonomously sailing a set course of waypoints, as prescribed by Saildrone Mission Control in Alameda, CA. Saildrones are designed for deployments of up to one year and return to port on their own. Minute-level data is transmitted in real time; one-second data is downloaded upon mission completion.
“Over the past 20 years of making ship-based measurements, we’ve learned that there’s a lot more variability in the amount of carbon the Southern Ocean can take up than we’d previously realized. We need more information to understand the regional changes, and how carbon uptake is changing year to year, but we can’t get that with ships alone,” said Dr. Bronte Tilbrook, a biogeochemist studying ocean acidification and the global carbon cycle at CSIRO. “The advantage of the saildrones is that they can go to areas where there have been very few ship observations. We’re sending the saildrones into regions we just couldn’t before. It’s quite significant.”
A look at CO2 data collected by a Saildrone USV in the Southern Ocean on January 26, 2019. The red line is atmospheric CO2 in parts per million; the purple is dissolved CO2 (i.e. CO2 in the ocean). When the purple line is below the red line, the ocean is absorbing CO2, and when it’s above the red line, the ocean is releasing CO2.
Saildrone USVs carry a suite of science sensors to collect in situ data above and below the surface of the water including air and skin temperature, relative humidity, pressure, Chl-a, salinity, and pH. The ASVCO2 developed by NOAA allows us to measure the difference in the partial pressure of CO2 in the atmosphere and surface ocean, and this is used to calculate the amount of CO2 being absorbed or released by the surface ocean.
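The absorbing-versus-releasing interpretation described for the CO2 plot comes down to the sign of the difference between atmospheric and surface-ocean pCO2. A simplified sketch, with invented readings:

```python
# Sign of the air-sea pCO2 difference gives the direction of CO2 flux:
# seawater pCO2 below atmospheric -> ocean absorbs CO2; above -> releases.
# The readings below are invented for illustration (microatmospheres).

def flux_direction(atm_pco2, sea_pco2):
    """Classify the air-sea CO2 flux direction from a pair of readings."""
    if sea_pco2 < atm_pco2:
        return "absorbing"
    if sea_pco2 > atm_pco2:
        return "releasing"
    return "equilibrium"

readings = [(410.2, 395.8), (409.9, 412.4), (410.5, 410.5)]
for atm, sea in readings:
    print(f"atm={atm} sea={sea} -> ocean {flux_direction(atm, sea)}")
```

The magnitude of the flux also depends on wind speed and solubility, but the sign test above is the essence of the red-line/purple-line comparison in the plot.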
Over the course of the 270-day Antarctic mission, saildrones will periodically rendezvous with surfacing SOCCOM floats. SOCCOM floats are deployed from a ship and active for approximately three years. The saildrone will perform cross-validation sampling as close to the SOCCOM float as possible in terms of time and distance.
“The floats are suggesting that the wintertime conditions of CO2 uptake are changing quite a bit more than we understood. There’s an interesting set of data starting to emerge, showing that the overall Southern Ocean sink is more variable in the winter. The floats provide an opportunity to get some data, but we really need to verify it, and we can only do that by making independent measurements when they come up to the surface,” said Tilbrook.
Saildrones and SOCCOM floats present two very different sampling strategies: The saildrone is focused on air-sea interaction at the surface, and the float is focused on sub-surface measurements of the water column. The floats measure the pH of the water and infer the partial pressure of carbon dioxide. The ASVCO2 on the saildrone measures atmospheric CO2; an equilibrator pumps air through the surface of the seawater to bring the air and water into equilibrium in terms of CO2 for a short period of time.
Deploying a float from the R/V Nathaniel B. Palmer during the 2016 – 2017 cruise to the Southern Ocean as part of the SOCCOM project. Photo: Greta Shum/SOCCOM Project.
“This mission is a preview of the kind of multi-platform, long-term observing system we could envision for the Southern Ocean. The SOCCOM floats have given us an unprecedented amount of data in this region, which challenged a lot of our assumptions about the ocean CO2 sink. But have conditions in this region the last few years been an anomaly, or not? Continuous, long-term observing is one way to find out,” said Dr. Adrienne Sutton, an oceanographer with the NOAA Pacific Marine Environmental Laboratory (PMEL) Carbon Group. The PMEL Carbon Group has been involved in all Saildrone missions related to CO2 to date.
The Saildrone Antarctic Circumnavigation is one of several ongoing and recent missions related to carbon uptake and data validation. On January 30, Saildrone launched a USV in Newport, RI, on a 30-day mission to study heat transfer and carbon flux in the Gulf Stream and in June 2018, the Saildrone Baja Campaign studied upwelling and frontal region dynamics, air-sea interactions, and diurnal warming effects along the US/Mexico coast to Guadalupe Island and assessed the Saildrone platform for satellite data accuracy and model assimilation.
Learn about Saildrone's ocean data collection solutions.
Main photo
Seals rest on ice floes in the Southern Ocean. Taken from the R/V Nathaniel B. Palmer. Photo: Greta Shum/SOCCOM Project.
Python - IMAP
IMAP is an email retrieval protocol that does not download emails to the local machine; messages stay on the server and are read and displayed from there. This is very useful in low-bandwidth conditions. Python’s client-side library, imaplib, is used for accessing emails over the IMAP protocol.
IMAP stands for Internet Message Access Protocol. It was first proposed in 1986.
Key Points:
• IMAP allows the client program to manipulate e-mail messages on the server without downloading them to the local computer.
• The e-mail is held and maintained by the remote server.
• It enables us to take actions such as downloading or deleting mail without reading it first, and to create, manipulate and delete remote message folders called mailboxes.
• IMAP enables the users to search the e-mails.
• It allows concurrent access to multiple mailboxes on multiple mail servers.
IMAP Commands
The following table describes some of the IMAP commands:
1. LOGIN: This command opens the connection by authenticating the client with the server.
2. CAPABILITY: This command requests a listing of the capabilities that the server supports.
3. NOOP: This command is used as a periodic poll for new messages or message status updates during a period of inactivity.
4. SELECT: This command selects a mailbox so its messages can be accessed.
5. EXAMINE: It is the same as SELECT, except no change to the mailbox is permitted (read-only access).
6. CREATE: It is used to create a mailbox with a specified name.
7. DELETE: It is used to permanently delete a mailbox with a given name.
8. RENAME: It is used to change the name of a mailbox.
9. LOGOUT: This command informs the server that the client is done with the session. The server must send an untagged BYE response before the OK response and then close the network connection.
In the example below, we log in to a Gmail server with user credentials and select the inbox. A for loop displays the fetched messages one by one, and finally the connection is closed.
import imaplib

imap_host = ''     # IMAP server, e.g. 'imap.gmail.com' for Gmail
imap_user = ''     # your e-mail address
imap_pass = 'password'

# connect to host using SSL
imap = imaplib.IMAP4_SSL(imap_host)

# login to server
imap.login(imap_user, imap_pass)

# select the mailbox to fetch messages from
imap.select('Inbox')

# search returns a space-separated list of matching message numbers
tmp, data = imap.search(None, 'ALL')
for num in data[0].split():
    tmp, data = imap.fetch(num, '(RFC822)')
    print('Message: {0}\n'.format(num))
    print(data[0][1])

# close the mailbox and end the session
imap.close()
imap.logout()
Depending on the mailbox configuration, the mail is displayed.
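The raw RFC 822 bytes returned by imap.fetch(num, '(RFC822)') are easier to work with when parsed with the standard-library email module. The sketch below uses a hard-coded sample message in place of real server data, so it runs without an IMAP connection:

```python
# Parsing the raw RFC 822 bytes that imap.fetch(num, '(RFC822)') returns.
# A hard-coded sample message stands in for real server data here.
import email
from email.policy import default

raw_message = (
    b"From: alice@example.com\r\n"
    b"To: bob@example.com\r\n"
    b"Subject: Quarterly report\r\n"
    b"\r\n"
    b"The report is attached.\r\n"
)

# message_from_bytes accepts exactly what fetch() places in data[0][1]
msg = email.message_from_bytes(raw_message, policy=default)
print("From:   ", msg["From"])
print("Subject:", msg["Subject"])
print("Body:   ", msg.get_body(preferencelist=("plain",)).get_content().strip())
```

In the loop above, you would pass data[0][1] to email.message_from_bytes instead of the sample bytes.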
Sustainable Cities
From EcoliseWiki
Sustainable Cities are increasingly being seen as a solution to climate change, global warming and social issues such as world hunger and poverty, especially after the creation of the Sustainable Development Goal 11: Sustainable cities and communities, one of the United Nations Sustainable Development Goals (SDGs). Some argue that the Sustainable Cities concept does not go far enough and suggest Regenerative Cities and Ecocities as alternatives, while Degrowth and Post Growth advocates point out that the SDGs are still based on problematic issues of continual growth.
Warning Signs of Heart Attack
April 8th, 2009
According to the American Heart Association, the classic warning signs are:
* An uncomfortable pressure, squeezing, fullness, or pain in the center of the chest that lasts for more than a few minutes, then disappears and returns.
* Pain that radiates to the shoulders, stomach, back, arms, neck, or jaw.
* Chest discomfort with dizziness, fainting, nausea, sweating, fluttering heartbeat, or shortness of breath
Women may also have these warning signs, which are less common:
* Unusual chest pain, stomach, or abdominal pain, which may feel like indigestion or the need to belch.
* Difficulty breathing and shortness of breath.
* Unexplained weakness, fatigue, or anxiety.
* Palpitations (an irregular heart beat), rapid heart beat, paleness, or breaking into a cold sweat.
* Pain in the jaw or back.
If you or anyone you know is having these symptoms, get to a hospital immediately. Not all the symptoms show up in every attack. Do not wait, because the heart muscle starts to die during an attack and every minute counts. It is always better to be safe than sorry.
A Unique Lesson For Diabetic
November 19th, 2008
Fahim Ahmed was incidentally diagnosed as diabetic after a urine test. That was about ten years ago, and from then on Mr Fahim, like so many other people with diabetes, became fixated on his blood sugar. His doctor warned him to control it or the consequences could be dire: he could end up blind, lose a leg, suffer kidney failure and so on.
Mr Fahim, a 45-year-old business executive of a reputed organisation in the city, tried hard. When dieting did not work, he began taking pills to lower his blood sugar and pricking his finger several times a day to measure his sugar levels. They remained high. So he agreed to add insulin to his already complicated regimen.
Blood sugar was always in his mind. But in focusing entirely on blood sugar, he ended up neglecting the most important treatment for saving lives — lowering the cholesterol level. That protects against heart disease, which eventually kills nearly everyone with diabetes. He was also missing a second treatment that protects diabetes patients from heart attacks — controlling blood pressure. He assumed everything would be taken care of if he could just lower his blood sugar level.
Most diabetes patients try hard but are unable to control their diseases in this way and most of the time it progresses as years go by. Like many diabetes patients, he ended up paying the price for his misconceptions about diabetes. Last year, Mr Fahim had a life-threatening heart attack.
Diabetes goes undetected in many heart patients. It is a silent threat for many people who end up with heart disease, because these patients do not feel the actual intensity of pain due to nerve damage caused by diabetes. Blood sugar control is important in diabetes, specialists say; it can help prevent dreaded complications like blindness, amputations and kidney failure. But controlling blood sugar alone is not enough.
In part it is the fault of proliferating advertisements for diabetes drugs that emphasise blood sugar control, which is difficult and expensive and has not been proven to save lives. And in part it is the fault of public health campaigns that give the impression that diabetes is a matter of an out-of-control diet and sedentary lifestyle and the most important way to deal with it is to lose weight. Again, the fault for the missed opportunities to prevent complications and deaths lies with the medical system. The doctors typically spend just 5 minutes with diabetes patients, far too little for such a complex disease.
Mr Fahim found all that out too late. So, no matter how carefully patients try to control their blood sugar, they can never get it perfect — no drugs can substitute for the body’s normal sugar regulation. So while controlling blood sugar can be important, other measures also are needed to prevent blindness, amputations, kidney failure and stroke.
Dr Md Rajib Hossain
Special Report
The Richest Town in Every State
Source: Wikimedia Commons
Urban centers often have at their perimeters small towns with well-educated, wealthy residents. States without such large metropolitan areas tend to lag behind in income and education and, as a result, tend to have less wealthy towns.
24/7 Wall St. reviewed household income levels in U.S. towns with populations under 25,000 people to determine the wealthiest town in each state. The richest town in the United States is Scarsdale, New York where the median annual household income is $241,453 — more than four times the median income nationwide, and 13 times the median income for households in Macon, Mississippi, the poorest town in America.
Incomes even in the nation's richest towns vary considerably. Because wealthy areas tend to be concentrated around large urban centers, states with large cities tend to have many small wealthy towns. Meanwhile, states lacking these dense urban clusters not only tend to have fewer wealthy areas, but their wealthy towns are also comparatively less wealthy.
In New York there are 35 towns where the median household income is more than double the national median income of $53,482. All 35 of those New York towns are within 75 miles of Manhattan. By contrast, the median household income in Maine’s richest town is less than the national median income. Maine’s largest city has a population of less than 67,000 residents — a fraction of the 1.6 million residents in New York City’s Manhattan borough alone.
One of the surest ways to increase income is to complete a college education. Towns with a high percentage of adults with at least a bachelor’s degree tend to have higher employment levels, and the jobs tend to be higher paying — driving up median household incomes. In 46 of the 50 towns reviewed, the percentage of adults with at least a bachelor’s degree is higher than the statewide percentage. In Short Hills, New Jersey, 88.7% of adults have at least a bachelor’s degree, the highest such attainment rate of any U.S. town.
To identify the richest town in each state, 24/7 Wall St. reviewed median household incomes in every town with a population of 25,000 or less in each state from the U.S. Census Bureau’s American Community Survey (ACS). Due to relatively small sample sizes for town-level data, all social and economic figures are based on five-year estimates for the period of 2010-2014. Still, data can be subject to sampling errors. We did not consider towns where the margin of error at 90% confidence is greater than 10% of the point estimate of both median household income and population. Towns were compared to both state and national figures. We considered the percentage of residents who have at least a bachelor’s degree, the towns’ poverty rates, and the workforce composition — all from the ACS. Because poverty rates can be skewed in areas with high shares of college students who frequently have very low incomes, college towns were also excluded. College towns are defined as towns where more than 40% of the population is enrolled in undergraduate or graduate school.
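The screening steps described above can be sketched in plain Python. Every field name and sample figure below is invented for illustration only, and the real analysis also applies the margin-of-error test to population, not just income:

```python
# Toy stand-in for the ACS five-year town-level table; all values hypothetical.
towns = [
    {"town": "Scarsdale",   "state": "NY", "population": 17_800,
     "median_income": 241_453, "income_moe_90": 9_000,  "pct_enrolled": 0.05},
    {"town": "CollegeTown", "state": "OH", "population": 12_000,
     "median_income": 30_000,  "income_moe_90": 2_000,  "pct_enrolled": 0.55},
    {"town": "NoisyTown",   "state": "TX", "population": 9_000,
     "median_income": 80_000,  "income_moe_90": 12_000, "pct_enrolled": 0.10},
    {"town": "BigCity",     "state": "NY", "population": 120_000,
     "median_income": 60_000,  "income_moe_90": 1_000,  "pct_enrolled": 0.08},
]

def eligible(t):
    return (t["population"] <= 25_000                            # small towns only
            and t["income_moe_90"] <= 0.10 * t["median_income"]  # reliable estimate
            and t["pct_enrolled"] <= 0.40)                       # exclude college towns

# keep the highest-income eligible town per state
richest = {}
for t in filter(eligible, towns):
    best = richest.get(t["state"])
    if best is None or t["median_income"] > best["median_income"]:
        richest[t["state"]] = t

print({state: t["town"] for state, t in richest.items()})  # {'NY': 'Scarsdale'}
```

In this toy data only Scarsdale survives the screen: the college town, the town with a noisy income estimate, and the over-sized city are all excluded before per-state ranking.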
These are the richest towns in every state. |
The noble Mantis of Bushman mythology
African myths and legends
The First Bushman
Water in a desert country is so precious that, for those who depend on it, it can assume divine properties. To the Bushman, water is the ancient symbol of life. In it he can revitalize himself and make a fresh start. His legendary hero, Mantis, appears at the time of the beginning of the world, when the face of the earth was covered with water.
Mantis was carried over the tumult of the dark and turbulent water by a bee (bees, as honey makers, are an image of wisdom). The bee, however, became wearier and colder as he searched for solid ground, and Mantis felt heavier and heavier. The bee flew slower and sank down towards the water. At last, while floating on the water, the bee saw a great white flower, half-open, awaiting the sun’s first rays. He laid Mantis in the heart of the flower and planted within him the seed of the first human being. Then the bee died. But as the sun rose and warmed the flower, Mantis awoke and there, from the seed left by the bee, the first Bushman was born.
Mantis, Ostrich and Fire
In addition to life, Mantis also brought the first fire to the people. Before this, they ate their food raw, just as they killed it, like the Leopard and the Lion, and they slept in their shelters at night with no cheering light to brighten the long dark hours. Mantis had noticed that whenever Ostrich went to eat, his food smelt different and delicious. So one day he crept close to Ostrich to observe him as he ate. He saw Ostrich furtively take some fire from beneath his wing and dip his food into it. When he had finished eating, he carefully tucked the fire back under his wing and walked off.
Mantis knew that Ostrich would not give him any fire, so he decided to make a plan. One day he went to visit Ostrich. “Come”, he called, “I have found a tree with delicious yellow plums on it.” Ostrich was delighted. He began to eat the plums that were easiest to reach. “No, higher, higher! The best ones are right at the top”, Mantis urged him.
As Ostrich stood up on tiptoe and spread his wings to balance himself, Mantis snatched some of the fire beneath his wing and ran off with it. This was how he brought fire to the Bushmen. Since then, Ostrich, terribly ashamed, has never flown and keeps his wings pressed to his sides, to preserve the little fire he has left.
According to the Bushman, the Ostrich has always been rather an odd fellow. When the female makes her nest in a hollow in the warm sand, she lays 20 to 30 round, creamy eggs, but invariably leaves one outside. Why? Because she and her husband are so busy brooding on the theft of his fire that they can be very absent-minded. She is even liable to forget she is sitting on a clutch of eggs, and so she puts one outside, just to remind herself and her husband that the eggs are there.
The Mantis Family
Although Mantis is a type of “superbeing”, the Bushmen do not regard him as a god like the moon and sun. Indeed, he is all too human and in many ways personifies the Bushman himself. He is a kind of dream-Bushman and resembles the real mantis, with his small wedge-shaped face and intelligent look. The figures which primitive artists painted on the walls of their rock shelters prance along like Mantis himself.
Mantis is very much a family man and likes to have his folk around him. His wife is Dassie, the rock hyrax. His son is young Mantis, very like his resourceful father. Porcupine is an adopted daughter whose real father is a weird monster called the All-Devourer, with whom she is too frightened to live.
Porcupine is married to a being who is neither human nor animal but a part of the rainbow, called Kwammanga. They have two sons, one called Kwammanga after his father and the other Mongoose or, as he is sometimes known, Ichneumon. The latter is a bossy young character who is always putting his grandfather Mantis in his place. Mantis also has a sister, a lovely lady called Blue Crane, of whom he is most fond.
Stories and pictures reproduced from the book ‘Myths and Legends of Southern Africa’ told by Penny Miller |
uterine prolapse
September 7, 2021
Dr. Muhannad Al-Khatib
Uterine prolapse - causes, symptoms and treatment methods
Uterine prolapse is a serious condition in which the uterus descends toward the vagina. Below are the symptoms of uterine prolapse, its causes, its treatment in Turkey, and ways to prevent it.
Uterine prolapse is a common condition that can occur as a woman ages. Over time, and with multiple vaginal deliveries, the muscles and supporting ligaments around the uterus and vagina can weaken. When this supportive structure begins to fail, the uterus can slip out of position and descend into the vagina. This is called uterine prolapse.
What is uterine prolapse?
Uterine prolapse is a condition in which the structures that hold and support the uterus weaken over time. The uterus, part of the female reproductive system, is located in the pelvis and is roughly pear-shaped. It carries the developing baby (the fetus); it is a muscular structure that expands to fit the baby and then shrinks back to size afterwards.
Prolapse can vary depending on how weak the uterine supports are. In incomplete prolapse, the uterus descends partway into the vagina, and women may feel a lump or swelling. In more severe cases, the uterus can slide far enough that it can be felt outside the vagina. This is called complete prolapse.
Who gets uterine prolapse?
Uterine prolapse is more likely to occur in women who:
• Have had one or more vaginal births.
• Are past menopause.
• Have family members who have had uterine prolapse.
Menopause occurs when the ovaries stop producing the hormones that regulate the menstrual cycle. When you haven't had a period for 12 months in a row, you're considered menopausal. Estrogen is one of the hormones that stop being secreted during menopause. This hormone helps maintain the strength of the pelvic muscles, and without it, the uterus is more likely to experience prolapse.
How common is uterine prolapse?
Uterine prolapse is a common condition that affects many women. Your risk of developing this condition increases with age and if you have multiple vaginal births.
What are the causes of uterine prolapse?
The uterus is held in place inside the pelvis by a group of muscles and ligaments called the pelvic floor muscles. When these supporting structures weaken, they can no longer hold the uterus in place, and it descends and droops through the vagina. There are several causes of pelvic muscle weakness, including:
• Loss of muscle strength as a result of aging.
• Trauma during vaginal childbirth, especially in the case of multiple vaginal births or large babies (more than 9 pounds).
• Obesity.
• Chronic cough or strain.
• Chronic constipation.
What are the symptoms of uterine prolapse?
If you have a mild case of uterine prolapse, you may not have any obvious symptoms. However, when the uterus slips out of position, it can put pressure on other pelvic organs — such as the bladder or intestines — and cause symptoms such as:
• The woman feels pressure or heaviness in the pelvis and vagina.
• Pain in the pelvis, abdomen, or lower back.
• Pain during intercourse.
• Uterine tissue protruding through the opening of the vagina in advanced stages.
• Recurrent bladder infections.
• Unusual or excessive vaginal discharge.
• Constipation.
• Trouble urinating, including incontinence, a need to urinate frequently (urinary frequency), or a sudden urge to urinate (urinary urgency).
Symptoms worsen when standing or walking for long periods of time. In these positions, there is additional pressure on the pelvic muscle structures due to gravity.
How is uterine prolapse diagnosed in Turkey?
Diagnosing uterine prolapse involves several examinations. The doctor will examine the pelvis to determine whether the uterus has dropped from its normal position. During a pelvic exam, the doctor inserts a speculum (an instrument that allows the doctor to see inside the vagina) to examine the vagina and uterus, and feels for any bulge caused by the uterus descending into the vagina.
How is uterine prolapse treated in Turkey?
There are both surgical and non-surgical options for the treatment of uterine prolapse. Your doctor will choose your course of treatment based on the severity and symptoms of prolapse, your general health, your age, and whether or not you want to have children in the future. Treatment options for affected women may include:
Non-surgical options
• Exercise: Special exercises, called Kegel exercises, can help strengthen the pelvic floor muscles. This may be the only treatment needed in mild cases of uterine prolapse. To do Kegel exercises, tighten your pelvic muscles as if you were trying to stop the flow of urine. Hold for a few seconds and then relax. Repeat this 10 times. You can do these exercises anywhere, anytime (up to four times a day).
• Vaginal pessary: A pessary is a round rubber or plastic device that is placed around or under the lower part of the uterus (the cervix). It helps support and hold the uterus in place. Your doctor will fit and insert the pessary, which must be cleaned frequently and removed before sex.
Types of prolapse
Surgical options
• Hysterectomy and prolapse repair: Uterine prolapse can be treated by removing the uterus. This can be done through a cut (incision) in the vagina (vaginal hysterectomy) or through the abdomen. A hysterectomy is major surgery, and once the uterus is removed, pregnancy is no longer possible.
• Prolapse repair without hysterectomy: This procedure involves returning the uterus to its normal position. Uterine suspension can be done by reattaching the pelvic ligaments to the lower part of the uterus to hold it in place. The surgery can be performed through the vagina or through the abdomen, depending on the technique used.
What are the complications of uterine prolapse?
If untreated, uterine prolapse may affect other organs in the pelvic region of the body. The uterus hanging through the vagina can put pressure on the intestines and bladder. It can also negatively affect sexual life, as it can cause pain.
Can uterine prolapse be prevented?
You may not be able to prevent all cases of uterine prolapse, but there are ways to reduce your risk of uterine prolapse. Some lifestyle tips that can reduce your risk of developing prolapse include:
• Maintain a healthy weight.
• Exercise regularly, including exercises to strengthen the pelvic floor muscles, and make sure any exercise is appropriate for your state of health.
• Follow a healthy diet. Talk to your doctor or a dietitian (a special type of health care provider who helps you create a meal plan) about the best diet for you.
• Stop smoking. This reduces the risk of developing a chronic cough, which can put extra pressure on the pelvic muscles.
• Use proper techniques to carry heavy objects.
What are the appropriate techniques for carrying heavy objects to prevent uterine prolapse?
There are several ways to lift heavy objects that can help you avoid uterine prolapse. Lifting techniques include:
• Don't try to lift things that are too heavy for you to lift on your own. Also, avoid lifting heavy objects above waist level.
• Before you lift something, make sure your footing is stable.
• To pick up something below your waist, keep your back straight and bend at your knees and thighs. Don't bend forward at the waist with your knees straight.
• Stand close to the object you're trying to pick up, keeping your feet flat on the ground. Tighten your abdominal muscles and lift your body up using your leg muscles. Then straighten your knees in a steady motion.
• Stand completely upright without twisting. Always move your feet forward when lifting something.
• If you are lifting something off a table, slide it to the edge of the table so that you can hold it close to your body. Bend your knees so that you are close to the object, then use your legs to lift it and stand.
• Hold the objects close to your body with your arms bent. And keep your abdominal muscles tight. Then take small steps and go slowly.
Does uterine prolapse recur?
Most of the time, treatment for uterine prolapse is effective. However, the prolapse can sometimes return. Recurrence is more common in women who have severe uterine prolapse, are obese, or are under 60 years of age.
Treatment for uterine prolapse usually has very positive results and lifestyle changes (maintaining a good weight, exercising) can help prevent recurrence of prolapse. Talk to your doctor about any concerns you may have about uterine prolapse. Your doctor can help you develop a treatment plan and build good lifestyle habits to prevent any recurrences of uterine prolapse in the future.
Bimaristan Medical Center remains your first choice for treatment in Turkey.
We provide our services throughout Turkey; our center is the best destination for your treatment.
We accompany you step by step towards recovery.
Free consultations around the clock.
Do not hesitate to contact us; Bimaristan is your family's center in Turkey.
Frequently asked questions and answers about uterine prolapse and its treatment in Turkey
Is there a relationship between uterine prolapse and menstruation?
So far, no relationship between uterine prolapse and the menstrual cycle has been demonstrated, although the likelihood of uterine prolapse increases in women after menopause.
What are the symptoms of uterine prolapse?
Symptoms of uterine prolapse include a feeling of pressure and heaviness in the vagina, protrusion of uterine tissues from the vagina, infections and urinary problems, pressure on the colon and rectum causing constipation, lower back pain.
Is treatment of uterine prolapse dangerous?
Like any other surgical procedure, it is not without complications, such as the formation of a fistula (an abnormal connection) between the vagina and the bladder, intestine or rectum, incontinence, and relapse of the uterine prolapse.
Does uterine prolapse prevent pregnancy?
Uterine prolapse reduces the chances of pregnancy, although pregnancy can still occur despite the prolapse.
Does uterine prolapse prevent menstruation?
Uterine prolapse does not prevent women from menstruating.
How does uterine prolapse affect sexual intercourse?
Uterine prolapse may be asymptomatic; in advanced stages it can cause pain and urinary incontinence during sexual intercourse.
What is the success rate of uterine prolapse surgery in Turkey?
According to recent studies, surgical treatment of uterine prolapse is successful in 80-95% of women, with a recurrence rate of only 3%.
The Greatest Comeback: From Genocide to Football Glory: The Story of Bela Guttmann
Before Pep Guardiola and before Jose Mourinho, there was Bela Guttmann: the first superstar football coach, and the man who paved the way for the celebrated coaches of the modern age.
He was also a Holocaust survivor. In 1944, much of Europe had wanted Guttmann dead. He hid for months in an attic near Budapest as thousands of fellow Jews in the neighbourhood were dragged off to be murdered. Later, he escaped from a slave labour camp before a planned deportation and almost certain death. His father, sister and wider family were murdered.
But by 1961, as coach of Benfica, he had lifted Europe's greatest sporting prize, the European Cup, a feat he repeated the following year.
This biography spans two contrasting visions of Europe: one of barbarism and genocide, and one of beauty, wonder and romance, of balmy evenings in magnificent cities, where great players would stretch every sinew in a bid to win football's holy grail. With dark forces rising once again in that continent, the story of Bela Guttmann's life asks the question: which vision will triumph in our times?
Product details
Publisher: Biteback Publishing
Maine to Move to a Standards-Based High School Diploma
During the 2012 Maine Legislative Session, Commissioner of Education Stephen Bowen spoke in favor of passing L.D. 1422, An Act to Prepare Maine People for the Future Economy. The legislature then passed the bill in May. The state policy holds its education system responsible for preparing “all of the people of the State for success in college, career, citizenship and life.”
The law establishes goals related to improving early childhood programming, increasing high school and college completion rates and guaranteeing college and career readiness. One major change developing out of the law is a move to a standards-based high school diploma. The move to this diploma was initiated because over half (54%) of Maine high school students who continue their education at a community college require remedial coursework prior to taking college-level classes, according to Maine Department of Education’s spokesperson, David Connerty-Marin. The law will require potential graduates (beginning in 2017) to show proficiency in English, mathematics, and science—among other subject areas which would be determined by the local school board.
During the legislature’s deliberations, Commissioner Bowen supported the bill, saying "[t]he idea is that under the bill, this year's seventh-graders would be that first class of kids that would have to demonstrate that they've met the learning results in order to graduate. That means that if I'm a high school, I'm going to have that group of kids soon."
Schools will have to allow students to show proficiency in any number of ways, from traditional tests to performances, exhibitions, or portfolios. Additionally, a waiver provision would give districts until 2020 to fully implement the new diplomas if they can demonstrate they are working toward that goal.
Rhyming Sonnets
A sonnet is an echo chamber of sounds, a closed form, allowing sound patterns to form consciously, or otherwise, in the reader’s mind. Rhyming is part of that sound-patterning, and rhymes can appear either at the end, the beginning or inside each line.
Rhymes at the end of each line have characterised sonnets for centuries; they allow meaning to be highlighted, structure to be delineated.
I will denote end-of-line words with lower case letters: a,b,c,… so a rhyming couplet would be described as aa or bb or cc. A stanza with alternating end-rhymes would be abab. Rhymes may emphasise stanza breaks by ensuring there is a closing rhyme at the end of the stanza: denoted xbxb, where the end of lines 1 and 3 may not rhyme at all but the last line rhymes with line 2.
However, rhymes should not be obvious, unless used for humorous effect; they should operate below the reader’s conscious perception.
This can be hard to do with full rhyme, so some poets use pararhyme for a lighter touch, e.g., W.B. Yeats (Leda and the Swan) “up/drop”; S. Heaney (Fosterling) “land/mind”. These use assonance and consonance, alliteration in end-rhymes, with word sounds sometimes just chiming together, e.g. “justice/hostess”.
Achieving this natural effect may nudge you to explore different line endings and alternative phrasing during the drafting phase. Through the need for end rhymes, you may occasionally be obliged to search for a different word than your initial choice to make an original rhyme. This process can create the magic where the sonnet starts to write itself. You may be steered along an alternative path to that originally conceived, ideally to a better poem.
Rhymes facilitate poetry’s power to move, so it pays to invest some effort in this aspect of poetic structure. Here are some alternative rhyming schemes used throughout the history of the sonnet form:
• Petrarchan: abba abba cde cde or abba abba cdcdcd. Note the “enclosing rhyme” pattern abba
• Shakespearean: abab cdcd efef gg (attributed to Shakespeare but established earlier by Henry Howard, Earl of Surrey). Note the “crossing rhyme” pattern abab
• Spenserian: abab bcbc cdcd ee
• Closed Terza rima or a chain-rhyme sonnet: aba bcb cdc ded ee
• Meredithian: abba cddc effe ghhg
• Rhyme royal: ababbcc ababbcc
• English Rondel: ABab baAB baabAB (see my blog on the English Rondel).
In modern sonnets, many more variants have been devised, including unrhymed sonnets. There is also the form that I have devised with orphaned lines, abax b cdcx d efef, as used in The Arrow of Time [published in The Cannon’s Mouth Quarterly, p.48, Cannon Poets, Issue 71, March 2019, ISSN 1745-663003].
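The letter notation used throughout this post can be generated mechanically once each line's rhyme sound is known. Here is a small Python sketch; the rhyme sounds are supplied by hand, since detecting rhyme automatically is a much harder problem:

```python
def scheme(end_sounds):
    """Assign letters a, b, c, ... in order of first appearance,
    so lines that share an end sound share a letter."""
    letters = {}
    out = []
    for sound in end_sounds:
        if sound not in letters:
            letters[sound] = chr(ord("a") + len(letters))
        out.append(letters[sound])
    return "".join(out)

# Crossing rhyme, as in a Shakespearean quatrain:
print(scheme(["day", "night", "day", "night"]))  # abab
# Enclosing rhyme, as in a Petrarchan quatrain:
print(scheme(["time", "sea", "sea", "time"]))    # abba
```

The same function spells out a whole sonnet's scheme if you feed it fourteen sounds, which can be a handy check while drafting.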
Why does the UK government use an obscure word?
The word “society” is in the title of a new Government paper.
The paper is titled “A sociological vision of society”.
But what does that even mean?
The paper is part of a document titled The Societal Vision of Britain and it refers to a “sociological vision” which is a “view of the world in which society functions, the way it relates to the environment and the way in which it should behave”.
It also refers to “societies” which, as well as being “groups of people who are members of a society”, “are groups of individuals, families and groups, and of a broader group of individuals and groups”.
What is “societal vision” anyway?
The word “vision” is used in the Constitution, the Government’s statutory framework for governing British law, and the UK Government is entitled to use it, as its statutory authority under the constitution.
But “visionary vision” is a less-used term in English law.
So what does “visionary vision” mean?
In order to answer this question, the UK Ministry of Justice, through its Bureau of Social Justice and Public Policy, released a short video outlining the concept of “vision”.
The idea is that when people consider how the world should be and what they should expect to be able to achieve, then they will begin to understand what it means to live in a society that is “vision-aware”.
In the video, a group of people are shown on a video screen in a room, and each of them has an image of a tree, a bird, a tree branch or a tree trunk.
The video then shows the person who has the image of the tree or bird standing at the end of the video.
The person who was the first person to have the image is the person on the left.
The person who is on the right is the same person on whom the person to the left had the image.
The left person is now shown the right person, but it is clear that they are not the same people.
In the video the right-hand person is seen standing at a window with the tree in the background.
“People see what they expect to see” is one way of putting it, and there are a number of reasons why people might expect to have something that looks like a tree or a bird in their house.
People might have a strong desire to live a life that reflects their sense of self.
They might want to live where they are and what is possible, and they might want their children to grow up to live like that.
They may want to be the kind of person who lives in a house that reflects who they are.
People might also want to look after animals, which may be seen as a good way to live, because it will allow them to be close to nature.
Another way of seeing the world may be as a “model” of what they would like to live.
People are often taught that what they see in a painting or on TV is what they want to see.
So people may expect to live as they imagine their life, in a place that reflects what they hope to achieve in their lives.
One way to describe “vision”, though, is to think of it as a way of viewing the world.
A “model of what to do” might be something like this: “This is what you should do in a certain situation, in order to achieve that outcome, and it might be what your parents and teachers and your friends and your relatives and your neighbours and your family members and the other people you trust will see it as best for you to do.”
The reason for this model, the “vision model”, is that “what is a model of what you can achieve in a given situation depends on the person doing the modelling”.
“It depends on their perspective”, says Dr Andrew C. Walker, from the University of York.
Dr Walker’s colleague Dr Andrew L. Walker said that “vision models” were very different to “model models”.
“People may want the vision model, but they may not want to have it”, he says.
If a person did not want their vision model to reflect their “future expectations” they would “never” have a vision model.
They would have “never-seen-anything-like-this”, Dr Walker says.
So “model-models” are very different from “vision projects”.
The difference is that model-models are “the kind of things that people have to work hard to achieve and they do it with a very narrow focus”, Dr Andrew Walker says, and “vision project” is “something more like an ongoing process”.
The “vision vision project” in the video is a way to explain the difference between “model model” and “model vision”.
“Model model” is an “interactive interactive” model that can |
Blockchain Technology Explained (2 Hour Course)
2,035,960 views • 7 Feb
The blockchain is a term that has come to mean many things to many people. For developers, it is a set of protocols and encryption technologies for securely storing data on a distributed network. For business and finance, it is a distributed ledger and the technology underlying the explosion of new digital currencies. For technologists, it is the driving force behind the next generation of the
Internet. For others, it is a tool for radically reshaping society and economy, taking us into a more decentralized world. Whichever way you look at it, blockchain has become a term that captures the imagination and fascinates many, as the implications of such technology are truly profound. For the first time in human history, people anywhere can trust each other and transact within large peer-to-peer networks without centralized management. Trust is established not by centralized institutions but by protocols, cryptography and computer code. This greatly strengthens our capacity for collaboration and cooperation between organisations and individuals within peer networks, enabling us to potentially form global networks of collaboration without centralized formal institutions: unprecedented, but hugely relevant in an age of globalization and a new set of 21st-century challenges that require mass collaboration. Blockchain is a complex technological, economic and social phenomenon. It calls into question what might have seemed to be established parameters of the modern world, like currency, economics, trust, value and exchange. To make sense of this, one needs to understand it in a holistic context, all the way from its technicalities to its aspirational potential. This course is designed to do exactly that, by giving a 360-degree overview of the different dimensions of the technology, its potential application within various industries, and its far-reaching implications for society and economy. In the first section of the course we give an overview of the blockchain on both a technical and a non-technical level. We also discuss the importance of the blockchain within the context of the emerging next-generation
Internet. In the second section we talk about the blockchain as a so-called trust machine and how it enables transparency and collaboration. We will look at distributed ledger technology, covering smart contracts, Ethereum and decentralized applications. In the third section we introduce the workings of token economies, illustrating how the blockchain and distributed ledgers can build vibrant ecosystems through the use of tokens to incentivize behavior. In the fourth section of the course we will look at specific applications of the blockchain to economy, society, technology and environment, examining both existing practical applications and potential future applications. The blockchain is a so-called emerging technology that is currently experiencing very rapid evolution: within the space of just two or three years it has already gone through changes in its technical implementation and in our understanding of what it is and can be. As such, our aim is to future-proof this course by not dwelling excessively on existing technical implementations but presenting a more conceptual understanding of the blockchain within the broader process of change of the emerging next-generation
Internet. The blockchain is much more than a technology: it is also a culture and community that is passionate about creating a more equitable world through decentralization. It is a movement to disrupt the disruptors, to redesign the
Internet, and in so doing shake up existing centralized incumbents. Throughout the course we will introduce you to this culture and its aspirations. This course is non-technical in nature; it is an introductory course, and thus all terms will be explained. It should be accessible to anyone with a basic understanding of web technologies and economics. In this video we're going to give a high-level overview, looking at the primary dimensions of this technology that we call the blockchain. We will first talk about the underlying technology, then the distributed ledgers that this technology supports, then the token economies that can be built on top of that ledger system. We will only touch on these topics here to get an overview, before going into them in more detail in future videos. On its most basic level, the blockchain is a new class of information technology that combines cryptography with distributed computing, both of which have existed for a number of decades. It was the genius of Satoshi Nakamoto to combine them in new ways to create a model where a network of computers collaborates towards maintaining a shared and secure database. As such, we can say the blockchain as a technology is simply a distributed secure database. This database consists of a string of blocks, each one a record of data that has been encrypted and given a unique identifier called a hash. Mining computers on the network validate transactions, add them to the block they are building, and then broadcast the completed block to other nodes, so that all have a copy of the database. Because there is no centralized component to verify alterations to the database, the blockchain depends upon a distributed consensus algorithm: in order to make an entry onto the blockchain database, all the computers have to agree about its state, so that no one computer can make an alteration without the consensus of the others. Once completed, a block goes into the blockchain as a permanent record, and each time a block gets completed a new one is generated.
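The hash-linked block structure described above can be sketched in a few lines of Python. This is a toy illustration, not a real blockchain implementation: the block fields, function names, and the use of SHA-256 over a JSON encoding are assumptions chosen for clarity, and there is no mining, networking, or consensus here.

```python
import hashlib
import json
import time

def block_hash(block):
    """Compute the 64-character SHA-256 hash over the block's contents
    (excluding the stored hash field itself)."""
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Build a block whose identity depends on its data, a timestamp,
    and the hash of the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """A chain is valid only if every block's stored hash matches its
    contents and every link points at the previous block's hash."""
    if block_hash(chain[0]) != chain[0]["hash"]:
        return False
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"] or block_hash(cur) != cur["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", genesis["hash"])
b2 = make_block("Bob pays Carol 2", b1["hash"])
assert chain_is_valid([genesis, b1, b2])

# Tampering with an earlier block invalidates the whole chain,
# because its recomputed hash no longer matches the stored one.
b1["data"] = "Alice pays Bob 500"
assert not chain_is_valid([genesis, b1, b2])
```

Because each block's hash covers the previous block's hash, changing any historical record would force an attacker to recompute every subsequent block as well, which is the property the transcript describes.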
There is a countless number of such blocks in the blockchain, all connected to each other by links in a chain, in proper linear chronological order. The blockchain was designed so that transactions are immutable, meaning they cannot be deleted. Each block contains a hash value that is dependent upon the hash of the previous block, so they are all linked together, meaning that if one is changed then all the other blocks linked to it going forwards will be altered. This works to make the data entered tamper-proof. What we have described here is the workings of the first generation of blockchains, which functioned largely as simple databases, but the technology is currently evolving to become much more than this. As the second generation already provides the capacity to execute any computer code on the blockchain, the system is evolving to become a globally distributed cloud computing infrastructure, and, as we will discuss in a future video, it remains very much a work in progress. When seen from this perspective, blockchain technology works to create a permanent and secure database. This makes blockchain suitable for the storage of any record or transaction that involves value, or that in some way needs to be a secure and trusted source of information. These secure distributed records are called distributed ledgers. A distributed ledger is a consensus of replicated, shared, and synchronized digital data, geographically dispersed across multiple sites, countries, or institutions, without centralized administration or centralized data storage, being maintained instead by a distributed network of computers. Such ledgers can be used for any form of asset registry, such as inventory or monetary transactions. This might include the recording of hard assets, such as physical property, cars, homes, etc., or intangible assets such as currencies, patents, votes, identity, healthcare data, or any other form of valuable information. This distributed ledger technology enables us to replace a multiplicity of private databases within each
organization with one shared database that is trusted and accessible by all parties involved. In this respect the blockchain enables trust between parties that may otherwise not trust each other. The result greatly strengthens our capacity for collaboration between organizations, or between individuals peer-to-peer, without dependency on third-party centralized institutions. Likewise, it results in transparency and many other efficiencies. This is of major significance, as we currently have many centralized organizations that may be internally optimized while the inter-organizational space between them is really inefficient, with huge amounts of border friction, redundancy, arbitrage, and resources wasted on competition. By enabling trusted inter-organizational networks, these ledgers enable the formation of organization and collaboration where previously there was none, such as across whole supply chains, or for different healthcare providers to collaborate around a patient's needs, or for different transport providers to collaborate in delivering an integrated logistics network. Likewise, second-generation blockchains offer the possibility to automate the workings of these networks through what we call smart contracts. Smart contracts are computer code, stored inside a blockchain, which encodes contractual agreements. These are self-executing contracts, with the terms of the agreement or operation directly written into lines of code, which are stored and executed on the blockchain. Like normal computer programs, these containers hold algorithms that take an input of data and, depending on the value of the input, trigger certain events. For example, this might be a financial contract that takes as its input the amount of money in a person's account: if it is above a certain level, then it increases the interest rate that they earn on their deposits. Such smart contracts can be used for automating many basic operations on the network, once again working to
remove the need for intermediary third-party institutions, as smart contracts can be trusted, are tamper-proof, and execute automatically. Much of the current discussion surrounding blockchain remains at the level of the technology and the possibilities of distributed ledgers as a shared trusted database, enabling collaboration between organizations, with the resulting disintermediation of centralized institutions and market exchanges. However, its implications go far beyond this, as the blockchain concept is more than just a database or ledger: it is a new organizing paradigm for the discovery, validation, and transfer of all discrete units of value, and for the development of distributed organizations via token market systems. A token is a quantified unit of value that is recorded on the blockchain. This value may be of any kind: it may be likes on social media, it might be a currency, it might be the integrity of an ecosystem, or it might be electrical units. Token networks consist of a network of independent nodes that act autonomously but, through incentive structures and the signaling system of the market, self-organize to create emergent coordination, and thus a distributed management system. For example, we might create a clean air token, where anyone who provides a service that contributes to the maintenance and provision of clean air can earn tokens, for example by planting a tree, while those who pollute, by say operating a combustion engine, have to pay in air tokens. Thus, instead of having a centralized authority and a Clean Air Act, we have a token market that works to create signals that align people's incentives with maintaining and growing the underlying resource. Likewise, the same model could be applied to the management of technology infrastructure. As an example, we could think of traffic control. We currently have traffic control systems in cities whose operations are monitored by centralized control centers, but in a world of autonomous vehicles and the blockchain, cars could signal to each other as peers, bidding tokens to see which gets priority. In such a way the system dynamically allocates resources and self-organizes via distributed token networks. In short, blockchain is not just an information technology but also an institutional technology, in that it enables us to design incentive structures in the form of token economies, and in such a way converts centralized organizations into distributed markets via token economics. This is where things start to get quite complex, as you move into the realm of designing economies and incentive systems for coordinating human activity in a decentralized fashion, something that could potentially enable the coordination of human activity at a much larger scale than has been possible before. The great design innovation of the blockchain is really its capacity to coordinate a network of autonomous nodes towards maintaining a shared infrastructure, and this is done not just through innovations in information technology but also through the design of incentive systems, which has traditionally been the domain of economics. Through adding a layer of trust and value exchange to the
Internet, the blockchain merges our newly developed information networks with the institutional structures that sit on top of them. In so doing it greatly strengthens the capacity of those networks as a new mode for organizing society and economy. By merging economics and technology, it enables us to redesign institutional structures, and ultimately to reconceptualize how we organize virtually every aspect of society, economy, and even technology infrastructure, based on networks of autonomous nodes that are incentivized to collaborate. Of course, it does not do this alone; such claims can only be realized in combination with other technologies and broader processes of change. As such, the blockchain has to be understood in the context of a broader set of technological transformations taking place with the current evolution of the internet. Most notably, much of what the blockchain promises will only be possible given parallel developments in the Internet of Things, datafication, and advanced analytics, all of which are combining to form the next generation of the
Internet, of which the blockchain will be a critical infrastructure. In this video we're going to talk about the basics of the blockchain as a technology. On its most basic level, the blockchain can be understood as a new kind of database; at least, this was its original design. What's different about this database is that it is distributed. Digital databases have been around for a while now, but until recently they have been designed to centralize information on one computer or within one organization. The blockchain, though, uses a distributed network of computers to maintain a shared database. The blockchain is then a set of protocols and encryption methods that enable a network of computers to work together in securely recording data within a shared open database. This database consists of a series of encrypted blocks that contain the data; the blockchain is a continuously growing list of these blocks, which are linked and secured using cryptography. This makes it a trusted database, with trust being maintained by open, secure computer code and encryption instead of any single institution. The database stores information in blocks that are linked together through hash values, with entries to the database being made by computers that all have a copy of the database and all must come to consensus about its state before they can update it. There are three central concepts to understanding the system's workings: blocks and hashing; mining and proof of work; and distributed consensus. We will go over each of these separately. In terms of its structure, a blockchain may be considered as a series of blocks of data that are securely chained together. New blocks are formed as participants create new data or wish to update existing data. These blocks are encrypted and given a hash value that represents a unique identifier of the data within that block. This hashing works by a standard algorithm being run over the block's data to compress it into a code, called the hash,
which is unique to that document. No matter how large the file or what information is contained, it is compressed into a 64-character secure hash. This hash value can be recalculated from the underlying file, confirming that the original contents have not changed, but the reverse is not possible: given just the hash value, you cannot recreate the block's data contained within it, which is encrypted. All blocks of data formed after the first block are securely chained to the previous one. This means that the hash value of the next block in the chain is dependent upon the previous one; thus, once recorded, the data in any given block cannot be altered afterwards without the alteration of all subsequent blocks. As well as this hash pointer linking to the previous block, each block typically also contains a timestamp, so that we know what happened and when it happened. This hashing and linking of blocks makes them inherently resistant to the modification of their data, making them immutable records: you can only write data to the database, and once it is there it is very hard to change, almost impossible. Thus data stored on the blockchain is generally considered incorruptible. Blockchain security methods include the use of what we call public key cryptography. A public key, which is a long, random-looking string of numbers, is an address on the blockchain; value tokens sent across the network are recorded as belonging to that address. A private key is like a password that gives its owner access to their digital assets, or the means to otherwise interact with the corresponding data. A public key is associated with the private key, so that anyone can make an encrypted transaction to the public key address, but that encrypted message can only be deciphered with the private key that corresponds to that public key. In such a way, effective security only requires keeping the private key private; the public key can be openly distributed without compromising security. For example, on the Bitcoin
blockchain, to receive funds from another person you use a piece of software called a wallet, which creates a public key that you give to someone else for them to send bitcoins to that address; with your corresponding private key you can then access that address with those bitcoins on it. The blockchain is a distributed system. This means there is no centralized organization to maintain and verify the entries in the database. The database is instead maintained by a large number of computers that are incentivized to provide computing resources by earning some form of tokens in exchange. But these computer nodes in the network cannot themselves be trusted individually; this requires that the system provide a mechanism for creating consensus between scattered or distributed parties that do not need to trust each other, but just need to trust the mechanism by which their consensus has been arrived at. Any computer that is connected to the blockchain network and running a client can perform the task of validating and relaying transactions. Each of these so-called miner computers gets a copy of the blockchain, which gets downloaded automatically upon joining the network. When new entries into the database are made, these changes are automatically broadcast across the network. Mining nodes validate transactions, add them to the block they are building, and then broadcast the state of the completed block to other nodes on the network. In order to randomize the processing of blocks across the nodes, and to avoid certain service abuses, blockchains use various timestamping schemes, such as proof of work. Proof of work describes a system that requires a certain amount of resources or effort to complete an activity; typically this resource is computing time, as in the case of the Bitcoin blockchain. This is realized as some form of challenge, such that no one actor on the network is able to solve the challenge consistently more than everyone else on the network. Miners compete to add the next
block in the chain by racing to solve a very difficult cryptographic puzzle. The first to solve the puzzle wins the lottery, and as a reward for his or her efforts the miner receives a small amount of newly minted bitcoins and a small transaction fee. In essence, this algorithm, like Bitcoin's proof of work, functions to ensure that the next block in the blockchain is the one and only version of the truth, and it keeps powerful adversaries from derailing the system. Blockchains are trying to create a secure, trusted, shared database, and they do this through encryption and hashing, proof of work, and network consensus. The hashing and linking of blocks makes it difficult to go back and change a previous block once it has been entered, but this alone would not suffice to ensure that the data is truly tamper-proof. So the proof-of-work system intentionally makes it computationally more difficult to alter the database, thus making it extremely difficult to alter all the blocks. On top of this, it places the distributed consensus mechanism, so that even if someone did manage to do this, their record would not match that of the others and thus would not be accepted as a valid record. So, to successfully tamper with the blockchain, you would need to alter all the blocks on the chain, redo the proof of work for each block, and take control of more than 50% of the peer-to-peer network; only then would your altered block become accepted by everyone else. On a blockchain of almost any size this would be almost impossible to do. Indeed, the Bitcoin blockchain is very good proof of this, given that it now secures hundreds of billions of dollars using this method without the network having yet been compromised. At the end of the day, what this technology enables is a database that is secured with automatic trust, enabled by open-source code and encryption. The data is tamper-proof: once information is put into the database, it cannot be altered afterwards. It is a shared database, as many people across a network
have a copy which is continuously being updated, so that all have a single source of truth. Likewise, it is transparent, meaning that everyone can see all of the transactions and alterations made to the database if needed. Data quality and the resilience of the network are maintained by massive database replication across many different nodes on the network; no centralized official copy exists, and no user is trusted more than any other. Having started out life as simply a mechanism to enable Bitcoin, it has become increasingly recognized that this system is secure enough to work as a ledger for the recording and exchange of any value, what we now call a distributed ledger. Over the past years blockchain has been evolving fast, from the original Bitcoin protocol, to the second-generation Ethereum platform, to today, where we are in the process of building the third generation of blockchains. In this evolution we can see how the technology is evolving from its original form, as essentially just a database, to becoming a fully fledged, globally distributed cloud computer. In this video we're going to trace the past, present, and future of blockchain technology. The first blockchain was conceptualized in 2008 by an anonymous person or group known as Satoshi Nakamoto. The concepts and technicalities are described in an accessible white paper termed "Bitcoin: A Peer-to-Peer Electronic Cash System." These ideas were then first implemented in 2009 as a core component supporting Bitcoin, where it served as the public ledger for all transactions. The invention of the blockchain for Bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted
authority or central server. It was only later that we came to really separate the concept of the blockchain from that of its specific implementation as a currency in Bitcoin. We came to see that the underlying technology had a more general application beyond digital currencies, in its capacity to function as a distributed ledger tracking and recording the exchange of any form of value. The Bitcoin design has been the inspiration for other applications and has played an important role as a relatively large-scale proof of concept. Within just a few years, the second generation of blockchains emerged, designed as a network on which developers could build applications, essentially the beginning of its evolution into a distributed virtual computer. This was made technically possible by the development of the Ethereum platform. Ethereum is an open-source, public, blockchain-based distributed computing platform featuring smart contract functionality. It provides a decentralized Turing-complete virtual machine which can execute computer programs using a global network of nodes. Ethereum was initially described in a white paper by Vitalik Buterin in late 2013, with the goal of building distributed applications. The system went live almost two years later and has been very successful, attracting a large and dedicated community of developers, supporters, and enterprises. The important contribution of Ethereum, as the second generation of blockchains, is that it worked to extend the capacity of the technology from primarily being a database supporting Bitcoin to becoming more of a general platform for running decentralized applications and smart contracts, both of which we'll discuss in coming videos. As of 2018, Ethereum is the largest and most popular platform for building distributed applications; many different types of applications have been built on it, from social networks to identity systems to prediction markets and many types of financial applications. Ethereum has been a major
step forward, and with its advent it has become ever more apparent where we are heading with the technology, which is the development of a global distributed computer: a massive, globally distributed cloud computing platform on which we can run any application at the scale and speed of today's major websites, with the assurance that it has the security, resilience, and trustworthiness of today's blockchains. However, the existing solutions that we have are extremely inefficient computers; the existing blockchain infrastructure is like a really bad computer that is not able to do much except proofs of concept. Getting to the next level remains a huge challenge, one that involves some original and difficult computer science, game theory, and mathematical problems. Scalability remains at the heart of the current stage in the journey that we are on, and this is what the third generation of blockchain technologies is trying to solve. The mining required to support the Bitcoin network currently consumes more energy than many small nations, being equal to that of Denmark, and costing over 1.5 billion dollars a year in electricity. A lot of this is being fueled by cheap but dirty coal energy in China, where almost 60% of the mining is currently being done. This high energy consumption is simply not scalable to mass adoption. Ethereum and Bitcoin use a combination of technical tricks and incentives to ensure that they accurately record who owns what without a centralized authority. The problem is that it is difficult to preserve this balance while also growing the number of users. Currently, blockchain requires global consensus on the order and outcome of all transfers. In Ethereum, all smart contracts are stored publicly on every node of the blockchain, which has its trade-offs. The downside is that performance issues arise, in that every node is calculating all the smart contracts in real time, which results in slow speeds. This is clearly a cumbersome task, especially since the total number of transactions is increasing approximately every 10 to 12 seconds with each new block added. The volume of transactions is likewise an existing constraint. With cryptocurrency, speed is measured in TPS, transactions per second. The Bitcoin network's theoretical maximum capacity is up to seven transactions per second, while the Ethereum blockchain, as of 2018, can handle about 15 transactions per second. By comparison, a credit card network is capable of handling more than 20 thousand transactions per second. Equally, Facebook may have about 900 thousand users on the site in any given minute, meaning that it is handling about a hundred and seventy thousand requests per second. Another issue is that of cost: the fact that it costs some small amount to run the network, so as to pay the miners for maintaining the ledger. What we have is okay for a limited number of large transactions, such as sending money, but making a small transaction, like purchasing a coffee, could not be done on most blockchains. They simply cannot, in their existing form, deal with a very large amount of micro-transactions, such as will be required to enable high-volume machine-to-machine exchanges; it would prove too expensive to operate these kinds of economies that involve many small exchanges. But this is exactly what many people will want to use the blockchain for in the future. In response to these constraints, a third generation of blockchain networks is currently under development. Many different organizations are
currently working on building this next-generation blockchain infrastructure; such projects include Dfinity, NEO, EOS, IOTA, and Ethereum itself. They are each using different approaches to try to overcome existing constraints. Going into the details of how these different networks work is a bit advanced for this course, so we will just give a brief overview of two of them. The Lightning Network is one such project that seeks to extend the capacities of existing blockchains. The main idea is that small and insignificant transactions do not have to be stored on the main blockchain; this is called an off-chain approach, because small transactions happen off of the main chain. It works by creating small communities wherein transactions can take place without each of those transactions being registered on the main blockchain. A payment channel is opened between a group of people, with the funds being frozen on the main blockchain; those members can then transact between each other, using their private keys to validate the transactions. This is a bit like having a tab, or an IOU with the shop, where you just mark down what you have exchanged, so that you don't have to update the main record in the bank each time you make a purchase; the record stays local between the members involved before, at some point, settling the finances and updating the main bank record. This only requires two transactions on the main blockchain: one to open the transaction channel and one to close it. All other transactions happen just within the network, without being registered on the main blockchain. This both reduces the workload on the main blockchain and makes it possible to run very many very small transactions within the sub-network. As of the start of 2018, there is a proof of concept running live on the Bitcoin testnet, but the system will not be fully operational until later in the year, as is the case with most of these projects.
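The bookkeeping idea behind such a payment channel can be sketched in Python. This is a toy model under stated assumptions: the class and method names are hypothetical, the "main chain" is just a list, and real channels (such as Lightning's) additionally use signed commitment transactions and on-chain scripts, none of which is modeled here.

```python
class PaymentChannel:
    """Toy model of an off-chain payment channel: exactly two on-chain
    entries (open and close) bracket any number of local updates."""

    def __init__(self, main_chain, alice_deposit, bob_deposit):
        self.main_chain = main_chain
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        # Opening the channel is the first on-chain transaction:
        # the deposits are frozen on the main chain.
        self.main_chain.append(("open", dict(self.balances)))

    def pay(self, sender, receiver, amount):
        # Off-chain update: only the local tally between the members
        # changes; nothing is written to the main chain.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self):
        # Settlement is the second on-chain transaction, recording
        # only the final balances.
        self.main_chain.append(("close", dict(self.balances)))

main_chain = []
ch = PaymentChannel(main_chain, alice_deposit=100, bob_deposit=100)
for _ in range(50):          # fifty micro-payments, all off-chain...
    ch.pay("alice", "bob", 1)
ch.close()
print(len(main_chain))       # 2: one entry to open, one to settle
```

However many payments flow through the channel, the main chain records only the opening and the settlement, which is exactly why this approach reduces the main blockchain's workload.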
IOTA is another example. Whereas existing blockchains are sequential chains, where blocks are added in a regular, linear, chronological order, the data structure of the IOTA system is able to achieve high transactional throughput through parallel operations: the data structure is more like a network than a linear chain, wherein processing and validation can take place alongside each other. The other big difference is that there are no specialized miners in this network; every node that uses the network functions as a miner. In the IOTA network, every node making a transaction also actively participates in forming the consensus; that is to say, everyone does the mining. This means that there is no centralization of mining within the network, which is what creates bottlenecks and demands lots of energy. Likewise, with this network there are no transaction fees for validation, and, because IOTA is more user-generated, the more people that use the network the faster it becomes, which is the opposite of existing systems and obviously makes it very scalable. There are lots of other possible approaches to overcoming existing constraints, but suffice to say the blockchain should be understood as an emerging technology whose existing implementation is like a large-scale proof of concept running on a very inefficient system, but which, through lots of experimentation and iteration, will hopefully in the coming years evolve into this global distributed computer. As Melanie Swan writes in her book: first there was the mainframe and PC (personal computer) paradigm, and then the internet revolutionized everything; mobile and social networking were the most recent paradigm; the current emerging paradigm for this decade could be the connected world of computing relying on blockchain cryptography. To understand this better, in the next section we're going to talk about the blockchain in the context of the broader technological changes that are currently underway as we build the next generation of the
Internet, what we call the decentralized web, or Web 3.0. How we understand the blockchain, and where we are with it today, is extremely transitory. In this respect, what we're talking about in this course when we talk about the blockchain is really this emerging IT infrastructure of a distributed global cloud computer. The next generation of blockchains will take us a step further on that journey; what we call the blockchain today is really just a very limited, and often very inefficient, version of this. We still have many very difficult problems to solve before we get there. Possibly that end stage will look something like the blockchain of today, but possibly it will look very different. People make big claims about the potential of the blockchain to revolutionize the foundations of social and economic organization, but the blockchain can only have such potential as part of a broader ecosystem of technologies that are emerging as the next generation of the
Internet, what may be called Web 3.0 or the decentralized web. Today, powerful technological changes are coalescing to take us into a new technology paradigm. These include the rise of advanced analytics, coupled with datafication and the Internet of Things. The blockchain will have to work synergistically with these if its true capacities are to be realized. But to understand this next-generation web, we need to understand a bit about the history of the
Internet. Going back to the early 90s, Web 1.0 was the first generation of the World Wide Web. It was based primarily on the technology of HTTP, which worked to link documents on different computers and make them accessible over the
Internet. HTML was then used to display these documents, so that any connected computer with a browser could access and read a web page. This first iteration of the web was all about information, as it enabled us to exchange information much more efficiently, and hence it got the name "information superhighway." Even though it was a revolution in information exchange, content creators were few, with the vast majority of users simply acting as consumers of content; it was really very static and lacked interactivity. Whereas in the Web 1.0 era people were limited to passive viewing of content, Web 2.0 websites allowed users to interact, collaborate, and become the creators of content. With Web 2.0, people could not only read from the web but also write to it, and thus it got the nickname "the read-write web." By the early 2000s, new server-side scripting technologies such as PHP enabled developers to easily build applications where people could write information to a database, with that information then being dynamically updated every time they refreshed the page. Almost all of the websites that dominate the web today are based on this server-side scripting technology. It gave us social networking, blogging, and video sharing: YouTube, Facebook, and all the other large platforms that most people spend most of their internet time using. The idea of Web 3.0 has been around for a while, but it is only very recently, with the development of the blockchain, that it has actually started to become something real. Web 2.0 has evolved to become highly centralized around very large platforms running out of ever larger data centers, creating many issues surrounding security, privacy, control, and the concentration of power in the hands of large enterprises. It is only today that these issues are starting to enter into mainstream discourse. Web 3 is set to disrupt this whole technology paradigm, as the critical change that is coming about is decentralizing the web. The blockchain provides the protocols and
cryptography for a globally distributed network of computers to collaborate on maintaining a public, secure database, and with a virtual machine like Ethereum we can run code on this, creating a new set of distributed applications. These new technologies of the blockchain, IPFS, and the distributed web enable us to reconfigure the internet into a distributed global computer, so that we're no longer dependent upon the web platforms and data centers of Web 2.0 to run the internet, but can now build and run applications on this shared global computing infrastructure.

As one commentator from the Institute for the Future noted, it starts with the realization that the internet we know today is only one possible interpretation of the original vision of an open peer-to-peer network, independent of any centralized technology, commercial entity, or sovereign government. Think of it as a first version of the internet, one that is increasingly vulnerable to abuse and even collapse. To date we've largely taken the infrastructure of the internet for granted, as all of the innovation and action has been focused on the application layer that sits on top of it, on web applications like social networking or e-commerce. With the development of blockchain, and particularly with this third generation, we're starting to innovate on the low-level protocols, asking not if we can build a better web application but if we can build a better internet.

The implications of the decentralized web are indeed radical, in that it enables us to create automated services, disintermediate existing incumbents, and let people set up their own secure networks of exchange, empowering them in new ways. The blockchain will be a core part of Web 3.0, but the next-generation internet will also see the convergence of the
Internet of Things and big data analytics. The ongoing fundamental process of datafication will be a key aspect of this next-generation internet. As we increasingly instrument our world, data will flow from all sources about everything. Datafication is the term given to our newfound ability to capture as data many aspects of the world and our lives that have never been quantified before. This process results in what we call big data: vast amounts of unstructured data that can be mined by advanced analytical methods to gain new insight into the world around us.

This is important with respect to the blockchain for three reasons. Firstly, it means we'll have a lot of sensitive data that we want to store. Secondly, we will be quantifying, accounting for, and exchanging all sorts of value that we did not, or could not, in the past. Thirdly, such a diversity of sources of data, combined with advanced analytics that can find cross-correlations and patterns within it, can provide a new way to verify the data being input to the blockchain without depending on a centralized
authority for validation. The next-generation internet will also be much smarter: whereas Web 1.0 was dumb and Web 2.0 was dynamic, Web 3.0 will incorporate various aspects of machine learning and cognitive computing as a service, as these become infused into almost all applications, making the web truly adaptive, responsive, and personalized. And whereas Web 1.0 and Web 2.0 were largely about people exchanging information, in Web 3.0 machines will come online and the internet will become something much more physical, as billions of devices and actuators connect to all sorts of things, from tractors to watches to factories and drones, enabling them to interact and coordinate machine to machine.

The value of the Internet of Things (IoT) will not be in making one device or system smart; it will be in enabling seamless processes across systems. This will require open networks that can communicate and coordinate components on demand across domains, organizations, and systems. The vision of IoT is not to have our lives populated with thousands of smart things, but instead to change our world from discrete things to service processes. To do this, these technologies will have to communicate securely peer-to-peer, dynamically allocating resources, and this will require some kind of distributed, secure infrastructure like the blockchain, and likewise micro-economies.

This ties in with the broader process of change that comes about as we move into a services economy, called servitization: the shift from products and the ownership of things to the access of services on demand. For example, instead of owning a car, you simply have access to a car-sharing service. This economy of temporary usage via services requires the formation of frictionless markets and automated exchanges that the blockchain is well suited to support, as we'll discuss in a future video.

These components of the next-generation internet, the blockchain, the
Internet of Things, and advanced analytics, are each of them very powerful technologies that will have a profound effect on society. They will take us much further into this new world of the information age, as power shifts in a radical way from people and hierarchical institutions to automated networks and the algorithms that coordinate them. In the coming decades, more and more of our systems of organization will move to the Internet, and it will become vastly more complex than today. In Web 1.0 and Web 2.0 we developed the internet from small to large through a client-server architecture which centralized the web around large data centers. But the internet after datafication, once all these IoT devices have come online, will not be large; it will be more like infinite. You can get from small to large by centralizing, but you get from large to infinity through distributing, and that's what the blockchain can do for this next-generation
Internet. Technologies are just tools that enable us to do things; the interesting part of the blockchain is really what it enables in terms of new forms of distributed organization. As one commentator noted, the blockchain is an institutional technology, not an information technology, and there's an enormous difference between the two: institutional revolutions are things that don't happen very often. Blockchain technology enables new forms of networked, distributed organization, something that runs very much contrary to our existing organizational paradigm, and that's what makes it somewhat difficult for us to understand.

The organizational model of the Industrial Age that we inherited was one of centralization, in order to achieve economies of scale through mass production, thus reducing unit costs and providing for a mass society. The technologies of the Industrial Age selectively favored centralization of production within closed hierarchical organizations: manufacturing in centralized factories, transport systems centralized around transport hubs, education within schools and universities, entertainment centralized within mass media organizations, governance centralized within state-run organizations, and so on.

The information revolution is in the process of taking us into a new world of distributed networks as the organizational paradigm of the Information Age. The combination of telecommunications networks and computerized coordination enables us to replace centralized management within closed hierarchies with open networks. As the underlying technology matures, we're able to convert more and more systems that were previously closed and centralized and have them managed through automated networks, and the blockchain is just one more stage in this process. We saw this with the rise of online platforms like eBay, Uber, or
Alibaba, along with social networks, blogging, and so on, which built massive networks of users exchanging goods and services. But these platforms were still dependent upon a centralized organization to manage the shared database, the computing infrastructure, the algorithms, and the financial transactions, and to enable trust and authentication in the network. The decentralized web takes these platforms a step further by offering a shared, open, and secure database that can be trusted by all parties, and a set of protocols for the secure exchange of value between organizations and individuals, peer-to-peer.

The web platforms are open networks; this means they do not just optimize within a given organization but can enable coordination across whole industries, and indeed this is why and how they're quickly supplanting the closed organizations. What centralized organizations enabled was trust, cooperation, and coordination within organizations; what the blockchain enables is trust, coordination, and cooperation between organizations and between individuals.

If we look at how our society and economy is currently organized, we'll see many closed organizations that are internally optimized, but when we look at the inter-organizational space, it's extremely inefficient along many dimensions. If we look at the way businesses coordinate along a supply chain, or the way nation-states interoperate in the global political system, we will see there is huge redundancy and friction caused by discontinuities. A classic example of this is the border system between nation-states and the bureaucratic procedures for obtaining a visa to move from one nation to another, which creates a massive amount of friction in the inter-organizational space; and it's because those organizations don't have an effective inter-organizational infrastructure for collaboration and coordination. This greatly reduces the overall effectiveness of these systems and the delivered outcome for the end user. When we look internal to these organizations, they look like efficient, well-oiled machines, but when we look between them, the whole space is very inefficient and very ineffective at delivering overall outcomes.

This is part of the significance of the blockchain: because it provides a shared, trusted database between organizations, it has the potential to shift the dynamics within the economy and society from competition between closed, centralized organizations to collaboration between organizations, greatly strengthening working capacities across organizational boundaries. The result of this would be much more efficient overall societal and economic outcomes. Indeed, we can note that achieving coordination across organizations could result in quantum leaps in delivering outcomes and in our capacity to tackle major global challenges of today, such as environmental degradation, where weak existing inter-organizational institutional infrastructure has gained little traction. In this respect, the blockchain has the potential to give us not just incremental improvements but an order of magnitude greater capacity within society, through collaboration within whole ecosystems. It is precisely this coordination across organizations, industries, nations, and people that is required to provide the resources needed to tackle some of today's most complex challenges, and it's precisely this that is significantly absent within existing institutional structures, because of the centralizing forces prevalent within the industrial age.

We live in societies that are operated by many different closed organizations: many different companies all producing cars and competing for market share, many different governments that all focus on the interests of their citizens over those of others, many different healthcare providers, transport providers, et cetera. The result of this, though, is a huge amount of inefficiency and redundancy when taken as a whole: many different companies all reinventing the wheel within their own organizations and expending huge amounts of time and energy on trying to get ahead of their peers. We assume that this is the normal state of operations, that it's just human nature in some way, but in fact it's really just a function of the institutional structures we have built over the past centuries. As game theory will tell us, people respond to the incentives and the socio-economic forces acting on them. In the absence of cooperative structures, competition is often the optimal strategy for individuals and organizations, but once there are institutional structures to enable trust and coordination between members, cooperation can become a much more viable strategy for the agents involved. Because the blockchain enables a shared and trusted database that doesn't belong to any single organization, it makes it greatly more possible and viable for organizations to collaborate on a single solution or single source and achieve much better results for each organization and the economy as a whole.

As an example of this cooperation across different closed organizations, we could think about the design and construction of a building. There may be many different companies involved in this process, all creating their own designs and diagrams for the building, with each having to continuously contact the others to access, exchange, and cross-reference all this information. Given a single shared database, they could instead collaborate on a single design of the building, making for a greatly more efficient overall process, while at the same time each organization would benefit; thus the overall result is more efficient.

Likewise, the same is true for identity. We currently have many different copies of our identity spread across many different organizations: governments, social networks, employers, etc. But each of these only has a partial understanding of us; we're reinventing the wheel for each organization, while data and reputation do not move well between them. Instead, a single identity could be created on the blockchain that belongs to the individual, with each organization then contributing to this data as they work together to create a more complete record of identity and reputation. In so doing, we've moved from all those frictions between these closed organizations to collaboration and synergies between them, creating something that is greater than any of the parts they had before.

The same goes for a supply chain. Instead of each participant holding their own documents and records during each stage in the supply chain, a single record for the item could be created on the blockchain, with each organization then contributing their information to it, creating a single source of truth that is accessible to all who need it, while at the same time being more secure than having separate records in each centralized database. The end result of this new institutional technology is a much greater capacity for inter-organizational collaboration, and powerful ecosystems that are greater than the sum of their parts. Given that all our current systems of organization that are centralized could be decentralized in this fashion using blockchain technology, we can see how it really could transform every organization in society and the economy.

Organizations within society rarely operate in isolation; they function as parts of ecosystems, and the value for society is not created by any one of them but instead by the flow of value across the ecosystem. It's not
Apple that delivers our iPhones; it's a massive global supply chain of hundreds of different organizations collaborating. Whereas our previous institutional structures optimized for individual organizations, the blockchain optimizes for the value within the whole ecosystem, and thus potentially a much greater value for society as a whole.

The information revolution is changing the world from disconnected to connected. The genie of hyper-connectivity is out of the bottle, and connectivity along virtually all dimensions is proliferating daily. As a consequence, our systems of organization will change from being based around fixed structures and boundaries to being coordinated via connections. Instead of controlling components through fixed hierarchical structures, organization will emerge out of interaction and the exchange of value along those connections. Enabling that will require a massive build-out of secure, frictionless information networks: this global cloud computer of the blockchain.

To understand better how this shared database that enables inter-organizational collaboration works, we'll talk about distributed ledger technology. As we've been saying, the blockchain is like another layer to the internet that enables secure, trusted records and transactions between people who may not otherwise trust each other. The trust is in the technology, the computer code, and the mathematics, rather than in people and centralized institutions. In this respect, people sometimes talk about the blockchain as a "trust machine," in its capacity to enable a network where trust is created by design: it's built into the system automatically.

Because the blockchain creates a trusted database, it can function as a record for value storage and exchange. These records of value and transactions may be called ledgers. Since ancient times, ledgers have formed the backbone of our economies, recording contracts and payments for the buying and selling of goods or the exchange of assets like property. These ledgers started out as records on stone, clay tablets, and papyrus, and later paper, as they evolved into the ledger books supporting modern accounting. These ledgers enabled the formation of currencies, trade, lending, and the evolution of banking. Over the last couple of decades, though, these records have moved into the digital realm, as whole rooms of people working to maintain accounts have been replaced by digital computers, which have made possible the complex global economic system we live in today. This record-keeping system is once again being revolutionized, as these ledgers shift to a global network of computers that is cryptographically secure, fast, and decentralized: what we call a distributed ledger, or distributed ledger technology (DLT) for short. A distributed ledger can be described as a ledger of any transactions or records, supported by a decentralized network spanning different locations and people, eliminating the need for a centralized
authority. All the information on the ledger is securely and accurately stored using cryptography and can be accessed using keys and cryptographic signatures. Any changes or additions made to the ledger are reflected and copied to all participants in a matter of seconds or minutes. The participants at each node of the network can access the records shared across the network and own an identical copy of them at the same time. These networks make constantly available for examination a full audit trail of the information's history, which can be traced back to the moment when a piece of information was created, and every participant in the network gets simultaneous access to a common view of the information.

These ledgers can be used for the recording, tracking, monitoring, and transacting of all forms of assets: all asset registries, inventories, and exchanges, including every area of economics, finance, and currencies; physical assets such as cars and houses; and intangible assets such as votes, ideas, health data, reputation, etc. In this case, the blockchain can serve as a public record repository for whole societies, including the registration of all documents, events, identities, and assets. In this system, all property could become smart property. This is the notion of encoding every asset on the blockchain with a unique identifier, such that the asset can be tracked, controlled, and exchanged on the blockchain. For example, distributed ledgers could be used to replace or supplement all existing intellectual property management systems, as they can register the exact content of any digital asset, such as a file, image, health record, or code, to the ledger and give it a unique identifier in the form of the hash values that we discussed earlier.

There are two main classes of distributed ledger: public ledgers and permissioned ledgers. The former type is maintained by public nodes and is accessible to anyone. Bitcoin is a well-known example of a public blockchain, where anyone can read the chain, anyone can make legitimate changes, and anyone can write a new block into the chain. Ripple is an example of a permissioned blockchain, where the creators of the network determine who may act as transaction validators on that network. Distributed ledger platforms in each category have their own unique features; some are designed for specific types of application and others for more general use. For instance, on the Corda DLT platform, which is developed by a consortium of more than 70 of the world's largest financial institutions, the sharing of individual ledger data is limited to parties with a legitimate need to know, which is not the case for public platforms.

DLT can have a powerful disintermediation effect: as data can be put directly onto the shared database by the nodes in the network, there is no longer a need for a centralized organization to provide this service. A developer can create a DLT on a blockchain and use public/private key cryptography to give people secure storage space on that ledger, allowing people to own their own data, which creates a very different scenario to the world we live in today. Currently, centralized organizations like Google and Facebook suck up all of the little bits of data we leave behind us and use them to serve us customized advertisements, from which they create their revenue. This results in a huge power imbalance within society, where centralized organizations, armed with teams of mathematicians and computer scientists, use mountains of data to influence people's behavior towards purchasing the products of their advertisers. Data that is a very valuable asset in the information society, and of critical importance to tackling major societal challenges, is being used against us in many ways, creating a problem that societies are becoming increasingly aware of. In a world of distributed ledgers, people have their own little databases on the blockchain and can own their own data, giving it to organizations to use when and where needed, fundamentally reversing the current
dynamic and truly empowering individuals. Your health records reside in your health ledger, and different healthcare providers can access and update that single record, but only with the permission of the end user, as the data remains theirs and they choose who can have access to it.

Likewise, when people own their own data on a distributed ledger, they can transact directly, peer-to-peer, as is the case with Bitcoin. With the existing traditional system, when you pay for a ride in a taxi with a credit card, it looks like you're paying the driver directly, when in fact what is happening is that a database record belonging to your bank is being debited and a database record belonging to the bank of the company that the driver works for is being credited. In this respect, we can note that in our society, value and data do not really belong to individuals; all the time, they're being held behind the walls of some centralized organization, and we are dependent upon that organization to secure and validate them, creating huge power imbalances within society. In contrast, with the Bitcoin blockchain, the individual has a ledger record and a secure key with which they can access their records. When they send money, they send it directly to the other person's record: it simply gets debited from your record and added to theirs, directly, peer-to-peer; no centralized organization holds that data.

Distributed ledger technology can greatly improve transparency, reduce corruption, and improve security, while reducing the overhead costs of auditing, accounting, and legal issues. Currently, records of value are hidden within the databases of centralized organizations, where they're largely inaccessible for their many possible uses within other systems. They are open to manipulation by members within those organizations, which breeds corruption, and because of that, there has to be all sorts of regulation and legal requirements that create many overhead costs. And they are centralized points of failure for critical data sources, as large concentrations of valued data prove very attractive for malicious actors. Likewise, it is inefficient to be constantly updating and synchronizing data across many centralized databases. By putting the information on a shared ledger, it can easily be made accessible and visible on demand as needed; because it is tamper-proof, we can remove many existing points of corruption and the associated need for regulation; and it is made secure by distributed networks without a single point of failure, continuously synchronized across all nodes to create a single source of truth for all users.

One of the key technology innovations of second-generation blockchains has been the development of what are called smart contracts. Smart contracts are computer code, stored inside a blockchain, which encodes contractual agreements. Smart contracts are self-executing, with the terms of the agreement or operation directly written into lines of code stored and executed on the blockchain computer. A contract in the traditional sense is a binding agreement between two or more parties to do or not do something, where each party must trust the other parties to fulfill their side of the obligation; it is a written or spoken agreement that is intended to be enforced by law. A multiplicity of different contractual agreements forms the institutional foundations of our modern society and economy, which have evolved since ancient times. If we think about something as seemingly simple as a cafe serving a cup of coffee, we will see that this process is really enabled by a massive number of contractual agreements between different parties that enable them to cooperate in delivering that outcome: contracts between employees and the employer of the coffee shop, contracts that provide workers with health coverage, contracts that insure the coffee shop, contracts between suppliers along the supply chain, contracts between property owner and tenant, etc. Our economies are powered by a massively complex set of
contractual agreements that are currently created and enforced by centralized organizations like insurance companies and banks, which are themselves supported by the ultimate centralized authority in the system: the nation-state. Our societies and economies are almost completely dependent upon third-party organizations to maintain and enforce those contractual agreements. Smart contracts feature these same kinds of agreements to act or not act, but they remove the need for a trusted third party between the members involved in the contract. This is because a smart contract is both defined by the computer code and executed, or enforced, by the code itself, automatically and without discretion. As such, blockchain smart-contract technology can remove the reliance on centralized systems and enable people to create their own contractual agreements that are automatically enforced and executed by the computer code. These smart contracts are decentralized in that they do not subsist on a single centralized server but are distributed and self-executing across a network of nodes. This means that untrusted parties can transact with each other in a much more fluid fashion, without depending upon third parties to initiate and maintain the rules of the transaction. Likewise, smart contracts enable autonomy between members, meaning that after it is launched and running, a contract and its initiating agents need not be in further contact.

One illustration of this concept is a vending machine. Unlike a person, a vending machine operates algorithmically: you provide the inputs of money and product selection, which the machine takes and simply executes a rule on, automatically, to produce the pre-specified output. The same instruction set will be followed every time, in every case: when you deposit money and make a selection, the item is released. There is no possibility of the machine not wanting to, or not feeling like, complying with the contract, or only partially complying, as long as it's functional.

As another example, we can think about a situation where four people pool their money to make a joint investment that will return interest to them. A smart contract could be programmed on the blockchain to take any interest that is created, divide it into four, and send each amount to the corresponding wallets of the different stakeholders. A smart contract is then really just an account on the blockchain that is controlled by code instead of by a user. Because it's on the blockchain, it is immutable; that means the code cannot be changed, and thus all participants in this investment can be assured that they will get their fair share automatically. The code dictates how the process will take place, and no individual has the power to change it; no individual, no organization, no government can censor, alter, or manipulate the contract. In this respect, it's often said that "code is law," in the sense that the code will execute no matter what.

Of course, computer code has for a while now been acting as the law. For example, as services have gone online, we're increasingly faced with web forms that strictly control what inputs are allowed: if you want to buy an item on iTunes USA, then you'll have to have a credit card with a US address, and the system will enforce this by not letting you complete the purchase with an incorrect address. As another example, a logistics company could use smart contracts to execute code that says: if I receive cash on delivery at this location, then trigger a supplier request to stock a new item, since the existing item was just delivered.

A combination of smart contracts with blockchain-encoded property gives us the idea of smart property. Smart property is simply property whose ownership is controlled via blockchain-encoded contractual agreements. For example, a pre-established smart contract could automatically transfer the ownership of a vehicle title from the holding company to the individual owner when all the loan installments have been cleared. The key idea of smart property is controlling ownership of and access to an asset by having it registered as a digital asset on the ledger and connecting that to a smart contract. In some cases, physical-world hard assets could quite literally be controlled via the blockchain. One example of such an IoT blockchain system is Slock.it: a door lock that is connected to a smart contract on the blockchain, which controls when, and by whom, the lock can be opened. This enables anyone to rent, sell, or share their property without need of a middleman. With such innovations, parking spots can be sublet on demand.
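The logic of such a smart-contract lock can be sketched as a small state machine in plain Python. This is only an illustration of the idea, not real on-chain code: the class, the rate, and the deposit rule are all hypothetical stand-ins for logic that would really live in a contract on a platform like Ethereum.

```python
class SmartLock:
    """Toy model of a blockchain-controlled rental lock.

    All names and rates are hypothetical; on a real platform this logic
    would live in an on-chain smart contract, not a Python object.
    """

    def __init__(self, rate_per_hour, deposit):
        self.rate_per_hour = rate_per_hour
        self.deposit = deposit
        self.escrow = 0           # deposit held automatically by the contract
        self.rented_until = None  # timestamp the rental is paid up to

    def pay(self, amount, now):
        """Escrow funds; paid-up time is extended by a deterministic rule."""
        if self.rented_until is None:   # first payment: take the deposit
            self.escrow = self.deposit
            amount -= self.deposit
            self.rented_until = now
        self.rented_until += (amount / self.rate_per_hour) * 3600
        return self.rented_until

    def can_open(self, now):
        """The lock opens only while paid for: code, not a clerk, decides."""
        return self.rented_until is not None and now <= self.rented_until


# Renting a bike lock at 2 units/hour with a 10-unit automatic deposit:
lock = SmartLock(rate_per_hour=2, deposit=10)
lock.pay(14, now=0)                  # 10 goes to deposit, 4 buys 2 hours
assert lock.can_open(now=3600)       # still paid for after 1 hour
assert not lock.can_open(now=8000)   # shuts itself off once unpaid
```

Note that once the paid-up time expires, `can_open` simply starts returning `False`; the rule executes with no regard for circumstances, which is exactly the strength, and the limitation, of this kind of contract.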
Airbnb accommodation could become fully automated, or someone with twenty bikes in Bangladesh could rent them out. With smart-contract locks, a bike could shut itself off if it has not been paid for or if it's stolen; there could be an automatic deposit system; or, if the person wanted, they could simply pay a certain price to purchase the bike at any time.

Like all algorithms, smart contracts require input values and only act if certain predefined conditions are met. When a particular value is reached, the smart contract changes its state and executes its programmatically predefined algorithms, automatically triggering an event on the blockchain. Thus, the workings of the overall contract can only be as good as the input data: if bad data is fed into the system, then false results will be output. Blockchains cannot access data outside of their network, and thus require some form of trusted data feed as input to the system, what may be called an oracle. An oracle is a data feed provided by an external service and designed for use in smart contracts on the blockchain.
Oracles provide external data and trigger smart contract executions when predefined conditions are met. Such conditions could be any data: the weather temperature, the quantity of items in stock, the completion of a successful payment, changes in prices on the stock market, etc. An oracle, in the context of blockchains and smart contracts, is then an agent that finds and verifies real-world occurrences and provides this information to a blockchain to be used by smart contracts.
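The oracle pattern can be sketched in plain Python. Everything here is illustrative: the class names and the sports-betting scenario are made up, and a real deployment would involve a smart-contract platform and a signed external data feed rather than two in-memory objects.

```python
class TrustedScoreOracle:
    """Stands in for a trusted third-party data feed (e.g. a sports channel)."""

    def __init__(self):
        self._results = {}

    def publish(self, match_id, winner):
        self._results[match_id] = winner

    def get_result(self, match_id):
        return self._results.get(match_id)  # None until a result is published


class BettingContract:
    """Toy self-executing contract: pays out as soon as the oracle reports."""

    def __init__(self, oracle, match_id):
        self.oracle = oracle
        self.match_id = match_id
        self.bets = {}        # bettor -> (predicted_winner, stake)
        self.settled = False

    def place_bet(self, bettor, predicted_winner, stake):
        self.bets[bettor] = (predicted_winner, stake)

    def settle(self):
        """Deterministic payout rule, triggered by the oracle's data feed."""
        result = self.oracle.get_result(self.match_id)
        if result is None or self.settled:
            return {}
        pot = sum(stake for _, stake in self.bets.values())
        winners = [b for b, (pick, _) in self.bets.items() if pick == result]
        payouts = {b: pot / len(winners) for b in winners} if winners else {}
        self.settled = True
        return payouts
```

The contract's `settle` rule is deterministic; all of the trust in the system is pushed out to the oracle that supplies the match result, which is exactly why the trustworthiness of that data source matters so much.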
Oracles are third-party services which are not part of the blockchain consensus mechanism; thus, whether it be a news feed, a website, or a sensor, the source of information needs to be trustworthy. As an example, we could think of an online betting platform based on the blockchain that uses smart contracts to automatically execute payouts to people who have placed bets on sports matches. The smart contract system would then have to be connected to a trusted oracle to provide it with the scores of the matches. At present, this oracle would likely have to be associated with some trusted third-party centralized organization, like a sports channel, or Bloomberg for stock prices. In the future, however, through datafication and pervasive IoT sensing, this might also be automated through the use of advanced analytics: automated oracles that draw data from a myriad of sources and use complex analytics to find cross-correlations that provide a statistical assurance that, for example, a given event occurred or did not occur.

The advantages of smart contracts are numerous. Firstly, they are automatic, which could remove the time and costs associated with managing and enforcing them, making them more efficient, as they can be cheaper and faster to run. Through this form of automation, a much greater amount of exchange could take place that otherwise would never have happened. In such a way, we can see how distributed ledgers and smart contracts are key parts in enabling a true services economy, where ownership is displaced by temporal usage through the on-demand provisioning of services. Secondly, they could reduce corruption, as code is both transparent in its workings and automatically executed; this leaves less room for individuals or organizations to alter it to their advantage. Thirdly, they can reduce dependency upon centralized organizations, in that people may be able to set up their own contractual agreements peer-to-peer, thus limiting the arbitrary power of centralized organizations. Lastly, they can also deliver certainty, as smart contracts guarantee a very specific set of outcomes that are predetermined beforehand, so all parties know exactly what will happen.

But herein also lie some of their limitations. By automating the execution of a contract, they depend upon formal rules with well-specified inputs and leave little room for the multiplicity of eventualities where the rules may need to be slightly altered because of unforeseen circumstances. For example, a car that is used on demand and operates through a smart contract may simply shut the user out if they have not paid their bill, taking little account of the fact that it may be a life-or-death emergency. In real-world usage, many unpredictable and unforeseen events occur, and rules sometimes need to be flexible, adapted, or accommodated. This is one advantage of having human oversight, as people are much more capable of judging such circumstances and responding appropriately to complex unforeseen eventualities. So the degree to which we can automate contracts is relative to the kind of environment being operated in, and in more complex situations there will often need to be some form of governing body to intervene when needed; this creates new complications surrounding governance that are still yet to be figured out.

The advent of the Ethereum platform in 2015 provided a virtual computing infrastructure for running applications on the blockchain. This new form of program is called a distributed application, or DApp for short. Ethereum was the first developer platform for building distributed applications: a foundational, general-purpose, blockchain-based platform that is a Turing-complete virtual machine, meaning that it can run any computer code. Although Ethereum was the first and is still the largest platform for building distributed applications, there are now others, such as Blockstack or EOS, all of which provide the underlying infrastructure for building DApps.

Our working definition of a DApp is an application that runs on a network in a distributed fashion, with participant information securely protected and operations executed in a decentralized fashion across a network of nodes. DApps use open-source code, operate autonomously, and have their data and records cryptographically stored on a blockchain. On a technical level, a DApp is very similar to a normal web application, except that, unlike a normal web app whose back-end code runs on a centralized server, a DApp has its code running on a distributed peer-to-peer network. A DApp can have front-end code and user interfaces written in any language, just like a normal app; as such, DApps will often look and feel very much like regular apps, and people will soon be using them in the coming decade without even realizing it. Like all apps, they perform specific functions: whereas Bitcoin provides decentralized value exchange, a decentralized application aims to achieve functionality beyond transactions that merely exchange value. Many types of decentralized apps are starting to emerge as the underlying technology continues to progress, and already many DApps represent alternatives to existing popular web applications.

Probably the most successful DApp to date is Steemit, a blogging and social networking website built on top of the Steemit blockchain database. The general concept is very similar to other blogging websites or social news sites like Reddit, but the text content is saved in a blockchain. Using a blockchain enables rewarding comments and posts with secure tokens of value; in this way, users can earn currency for their posts and comments. Likewise, for existing marketplace applications like eBay and Craigslist, we have a decentralized version, OpenBazaar. OpenBazaar is an open-source project developing a protocol for e-commerce transactions in a fully decentralized marketplace. Because the application connects people directly via a peer-to-peer network, it costs nothing to download and use. Unlike on sites like eBay or Amazon, there are no fees to list items and no fees when an item is sold. OpenBazaar is not a company like eBay but an open-source project; each user contributes to the network equally and is in control of their own storage and private data. Another example is Storj, a decentralized cloud storage application similar to Dropbox. Storj is based on blockchain technology and peer-to-peer protocols to provide a secure, private, and efficient cloud storage system. The application incentivizes storage providers and connects them with those who require storage. Each file saved on the application is encrypted and spread across the network until you are ready to use it again; the keys to the data remain with the owners, meaning the data is not accessible by a centralized cloud provider.

There are many other examples of DApps, but the general concept can be applied to any area that requires secure records and benefits from decentralization. These applications are automated, which means they can operate at very low or even zero cost; because of this, DApps may be used to disrupt the existing platform economy as a whole. Platforms like Uber or Airbnb may eventually be converted into DApps that run automatically without the need for a centralized platform. The advantages of DApps are that they are fully automated, have superior fault tolerance, and offer trustless execution; these decentralized apps potentially represent the next generation of computing.

Because the blockchain is a secure system that enables a trusted network, it is often described as a value exchange protocol. In this respect, people often say that what the web did for the exchange of information, the blockchain will do for the exchange of value. Just as the web revolutionized the use and exchange of information within society, disrupting whole industries based upon the centralization of information, so too is the blockchain set to do the same for the recording and exchanging of all forms of quantifiable value. This idea of value lies at the heart of the blockchain: if there is no value involved in the process, then there is no need for trust and no need to use the blockchain. The vision of the internet of value is for any quantum of value to be exchanged as quickly and as fluidly as multimedia is today on the web. Although multimedia can move around the world almost instantaneously, a single payment from one country to another is slow, expensive, and unreliable, often taking days and involving numerous intermediary third parties to validate and process transactions at significant cost. Thus it is no accident that the first widespread use of the blockchain was for currencies, because currency is the most immediate and obvious form of quantified value within society. However, to truly understand the revolutionary potential of this technology is to appreciate how value and its exchange influence and regulate almost all aspects of human affairs. As a consequence, the control of how value gets defined, measured, and exchanged is a key source of power and control within society, and has been since the origins of civilization. Today, value of almost any kind is defined, quantified, and
regulated by centralized organizations, whether this is a national government creating its own currency, one's role within a hierarchy defining one's economic status, or the branded clothing that we wear to signal our social status and value to others. However, the move into the networked society shifts the locus of organization from closed institutions to individuals and networks, and blockchain technology is a key element enabling this process: by creating a shared ledger where people can own their own data, it also enables a shift in the locus of value within the economy to individuals in networks. In a world of limited connectivity, limited transparency, and limited peer-to-peer trust, it was necessary to have third-party institutions to define, quantify, and authenticate sources of value within society and economy. But in a world of pervasive peer-to-peer connectivity, transparency, and trusted low-cost automated networks, value can be defined through a negotiation between peers within distributed networks. The rise of digital currencies is but one example of this.

The surprising thing for a lot of people is that most major currencies, like the dollar, yen, and euro, are not backed by anything; they are just pieces of metal, paper, and entries in a bank account that get their value from everyone simply believing that they have value and accepting them as a medium of exchange, and that is all that is really necessary. Currencies and money work a little bit like languages: they are subject to network effects that give them value. The more people who agree on and understand a language, the more valuable that language is as a form of communication. Dollars, renminbi, euros, and bitcoins have no intrinsic value; they are all social protocols, and they merely represent a way of supporting the flows of value between individuals. In the past, because of low levels of trust and connectivity, we required centralized institutions like governments and banks to get these value exchange networks started, to support them, regulate them, and maintain them, and this gave those organizations a lot of power. This is a critical aspect that the internet and the blockchain are changing. The blockchain enables us to create trusted and automated peer networks of exchange, which greatly strengthens the capacity of people to negotiate and define value via direct peer-to-peer exchanges. People can now set up their own currencies, with the value of each currency depending simply on what others are willing to pay for it via an automated peer-to-peer network exchange.

But the internet of value is more than just currencies, because value is of course a much broader concept than pure economic utility. In talking about the internet of value, it is important to recognize that on a societal level we are moving into a post-industrial services economy. The traditional conception of what society values is being revisited as a new set of societal and environmental factors re-enters the equation. People are less and less content with the traditional concept of GDP as the sole metric for how well they are doing, and more and more demand actual quality of life, which of course engenders a broader spectrum of values beyond economic utility. Over the past decades, we have increasingly begun the process of tracking and accounting for different forms of value, whether through green bonds, social impact bonds, company loyalty schemes, carbon accounting, or a multiplicity of other forms. Put simply, the erosion and loss of social and environmental capital that occurred during the Industrial Age is generating a recognition and growing awareness of their value, and metrics for how well a society is doing increasingly take account of many more environmental and social parameters in combination with GDP. Along with this recognition of the importance of different forms of value come also the technical means to quantify and exchange them. Through the process of datafication, information technology lets us measure, track, and exchange ever more types of value at ever smaller increments: likes on Facebook, people's attention, carbon emissions, and so on. With the rise of big data and
IoT, we will be quantifying and ascribing value to almost everything, and the blockchain will provide the network infrastructure for tracking and exchanging all these micro and macro quanta of value. This shift from the narrow form of economic value that dominated in the Industrial Age to the broader spectrum of values that emerges within a post-industrial society is enabled by the distributed ledger system, which supports what we call token economies, wherein we can define a token as a measure of any form of value and then build an economy around that token. Token economies and the internet of value build upon the current expansion of digital markets brought about by the rise of the platform economy over the past decades. With web 2.0, we began the process of expanding markets to more and more spheres of life that were previously organized via centralized coordination. After only ten years or so of this process, the biggest accommodation service in the world is no longer a centralized organization like the Hilton; it is now an online market. The same is true for the taxi industry, and the same is true for commerce: with 10 million merchants and 440 million active users, the Alibaba network is now reported as the largest retailer in the world after just 19 years of existence. Markets are complex; they typically require the aggregation of large amounts of information and peer-to-peer interactions. Without the technology, it is much more viable to achieve coordination via a centralized hierarchical model, but as it becomes possible to quantify and account for more and more areas of life, blockchain-based networks will expand the capacities of plug-and-play markets to all spheres of activity: social, economic, technological, and environmental. The
Internet of value will function as the infrastructure for the emergence of the services economy currently taking place within post-industrial economies. The move into a services economy results in the conversion of industrial-age products into services. Whereas the product-based economic paradigm was about the production and consumption of more products, as measured by GDP, a services economy is about value delivered. A service is an exchange of value: you don't get the product, you just get its function and the value that it delivers. As such, all spheres of the economy become redefined, away from the static conception of units of products and towards the more fluid exchange of value. You don't buy a lift to put in your office building; you get it as a service, paying only for the functional value it delivers. In some offices now, they don't even buy the carpets on the floor: the function of the carpet is delivered as a service, and they pay only for the value that is exchanged. The blockchain is a key infrastructure enabling this services economy, which requires a very fluid, dynamic, and automatic tracking and exchange of value. Smart property and smart contracts will form the technological infrastructure powering the services economy, as they operate within large peer networks, automatically allocating resources and processing the financial debits and credits of value exchanges behind the scenes. This huge shift in our economy lets us reconceptualize every industry, to really question what actual value it delivers and then reconstruct it by building token markets around that value, where anyone can participate in the delivery of the service. With web 2.0 and the platform economy, we extended the capacities of markets so that many more people could participate, as exemplified by Uber enabling anyone to operate as a driver. However, these markets were centralized around the platform operators, and they were dependent on traditional currency systems and the financial system for processing transactions. In web 3.0, blockchain applications will function as distributed, automatic, plug-and-play markets where extremely small increments of value can be exchanged directly, peer-to-peer, with very high levels of fluidity. When this is coupled with IoT and data analytics, we will be able to track the real value that things deliver, which will help us to make the much-needed move from our product-based economies to an outcomes economy that better reflects the underlying value being created and exchanged.

With the ongoing revolution in information technology, our economic systems of organization are being transformed and disrupted by the rise of information networks. It started with the advent of the personal computer and the World Wide Web, and with the rise of online platforms the disruptive power of information networks to reshape economic organization became ever more apparent. Today this process of economic transformation continues with a new set of technologies, as we are currently remaking the technology stack of the
Internet, building what is called web 3.0, a primary component of which is the blockchain. The defining feature of this next stage of economic development is that it decentralizes our economy and shifts operations to global information-based networks like never before. The distributed internet technology stack that is currently being built enables a network of computers to maintain a collective database of value, ownership, and exchanges via internet protocols. This bookkeeping is neither closed nor under the control of one party; rather, it is public and available in one digital ledger which is distributed across the network. The most mature example of this is what we call the blockchain. In the blockchain, all transactions are logged, including information on the date, time, participants, and amount of every single transaction. On the basis of sophisticated mathematical principles, the transactions are verified by so-called miners, who run the computing infrastructure required to maintain the ledgers. The technology of web 3.0 enables a new form of decentralized economy, as it removes the dependency on a centralized authority for managing the network, instead replacing it with a distributed consensus model managed by many. This shared, securely encrypted database enables trustless peer-to-peer interactions via new internet protocols: people can begin to set up their own networks for coordination and direct exchanges of value peer-to-peer, and the rules of these transactions can be automated in new ways. At the heart of this system is the distributed ledger, which records the exchanges of value. These distributed ledgers can account for and validate the exchange of any form of value: it may be a currency, it may be property, it may be a kilowatt-hour of energy, the usage of a parking spot, or the number of followers a person has on social media. These distributed ledgers provide the infrastructure for building token economies.

A token is simply a quantified unit of value. Tokens are both generic and fungible: a token is generic in that it can be used to define any form of value, and it is fungible in that it is exchangeable between different specific forms of value. Traditional monetary currencies are not fully fungible, as there are many circumstances in which one cannot exchange a monetary currency for other forms of value; for example, likes on social media may have a certain value but typically cannot be directly exchanged for monetary currencies. A token differs from a traditional monetary currency in that it is more generic. Our existing currencies define a particular type of monetary value, what we call utility, which is based upon the economic logic of the industrial economy, while tokens, because they are more generic, can define a broader set of values: social capital, natural capital, or cultural capital, for example. Natural capital is the integrity of an ecosystem that enables it to function and provide ecosystem services to people. In our traditional economic model, we only quantify and account for the services that the ecosystem delivers, such as food, water, and materials; we do not account for the integrity of the ecosystem that enables it to function. The generic nature of the token means it can be used to account for values such as this natural capital. The capacity to differentiate between different forms of value is made possible by the programmability of token units. Because tokens are digital, they are also programmable, which enables one to specify certain rules for a token and have those rules executed when it is exchanged, thus enabling certain constraints or possibilities in its usage. One can specify that a certain token is only spendable under certain terms, or specify how it can be converted; for example, one could program the token so that it cannot be exchanged for diamonds that are mined in a particular location of the world known for its use of slave labor. In this way, the token is not just a unit of utility but also expresses social values.
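The idea of a transfer rule embedded in the unit of value itself can be sketched in a few lines. The rule below (refusing a tagged counterparty) and all the names are purely illustrative; a real token would enforce such rules in contract code on-chain.

```python
# Sketch of a "programmable token": a transfer rule travels with the
# unit of value, so some exchanges are refused by construction.
def make_token(amount, spend_rule):
    return {"amount": amount, "rule": spend_rule}

def transfer(token, recipient):
    if not token["rule"](recipient):
        raise ValueError("transfer violates the token's embedded rule")
    return {"to": recipient["name"], "amount": token["amount"]}

# Hypothetical rule: this token may not be exchanged with parties
# flagged for unethical sourcing (the flag would come from a registry).
def ethical_rule(recipient):
    return not recipient.get("flagged", False)

token = make_token(100, ethical_rule)
print(transfer(token, {"name": "certified_jeweler"}))
# → {'to': 'certified_jeweler', 'amount': 100}
try:
    transfer(token, {"name": "flagged_mine", "flagged": True})
except ValueError as e:
    print(e)  # transfer violates the token's embedded rule
```

The same mechanism supports the health-care allowance example: the rule would simply check that the recipient is a certified care provider.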
Likewise, one could create a health care allowance in dollars or euros that is programmed on the blockchain so that it can only be used to pay for health care at certified parties; automating such measures leads to a considerable decrease in bureaucracy. This programmable token system works to shift our economies from a single-value model to a multi-value model: we can create many different types of value and economies while still retaining the possibility of exchange between them. The distributed web is the convergence of the economic market system with information technology, which enables us to convert traditional organizations into distributed markets based on tokens. Tokens define whatever is of value within an organization, and the market system is used as a distributed coordination mechanism for managing and growing that resource. By creating an expanded definition of value and converting closed organizations into open markets, we can vastly expand the scope and capacities of the economy. The provisioning of services within the economy no longer depends upon a limited number of centralized organizations acting for profit; instead, anyone can provide the service via these open protocols. This means we can harness the resources of the many in a distributed fashion instead of being dependent upon a few. Likewise, the token economy can harness the motives of individuals not just for financial rewards but for a multiplicity of values. To illustrate how this works, let's think about the service of cloud data storage. Currently this is provided by a limited number of enterprises like
Amazon and Microsoft. These centralized organizations have huge data centers, but even those data centers are only a very small fraction of the storage capacity in the world; most of the storage is in the personal computing devices of end users, and most of that is not being used. Filecoin is one organization that works to create a distributed token economy for this storage. Filecoin is a decentralized storage network that turns cloud storage into an algorithmic market. The market runs on a blockchain with a native protocol token, also called Filecoin, which miners earn by providing storage to clients; conversely, clients spend Filecoin to hire miners to store or distribute data. The sum of all these computers, coordinated through an automatic market system on the blockchain, can provide a much larger and more resilient system than the centralized model, while reducing redundancy and inefficiency in the overall system. It also pushes the provision of the service out to the location where it is demanded, as people connect peer-to-peer locally instead of going to a centralized server that may be on the other side of the planet. Tokens such as Filecoin can be exchanged for other currencies, or members can hold on to their tokens, whose value may appreciate as the network grows over time. This illustrates a very interesting aspect of tokens: anyone who uses the system is also an investor in the system. Thus tokens merge investment capital and liquid exchange capital in new ways. In the traditional capitalist model, we have a divide between the owners of capital and the workers, a divide between more fixed investment capital and liquid exchange currencies: the shares in a company are not the same thing as what people are paid for working in that enterprise and use for everyday exchanges in the market. This creates the notorious divide within the industrial economy described by Karl Marx, between the capitalists who make money off their investments and the workers who have to keep selling their labor for money, without ownership. Tokens represent both the inherent value of the community, which is its capital investment, and units of exchange within that ecosystem. The founders of a project issue a number of tokens at inception and sell them; for someone to use the system, they have to buy the tokens, and in so doing they become part investors in the project, while also using those same tokens to make exchanges within the market. Thus the people creating the value in the ecosystem are also paid in tokens, meaning the workers who create the value through their work also have ownership within the organization. In the traditional utility-based exchange of cash, people have no ownership in the organization; they just try to make money, and this can create divides between the owners and the users. The token system works to better align the incentives of individuals with the overall system, because the value of the tokens they earn is also dependent upon the value of the whole: when you work for a token network, you work both for yourself and for the whole organization, as the two become more aligned, unlike the traditional divide between capitalist and worker.

The token system also enables networks to overcome the chicken-and-egg problem. If you are the first user of a network like eBay, then its value to you is very low; thus it is difficult to get the network started, because it has to reach a critical mass before it will be of value to its users. This means that it may require a large investment to create a network. The Silicon Valley model worked by having large initial venture capital backing that enabled networks to overcome this, but it means that most networks never get off the ground, and that once a network reaches scale and has value, it becomes dominant and very difficult to compete with, resulting in a lock-in effect and making it easy for large incumbent organizations to become extractive over time. It also means that those who founded the organization win big if the network takes off, creating a winner-takes-all dynamic with most people losing because of the threshold. The token system extends the benefits of being an early adopter of a new network to all of the users and thus helps to solve this issue. It does this by issuing tokens for anyone to purchase at the beginning of the project; as the project grows, the tokens come to have greater value for all of the holders. This also works to make the users of the network promoters of the organization, because as it grows, the tokens they hold become more valuable; it incentivizes people to join networks early so as to gain the benefits of the increase in their token value as the network grows, which reduces the problem of thresholds. With this technology, companies no longer have to go to traditional capital markets through an initial public offering of shares in exchange for money; instead, they simply sell tokens directly on the
Internet to raise initial capital for the project, in what is called an initial coin offering, or ICO. This means that founders can monetize their networks directly, simply by holding their tokens and making the network useful. To date, we have had an Internet patched onto the side of an economy operated through the many centralized organizations of the Industrial Age, creating a strong contradiction between the underlying technology and the institutional arrangements. The distributed web will work to transform this by merging information networks and economic organization, as the flow of information and the flow of economic value become one. This will greatly reduce our dependency on centralized organizations, expanding markets as systems of organization. The global market economy will become available to the many through small, distributed, peer-to-peer interactions running through web protocols, as the decentralized internet takes us a step further into the networked economy.
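The early-adopter dynamic described above, where token holders benefit as the network grows, can be illustrated with a toy model. The square-root pricing curve is an arbitrary modeling assumption chosen only to show the direction of the effect, not an empirical law of token markets.

```python
# Toy sketch of token appreciation with network growth: early buyers
# gain as adoption rises. The price curve is an assumed illustration.
import math

def token_price(users):
    """Assumed price curve: token value grows with network size."""
    return math.sqrt(users)

early_buy = token_price(100)      # buys in when the network is small
late_value = token_price(10000)   # value after the network has grown
print(round(early_buy, 2))        # 10.0
print(round(late_value, 2))       # 100.0
print(f"early adopter's gain: {late_value / early_buy:.0f}x")
```

Under any monotonically increasing price curve, the same conclusion holds: users who join early and hold tokens share in the upside that, in the venture-capital model, accrued only to founders and investors.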
Setting in The Yellow Wallpaper
In the short story "The Yellow Wallpaper", the narrator is suffering from a nervous depression with a slight hysterical tendency. John, who is both her physician and her husband, does not believe that there is anything wrong. Of course, during the 19th century mental illnesses were not fully understood at all, so John felt as if there was nothing wrong. To help her overcome her depression, John decides to rent a house for three months and keep her in a room to get perfect rest. Unfortunately, when she sees the room, the first thing she notices is the unclean yellow wallpaper.
Throughout the story, I noticed how John was always away during the day and even some nights when his cases were serious. Sadly, the narrator felt as if John didn't care much for her, so she was glad her case wasn't as serious as his others. The fact that John didn't realize how much she suffered, just because he was basing everything on his own opinion, made him look bad during the story. All she wanted was to get well faster and return to her normal self. If it were me trapped in a room, I would be willing to occupy myself the best way I could, but instead the narrator decided to obsess over the room's yellow wallpaper. Moving on, she did notice that the wallpaper had a smell that went through the entire house. As a reader, the first thing I noticed was how much the narrator was suffering mentally, because she was trying to free a woman in the yellow wallpaper.
Once their three months were almost up at the house, the narrator and her husband noticed that her life was becoming so much better to the point where she was eating and having more to expect. Jennie, which was John’s sister asked the narrator if she could sleep with her, which was obviously weird after the fact that she saw Jennie touching the wallpaper. With under one week left to stay, the narrator was demanded to peel the wallpaper until John walked in and caught her. He was so shocked that all he could do was faint, but that didn’t stop her from creeping around the room because she realized that she finally got out at last.
Overall, at the beginning of the story I was a bit confused about what was going on. As I continued to read, the details came together and made the story somewhat interesting. Honestly, I didn’t get what the writer was trying to accomplish. The technique used didn’t explain the situation thoroughly and made it harder to understand. In my opinion, I don’t believe the wallpaper caused her depression to increase. Before she entered the room she was apparently already mentally ill, but her obsession with the wallpaper forced her mind open. Since John chose to leave her in the room alone, she felt as if she was locked away in her own mind, to the point where she would never be free. When she set eyes on the women in the wallpaper, she saw her opportunity to live by setting them free, even if they weren’t real. Basically, the women trapped in the wallpaper were a representation of herself, so she could only free them and taste their freedom instead of her own. In conclusion, this story required a lot of thinking, but I would love to read more stories like this one. It felt like a mystery story, but it was mainly based on the author’s real life and treatment.
Blog entry by Astrid Dinneen
by Astrid Dinneen - Monday, 14 May 2018, 3:38 PM
One question often asked is ‘What are the differences between teaching English as an Additional Language (EAL) and teaching English as a Foreign Language (EFL)? They’re practically the same thing, aren’t they?’ Lynne Chinnery, Hampshire EMTAS Specialist Teacher Advisor on the Isle of Wight clarifies how these approaches to English language learning are distinctively different from each other.
Who are EFL and EAL learners?
English as a Foreign Language (EFL) is used to refer to both adults and children learning English in a non-English speaking country or in the UK for a limited period of time, such as a summer course. English for Speakers of Other Languages (ESOL) mainly refers to adult learners of English in the UK attending English language classes. English as an Additional Language (EAL) is usually used to refer to children who are living and attending school in the UK and whose first language is not English.
Practitioners should remember however that ‘pupils with English as an additional language are not a homogeneous group’ (Naldic, 2012a); their age, background, previous education and previous experience of language learning will all play a part in shaping the way they learn English. Practitioners will therefore need a variety of EAL specific approaches to help each individual along his or her learning path.
What difference does it make?
One of the main differences between EAL and EFL is that because EFL is mostly taught in non-English-speaking countries, EFL students have limited exposure to English. They may have between one and five English lessons a week (the majority from my experience having only one to three lessons) and for many students, this time in the classroom will be the only time that they are immersed in English. They will usually be given English homework and, depending on their level, may even watch English TV programmes or listen to English songs outside the classroom, but on the whole, they will not have extended periods of communication in English, apart from their time inside the classroom.
EAL students, on the other hand, have a completely different experience. Not only are they immersed in English all day, five days a week, both in the classroom and in the playground, but they will also hear English outside school too. Even if they speak their first language at home or use it as a tool for learning, which we highly recommend, they will be listening to English all around them whenever they go out, and will eventually be communicating in the language outside as well as inside school.
One result of this is that EAL learners will normally acquire English much faster than learners of EFL. Even so, learning a language is a long process and EAL learners can take between 7 to 10 years to catch up with their monolingual peers. What is interesting to consider is the way in which they pick up the language. EAL students have time to listen to lots of talk, especially if they are seated with peers who can act as good language models. Being immersed in the language, they will begin to copy what they hear, experiment with it and eventually shape it for themselves. With lots of speaking opportunities in a supportive atmosphere, as well as a teacher and peers to model new language and recast it correctly, their confidence, fluency and accuracy will flourish. EFL students also need many opportunities for practice, but because their exposure to English is more limited, there will be much less time to develop the patterns and rules of language themselves in such a natural way. Because of this, they are likely to need more explicit instruction of grammar and syntax than the EAL student.
Another difference to bear in mind between EFL and EAL, is the fact that the EAL learner is usually alone or in a minority language group within the classroom, while EFL learners often share the same first language. This can make the initial stages of learning much more stressful for the EAL student. Sometimes early stage EAL learners also experience what is known as the ‘silent period’, where the learner begins to absorb the language around them but is not yet ready to speak. Although an EFL student may suffer some anxiety before their first lesson or two, this is not usually so pronounced or prolonged. Imagine how different it would feel if you were learning French in a class of English speaking peers at the same level as yourself compared with learning French in a class of French native-speakers!
EAL learners also differ from their EFL peers as, while they are learning English, they are also learning the school curriculum - in English, which means they have the difficult task of trying to learn English, access the curriculum and catch up with their peers – all at the same time! For this reason, focussing on vocabulary lists that have little or no connection with the curriculum, such as colours, pets or vegetables, is not good practice.
Are there any similarities?
While we are considering the differences between EAL and EFL learners, it is also important to remember that there are many similarities between the two disciplines and that what is good practice in one field can also be good practice in the other. Many methods that are recommended when working with EAL students are also used with EFL learners, such as the use of visuals, role play, paired activities and collaborative group tasks. In fact, these methods work well for all students, not just those learning EAL. In any English language classroom, the form and function of language will still need to be explored but the way language is taught has changed, even in the field of EFL. For example, endless exercises practising one particular grammatical structure, without context or the opportunity to experiment with it in natural conversation, will be of as little benefit to the EFL student as it is to the EAL student. Talk is creative: it is spontaneous and unpredictable and teachers should therefore not only plan activities which enable the student to apply the language they have learnt, but also to use it in real conversations in order for them to grow into independent learners (Naldic, 2012b).
The world of EFL has recognised this shift in methodology and most EFL course books now reflect modern approaches to language learning. We have torn out the language labs and have introduced many more opportunities for spontaneous conversation, that provide a platform to practise the structures and vocabulary taught, rather than an overreliance on drilled exercises with no avenues for real expressiveness. In this way EFL has become more in line with the pedagogy of EAL, where ‘The active use of language provides opportunities for learners to be more conscious of their language use, and to process language at a deeper level. It also brings home to both learner and teacher those aspects of language which will require additional attention’ (Naldic, 2012b).
Accessing the curriculum
EAL and EFL students will have very different experiences on their language journeys, not only because of the differing amounts of exposure to English, but also because of the purposes for which they are learning the language; EFL learners are learning English as a discrete subject whereas EAL learners are learning the curriculum through the medium of English. The good news is there are many useful strategies which work well for both EFL and EAL students, and have been proven to be good practice for all children, including native-speakers. However, practitioners working in the mainstream setting should remember that the curriculum provides the context for English language learning and therefore EAL strategies should be planned into lessons to support pupils’ access to the language demands of the curriculum. Check out Hampshire EMTAS and the Bell Foundation for more ideas.
Naldic (January 2012a) Pupils learning EAL [online] (accessed 09.05.2018)
Naldic (January 2012b) The Distinctiveness of EAL Pedagogy [online] (accessed 09.05.2018)
[ Modified: Thursday, 7 March 2019, 4:56 PM ]
Structure of DNA: Properties, Types, Functions
This article covers DNA in full: what DNA is, its structure, definition, properties, types, functions, and replication.
What is DNA
DNA is a molecule that carries the genetic code of humans and of almost all other living beings. DNA is present in plants, animals, protists, bacteria and archaea.
DNA is found in every cell of every organism and determines which proteins the cells will make.
We often hear about DNA in movies or the news, but have you ever tried to find out what DNA actually is and who discovered it?
Many people have heard the name, but very few actually know much about DNA.
That is why this article explains in simple words what DNA is, what its structure means, and what its full form is. So let’s start without delay.
Where Is This Test Done?
A person’s DNA is a mixture of the DNA of their parents. Many people want to know exactly what DNA is, and this post has been prepared to answer that: by reading it, you can get all the information related to DNA.
Because children inherit their parents’ DNA, many of their parents’ traits appear in them through the proteins their cells produce, such as skin color, hair type, and eye color.
DNA is a long molecule that contains our unique genetic code.
You will also come to understand where DNA is found, that is, which parts of the body can provide DNA samples.
The sequence of these bases makes up the genome.
Whenever we watch films in which some kind of case is investigated, DNA is taken as evidence. A great deal can be discovered through DNA; it is a source of analysis that helps give an accurate answer.
Nowadays, DNA testing plays a very important role in investigating many types of cases. Its use has increased considerably, especially in forensic science, and it also helps scientists greatly in carrying out their experiments.
If even a drop of a person’s blood is found, the structure of their DNA can be traced and an exact answer obtained about who that person is; even their entire family can be identified.
The DNA of every organism is different, but organisms of the same lineage share DNA, which shows who the parents of an organism are.
Science has developed so much today that new discoveries are constantly being made in the field of medicine, and since the discovery of DNA many things have been simplified. It greatly helps in understanding the basic principles of how any body is designed.
In school and college too, students who study biology are given good information about DNA. Many people want to know when and how DNA testing is done, and if they have reached this post looking for that information, it should help them understand it to a great extent.
Structure of DNA
DNA is made up of molecules called nucleotides. Each nucleotide has a sugar group, a nitrogen base, and a phosphate group.
According to the National Library of Medicine (NLM), human DNA has about 3 billion bases, and more than 99% of those bases are the same in all people.
The four types of nitrogen bases are adenine (A), guanine (G), thymine (T), and cytosine (C). The order of these bases determines the instructions, or genetic code, of the DNA.
Just as letters can be arranged in order to form words, the sequence of nitrogen bases in DNA spells out instructions, in the language of the cell, that indicate how cells make proteins.
A second type of nucleic acid, ribonucleic acid (RNA), carries genetic information from DNA to proteins.
Two long strands of nucleotides wind around each other to form a structure called a double helix.
DNA molecules are long, so long, in fact, that they cannot fit into cells without being packaged. To fit inside cells, DNA is tightly coiled into structures that we call chromosomes.
If you picture this double helix structure as a ladder, the phosphate and sugar molecules form the side rails, while the base pairs form the rungs.
The bases on one strand pair with the bases on the other strand: adenine pairs with thymine, and guanine pairs with cytosine.
Each chromosome contains a single DNA molecule. Humans have 23 pairs of chromosomes, which are found inside the nucleus of a cell.
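The base-pairing rule just described can be sketched in a few lines of Python (an illustrative example added here, not part of the original article):

```python
# Watson-Crick base pairing: adenine (A) pairs with thymine (T),
# and guanine (G) pairs with cytosine (C).
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Given one DNA strand, return the complementary strand."""
    return "".join(PAIRS[base] for base in strand)

print(complement_strand("ATGC"))  # -> TACG
```

Real genomes are billions of bases long, but the pairing rule is exactly this simple substitution, which is why one strand fully determines the other.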
Types of DNA
DNA consists of four building blocks, or bases:
• Adenine (A)
• Cytosine (C)
• Guanine (G)
• Thymine (T)
Discovery of DNA
DNA was first identified by Friedrich Miescher, a Swiss biochemist, in 1869.
For many years afterwards, however, researchers did not realize the importance of this molecule. That remained the case until 1953, when James Watson, an American biologist, and Francis Crick, an English physicist, together with Maurice Wilkins and Rosalind Franklin, worked out the structure of DNA.
This structure was a double helix, which they realized could carry biological information.
Watson, Crick, and Wilkins were awarded the 1962 Nobel Prize in Physiology or Medicine for determining the molecular structure of nucleic acids and for recognizing its importance for the transfer of information between organisms.
There is often confusion about who discovered DNA. The summary above distinguishes between who first identified the molecule and who later worked out its structure and main uses.
What is DNA sequencing?
DNA sequencing is a technique that allows researchers to determine the order of bases in a DNA molecule.
The technology can be used to determine the sequence of genes, chromosomes, or even an entire genome.
In 2003, researchers completed the first full sequence of the human genome, according to a report by the National Human Genome Research Institute.
Function of DNA
DNA stores the information required to create and control a cell.
The transfer of information from a parent cell to its daughter cells through the process of DNA replication is also known as genetic transmission.
Cells use DNA in this chemical form of nucleosides and nucleotides. Unlike other macromolecules, DNA does not play a role in the structure of cells.
Coding For Proteins
DNA codes for proteins, which are complex molecules that do most of the work in our bodies. The information in the DNA is first read and then copied into a messenger molecule.
The information carried by this messenger is then translated into a language the body can understand: the language of amino acids, known as the building blocks of proteins. This language describes how amino acids should be assembled to produce a particular protein.
Genetic Code
DNA is very important for our genetic code.
It transfers genetic messages to all cells of your body.
If you think of reproduction, consider that the fusion of an egg and a sperm to form your first cell provided the complete genetic code that your body will use throughout your life.
Half of the chromosomes within that cell, the structures that contain your DNA, came from your father and half came from your mother.
DNA plays a distinctly important role in the human body, and its discovery was one of the most important scientific achievements of the twentieth century. Knowing more about how our DNA works will continue to help us.
DNA replication
DNA replication is essential for every function, from the reproduction of a cell to the maintenance and growth of tissues and body systems.
Your body’s cells replicate so that, for example, new skin and blood cells can be produced.
A DNA molecule essentially unzips to copy itself: the two strands separate, and each serves as a template for building a new complementary strand from nucleotides, each of which contains a sugar, a phosphate, and one of the four bases.
Importance of DNA
Nowadays, in the medical sector you hear a great deal about the characteristics of DNA.
We see through films and news reports on TV that DNA analysis greatly helps medical science. Let us look at some of its important applications.
Diagnosis and Treatment
An important part of DNA research is genetic and clinical research. Because we can now read our DNA, the ability to diagnose and treat disease quickly has improved.
In fact, for some genetic conditions there are still no medications that stop the disease itself, although its effects can be curbed.
In addition, we can better assess a person’s genetic predisposition to specific diseases, which has allowed the development of new drugs to treat them.
The discovery of DNA has, in short, led to drugs and treatments for patients with severe diseases that were previously considered untreatable.
Paternity and Legal Influence
Although medicine has been affected the most by the discovery of DNA, its contribution to other fields is equally important. Paternity cases have a great impact on families and children around the world.
Through DNA analysis, the paternity of a child can be established, which has a significant impact on the child’s upbringing and life.
Agriculture and DNA
The effect of DNA on farming is very important because it has enabled breeders to breed animals that have better immunity to diseases.
It also allows farmers to produce more nutritious crops, which is of particular consequence in countries where populations depend on a small range of staple foods with very little diversity.
This means that micronutrient deficiencies in these countries can be overcome.
Forensics and DNA
DNA is quite important in the field of forensic science.
The discovery of DNA means that investigators of a crime can distinguish between guilty and innocent individuals.
It is also important because victims can be identified, especially in cases where the victim cannot be recognized by family or friends.
It also means that even a small piece of evidence about the perpetrator of a crime can still provide significant clues.
In this sense, DNA has been instrumental in revolutionizing the whole field of forensic science.
This effect is felt within the criminal justice system and contributes to the protection of society.
In short
In today’s fast-developing era, new innovations bring rapid changes to our lives every day. As time passes, this technology has become more and more developed, and it can be used for both good and bad purposes.
The truth is that technologies are developed with good intentions in mind, but how they are used depends on the people using them.
Many people know DNA only by name, and only a few know what its full form is: deoxyribonucleic acid. That is why this post explains the importance of DNA and how it works.
We hope this gives you full knowledge of the topic and that you have found all the information you were looking for. If you liked this post, please share it on Facebook, Twitter, or Instagram.
3 Rare Diseases of the Muscular System
What is a rare muscular disease?
A muscular rare disease affects the muscles of the body, causing weakness, irritability, or low muscle tone. This can in turn affect an individual’s physical development and abilities.
The body contains over 600 muscles, and muscles control every movement we make. There are three main types of muscles – cardiac, skeletal and smooth.
Smooth muscles control many functions of the body, including the contraction of blood vessels.
Rippling muscle disease
This rare genetic disease belongs to a group of syndromes known as caveolinopathies. Generally, the disease develops during late childhood or adolescence.
Symptoms affect the muscles of the body in many different and sometimes debilitating ways. These include muscle irritability, repetitive muscle tensing, a bunching up of the muscles, and a visible rippling of the muscles.
Individuals also often suffer from fatigue, cramps, and muscle stiffness in response to excessive activity, or extreme cold.
In some individuals this muscular disease may also trigger an overgrowth of muscles.
This genetic disorder is caused by mutations in the CAV3 gene.
STAC 3 disorder
This muscular rare genetic disorder affects the skeletal muscles of the body. It causes muscle weakness and low muscle tone. Delayed motor skill development is common with the syndrome. Joint deformities are also a main symptom.
Issues with the muscles can cause feeding and swallowing difficulties, especially in infants with the syndrome. With the muscles responsible for controlling so many of the body’s functions, issues with the muscles can create a multi-system disorder with a wide range of health issues.
Individuals with STAC 3 are also at risk of developing malignant hyperthermia. This is a severe reaction to specific types of drugs, and specifically anesthetics.
This muscular rare disease is caused by mutations in the STAC3 gene. It is inherited in an autosomal recessive pattern, meaning both parents must be carriers of the gene mutation for a child to be potentially affected by it.
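The autosomal recessive pattern described above can be illustrated with a simple Punnett-square enumeration in Python (a generic Mendelian sketch, not data specific to STAC3):

```python
from itertools import product

# Each carrier parent has one working allele ("A") and one mutated allele ("a").
parent1 = ("A", "a")
parent2 = ("A", "a")

# Enumerate the four equally likely allele combinations a child can inherit.
outcomes = ["".join(p) for p in product(parent1, parent2)]

# A child is affected only with two mutated copies ("aa").
affected = [o for o in outcomes if o == "aa"]

print(outcomes)                       # ['AA', 'Aa', 'aA', 'aa']
print(len(affected) / len(outcomes))  # 0.25: a 1-in-4 chance per child
```

This is why both parents must be carriers: if either parent contributes only working alleles, the "aa" outcome cannot occur.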
Danon disease
This rare genetic disorder affects both the cardiac and skeletal muscles of the body. One of its most serious symptoms is cardiomyopathy, a thickening of the heart muscle that makes it difficult for the heart to contract and function properly.
Individuals with this condition have weak skeletal muscles, which affects their physical movement ability.
Apart from symptoms relating to the muscles, this muscular genetic syndrome also presents with intellectual disability.
Caused by a mutation in the LAMP2 gene, this X-linked genetic syndrome affects males earlier in life, and more severely, than females. This is also reflected in the different life expectancies for individuals with this syndrome: males live, on average, to 19 years, and females with the rare disease to 34.
Anyone facing a diagnosis of a rare muscular disease should consult with a genetic counselor, in terms of understanding the diagnosis and testing process, as well as what it means to manage a rare disease long term.
While there may not be treatments per se, for a rare disease, there are often therapies that can help with some of the symptoms of a muscular disease.
Systems pharmacology and genome medicine: a future perspective
Our knowledge of the mechanisms by which drugs act physiologically advanced radically during the twentieth century. With the advent of biochemistry and molecular biology, the targets of drugs became increasingly well characterized. The development of receptor theory by Clark [1] and Black [2, 3], followed by analyses that distinguished between competitive and non-competitive inhibition, began to shed light on the mechanisms by which drugs worked at the molecular level [4]. The influence and relevance of receptor theory in modern pharmacology is derived from the large number of drugs that target membrane receptors, the majority of which are G protein-coupled receptors (GPCRs). The theory of enzyme kinetics led to substrate-based inhibitor design of drugs. These theoretical underpinnings, the size of the market for specific classes of drugs and the ease of drug design for a proven target have resulted in many similar drugs that can target a single protein. ACE inhibitors that are used to treat hypertension are good examples of this approach. The drug pipeline has evolved, with the appearance of targeted therapies and biological therapeutics, such as monoclonal antibody therapies. Many diseases, such as hypertension, ulcers and several types of cancer, that could not be treated two generations ago, can now successfully be managed, if not cured. Yet the 'drugome' (the proteins and genes that are targeted by drugs approved by national regulators such as the US Food and Drug Administration, FDA) covers only a small fraction of the proteome or the 'diseaseome' (genes that have been linked with disease), and many drugs are focused in just a few areas (Figure 1) [5, 6]. This disparity reflects the current relationship between basic biological science and its use for therapeutic purposes. There are substantial opportunities to use the accumulated knowledge of biological processes for drug discovery and clinical applications. 
If we are to take advantage of such opportunities, genome medicine and systems pharmacology need to be well integrated.
Figure 1
Relationships between the genome, proteome, diseaseome and drugome. The number of distinct protein species (about 400,000) comprising the proteome (green circle, scaled down by 25% relative to the other circles), is estimated by taking the approximately 25,000 currently annotated genes (yellow circle) and assuming about four splice variants per gene and about four post-translationally modified proteins per splice variant. The genome, diseaseome and drugome form a Venn diagram. The red circle represents the approximately 1,800 genes known to be involved in various diseases (the diseaseome). Of these, a small fraction (the drugome) is targeted by FDA-approved drugs. Not all drug targets have been characterized as disease genes. In total, proteins encoded by approximately 400 genes (0.1% of the proteome) are targeted by about 1,200 FDA-approved drugs. There are more drugs than protein targets because more than one drug can target the same protein.
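The proteome estimate in the legend follows from simple arithmetic on the stated assumptions (about four splice variants per gene and four modified proteins per variant), which can be checked directly:

```python
genes = 25_000        # currently annotated human genes (per the legend)
splice_variants = 4   # assumed splice variants per gene
modified_forms = 4    # assumed post-translationally modified proteins per variant

proteome = genes * splice_variants * modified_forms
print(proteome)  # 400000 distinct protein species

drug_target_genes = 400  # genes targeted by FDA-approved drugs (per the legend)
print(f"{drug_target_genes / proteome:.1%}")  # 0.1% of the proteome
```

The calculation makes the scale of the gap explicit: even counting generously, approved drugs touch only a thousandth of the estimated protein species.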
As the systems-level understanding of biological processes expands, it is becoming a crucial driver of pharmacology that is anchored in the human genome and personalized medicine. The path from laboratory research to clinical application is becoming short as translational research grows, facilitating collaborations between basic researchers and clinicians. Genomic and proteomic technologies drive discovery of biomarker sets for the classification of diseases and the stages of their progression, as exemplified by microarray-based marker sets that have been developed to identify stages of cancer progression [7, 8]. Although more of these approaches need to be discovered and then standardized before they are routinely used in clinical practice, the importance of using systems-type methodologies to characterize therapeutic interventions, to delineate the pathways (or more often networks) involved in disease, and to identify the mechanisms of action and off-target effects of current drugs is becoming clearer. A multi-faceted understanding of therapeutic intervention is necessary, given the complexity of human physiology and the increasing availability of numerous clinical parameters and analyses.
Here, we explain the reasoning underlying the assertion that systems-level knowledge of pharmacology and pathophysiology, rooted in genomic information, will increase the efficacy of existing drugs by aiding in the development of personalized medicine and will facilitate the rational discovery of new drugs using a much wider target base. For pharmacology to be understood at a systems level, it is necessary to use a genome-based approach to systems-level studies of physiology and pathophysiology.
Genome medicine
An operational definition of genome medicine is the way in which genomic information from a patient helps in the diagnosis or treatment of disease. This includes areas such as the genetics-based diagnosis of the origin of diseases, their progression and the response to drugs. In the area of drug response, genome medicine overlaps with pharmacogenomics, a field that studies how genome variation affects drug response. It is well established that some diseases arise from mutations in single genes and can be treated using the wild-type product from that particular gene. Fabry's disease is an example of such a monogenic disease [9, 10]. However, it is now known that many complex diseases arise from interactions and changes in multiple genes. The two-hit model for cancer [11] was an early example of this recognition. Mapping of genetic variations, such as single-nucleotide polymorphisms (SNPs), in the human genome has given rise to the idea that combinations of variant genes can alter susceptibility to various diseases. Genome-wide association studies have become popular over the past few years [12], although their ability to predict disease susceptibility is only beginning to be determined. Nevertheless, it is clear there is sufficient genomic variation between individuals to affect the origin and progression of diseases, as well as drug response. For example, genetic testing for cancer drugs, such as imatinib [13, 14], trastuzumab [15], gefitinib [16] and bucindolol [17], provides information about whether the form of the protein target that is expressed in each individual will be responsive to the drug. Testing for the gene BRCA1 is used to determine the feasibility of preventative measures for breast cancer [18]. Other types of genetic testing are more pharmacogenomic in nature, such as those for warfarin [19, 20] or tamoxifen [21]. 
These tests screen for polymorphisms in distinct cytochrome P450 isoforms, which are responsible for differences in how the drug is metabolized and thus affect the therapeutic index, the ratio between the toxicity and effectiveness of the drug. Pharmacogenomics can thus be considered to be a very important part of genome medicine. Recent views on pharmacogenomics describe in greater depth the relationship between genomics and drug response [22, 23].
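The therapeutic index mentioned above is conventionally estimated as the ratio of the dose that produces toxicity to the dose that produces the desired effect (often expressed as TD50/ED50); a hypothetical sketch with made-up doses:

```python
def therapeutic_index(td50_mg_per_kg: float, ed50_mg_per_kg: float) -> float:
    """Therapeutic index = toxic dose (TD50) / effective dose (ED50)."""
    return td50_mg_per_kg / ed50_mg_per_kg

# Illustrative numbers only, not real drug data. A cytochrome P450
# polymorphism that slows metabolism effectively raises exposure per dose,
# shrinking the safety margin for that individual.
print(therapeutic_index(300.0, 30.0))  # -> 10.0 (a wider margin is safer)
```

A narrow index (close to 1) is exactly the situation, as with warfarin, where pharmacogenomic testing is most valuable.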
Our understanding of a patient's genome will increasingly drive medical practice in the 21st century. The development of technologies for rapid sequencing of whole genomes is an important factor in this changing approach to medical practice. The growing number of treatment options for pathophysiology, and the accumulation of knowledge about the risk-to-benefit ratio of prescribing one therapy over another when anchored in individual genomic information, should allow the individualized tailoring of therapies. Such tailoring would be based on an individual definition of the therapeutic index of a drug for a specific individual. Currently, many clinical practices are based on empirical trial-and-error approaches, and drug usage typically follows a 'one size fits all' approach.
Even for diseases for which therapies have been very successful, we generally do not understand why one type of therapy works for one individual and not for another. For example, there are four general types of therapy to treat hypertension (thiazide-based diuretics, angiotensin converting enzyme inhibitors or angiotensin II receptor blockers, calcium channel blockers and β-adrenergic receptor blockers or beta-blockers) [24]. There is relatively little predictability as to which hypertension treatment will be more effective for any given patient, and there is a large patient-to-patient variability in response to each therapy and required dosage. Current data suggest that 50% of patients who do not respond to one type of therapy will respond to another and that 70-80% of patients will respond if switched for a second time to yet another type of antihypertensive drug [25]. The characterization of all of the hypertension-related genes in the patient's genome may facilitate the construction and systems-level analysis of regulatory networks, using those hypertension-related genes as seed nodes. From the physiological functions of these networks, we may be able to identify the pathways involved in an individual's disease and to predict the action of a drug. Such personalized disease networks would allow clinical practice to move away from a trial-and-error approach to prescribing drugs and advance to more genome-informed disease management. This type of clinical practice could easily be called 'genome medicine'.
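The switching statistics above can be sketched as a cumulative response calculation. The 50% first-line response rate below is an illustrative assumption, not a figure from the text, and treating responses to successive drugs as independent is a deliberate simplification:

```python
def cumulative_response(rates):
    """Fraction of patients responding after sequentially trying drugs,
    where rates[i] is the response rate among remaining non-responders.
    Assumes responses to successive drugs are independent."""
    responders = 0.0
    for r in rates:
        responders += (1.0 - responders) * r
    return responders

# Illustrative: an assumed 50% first-line response rate, then ~50% of
# non-responders responding to each subsequent switch (per the cited data).
after_two = cumulative_response([0.5, 0.5])          # -> 0.75
after_three = cumulative_response([0.5, 0.5, 0.5])   # -> 0.875
```

Under these assumptions roughly three-quarters of patients respond after two drugs; the cited empirical figure of 70-80% after a second switch shows that the independence assumption is only a rough approximation of real switching data.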
Although the state of the genome will be a major determinant of how diseases originate and progress, it is unlikely to be the only cause in many cases. Environmental factors also have an important role. In many cases these two factors are related, whereby a particular genotype changes the risk of an environmentally induced disease. This relationship can be understood by considering the biochemistry and physiology of the system. Genes are most often transcribed and translated into components of biochemical networks that underlie cellular processes. Often there is more than one type of change in a particular gene; some of these types of change lead to a change in the function of a key cellular component and ultimately to a disease state, whereas others are inconsequential. Sometimes, even if a mutant gene product behaves aberrantly at a biochemical level, this behavior can be compensated for and the physiology remains normal. Alternatively, the aberrant behavior of diseased cells might not be only genetic in origin but also a result of changes in normal cell signaling processes that regulate transcription, translation and effector protein function. The hypothesis that inflammation underlies the origins of several diseases, for example, is based on the assumption that the normal process has gone awry because of environmental signals, resulting in sustained activation of intracellular signaling networks that results in pathophysiologies. There is accumulating evidence to support this hypothesis [26, 27].
The effects of post-transcriptional and translational regulation of signaling can increase the difficulty of developing methods for individualized mechanism-based therapeutic interventions. A systems understanding of disease and drug action at the level of cellular biochemical reactions and physiological function would be useful in framing the effect of genomic variations in the context of environmental cues. This is where systems pharmacology comes in.
Systems pharmacology
Systems pharmacology seeks to develop a global understanding of the interactions between pathophysiology and drug action. To develop such an understanding it is necessary to analyze interactions across and between various scales of organization. The representations of different scales are illustrated in Figure 2. The biological insights gained from multi-scale analyses of physiological processes have been noted previously [28, 29]. Analysis of such multi-scale systems requires one to 'zoom' in and out depending on the type of analysis being conducted. During the process of multi-scale analysis (zooming), it is essential that we develop a mechanistic understanding of the relationships across the various levels. Simply correlating structural information or molecular interactions with clinical phenotypes is a good starting point, but it will not yield the ability to predict disease progression or drug treatment outcomes. The type of 'system' to be analyzed can vary depending on the zoom level (Figure 2) of information desired. The system can be considered at the organismal, organ, tissue, cellular or molecular levels. The effects of a drug on pathophysiology that are seen at the organismal level, that is, symptoms or clinical measurements, are zoomed-out observations. These observations usually consist of clinical data, ranging from blood chemistry to measurements reflecting organismal function, such as blood pressure and stress tests, all of which are documented in the electronic medical records of the patient that will aid future computational analyses across scales.
Figure 2
Multi-scale analyses in systems pharmacology. The top half of the figure is a schematic representation of different scales of organization involved in human pathophysiology and systems pharmacology. Clinical indicators and analyses (left) indicate measurements of various types of blood concentrations, blood pressure, stress and so on; these parameters are available in the electronic medical records of patients. From left to right, the scale becomes smaller, or 'zoomed in'. The human body (or organism) can be analyzed at the levels of organs, tissues, cells (represented here together with tissues) or molecules. Drugs are prescribed and taken at the organismal level but exert their effects by interacting with their target at the molecular level (red arrow). The gradient from white to blue corresponds to the various levels of interaction systems: white represents a clinical setting; blue represents a laboratory setting. Studies in systems pharmacology fully span all levels shown here.
The physiological responses to a drug or disease are manifestations of events that can be studied at zoomed-in perspectives. Emerging imaging technologies are complementing pathology to observe disease progression and treatment outcome. The entire network of events that are involved in therapeutic intervention can be analyzed in proteomic or genomic studies, or, at a more detailed level, specific protein-protein interactions or enzymatic reactions can be examined. Much of the zoomed-in systems pharmacology understanding comes from basic laboratory research. The methodologies available to address the questions across zoom levels include biochemical experiments, microarray and other high-throughput methodologies, animal models and computational modeling and analysis of molecular and cellular functions. Generally, a systems approach that integrates knowledge from analyses across multiple zoom levels is likely to uncover a new target in a pathway of interest, or the basis for an observed adverse effect, that would probably not have been discovered using traditional techniques. In order for a discovery made at the molecular or cellular level to be applicable clinically, the effect must be demonstrated at the organ and organismal levels and established to be applicable to a population. Thus, systems pharmacology uses a range of approaches that spans multiple scales of organization, as is shown in Figure 2. The different areas of study in systems pharmacology and their relationship to genome medicine are briefly discussed below.
Research in systems pharmacology integrated into genome medicine tends to follow two directions: the 'physiology to therapeutic intervention' direction (bottom-up) comprises approaches that facilitate new drug or drug-target discovery; and the 'therapeutic intervention to physiology' direction (top-down) focuses on the characterization of current drugs across scales of interactions in terms of their mechanism of action, off-target effects or similarity to other drugs. Studies in the bottom-up direction include network analysis, which provides a foundation for systems pathophysiology. Network analysis can be used to identify new targets for therapeutic intervention and to understand adverse events and drug resistance from genomic information. Bottom-up analyses are based on the integration between systems pharmacology and genome medicine (Figure 3). The bottom-up approach can be complemented by top-down studies, which include analyses of networks at various levels to study the effect of a drug on a system, commonalities among drugs (global analyses of the 'drugome'), drug resistance and the effects of drug combinations.
Figure 3
The relationship between genome medicine and systems pharmacology. The diagram summarizes various aspects of genome medicine (in blue) and systems pharmacology (in yellow). Overlapping aspects of analyses and practice are in green (intersection of circles). The positioning of the circles indicates the operational classification of 'genome medicine to systems pharmacology' as top-down and 'systems pharmacology to genome medicine' as bottom-up. The key analyses and practices are in the circle for the field that uses them. Approaches and practices that are used in both fields are in the overlapping region. Genome medicine starts with genetic and genomic testing. Experimental data are computationally processed using statistical genetics tools to yield information that is used in personalized medicine for therapeutic-index targeting (such as dosage of warfarin) and combination therapy. Network analysis is a common approach that integrates genome medicine and systems pharmacology. Systems pharmacology starts from cataloguing the characteristics of individual drugs and targets from biochemistry and cell-physiology experiments. Computational methods and genomic and proteomic data together enable the use of this catalog of information to make predictions regarding drug discovery, drug action and adverse events. Such predictions can be experimentally and clinically tested. Approaches common to both genome medicine and systems pharmacology are based on network analyses that underlie systems pathophysiology, whereby the origins of disease are understood in the context of multi-scale systems. Such understanding enables network-based drug screening and whole genome-based predictions of adverse events and drug resistance. Thus, ultimately, therapeutics intervention will be guided by integrating genome medicine and systems pharmacology.
The role of systems pathophysiology in bottom-up intervention
Many efforts to systematically understand the cellular processes involved in disease lead to sets of proteins that are mutated to become under- or overactive, deleted, over-expressed or aberrantly post-translationally modified in the disease state. Genomic and proteomic studies can be designed to globally screen for cellular components that stand out in the disease model. Subsequent small-scale studies often focus on verifying one or several of the predictions from the large-scale screen. In order to obtain a multi-faceted analysis, techniques such as RNA interference (RNAi) screens, gene-expression profiling, chromatin immunoprecipitation (ChIP) analyses or protein microarrays are combined. Analyses on patient-derived samples allow data from humans to constrain and validate animal or cell models of disease. Approaches such as these have been applied to cancer [30-32], malaria [33], heart failure [34] and HIV [35, 36].
These global analyses to identify genes, proteins and other cellular components related to the origin or progression of a specific disease yield lists of 'seed nodes' that can be used to computationally construct disease networks by integrating knowledge about protein interactions reported in biomedical literature. With additional experiments or analyses, it becomes possible to place seemingly unrelated proteins in the context of a pathophysiology and thus to explore whether they are potential therapeutic targets or biomarkers. The use of a network of known pathways surrounding genes with annotated cardiovascular involvement, for example, enabled the identification of a series of mass spectrometry biomarkers for major adverse cardiac events [37]. Similarly, a network-based approach was able to identify sets of interactions between nuclear receptors and insulin signaling-pathway proteins related to type 2 diabetes [38]. Identification of previously unrelated relationships between disease pathways and cellular functions is a first step in identifying drug targets.
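The seed-node expansion described above can be sketched as a breadth-first traversal of a protein-interaction list: starting from the seeds, proteins within a chosen number of interaction steps are collected into a candidate disease subnetwork. The gene names and interactions below are illustrative only, not a curated network:

```python
from collections import deque

def expand_seed_network(edges, seeds, max_hops=1):
    """Collect proteins within `max_hops` interaction steps of the seed
    nodes -- a minimal sketch of building a disease network from seeds."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return set(dist)

# Hypothetical interaction list; INSR and IRS1 serve as seed nodes.
edges = [("INSR", "IRS1"), ("IRS1", "PIK3CA"), ("PIK3CA", "AKT1"),
         ("AKT1", "FOXO1")]
network = expand_seed_network(edges, ["INSR", "IRS1"], max_hops=1)
# One hop from the seeds pulls in PIK3CA but not AKT1 or FOXO1.
```

In practice the edge list would come from curated interaction databases or literature mining, and the hop limit trades off coverage against the inclusion of spurious neighbors.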
The accumulation of published systems-level datasets related to drug action calls for new ways to compile the data into a computable format. An example of this type of effort is the Connectivity Map [39], which finds correlations between gene-expression signatures and the sets of proteins involved with the action of a class of drugs or with a particular disease. Although the correlations found through the Connectivity Map approach are an excellent starting point to gain insights into how the gene-expression signatures indicate aberrant cellular or physiological behavior, more detailed mechanistic studies are necessary to develop predictive capabilities.
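As a toy stand-in for the Connectivity Map's signature matching (the actual method uses an enrichment statistic rather than a simple correlation), one can compute a rank correlation between two expression signatures over their shared genes; a strongly negative score flags a drug signature that opposes a disease signature:

```python
def rank(values):
    """Ranks of a list of numbers (0 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def signature_similarity(sig_a, sig_b):
    """Spearman-style rank correlation between two gene-expression
    signatures (gene -> fold change) over their shared genes."""
    genes = sorted(set(sig_a) & set(sig_b))
    ra = rank([sig_a[g] for g in genes])
    rb = rank([sig_b[g] for g in genes])
    n = len(genes)
    mean = (n - 1) / 2.0
    cov = sum((x - mean) * (y - mean) for x, y in zip(ra, rb))
    var = sum((x - mean) ** 2 for x in ra)
    return cov / var

# Hypothetical fold-change signatures (gene names are placeholders):
disease = {"TP53": -2.1, "MYC": 1.8, "EGFR": 0.9}
drug = {"TP53": 2.0, "MYC": -1.5, "EGFR": -0.2}
score = signature_similarity(disease, drug)  # -> -1.0, perfectly opposing
```

Dividing by a single variance is valid here because tie-free rank vectors of equal length share the same variance; a drug whose signature anticorrelates with a disease signature is a candidate for reversing the disease expression state.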
Even at the level of a single protein, a systems-level understanding becomes useful. A mutant protein implicated in disease can have multiple changes in its behavior in the network. These changes must be put into the context of the disease 'system' in order to determine which property has the highest impact. Different mutations in the small G protein Ras, which are found in many cancers, increase Ras activity by several different mechanisms [40]. To understand which mutation has a more important role in oncogenesis, the effect of two mechanistically distinct Ras mutations on the entire downstream signaling network was analyzed computationally and the results were confirmed experimentally [41]. The computational model predicted that a drug that specifically binds GTP-bound Ras will have a more specific effect on the mutant than it will on the wild-type system, and thus also on cancerous than on normal tissue. Such a prediction could not have been made by simply correlating mutations with disease phenotype. Thus, a systems understanding of a single protein function in a disease signaling network can predict new targets for therapeutic intervention in a mechanism-based manner.
Top-down approaches: global drug analyses
Direct relationships between diseases, drugs and proteins enable statistical inferences about less obvious relationships between them. Global analysis of FDA-approved drugs has found that, unlike ubiquitously expressed essential genes, drug targets tend to be expressed in specific tissues yet tend to interact with many other proteins in the cellular network while being independently regulated [5, 6]. A bipartite graph of 1,052 FDA-approved drugs interacting with 485 targets contained 179 'islands' (connected sets of nodes with no links to the rest of the graph). Most of these islands are made up of 10-30 interacting cellular components. A single large island of 481 components consisted of drugs that target GPCRs [5]. This large island exists in part because of the many physiological processes, as varied as cardiac contractility, acid secretion and airway constriction, that are regulated by GPCRs, and in part because the extracellular ligand-binding domains of these receptors make ideal drug targets.
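The 'islands' of the bipartite drug-target graph are its connected components, which can be recovered with a standard graph traversal. The sketch below uses a tiny illustrative edge list, not the 1,052-drug dataset from [5]; the metformin-PRKAA1 pairing in particular is a simplification:

```python
def islands(edges):
    """Connected components ('islands') of a bipartite drug-target
    graph given as (drug, target) pairs."""
    adj = {}
    for d, t in edges:
        adj.setdefault(d, set()).add(t)
        adj.setdefault(t, set()).add(d)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Illustrative edges: two beta-blockers sharing the target ADRB1 form
# one island; an unrelated drug-target pair forms its own island.
edges = [("metoprolol", "ADRB1"), ("atenolol", "ADRB1"),
         ("metformin", "PRKAA1")]
comps = islands(edges)  # -> two islands, of sizes 3 and 2
```

Drugs that share a target land in the same island, which is why the GPCR island in [5] is so large: many drugs converge on a relatively small receptor family.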
Many drugs target GPCRs and, despite the physiological diversity of these proteins, all GPCR-mediated signaling is coupled through heterotrimeric G proteins. Commonalities of signaling mechanism among existing drug targets, such as this, create a basis for identifying what network properties make a protein a potentially good drug target. Statistical metrics from network analyses, such as a centrality measure, which quantifies the relative importance of a protein in communicating between different modules within a network, have been suggested for identifying nodes (proteins in a network) that have attractive properties as potential drug targets [42]. Recent work has generated a network connecting drugs on the basis of their structural similarity and similarity of side-effect profiles [43]. This method has been proven effective at identifying groups of drugs that share common targets. When drugs with predicted shared targets did not in fact have known targets in common, the predictions provided a framework for testing these drugs for binding against each other's targets. Hence, this approach led to the identification of new targets for existing drugs.
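One concrete centrality measure of the kind referred to above is betweenness centrality, which scores how often a node lies on shortest paths between other nodes and thus how much it mediates communication between network modules. A pure-Python sketch of Brandes' algorithm for small unweighted networks; the three-protein example is purely illustrative:

```python
from collections import deque

def betweenness(adj):
    """Betweenness centrality of an unweighted, undirected graph
    (Brandes' algorithm): how often each node lies on shortest paths."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack = []
        preds = {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0)
        sigma[s] = 1
        dist = dict.fromkeys(adj, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate path dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair is counted from both endpoints.
    return {v: c / 2 for v, c in bc.items()}

# Toy network: the adaptor GRB2 bridging EGFR and SOS1 (illustrative).
adj = {"EGFR": ["GRB2"], "GRB2": ["EGFR", "SOS1"], "SOS1": ["GRB2"]}
scores = betweenness(adj)  # GRB2 scores 1.0; the endpoints score 0.0
```

A high-betweenness node is a bottleneck for signal flow, which is one reason such nodes have been proposed as attractive drug targets [42].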
Integrating the bottom-up and top-down approaches: network analysis
Studies on cellular signaling pathways and networks drive systems pharmacology because they can lead to discoveries that enable new therapeutic interventions. These networks are often based on genomic information. Network analysis integrates genome medicine and systems pharmacology, as is shown by its central location in the overlapping part of the Venn diagram in Figure 3. Proteomic and genomic studies have increased dramatically in the past five years, and datasets describing global behavior, such as genomic or proteomic analyses of many cellular pathways and processes, are now available [44]. These data expand our knowledge of signaling pathways by implicating many more signaling molecules, which may serve as new drug targets or have implications for drug resistance or off-target effects.
Certain proteins or protein families are targeted more frequently in therapeutic interventions than others, because of their involvement in disease and the relative ease of designing drugs against them [45]. Systems pharmacology studies on these systems are correspondingly more common. For example, there are at least eight different approved or pipeline drugs that target one or several members of the ErbB family of receptor tyrosine kinases, which are modified in many kinds of cancer and other diseases [46]. There are also many studies that computationally model [47-52] or experimentally analyze [51, 53-56] the dynamics of the ErbB signaling network. For example, the ErbB network has been analyzed by examining the interactions between the known phosphotyrosine interaction domains (PTB and SH2 domains) and phosphopeptides representing the phosphorylated forms of the four ErbB family members using microarrays [57]. This study [57] uncovered behavior of the ErbB network that is pertinent to its role in oncogenesis and the implications of therapeutically modulating this family of receptors.
Network analysis has facilitated the development of computational prediction algorithms that predict all possible molecules affected by specified perturbations of upstream targets by hypothetical drugs [58]. These algorithms can also solve the 'minimum knockout' problem by yielding predictions of the smallest number of drug targets needed to fully block a cellular process [58]. A similar computational framework for identifying targeted disruption of signaling networks allows identification of all sets of potential drug targets to block a process, while not affecting other processes [59]. These types of network analysis suggest that computational analyses of cell signaling networks are likely to be an important aspect of systems pharmacology and future drug discovery [60].
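The 'minimum knockout' problem can be illustrated by brute force on a small graph: search node subsets in increasing size until one disconnects the signal source from the output. This is only a sketch of the problem statement, not the algorithm of [58], and the toy pathway below is hypothetical:

```python
from itertools import combinations

def reachable(adj, removed, src, dst):
    """True if dst can be reached from src avoiding removed nodes."""
    seen, stack = set(), [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v in seen or v in removed:
            continue
        seen.add(v)
        stack.extend(adj.get(v, ()))
    return False

def minimum_knockout(adj, src, dst):
    """Smallest set of intermediate nodes whose removal blocks every
    path from src to dst -- brute force over candidate subsets."""
    candidates = [v for v in adj if v not in (src, dst)]
    for size in range(len(candidates) + 1):
        for combo in combinations(candidates, size):
            if not reachable(adj, set(combo), src, dst):
                return set(combo)
    return None

# Toy directed signaling graph: EGFR and SRC both feed RAS, which
# activates ERK; knocking out RAS alone severs the pathway.
adj = {"EGFR": ["RAS", "SRC"], "SRC": ["RAS"], "RAS": ["ERK"]}
blockers = minimum_knockout(adj, "EGFR", "ERK")  # -> {"RAS"}
```

Exhaustive search scales exponentially, which is why published approaches rely on structured algorithms rather than enumeration; the sketch only conveys what the optimization asks for.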
Adverse events and drug resistance: relationship to genome characteristics and systems pathophysiology
Most studies that focus on understanding resistance to therapies are in oncology. Many cancer therapies, such as microtubule stabilizers [61], tamoxifen or endocrine therapy [62] and drugs targeted at epithelial growth factor receptors (EGFRs) [63], are effective for a limited period and/or in a limited population before resistance develops. Screens using RNAi, genomics or proteomics identify proteins that are up- or down-regulated, or that are necessary for drug action, in resistant cell populations. The receptor tyrosine kinase c-MET has recently been implicated in resistance to EGFR-targeted therapies and, on the basis of these discoveries, therapies directed against c-MET are in development [64]. In order to better understand the convergence of signals from these two receptors, the tyrosine signaling networks of several cancer cell lines that overexpress c-MET or EGFR or express a mutant form of EGFR were examined to identify a core network of 50 proteins mediating drug response [65]. This type of network analysis can form the basis for the selection of drugs that target the proteins common to both pathways and thus overcome drug-induced resistance.
Genome medicine and systems pharmacology need to be integrated for defining the genes and proteins involved in drug treatment or drug resistance. Such integrated analyses can lead to identification of targets that are likely to synergize with or add to the effect of the drug or therapy. For example, studies focused on defining the genes involved in radiation therapy aim to discover new targets that would increase the effectiveness of radiation therapy [66, 67]. In fact, many drugs are prescribed in combination with other drugs, because the effectiveness of both drugs is increased when they are combined. Predicting which combinations will show this effect, and the doses of such combinations, is not entirely intuitive or simple. Sometimes, serendipitous results from genomic or proteomic screens identify targets that might have synergistic effects in conjunction with commonly used therapies, such as ceramide transport protein in taxane-based therapy [68]. In proactive approaches, network analysis [42] or computational modeling [69] can provide information on the effect of intracellular signaling on inhibition of the two nodes, predicting how dosing with pairs or groups of drugs that target different proteins would work. Such a model was used to predict combination therapies in the EGFR pathway [70]. These types of study can also be used to look for ways to lower dosages while sustaining effectiveness, avoiding unnecessary drug toxicity.
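A common null model for reasoning about drug combinations, not named in the text but widely used, is Bliss independence: two independently acting drugs with fractional effects e1 and e2 are expected to combine to e1 + e2 - e1*e2, and deviations from this expectation suggest synergy or antagonism:

```python
def bliss_combined_effect(e1, e2):
    """Expected fractional effect of two independently acting drugs
    under the Bliss independence model."""
    return e1 + e2 - e1 * e2

# Two drugs that each inhibit 50% of a pathway's activity:
combined = bliss_combined_effect(0.5, 0.5)  # -> 0.75
# An observed combined effect above 0.75 would suggest synergy;
# below 0.75, antagonism.
```

Network models of the kind cited above [42, 69, 70] go further by predicting, mechanistically rather than statistically, when inhibiting two nodes should produce such super-additive effects.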
Adverse events from drugs are a major concern in the development and prescription of pharmaceuticals. The susceptibility and severity of adverse events can vary tremendously between patients as a result of, among other confounders, genomic factors. At the simplest level, adverse events can be due to dosage effects of the drug. Pharmacogenomic factors affecting drug metabolism can lead to increased levels of active drug in the body. For drugs such as warfarin, which acts as an anticoagulant, too high a blood level owing to reduced metabolism (resulting from variant CYPs) can lead to uncontrolled bleeding and cerebral hemorrhage [20]. For patients with CYPs that have lower warfarin metabolizing capabilities, the dosage of warfarin needs to be reduced so as to obtain its therapeutic benefits without increasing the risk of adverse events.
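The warfarin dosing logic above follows from basic pharmacokinetics: the average steady-state concentration of a continuously dosed drug equals the dosing rate divided by clearance, so a CYP variant that halves clearance doubles exposure unless the dose is halved. The numbers below are illustrative, not clinical values:

```python
def steady_state_conc(dose_rate_mg_per_day, clearance_l_per_day):
    """Average steady-state concentration (mg/L) for a continuously
    dosed drug: dosing rate divided by clearance."""
    return dose_rate_mg_per_day / clearance_l_per_day

# Hypothetical numbers: a metabolizing-enzyme variant that halves
# clearance doubles exposure at the same dose, so the dose must be
# halved to restore the original concentration.
normal = steady_state_conc(5.0, 2.0)     # -> 2.5 mg/L
variant = steady_state_conc(5.0, 1.0)    # -> 5.0 mg/L
adjusted = steady_state_conc(2.5, 1.0)   # -> 2.5 mg/L
```

For a narrow-therapeutic-index anticoagulant, the uncorrected doubling of exposure is exactly the scenario that leads to the bleeding events described in the text.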
Genetic factors can affect how patients respond to therapeutic doses of a drug. Patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PD) cannot respond sufficiently to multiple forms of oxidative stress and, thus, many drugs, including many anti-malarials, analgesics and antibiotics, can cause these patients to develop hemolytic anemia [71]. Adverse events can also be modulated by the immune system, whereby the patient's immune system responds to the drug, leading to damage of otherwise healthy cells; for example, this can lead to drug-induced neutropenia [72]. Several adverse events of this class have been genetically associated with specific HLA alleles, the genes that control the presentation of immunogenic epitopes [73].
Drug-induced cardiac arrhythmias comprise another important class of side-effects. One example, the long QT syndrome, has been extensively studied because it can in many cases lead to fatal arrhythmias. Increased understanding of a congenital form of the syndrome, the frequency of the side-effect and the variability in its severity has led researchers to suggest that individual variability in cardiac repolarization reserve is among the major risk modifiers [74]. Cardiac repolarization reserve is the capability of cells to compensate for changes in ion channel function that underlie the normal myocyte action potential. Thus, understanding how drugs regulate specific ion channels to modulate electrical interactions between channels to produce a physiological event is critical. This type of adverse event, which reflects an interaction between a drug and a complex biological system, is where systems pharmacology will prove most effective. Systems pharmacology projects have begun to identify targets that are important for causing the side-effect by grouping drugs according to their side-effect profiles [43]. In addition, large-scale studies have begun to identify cellular modules that are important for causing the side-effects. For example, a large-scale study of mitochondrial function identified mitochondrial expression profiles affected by drugs that can cause drug-induced myopathies [75].
Both genome medicine and systems pharmacology, as interdisciplinary research fields, are in their infancy. The convergent goals of both fields are to treat disease in each patient on the basis of the patient's genome and their unique environmental interactions. Thus, they can be considered to be the twin pillars supporting the gateway to personalized medicine. In both fields, there are considerable opportunities for new conceptual and technological developments. Success in personalized medicine will require advances in technology and concepts in both fields to be well integrated. The most influential driver for this integration is a mechanism-based understanding of disease and therapy across scales of biological organization. As a multi-scale understanding of the human body develops, it is likely to influence not only the treatment and prevention of disease, but also the economics and social aspects of health care.
Abbreviations

ACE: angiotensin converting enzyme
BRCA1: breast cancer 1, early onset (mutations in this gene greatly increase predisposition to breast cancer)
ChIP: chromatin immunoprecipitation
CYP: cytochrome P450 (drug metabolizing enzymes)
EGFR: epithelial growth factor receptor
FDA: US Food and Drug Administration
G6PD: glucose-6-phosphate dehydrogenase
GPCR: G-protein coupled receptor
RNAi: RNA interference
SNP: single nucleotide polymorphism.
1.
Kenakin T: Principles: receptor theory in pharmacology. Trends Pharmacol Sci. 2004, 25: 186-192. 10.1016/
2.
Black J, Leff P: Operational models of pharmacological agonism. Proc R Soc Lond B Biol Sci. 1983, 220: 141-162. 10.1098/rspb.1983.0093.
3.
Maehle AH, Prull CR, Halliwell RF: The emergence of the drug receptor theory. Nat Rev Drug Discov. 2002, 1: 637-641. 10.1038/nrd875.
4.
Colquhoun D: The quantitative analysis of drug-receptor interactions: a short history. Trends Pharmacol Sci. 2006, 27: 149-157. 10.1016/
5.
6.
7.
Mook S, Schmidt MK, Viale G, Pruneri G, Eekhout I, Floore A, Glas AM, Bogaerts J, Cardoso F, Piccart-Gebhart MJ, Rutgers ET, Van't Veer LJ, on behalf of the TRANSBIG consortium: The 70-gene prognosis-signature predicts disease outcome in breast cancer patients with 1-3 positive lymph nodes in an independent validation study. Breast Cancer Res Treat. 2008, doi: 10.1007/s10549-008-0130-2.
8.
Kroon BK, Leijte JA, van Boven H, Wessels LF, Velds A, Horenblas S, van't Veer LJ: Microarray gene-expression profiling to predict lymph node metastasis in penile carcinoma. BJU Int. 2008, 102: 510-515. 10.1111/j.1464-410X.2008.07697.x.
9.
Desnick RJ: Enzyme replacement therapy for Fabry disease: lessons from two alpha-galactosidase A orphan products and one FDA approval. Expert Opin Biol Ther. 2004, 4: 1167-1176. 10.1517/14712598.4.7.1167.
10.
Desnick RJ, Schuchman EH: Enzyme replacement and enhancement therapies: lessons from lysosomal disorders. Nat Rev Genet. 2002, 3: 954-966. 10.1038/nrg963.
11.
12.
Manolio TA, Brooks LD, Collins FS: A HapMap harvest of insights into the genetics of common disease. J Clin Invest. 2008, 118: 1590-1605. 10.1172/JCI34772.
13.
DeMatteo RP: The GIST of targeted cancer therapy: a tumor (gastrointestinal stromal tumor), a mutated gene (c-kit), and a molecular inhibitor (STI571). Ann Surg Oncol. 2002, 9: 831-839. 10.1007/BF02557518.
14.
Schiffer CA, Hehlmann R, Larson R: Perspectives on the treatment of chronic phase and advanced phase CML and Philadelphia chromosome positive ALL. Leukemia. 2003, 17: 691-699. 10.1038/sj.leu.2402879.
15.
16.
Sequist LV, Martins RG, Spigel D, Grunberg SM, Spira A, Jänne PA, Joshi VA, McCollum D, Evans TL, Muzikansky A, Kuhlmann GL, Han M, Goldberg JS, Settleman J, Iafrate AJ, Engelman JA, Haber DA, Johnson BE, Lynch TJ: First-line gefitinib in patients with advanced non-small-cell lung cancer harboring somatic EGFR mutations. J Clin Oncol. 2008, 26: 2442-2449. 10.1200/JCO.2007.14.8494.
17.
Liggett SB, Mialet-Perez J, Thaneemit-Chen S, Weber SA, Greene SM, Hodne D, Nelson B, Morrison J, Domanski MJ, Wagoner LE, Abraham WT, Anderson JL, Carlquist JF, Krause-Steinrauf HJ, Lazzeroni LC, Port JD, Lavori PW, Bristow MR: A polymorphism within a conserved β1-adrenergic receptor motif alters cardiac function and β-blocker response in human heart failure. Proc Natl Acad Sci USA. 2006, 103: 11288-11293. 10.1073/pnas.0509937103.
18.
Nusbaum R, Isaacs C: Management updates for women with a BRCA1 or BRCA2 mutation. Mol Diagn Ther. 2007, 11: 133-144.
19.
Schwarz UI, Ritchie MD, Bradford Y, Li C, Dudek SM, Frye-Anderson A, Kim RB, Roden DM, Stein CM: Genetic determinants of response to warfarin during initial anticoagulation. N Engl J Med. 2008, 358: 999-1008. 10.1056/NEJMoa0708078.
20.
Au N, Rettie AE: Pharmacogenomics of 4-hydroxycoumarin anticoagulants. Drug Metab Rev. 2008, 40: 355-375. 10.1080/03602530801952187.
21.
Goetz MP, Kamal A, Ames MM: Tamoxifen pharmacogenomics: the role of CYP2D6 as a predictor of drug response. Clin Pharmacol Ther. 2008, 83: 160-166. 10.1038/sj.clpt.6100367.
22.
23.
Court MH: A pharmacogenomics primer. J Clin Pharmacol. 2007, 47: 1087-1103. 10.1177/0091270007303768.
24.
Frank J: Managing hypertension using combination therapy. Am Fam Physician. 2008, 77: 1279-1286.
25.
Materson BJ, Reda DJ, Preston RA, Cushman WC, Massie BM, Freis ED, Kochar MS, Hamburger RJ, Fye C, Lakshman R, et al: Response to a second single antihypertensive agent used as monotherapy for hypertension after failure of the initial drug. Department of Veterans Affairs Cooperative Study Group on Antihypertensive Agents. Arch Intern Med. 1995, 155: 1757-1762. 10.1001/archinte.155.16.1757.
26.
Libby P: Inflammatory mechanisms: the molecular basis of inflammation and disease. Nutr Rev. 2007, 65: S140-146. 10.1111/j.1753-4887.2007.tb00352.x.
PubMed Article Google Scholar
27. 27.
Lucas SM, Rothwell NJ, Gibson RM: The role of inflammation in CNS injury and disease. Br J Pharmacol. 2006, 147 (Suppl 1): S232-S240. 10.1038/sj.bjp.0706400.
PubMed CAS PubMed Central Google Scholar
28. 28.
Hunter PJ, Borg TK: Integration from proteins to organs: the Physiome Project. Nat Rev Mol Cell Biol. 2003, 4: 237-243. 10.1038/nrm1054.
PubMed CAS Article Google Scholar
29. 29.
Hunter PJ, Crampin EJ, Nielsen PM: Bioinformatics, multiscale modeling and the IUPS Physiome Project. Brief Bioinform. 2008, 9: 333-343. 10.1093/bib/bbn024.
PubMed CAS Article Google Scholar
30. 30.
Shaffer AL, Emre NC, Lamy L, Ngo VN, Wright G, Xiao W, Powell J, Dave S, Yu X, Zhao H, Zeng Y, Chen B, Epstein J, Staudt LM: IRF4 addiction in multiple myeloma. Nature. 2008, 454: 226-231. 10.1038/nature07064.
PubMed CAS PubMed Central Article Google Scholar
31. 31.
PubMed CAS Article Google Scholar
32. 32.
Ruiz-Vela A, Aggarwal M, de la Cueva P, Treda C, Herreros B, Martin-Perez D, Dominguez O, Piris MA: Lentiviral (HIV)-based RNA interference screen in human B-cell receptor regulatory networks reveals MCL1-induced oncogenic pathways. Blood. 2008, 111: 1665-1676. 10.1182/blood-2007-09-110601.
PubMed CAS Article Google Scholar
33. 33.
Tarun AS, Peng X, Dumpit RF, Ogata Y, Silva-Rivera H, Camargo N, Daly TM, Bergman LW, Kappe SHI: A combined transcriptome and proteome survey of malaria parasite liver stages. Proc Natl Acad Sci USA. 2008, 105: 305-310. 10.1073/pnas.0710780104.
PubMed CAS PubMed Central Article Google Scholar
34. 34.
Frost RJ, Engelhardt S: A secretion trap screen in yeast identifies protease inhibitor 16 as a novel antihypertrophic protein secreted from the heart. Circulation. 2007, 116: 1768-1775. 10.1161/CIRCULATIONAHA.107.696468.
PubMed CAS Article Google Scholar
35. 35.
PubMed CAS Article Google Scholar
36. 36.
Lu TC, Wang Z, Feng X, Chuang P, Fang W, Chen Y, Neves S, Maayan A, Xiong H, Liu Y, Iyengar R, Klotman PE, He JC: Retinoic acid utilizes CREB and USF1 in a transcriptional feed-forward loop in order to stimulate MKP1 expression in human immunodeficiency virus-infected podocytes. Mol Cell Biol. 2008, 28: 5785-5794. 10.1128/MCB.00245-08.
PubMed CAS PubMed Central Article Google Scholar
37. 37.
Jin G, Zhou X, Wang H, Zhao H, Cui K, Zhang XS, Chen L, Hazen SL, Li K, Wong ST: The knowledge-integrated network biomarkers discovery for major adverse cardiac events. J Proteome Res. 2008, 7: 4013-4021. 10.1021/pr8002886.
PubMed CAS PubMed Central Article Google Scholar
38. 38.
PubMed PubMed Central Article Google Scholar
39. 39.
PubMed CAS Article Google Scholar
40. 40.
PubMed CAS Article Google Scholar
41. 41.
Stites EC, Trampont PC, Ma Z, Ravichandran KS: Network analysis of oncogenic Ras activation in cancer. Science. 2007, 318: 463-467. 10.1126/science.1144642.
PubMed CAS Article Google Scholar
42. 42.
Hwang WC, Zhang A, Ramanathan M: Identification of information flow-modulating drug targets: a novel bridging paradigm for drug discovery. Clin Pharmacol Ther. 2008, doi: 101038/clpt.2008.129.
Google Scholar
43. 43.
PubMed CAS Article Google Scholar
44. 44.
45. 45.
Lauss M, Kriegner A, Vierlinger K, Noehammer C: Characterization of the drugged human genome. Pharmacogenomics. 2007, 8: 1063-1073. 10.2217/14622416.8.8.1063.
PubMed CAS Article Google Scholar
46. 46.
Johnston JB, Navaratnam S, Pitz MW, Maniate JM, Wiechec E, Baust H, Gingerich J, Skliris GP, Murphy LC, Los M: Targeting the EGFR pathway for cancer therapy. Curr Med Chem. 2006, 13: 3483-3492. 10.2174/092986706779026174.
PubMed CAS Article Google Scholar
47. 47.
Hendriks BS, Orr G, Wells A, Wiley HS, Lauffenburger DA: Parsing ERK activation reveals quantitatively equivalent contributions from epidermal growth factor receptor and HER2 in human mammary epithelial cells. J Biol Chem. 2005, 280: 6157-6169. 10.1074/jbc.M410491200.
PubMed CAS Article Google Scholar
48. 48.
Kholodenko BN, Demin OV, Moehren G, Hoek JB: Quantification of short term signaling by the epidermal growth factor receptor. J Biol Chem. 1999, 274: 30169-30181. 10.1074/jbc.274.42.30169.
PubMed CAS Article Google Scholar
49. 49.
Nakakuki T, Yumoto N, Naka T, Shirouzu M, Yokoyama S, Hatakeyama M: Topological analysis of MAPK cascade for kinetic ErbB signaling. PLoS ONE. 2008, 3: e1782-10.1371/journal.pone.0001782.
PubMed PubMed Central Article Google Scholar
50. 50.
Schoeberl B, Eichler-Jonsson C, Gilles ED, Muller G: Computational modeling of the dynamics of the MAP kinase cascade activated by surface and internalized EGF receptors. Nat Biotechnol. 2002, 20: 370-375. 10.1038/nbt0402-370.
PubMed Article Google Scholar
51. 51.
Citri A, Yarden Y: EGF-ERBB signalling: towards the systems level. Nat Rev Mol Cell Biol. 2006, 7: 505-516. 10.1038/nrm1962.
PubMed CAS Article Google Scholar
52. 52.
Wiley HS, Shvartsman SY, Lauffenburger DA: Computational modeling of the EGF-receptor system: a paradigm for systems biology. Trends Cell Biol. 2003, 13: 43-50. 10.1016/S0962-8924(02)00009-0.
PubMed CAS Article Google Scholar
53. 53.
PubMed CAS PubMed Central Article Google Scholar
54. 54.
Sevecka M, MacBeath G: State-based discovery: a multidimensional screen for small-molecule modulators of EGF signaling. Nat Methods. 2006, 3: 825-831. 10.1038/nmeth931.
PubMed CAS PubMed Central Article Google Scholar
55. 55.
PubMed PubMed Central Article Google Scholar
56. 56.
Hatakeyama M: System properties of ErbB receptor signaling for the understanding of cancer progression. Mol Biosyst. 2007, 3: 111-116. 10.1039/b612800a.
PubMed CAS Article Google Scholar
57. 57.
Jones RB, Gordus A, Krall JA, MacBeath G: A quantitative protein interaction network for the ErbB receptors using protein micro-arrays. Nature. 2006, 439: 168-174. 10.1038/nature04177.
PubMed CAS Article Google Scholar
58. 58.
Ruths D, Muller M, Tseng JT, Nakhleh L, Ram PT: The signaling petri net-based simulator: a non-parametric strategy for characterizing the dynamics of cell-specific signaling networks. PLoS Comput Biol. 2008, 4: e1000005-10.1371/journal.pcbi.1000005.
PubMed PubMed Central Article Google Scholar
59. 59.
Dasika MS, Burgard A, Maranas CD: A computational framework for the topological analysis and targeted disruption of signal transduction networks. Biophys J. 2006, 91: 382-398. 10.1529/biophysj.105.069724.
PubMed PubMed Central Article Google Scholar
60. 60.
Kumar N, Hendriks BS, Janes KA, de Graaf D, Lauffenburger DA: Applying computational modeling to drug discovery and development. Drug Discov Today. 2006, 11: 806-811. 10.1016/j.drudis.2006.07.010.
PubMed CAS Article Google Scholar
61. 61.
Fojo T, Menefee M: Mechanisms of multidrug resistance: the potential role of microtubule-stabilizing agents. Ann Oncol. 2007, 18 (Suppl 5): v3-8. 10.1093/annonc/mdm172.
PubMed Article Google Scholar
62. 62.
Lord CJ, Iorns E, Ashworth A: Dissecting resistance to endocrine therapy in breast cancer. Cell Cycle. 2008, 7: 1895-1898.
PubMed CAS Article Google Scholar
63. 63.
Ciardiello F, Tortora G: EGFR antagonists in cancer treatment. N Engl J Med. 2008, 358: 1160-1174. 10.1056/NEJMra0707704.
PubMed CAS Article Google Scholar
64. 64.
Jin H, Yang R, Zheng Z, Romero M, Ross J, Bou-Reslan H, Carano RA, Kasman I, Mai E, Young J, Zha J, Zhang Z, Ross S, Schwall R, Colbern G, Merchant M: MetMAb, the one-armed 5D5 anti-c-Met antibody, inhibits orthotopic pancreatic tumor growth and improves survival. Cancer Res. 2008, 68: 4360-4368. 10.1158/0008-5472.CAN-07-5960.
PubMed CAS Article Google Scholar
65. 65.
Guo A, Villen J, Kornhauser J, Lee KA, Stokes MP, Rikova K, Possemato A, Nardone J, Innocenti G, Wetzel R, Wang Y, MacNeill J, Mitchell J, Gygi SP, Rush J, Polakiewicz RD, Comb MJ: Signaling networks assembled by oncogenic EGFR and c-Met. Proc Natl Acad Sci USA. 2008, 105: 692-697. 10.1073/pnas.0707270105.
PubMed CAS PubMed Central Article Google Scholar
66. 66.
Sudo H, Tsuji AB, Sugyo A, Imai T, Saga T, Harada YN: A loss of function screen identifies nine new radiation susceptibility genes. Biochem Biophys Res Commun. 2007, 364: 695-701. 10.1016/j.bbrc.2007.10.074.
PubMed CAS Article Google Scholar
67. 67.
Amundson SA, Do KT, Vinikoor LC, Lee RA, Koch-Paiz CA, Ahn J, Reimers M, Chen Y, Scudiero DA, Weinstein JN, Trent JM, Bittner ML, Meltzer PS, Fornace AJ: Integrating global gene expression and radiation survival parameters across the 60 cell lines of the National Cancer Institute Anticancer Drug Screen. Cancer Res. 2008, 68: 415-424. 10.1158/0008-5472.CAN-07-2120.
PubMed CAS Article Google Scholar
68. 68.
PubMed CAS Article Google Scholar
69. 69.
PubMed CAS Article Google Scholar
70. 70.
Araujo RP, Petricoin EF, Liotta LA: A mathematical model of combination therapy using the EGFR signaling network. Biosystems. 2005, 80: 57-69. 10.1016/j.biosystems.2004.10.002.
PubMed CAS Article Google Scholar
71. 71.
PubMed CAS Article Google Scholar
72. 72.
Bux J: Molecular nature of antigens implicated in immune neutropenias. Int J Hematol. 2002, 76 (Suppl 1): 399-403.
PubMed Article Google Scholar
73. 73.
Opgen-Rhein C, Dettling M: Clozapine-induced agranulocytosis and its genetic determinants. Pharmacogenomics. 2008, 9: 1101-1111. 10.2217/14622416.9.8.1101.
PubMed CAS Article Google Scholar
74. 74.
Roden DM: Cellular basis of drug-induced torsades de pointes. Br J Pharmacol. 2008, 154: 1502-1507. 10.1038/bjp.2008.238.
PubMed CAS PubMed Central Article Google Scholar
75. 75.
Wagner BK, Kitami T, Gilbert TJ, Peck D, Ramanathan A, Schreiber SL, Golub TR, Mootha VK: Large-scale chemical dissection of mitochondrial function. Nat Biotechnol. 2008, 26: 343-351. 10.1038/nbt1387.
PubMed CAS PubMed Central Article Google Scholar
Now Let's Research Old Lyme
The Intriguing Tale Of Chaco National Park
Let's visit Chaco Canyon Park (northwest New Mexico) from Old Lyme. Based on the use of similar buildings by present-day Puebloan peoples, these rooms were probably communal areas for rites and gatherings, with a fireplace in the middle and room access provided by a ladder extending through a smoke hole in the ceiling. Large kivas, or "great kivas," could accommodate hundreds of people and stood alone when not integrated into a large housing complex, frequently constituting a central place for surrounding villages made of (relatively) small buildings. To sustain large multi-story buildings that held rooms with floor spaces and ceiling heights far greater than those of pre-existing houses, Chacoans erected massive walls using a variant of the "core-and-veneer" method: an inner core of sandstone set with mud mortar formed the base to which thinner facing stones were joined to produce a veneer. These walls were approximately one meter thick at the base, tapering as they ascended to conserve weight, an indication that builders planned the upper stories during the original construction. While these mosaic-style veneers remain evident today, adding to these structures' remarkable beauty, Chacoans applied plaster to many interior and exterior walls after construction was complete to protect the mud mortar from water damage. Starting with the construction of Chetro Ketl in Chaco Canyon, projects of this magnitude required huge quantities of three vital materials: sandstone, water, and lumber. Using stone tools, Chacoans quarried, then shaped and faced, sandstone from the canyon walls, choosing hard, dark-colored tabular stone at the top of the cliffs during initial building and moving, as styles changed during later construction, to softer, larger tan-colored stone lower down the cliffs.
Water, essential for making the mud mortar and plaster of sand, silt, and clay, was scarce and available only during short and typically heavy summer storms. Rainwater was caught in wells and dammed areas formed in the arroyo (an intermittently running stream) that cut the canyon, Chaco Wash, and in ponds to which runoff was diverted by a system of ditches, along with natural sandstone reservoirs. Timber, needed to construct roofs and upper stories, was formerly abundant in the canyon but vanished around the time of the Chacoan fluorescence owing to drought or deforestation. As a consequence, Chacoans traveled 80 kilometers on foot to coniferous woods to the south and west, cutting down trees, peeling them, and drying them for an extended period to minimize weight before carrying them back to the canyon. This was no easy undertaking, given that each tree would have taken a team of workers several days to transport and that more than 200,000 trees were used in the building and renovation of the canyon's roughly dozen major great house and great kiva sites over three centuries.

Chaco Canyon's Designed Landscape. Although Chaco Canyon had a density of construction never before seen in the region, the canyon was just a tiny part of the huge linked territory that created Chacoan civilization. Outside the canyon there were more than 200 settlements with great houses and magnificent kivas in the same distinctive masonry style and design as those found inside the canyon, but on a smaller scale. Although the majority of these sites were found in the San Juan Basin, they covered a stretch of the Colorado Plateau larger than England. Chacoans built an extensive system of roadways to connect these settlements to the canyon and to one another by digging and leveling the underlying ground and, in some instances, adding clay or masonry curbs for support.
These roads often began at large buildings inside the canyon and beyond, then radiated outward in amazingly straight sections. The presence of cacao indicates a migration of ideas as well as material goods from Mesoamerica to Chaco. Cacao was venerated by the Maya, who used it to produce drinks that were frothed by pouring back and forth between jars before being consumed during elite rites. Cacao residue was discovered on potsherds in the canyon, most likely from tall cylindrical jars found at surrounding sites and similar in shape to those used in Maya rites. Several of these expensive trade goods, in addition to cacao, are thought to have had a ceremonial function: they were unearthed in large numbers in great houses' storerooms and burial chambers, among artifacts with ceremonial meanings such as carved wooden staffs, flutes, and animal effigies. One chamber alone at Pueblo Bonito held around 50,000 pieces of turquoise, another 4,000 pieces of jet (a dark-colored sedimentary rock), and 14 macaw bones. Tree-ring data show that great house construction halted around 1130 CE, which marks the start of a 50-year drought in the San Juan Basin. With life at Chaco already precarious during times of normal rainfall, a protracted drought would have stressed resources, precipitating the civilization's downfall and the exodus from the canyon and numerous outlying sites, which would have ended by the middle of the 13th century CE. Evidence of the sealing of great house doors and the burning of great kivas suggests a probable spiritual acknowledgment of this change in circumstances, a notion made more plausible by the central role such movement plays in Puebloan origin legends.
The typical family size in Old Lyme, CT is 2.83 household members, with 79.6% owning their own dwellings. The mean home valuation is $387,318. Renters pay an average of $1,334 monthly. 53.9% of households have two incomes, and the average household income is $96,567. The median individual income is $45,045. 3.7% of residents live at or below the poverty line, and 8.4% are disabled. 9.4% of the town's residents are former members of the armed forces of the United States.
The labor force participation rate in Old Lyme is 62.1%, with an unemployment rate of 5.5%. For those in the labor force, the average commute time is 25.5 minutes. 26.6% of Old Lyme's populace have a graduate degree, and 27.3% have a bachelor's degree. Among those without a college degree, 23.5% have some college, 18.1% have a high school diploma, and only 4.4% have less than a twelfth-grade education. 2.5% are not covered by medical insurance.
Old Lyme, CT is located in New London County, has a population of 7,396, and sits within the greater Hartford-East Hartford, CT metro region. The median age is 52.7, with 6.7% of the community under 10 years of age, 13.4% between 10 and 19 years old, 5.3% of citizens in their 20s, 6.6% in their 30s, 14.3% in their 40s, 18.4% in their 50s, 16.3% in their 60s, 12.7% in their 70s, and 6.4% age 80 or older. 50.1% of residents are men, 49.9% women. 59.7% of citizens are recorded as married, with 12.2% divorced and 21.5% never married. The percentage of residents confirmed as widowed is 6.7%.
Smart testing can help reopen the economy
With the ongoing COVID-19 pandemic, there are two main threats that impact the very core of human existence. One is the public health emergency that has shocked the health care system around the world. The other is the economic crisis that too has exposed the increasing divide between the rich and the poor.
As the world grapples with both of these issues and looks for a way to balance the two sides of the crisis, one thing is quite clear: there is no going back to normal. We will need to rethink some of the key models we have been living with.
In this session of TED Connects, political theorist Danielle Allen describes how we can ethically and democratically address both problems by scaling up "smart testing," which would track positive cases with peer-to-peer software on people's cell phones so we can end the pandemic and get back to work. She is an expert on the intersection of ethics and democracy and highlights how unprecedented measures need to be looked at to deal with the ongoing unprecedented crisis at hand.
What is the basic difference between confounding and interaction? Is it possible for both to occur at the same time in data? Can anyone please explain this plainly and with an example?
A confounding variable is a variable that correlates with both your regressor and the dependent variable. This second variable explains all or part of the dependent variable and is also reflected in the independent variable. In essence, the regressor and the confounder share a common quality, so a model that ignores one of them attributes that quality's effect to the other.
In an ecological system, something like a disease that kills both predator and prey acts on both populations, yet has nothing to do with the effect of predation on the decline of prey or the growth of predators. It confounds the true predator-prey relationship, particularly if its virulence differs between the species.
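The confounding effect described above can be made concrete with a small simulation (a hypothetical sketch using NumPy; the coefficients and variable names are invented for illustration, not taken from the original answer). A lurking variable `z` drives both the regressor `x` and the outcome `y`, so a naive regression of `y` on `x` alone absorbs `z`'s effect into the slope, while adjusting for `z` recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder z drives both the regressor x and the outcome y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)             # x correlates with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# Naive fit of y on x alone: z's effect leaks into the x coefficient.
naive = np.polyfit(x, y, 1)[0]

# Adjusted fit including z as a second regressor recovers the true slope.
X = np.column_stack([x, z, np.ones(n)])
adjusted, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

print(f"naive slope:    {naive:.2f}")        # biased upward, roughly 3.5
print(f"adjusted slope: {adjusted[0]:.2f}")  # close to the true 2.0
```

Note that the bias direction depends on the signs involved: here `z` pushes both `x` and `y` up, so the naive slope is inflated.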
Interaction is more complicated because it means that two separate regressors work together to produce the outcome. They do not overlap; rather, they combine in an effect that is not simply additive. Their joint relationship, as it acts on your dependent variable, can be difficult to pin down.
Consider a situation where two proteins work together to accomplish some chemical process in the human body through a single pathway. Removing either one will break your model, though it may be difficult to quantify their relationship exactly if other components create the environment needed for the reaction or regulate the presence of the resulting product (through reuptake or conversion, for example).
With confounding variables, you can often leave one or the other out and get a more accurate model (although not always). With an interaction, leaving one or the other out will likely make the model worse. And yes, both can occur in the same data set: a variable can confound one relationship while also participating in an interaction with another regressor.
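The non-additive interaction described above can likewise be sketched with a small NumPy simulation (again a hypothetical setup invented for illustration). The outcome depends on the product `x1 * x2`, so a purely additive model misses most of the structure, while adding the product term captures it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two regressors whose joint effect on y is not additive.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 * x1 + 1.0 * x2 + 2.0 * (x1 * x2) + rng.normal(size=n)

# Additive model (no interaction term) vs. model with the product term.
X_add = np.column_stack([x1, x2, np.ones(n)])
X_int = np.column_stack([x1, x2, x1 * x2, np.ones(n)])

def r2(X, y):
    """Fraction of variance explained by an ordinary least-squares fit."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"R^2 additive:         {r2(X_add, y):.2f}")
print(f"R^2 with interaction: {r2(X_int, y):.2f}")
```

Dropping the product term here leaves most of the outcome unexplained, which mirrors the point above: with an interaction, omitting a component makes the model worse rather than better.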
Carl Jung's Persona: Behind the Mask
Last updated: February 25, 2021
Carl Jung; Henri Cartier-Bresson (1959)
"There is an unconscious propriety in the way in which, in all European languages, the word "person" is commonly used to denote a human being. The real meaning of persona is a "mask", such as actors were accustomed to wear on the ancient stage; and it is quite true that no one shows himself as he is, but wears his mask and plays his part. Indeed, the whole of our social arrangements may be likened to a perpetual comedy."
Arthur Schopenhauer
Studies in Pessimism
"One could say, with a little exaggeration, that the persona is that which in reality one is not, but which oneself as well as others think one is." Carl Gustav Jung
The Archetypes and the Collective Unconscious
Jung's Map of the Soul: An Introduction
by Murray Stein
The Persona
"Today the term persona has been somewhat accepted into the vocabulary of psychology and contemporary culture. It is used frequently in popular parlance, in newspapers, and in literary theory. It means the person-as-presented, not the person-as-real. The persona is a psychological and social construct adopted for a specific purpose.
Jung chose it for his psychological theory because it has to do with playing roles in society. He was interested in how people come to play particular roles, adopt a conventional collective attitude, and represent social and cultural stereotypes rather than assuming and living their own uniqueness. Certainly this is a well-known human trait. It is a kind of mimicry. Jung gave it a name and worked it into his theory of the psyche.
Jung begins his definition of the persona by making the point that many psychiatric and psychological studies have shown that the human personality is not simple but complex, that it can be shown to split and to fragment under certain conditions, and that there are many subpersonalities within the normal human psyche. However,
“It is at once evident that such a plurality of personalities can never appear in a normal individual.”
In other words, while we are not all “multiple personalities” in a clinical sense, everyone does manifest “traces of character splitting.” The normal individual is simply a less exaggerated version of what is found in pathology.
“One has only to observe a man rather closely, under varying conditions, to see that a change from one milieu to another brings about a striking alteration of personality ... ‘angel abroad, devil at home’.”
In public such an individual is all smiles, backslapping, gladhanding, extroverted, easygoing, happy-go-lucky, joking; at home, on the other hand, he is sour and grumpy, doesn’t talk to his kids, sulks and hides behind the newspaper, and can be verbally or otherwise abusive. Character is situational. The story of Jekyll and Hyde represents an extreme form of this.
Another novel with the same theme is The Picture of Dorian Gray, where the main character keeps a picture of himself in the attic. As he grows older, the portrait ages, revealing his true nature and character; yet he continues to go out in public without wrinkles — youthful, sophisticated, and cheerful.
The Picture of Dorian Gray, Oscar Wilde, 1931
(Three Sirens Press/WikiCommons)
Jung goes on to discuss the fascinating subject of human sensitivity to milieus, to social environments. People are usually sensitive to other people's expectations. Jung points out that particular milieus such as families, schools, and workplaces require one to assume specific attitudes. By “attitude” Jung means “an a priori orientation to a definite thing, no matter whether this be represented in consciousness or not.” An attitude can be latent and unconscious, but it is constantly operating to orient a person to a situation or a milieu.
Further, an attitude is “a combination of psychic factors or contents which will ... determine action in this or that definite direction.” An attitude is a feature of character, therefore. The longer an attitude persists and the more frequently it is called upon to meet the demands of a milieu, the more habitual it becomes.
As behaviorism would express it, the more frequently a behavior or attitude is reinforced by the environment, the stronger and the more entrenched it becomes. People can be trained to develop specific attitudes to certain milieus and thus to respond in particular ways, reacting to signals or cues as they have been trained to do. Once an attitude has been fully developed, all that is required to activate behavior is the appropriate cue or trigger.
Jung observed this in 1920, about the time that behaviorism was gaining ground in North America, led by John Broadus Watson, whose first major publication appeared in 1913. In contrast to people living and working in rural or natural areas, which are relatively unified environments, many educated urban dwellers move in two totally different milieus: the domestic circle and the public world.
This was more true of men than of women in the Europe of Jung’s day. Men of Jung’s time and culture worked in one environment and lived domestically in another, and they had to respond to two distinctly different milieus, each of which provided a different set of cues.
"These two totally different environments demand two totally different attitudes, which, depending on the degree of the ego’s identification with the attitude of the moment, produce a duplication of character.”
A friend of mine has a midlevel managerial job in a government agency, and so he must set the tone for employees in his group regarding values and behavioral patterns in the public sector. The agency is a milieu, and he finds out from other sources what the correct values are and then informs the workers under him that, for example, they must be sensitive to such issues as nondiscrimination, sexism, and affirmative action. |
Bill of Lading Definition for shipping clothing or textiles presented by Apparel Search
A bill of lading is a document issued by a carrier, e.g. a ship's master, acknowledging that specified goods have been received on board as cargo for conveyance to a named place for delivery to the consignee who is usually identified. A through bill of lading involves the use of at least two different modes of transport from road, rail, air, and sea. The term derives from the noun "bill", a schedule of costs for services supplied or to be supplied, and from the verb "to lade" which means to load a cargo onto a ship or other form of transport.
Short statement of principles
The standard short form bill of lading is a part of the contract of carriage of goods and it serves a number of purposes:
• it is evidence that a valid contract of carriage exists and it incorporates the full terms of the contract between the consignor and the carrier by reference (i.e. the short form simply refers to the main contract as an existing document, whereas the long form of a bill of lading (connaissement intégral) issued by the carrier sets out all the terms of the contract of carriage);
• it is a receipt signed by the carrier confirming whether goods matching the contract description have been received in good condition (a bill will be described as clean if the goods have been received on board in apparent good condition and stowed ready for transport); and
• it is also a document of transfer, but not a negotiable instrument, i.e. it governs all the legal aspects of physical carriage but, unlike a check or other negotiable instrument, it does not affect ownership of the goods actually being carried. This matches everyday experience in that the contract a person might make with a commercial carrier like FedEx is separate from any contract for the sale of the goods to be carried.
Main types of bill
Straight bill of lading
This bill states that the goods are consigned to a specified person and it is not negotiable free from existing equities, i.e. any endorsee acquires no better rights than those held by the endorser. So, for example, if the carrier or another holds a lien over the goods as security for unpaid debts, the endorsee is bound by the lien although, if the endorser wrongfully failed to disclose the charge, the endorsee will have a right to claim damages for failing to transfer an unencumbered title.
Also known as a non-negotiable bill of lading.
Order bill of lading
This bill uses express words to make the bill negotiable, e.g. it states that delivery is to be made to the further order of the consignee using words such as "delivery to A Ltd. or to order or assigns". Consequently, it can be endorsed by A Ltd. or the right to take delivery can be transferred by physical delivery of the bill accompanied by adequate evidence of A Ltd.'s intention to transfer.
Also known as a negotiable bill of lading.
Bearer bill of lading
This bill states that delivery shall be made to whosoever holds the bill. Such a bill may be created explicitly, or it may be an order bill that fails to nominate the consignee, whether in its original form or through an endorsement in blank. A bearer bill can be negotiated by physical delivery of the bill.
Other terminology
A waybill is a non-negotiable receipt issued by the carrier. It is most common in the container trade, either where the cargo is likely to arrive before the formal documents or where the shipper does not insist on separate bills for every item of cargo carried (e.g. because this is one of a series of loads being delivered to the same consignee). Delivery is made to the consignee, who identifies himself. It is customary in transactions where the shipper and consignee are the same person in law, making the rigid production of documents unnecessary.
The U.K.'s Carriage of Goods by Sea Act 1992 creates a further class of document known as a ship's delivery order which contains an undertaking to carry goods by sea but is neither a bill nor a waybill.
A sample of the issues
In most national and international systems, a bill of lading is not a document of title, but does no more than identify that a particular individual has a right to possession at the time when delivery is to be made. Problems arise when goods are found to have been lost or damaged in transit, or delivery is delayed or refused. Because the consignee is not a party to the contract of carriage, the doctrine of privity of contract states that a third party has no right to enforce the agreement. However, whether this is a problem to the consignee depends on who owns the goods and who holds the risks associated with the carriage. This will be answered by examining the terms of all the relevant contracts. If the consignor has reserved title until payment is made, the consignor can sue to recover his or her loss. But if ownership and/or the risk of loss has transferred to the consignee, the right to sue may not be clear in contract, although there could be remedies in tort/delict (the issue of risk will have been most carefully considered to decide who should insure the goods during transit). Hence, a number of international Conventions and domestic laws specifically address when a consignee has the right to sue. The legal solution most often adopted is to apply the principle of subrogation, i.e. to give the consignee the same rights of action held by the consignor. This enables most of the more obvious cases of injustice to be avoided.
In the municipal law of the U.S., the issue and enforcement of bills which may be documents of title is governed by Article 7 of the Uniform Commercial Code. However, since bills of lading are most frequently used in cross-border, overseas or airborne shipping, the laws of the other countries involved in the transaction covered by a particular bill may also apply, including the Hague Rules, the Hague-Visby Rules and the Hamburg Rules at the international level for shipping, and the Warsaw Convention for the Unification of Certain Rules for International Carriage by Air 1929 and the Montreal Convention for the Unification of Certain Rules for International Carriage by Air 1999 for air waybills. It is customary for the parties to the bill to agree both which country's courts shall have jurisdiction to hear any case (a forum selection clause) and which municipal system of law is to be applied in that case (a choice of law clause). The law selected is termed the proper law in private international law, and it gives a form of extraterritorial effect to an otherwise sovereign law, e.g. a Chinese consignor contracts with a Greek carrier for delivery to a consignee based in New York: they agree that any dispute will be referred to the courts in New York (since that is the forum conveniens, the most convenient forum) but that the New York courts will apply Greek law as the lex causae to determine the extent of the carrier's liability.
The above article is licensed under the GNU Free Documentation License. From Wikipedia, the free encyclopedia (12/27/05).
Victory Day: glorifying Russia and Stalin
by Vladimir Rozanskij
On 9 May, Russia marks the defeat of Nazism, one day after the rest of the world. On this day, people remember the sacrifice of so many soldiers, and above all the glory of the country, bulwark for the whole of humanity. Stalin is still more popular than Putin today. Out of 68 heads of state invited for the occasion, only 20 showed up.
Moscow (AsiaNews) – In Russia, today is Victory Day, the day when Nazism was defeated. For Russians, the anniversary, in addition to marking the end of the Second World War, is a day heavy with significance.
Over the past 70 years, especially in the past decade, the event has undergone changes and re-evaluations. Once movingly commemorated to remember the sacrifices made to Mother Russia and humanity, with veterans at the centre of the attention (75,660 are still alive), the event is now centred on the triumph of Stalinist USSR, whose mantle of power reflects upon today's Russia.
In his speech, Putin praised the "eternal value of the military triumph of our people. It was the people who defended and saved our Motherland, became the hope and a tower of strength for the humankind, the main liberator of European nations. [. . .] With every new year we come to a deeper realisation of the moral power of that unparalleled feat”.
He went on to say that the spirit of the heroes of that time is reflected today in the soldiers of contemporary Russia, whilst the idols of fascism and Nazism are rising again in some countries.
Russia has the necessary power to defend everyone, as shown by the powerful weapons paraded on Red Square. For most Russians, as polls indicate, the 1945 Victory is "the greatest event in world history". Patriarch Kirill, on the eve of the celebration, laid a wreath of flowers on the monument to the Unknown Soldier, on the walls of the Kremlin.
In the past, the memory united Russians with their allies in the war against Hitler's folly, but today a breach has grown between them, as evinced by the absence of many guests of honour at the tribune on top of Lenin's mausoleum. Out of 68 invited heads of state, only 20 showed up, mostly the few supporters of Russia’s current isolationist course. Alongside Putin sat Chinese president Xi Jinping as well as Kazakhstan’s "eternal leader" Nursultan Nazarbayev.
The 9 May parade has now become a propaganda tool to promote Russia’s new greatness, with special emphasis on the importance of the state in the history of war and victory. In its extreme form, this translates into the increasingly popular slogan that "Stalin won the war". Thus, in the name of a higher patriotic ideal, the memorial slate of the bloody Georgian dictator is wiped clean.
By the 1960s, Khrushchev’s condemnation of Stalin's crimes at the 20th Congress of the Communist Party of the Soviet Union in 1956 had led ordinary Soviet citizens to view the Victory as the replacement of one totalitarian regime by another, as masterfully expressed in Aleksandr Solzhenitsyn’s 1968 novel In the First Circle. Among Soviets of that time, Stalinism had no right to the glory of Victory, and the May festival was limited to the regime flexing its muscles, based on Cold War blinkers.
What remained closely associated with 1945 was the memory of the victims and the sacrifices of the soldiers, best exemplified by the moving quietness of the commemorations and the rancour for a regime that had won by sending to the slaughter ten times the number of soldiers as the vanquished did.
Since the mid-2000s – in particular since the conflict with Ukraine in 2014 – humane feelings of compassion have been increasingly replaced by revenge, so that the old victory becomes the starting point for envisaging new prospects of greatness. This is helped by the fact that the celebration is held on a different day from the rest of the world, which marks the victory on 8 May.
Russia thus celebrates "its" victory, one that is different from that of the other victors (for Ukraine’s outgoing president, Petro Poroshenko, the Kremlin has privatised the victory). At the same time, ordinary Russians are re-evaluating the figure of Stalin, beloved in today’s Russia even more than Putin himself. The net effect is that what is celebrated today is Russia’s moral greatness rather than the victory of freedom and democracy over Nazi totalitarianism, a greatness the world needs in lieu of Europe’s exhausted and powerless democracy.
This, at least, is the sense of the triumphal song that five great military choirs performed at the close of today’s parade: an anthem to the homeland, the great country. It was echoed in St Petersburg by Mikhail Glinka’s historic 19th-century anthem glorifying the Russian tsar, Russia’s first official anthem, which some would like to see reintroduced today.
Chest Pain?
We can help. At Cardiology Specialists, we can help work out whether your chest pain is from the heart.
There are different causes for chest pain but heart pain may not present in a typical way.
The heart muscle needs a supply of oxygen and energy that has to increase or decrease depending on the heart muscle demand.
The most common symptom of this problem is chest pain that occurs when you exert yourself (angina). Typical angina chest pain is a heavy sensation in your chest associated with shortness of breath. It sometimes radiates to your arms, jaw, or between the shoulder blades, and can make you feel sick, dizzy or sweaty. Some patients say it is not a pain but instead a ‘tightness’ or ‘weight on the chest’ and/or down the inside of the arm. Some people describe heart pain as “burning” but different from their indigestion.
Heart pain can wake a patient at night, or occur after meals.
Other people just “slow down” or are short of breath. Not everybody experiences the same sensation and any one of those symptoms can represent angina.
Heart Attack (Myocardial Infarction)
If an attack of angina lasts for more than 20 minutes then you may be having a heart attack. This is when a severe coronary narrowing or blockage causes part of the heart muscle to be deprived of oxygen at rest which can damage the heart muscle. There are treatments available in hospital that can prevent heart attacks and save lives so if you have chest pain or symptoms of angina that last for more than 20 minutes you should call an ambulance and go to hospital as soon as possible.
Risk factors for Coronary Artery Disease
Resting Electrocardiograph (ECG)
A Resting ECG at Cardiology Specialists is where electrodes are placed on the skin of your arms, legs and across the chest to measure the electrical activity of the heart. It gives information about heart rate, evidence of old heart attacks, signs of thickening of the heart muscle, or heart rhythm problems such as extra (ectopic) beats or atrial fibrillation.
Exercise Tolerance Test (ETT)
An exercise stress test at Cardiology Specialists is used to increase the workload of the heart, resulting in an increased heart rate and increased blood flow to the heart muscle. A resting ECG and blood pressure are recorded, and then the ECG tracing and blood pressure are monitored as the patient walks or jogs on a treadmill to assess whether symptoms and/or changes in the resting ECG tracing occur with exercise. ECG changes with exercise may suggest narrowings or blockages in the coronary arteries supplying the heart muscle. It is also useful for assessing levels of fitness. You can stop the test at any time. The ETT is performed by a trained technician and supervised by a doctor.
Echocardiogram (Echo)
Echo at Cardiology Specialists uses ultrasound (high-frequency sound waves) to measure the size of the cardiac chambers, and Doppler to measure flow and gradients across the heart valves. Echo is also called Cardiac Ultrasound and is similar to a gall bladder or pregnancy ultrasound but is instead focused on the heart. Professor Hamid Ikram pioneered the introduction of echo to Canterbury. At Cardiology Specialists, Echo is performed by a specially trained technician who applies gel to the skin of the chest wall and moves a plastic transducer over the chest wall to obtain images of the heart chambers and valves, which are seen on a video screen. Echo (cardiac ultrasound) is useful for diagnosing weakened heart muscle, old heart attacks, heart valve narrowing or leaking, thickening of the heart muscle, holes between heart chambers, or fluid in the sac around the heart.
Depending on the results of these tests you may go on to have imaging of the coronary arteries.
CT Coronary Angiography
CT coronary angiography arranged at Cardiology Specialists is where you sit inside a CT scanner and contrast is injected through a vein in the arm while you hold your breath. The CT scanner takes pictures of the heart arteries and then special software processes the images in time with your heart beat. The CT images can help rule out coronary disease, or show the extent of early cholesterol deposits (plaque) within the coronary arteries. Sometimes it can reveal more severe coronary narrowings. It is important that your heart rate is regular and slow to get good images. The presence of too much calcium or heart movement artifact can affect the image quality. This test is useful if the pain is atypical, you are at low to intermediate risk, and/or the exercise test is inconclusive. There is a small risk of radiation and allergy to contrast.
Dr Dougal McClean is an affiliated provider for Southern Cross for CT coronary angiogram.
Coronary Angiography
Coronary Angiography arranged at Cardiology Specialists is performed when you have symptoms which are suggestive of angina, and/or the exercise stress test is positive.
Dr McClean will explain the benefits and risks of the procedure and you will give informed consent. The procedure is performed at the Heart Centre, Cardiology Day Unit, St George’s Hospital. You will be given some medication to relax you. Local anaesthetic is used to numb an area of skin in the forearm just above your wrist. A small tube is placed into an artery in the forearm and catheters are advanced through the blood vessels to the heart. Dye is then injected so that the blood vessels around the heart muscle can be seen on X-ray. Pictures of the heart arteries in different projections are then obtained, giving Dr McClean information about the state of your heart and the exact nature of any narrowed blood vessels. In many cases, Dr Dougal McClean can insert a stent at the same time as the coronary angiogram if a severe coronary narrowing is found, see: Stents.
The Waste Hierarchy – 6 Steps to Waste Disposal Success
The waste hierarchy is a 6-step process, often displayed as a pyramid, that provides guidance on best waste disposal practice. Starting from the top of the pyramid, consumers are encouraged to work their way down the list, with the final option, disposal, being the last resort. The hierarchy ranks waste management options according to what is best for the environment: top priority goes to preventing waste in the first place; when waste is created, priority goes to preparing it for re-use, then recycling, then recovery, and last of all disposal.
The easiest and most eco-friendly way of managing your waste is to reduce the amount you produce. This is mainly done by purchasing or obtaining less, resulting in less waste.
Purchasing smartly, buying only what you need and not over-purchasing, is the key to reducing our waste. Buy only necessary items, make a shopping list before heading to the supermarket and avoid the impulse purchases that lead to excess waste. If you do not need something, do not buy it.
Many items, even those dubbed ‘single-use’, have a long lifespan and can be re-used several times before they lose their purpose. Re-using items will help reduce your waste levels: water bottles, plastic bags and food containers are all made from robust materials and have great longevity. Even items that cannot be re-used for their original purpose can often be re-used for something else.
Save Waste and Repair Your Clothing
A lot of items get thrown away when they become damaged. However, many of these items can be easily repaired, so they are fit for purpose once again. Before disposing of your broken items, repair should always be considered, whether this is an electronic item, sporting equipment or clothes – there is usually always a way these items can be fixed.
Stage four is recycling. When it comes to disposing of your waste (if you cannot follow steps 1-3), the first thing you should do is work out whether it can be recycled. The vast majority of items are now easily recyclable, either in your home kerbside recycling collections or at specific recycling points. If you are ever unsure whether an item can be recycled, a simple bit of research will give you the answer. If your item cannot be recycled and you no longer need it, sell it to someone who will make use of it, or give it to a charity shop or donation bank.
Recovery which can often be referred to as “waste for energy”, is a process where waste is transformed into energy through a range of processes including incineration (burning waste to create electricity), anaerobic digestion (microorganisms break down food and other organic waste to produce biogas) and landfill gas recovery (collecting the methane given off by landfill). This helps stop waste being sent to landfill by transforming it into forms of energy.
Different Types Of Recycling
The final step, and the last resort when it comes to waste management, is disposal. This level of the waste hierarchy refers to landfilling or incinerating rubbish without recovering the energy. Disposal should only be considered if you cannot follow any of the above steps; it is the worst option for sustainability and should be avoided at all costs. If disposal is the only option, it must be done responsibly, in the bin and without littering.
How should you prepare for a half marathon?
Running a half marathon not only requires proper training but also a proper diet. In order to run a good time for this event, runners must prepare their bodies so that they are at their peak on the day of the race. Consequently, a range of different strategies is required: changes to training, recovery and diet.
The dietary strategy for a half marathon must fulfil the runners' nutritional needs without imposing restrictions or disrupting the runners' digestion.
Preparing for your half marathon
This phase prepares your body for a half marathon. You must monitor any fluctuation in your weight and correct the balance between the energy you expend, which rises with the increase in training, and your daily intake of calories. A balanced diet makes it easier to manage energy intake and thereby avoid changes in weight.
5 – 6 weeks before the half marathon
Be sure to eat a balanced diet and lead a healthy lifestyle.
- Do not skip a meal and eat at regular times;
- Meat, fish, eggs: 1 to 2 times a day for protein;
- Carbohydrates: at each meal to provide energy;
- Dairy products: 2 to 3 times a day to provide protein and calcium;
- Fruit and vegetables: 5 a day to provide water, vitamins and fibre;
- Fat: preferably eat vegetable-based fat while reducing the overall intake of fat;
- Sugar: reduce your consumption of sugar;
- Naturally, there is no restriction on how much water you drink.
D-7: final week
- Maintain a balanced diet;
- Ramp up the quantity of carbs in order to increase your energy reserves;
- Increase your intake of water in order to top up your water reserves;
- Reduce the quantity of fatty meat.
D-3 and D-2: final days
- Increase the carb intake once again with the help of some maltodextrin: 1 to 2 bottles a day;
- Eat less fruit and raw vegetables because the high fibre content can speed up digestion.
D-1: The day before the half marathon
- Maintain a carb-rich diet
- Take 2 shakes of maltodextrin
- Continue to drink throughout the day.
- Reduce your intake of raw vegetables if you have sensitive bowels
The day of the race
The final meal must be effective, high in carbs and easy to digest (low fibre and fat content). Ideally, it must be taken 3 hours before the start in order to ensure good digestion.
Its main objective is to maximise your energy reserves.
Eat an Ultracake 3 hours before the start in order to take on a considerable amount of energy without disrupting your digestion.
During the race:
Do not get dehydrated
Avoid hypoglycaemia and do not use up all your energy reserves
Compensate for losses in minerals and vitamins
Avoid digestive problems.
How does it work?
Most people who run half marathons start without a water bottle, although it is recommended to drink regularly from the start of the race onwards! In any case, do not wait until you feel thirsty before drinking. This is where the supply points play an important role, as they provide fresh water. Don't miss them!
Several solutions for avoiding hypoglycaemia:
- Take sports drinks (although you need to carry a bottle with you during the race),
- Eat Energy gels: easy to consume and practical to carry. They must be consumed with water. Ideally, they should be taken before each supply point and for the final ¼ of an hour of racing.
- Eat Ultra bars: Chewing can be difficult when racing, so take some products that are easy to chew.
After the race:
- Replace the water you have lost;
- Reconstitute energy reserves.
- Replace lost minerals and vitamins
- Repair damaged muscle fibre.
How does it work?
As soon as you reach the finish line, you must drink water to compensate for the water you have lost, take on carbs to restore your energy reserves, take sodium to compensate for the sodium lost by sweating and take protein to repair your muscles. These elements can be provided in just the right proportions in the After drink.
To prevent digestive problems, you need to test the products that you will use during the race when you train. Indeed, the choice of the type of food you eat during a marathon is very personal.
Half marathon pack:
- Maltodextrin
- Ultra cake
- Energy gel
- After drink
Marie Fauchille |
Turkey: Sea of Marmara in dire straits
For weeks now, the Sea of Marmara has been covered by layers of algal slime called "sea snot" or "sea saliva". Gigantic white swathes of the slime can be seen in satellite images of the area. Conservationists, fishermen and politicians are alarmed. Turkish media complain about the abuses that are causing the problem and the government's seeming lack of concern.
Open/close all quotes
Milliyet (TR) /
Negligent poisoning
The entire ecosystem is threatened, Milliyet warns:
“Octopuses, algae, and seagrass beds are trapped under this slimy layer. Plants can no longer perform photosynthesis. The oxygen content of the water is almost depleted and has dropped to zero in some areas. The fish are suffocating. ... The reasons why the Sea of Marmara is struggling with death are as follows: 1. Climate change and rising temperatures. 2. Man-made pollution. Fifty percent of Turkey's industry is located in the Marmara region. ... Most of these companies don't treat their sewage. Dirt and poison have been discharged into the Sea of Marmara for a decade through the shameful practice known as 'deep-sea wastewater discharge'. ... If the state delays action, this 'saliva' will very soon start flowing southwards into the Aegean and the Mediterranean.”
Habertürk (TR) /
Rethink Istanbul Canal project
President Erdoğan wants to go ahead with the plans to start building a canal between the Sea of Marmara and the Black Sea at the end of June despite the arguments against the project, Habertürk laments:
“Many people who see the 'sea saliva' now fear that the Istanbul Canal could cause a much bigger natural disaster in the future. Apart from the complainants, most of the scientists who signed the environmental impact assessment for the project have avoided defending the canal in public. If despite the atmosphere the government goes ahead with the project without making scientific statements about its impact on nature, even the AKP party base may turn against the Istanbul Canal. ... The Sea of Marmara, which we will bequeath to our children, should not fall victim to political confrontation.”
How does massage help people with MS?
MS and Muscle Challenges
MS is an autoimmune disease in which the body attacks the covering of the nerves (called the myelin sheath) as well as the nerve fibres in the brain, optic nerves and spinal cord. The damage left by these attacks makes it more difficult for nerves to communicate, which can cause many problems and symptoms such as rigid and spasming muscles, pain, muscle weakness, mood disorders and cognitive decline. Movements become hard to perform, and as a result the patient may start having trouble walking or using their arms and hands.
How Massage Helps
Therapeutic massage for MS has a physical effect beyond relaxation. Significant benefits for MS patients appear to be reduced spasticity and pain, improved circulation and increased muscle and joint flexibility.
Stress reduction also winds up being an important benefit of therapeutic massage for MS. A small 2016 study suggested that massage therapy was associated with an improved quality of life, along with decreased fatigue and pain.
Droplets: Unconventional Protocell Model with Life-Like Dynamics and Room to Grow
Centre for Integrative Biology (CIBIO), University of Trento, Via Sommarive, 9 I-38123 Povo (TN), Italy
Life 2014, 4(4), 1038-1049;
Received: 31 October 2014 / Revised: 8 December 2014 / Accepted: 11 December 2014 / Published: 17 December 2014
(This article belongs to the Special Issue Protocells - Designs for Life)
Over the past few decades, several protocell models have been developed that mimic certain essential characteristics of living cells. These protocells tend to be highly reductionist simplifications of living cells with prominent bilayer membrane boundaries, encapsulated metabolisms and/or encapsulated biologically-derived polymers as potential sources of information coding. In parallel with this conventional work, a novel protocell model based on droplets is also being developed. Such water-in-oil and oil-in-water droplet systems can possess chemical and biochemical transformations and biomolecule production, self-movement, self-division, individuality, group dynamics, and perhaps the fundamentals of intelligent systems and evolution. Given the diverse functionality possible with droplets as mimics of living cells, this system has the potential to be the first true embodiment of artificial life that is an orthologous departure from the one familiar type of biological life. This paper will synthesize the recent activity to develop droplets as protocell models.
Keywords: artificial cells; droplets; convection; emergence of life; fluid dynamics; minimal cells; origin of life; protocells artificial cells; droplets; convection; emergence of life; fluid dynamics; minimal cells; origin of life; protocells
1. Droplets
The droplet consists simply of a liquid compartment that is highly insoluble in another liquid. Typically when the system consists of immiscible fluids, the interfacial tension is high (e.g., nitrobenzene in water at 25 °C is approximately 27 mN/m). If the two phases are at all miscible, the droplet will slowly decrease in volume and dissolve uniformly and predictably following the Epstein-Plesset model [1]. Simply by placing one immiscible liquid into another, a droplet will form and not dissolve. However when surfactants are added, the system can become very dynamic. Surfactants tend to self-assemble at the liquid-liquid interface and mitigate the interfacial tension between the liquids dynamically. When the system is far from equilibrium due to the initial concentration of chemicals in one phase or due to chemical transformation, the distribution of surfactants and therefore the interfacial tension can be non-uniform. Small convective flows develop and can grow [2,3,4]. Under these conditions flow structures form, primarily due to Marangoni-type instabilities [5,6]. These flow fields (both within the droplet and in the liquid proximal to the droplet) can affect the shape, the state of the droplet and its dynamic properties.
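The Epstein-Plesset dissolution behaviour mentioned above can be sketched numerically. Below is a minimal forward-Euler integration in Python; the equation is the standard Epstein-Plesset form for a dissolving drop, but all parameter values in the usage note are illustrative assumptions, not measurements from this paper:

```python
import math

def epstein_plesset_radius(r0, diff, delta_c, rho, t_end, dt=1e-3):
    """Forward-Euler integration of the Epstein-Plesset equation
        dR/dt = -(D * dc / rho) * (1/R + 1/sqrt(pi * D * t)),
    which describes the slow, predictable shrinkage of a slightly
    miscible droplet. Returns a list of (time, radius) samples."""
    radius, t = r0, dt  # start one step in to avoid the t = 0 singularity
    trajectory = [(0.0, r0)]
    while t <= t_end and radius > 0.0:
        drdt = -(diff * delta_c / rho) * (
            1.0 / radius + 1.0 / math.sqrt(math.pi * diff * t))
        radius = max(radius + drdt * dt, 0.0)  # radius cannot go negative
        trajectory.append((t, radius))
        t += dt
    return trajectory
```

With, say, r0 = 10 µm, D = 1e-9 m²/s, Δc = 0.1 kg/m³ and ρ = 1000 kg/m³, the radius decays smoothly, matching the "slowly decrease in volume and dissolve uniformly and predictably" behaviour described above.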
Different types of hydrophobic chemicals can be solvated into oil droplets and hydrophilic chemicals into water droplets. Therefore droplets can act as containers. Chemicals can either become stably entrapped in a droplet or diffuse out, depending on their properties. Mass transfer in oil-water-surfactant systems is also possible (e.g., [7]). Oil-in-water and water-in-oil droplet systems, such as presented here, are intended as artificial life models that are able to possess some of the properties of living biological systems. These are examples of bottom-up synthetic biology where the properties of living matter are constructed from “the simpler to the more complex, beginning with the reproduction of the more elementary vital phenomena” [8]. Emulsion droplet systems are unlikely to be the direct progenitors of the first biological cells due to their structure and content alone. However droplet systems provide an experimental framework for synthetic biology that is different from other protocell model systems such as vesicles [9] with distinct advantages. Exploiting the fluid dynamical properties of droplets using different chemistries, this general droplet platform can be custom purposed as described in this review towards creating models for artificial life, targeted applications and exploration of origin of life scenarios not easily done with other supramolecular platforms.
2. Individuality
One primary concern in creating artificial systems as protocells is whether the created units are completely uniform or variable, as in living systems. The encapsulation of an ensemble of molecular types and functions can lead to individual compartments that are not clonal but compositionally and functionally individual. Due to the small internal volume of typical vesicle-based or emulsion-based containers, a high degree of noise and fluctuation is expected, with certain up-concentration mechanisms at play [10]. Stochasticity has been demonstrated in both vesicle systems and droplet emulsion systems where gene expression machinery has been encapsulated [11,12]. Therefore variation, right down to the protocell level [13], can be explained and is expected in such artificial systems. Such stochasticity and variation in individual protocell composition and function can form the basis for selection and evolution in such systems. For example, nitrobenzene droplets seeded with oleic acid then placed in an aqueous environment of oleate micelles will produce self-movement [14]. However nitrobenzene droplets in the same environment but seeded with the cationic amphiphile CTAB (cetyl trimethylammonium bromide) will produce self-division [15]. In addition, Toyota’s group showed that when the length of the linker between two cationic surfactants is varied in a gemini surfactant droplet system, the droplets will either show self-motion or fusion [16]. Therefore, the individual behavior and characteristics of a droplet can be linked to individual droplet content.
3. Self-Division and Replication Cycle
Fluid dynamics dominate the behavior of chemical droplets. Browne et al. [17] showed how organic droplets containing a single surfactant could divide while dissolving as they approach equilibrium, with the extent of division controlled primarily by pH. The macroscale droplets, consisting of dichloromethane mixed with the monocarboxylic acid 2-hexyldecanoic acid, were produced in water at pH 12, and they divided continuously until they reached the nanoscale. In an alternative system, Caschera et al. [15] showed how a droplet system composed of two interacting catanionic surfactants can trigger fluid dynamics that pull a droplet apart into daughter droplets. The system starts far from equilibrium with one surfactant in one phase and the oppositely charged surfactant solvated in the other phase. Within seconds after a droplet is formed in the aqueous solution, the droplet becomes unstable and the internal flow dynamics promote division. Because of the mixture of catanionic surfactants there is a transient temporal window of very low interfacial tension during which flow forces can perturb the droplet. However, as the distribution of the surfactants approaches equilibrium, the interfacial tension of the system rises and the droplets can no longer self-divide or dissolve, see Figure 1. In this general way the singular temporal division event is comparable to cellular division. By coupling droplet self-division with droplet fusion, we were able to demonstrate a recursive droplet fusion-fission cycle [15]. Recently Derényi and Lagzi [18] showed that the timing of droplet self-division can be controlled through a chemical pH clock reaction (see video [19]).
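The transient low-tension window described above can be caricatured with a toy model. Everything here is an illustrative assumption, not data from the paper: the exponential relaxation of interfacial tension toward equilibrium, the capillary-number criterion for division, and all parameter values. The sketch simply shows how a division window opens and then closes as the surfactants equilibrate:

```python
import math

def division_window(gamma_min, gamma_eq, tau, mu, u, ca_crit,
                    t_end, dt=0.01):
    """Toy model of the transient division window. Interfacial tension
    relaxes from a transient minimum toward equilibrium as the
    catanionic surfactants equilibrate:
        gamma(t) = gamma_eq - (gamma_eq - gamma_min) * exp(-t / tau).
    Division is assumed possible while the capillary number
        Ca = mu * u / gamma(t)
    exceeds ca_crit (flow stresses overcome the restoring tension).
    Returns (t_open, t_close) of the window, or None if it never opens."""
    times = []
    t = 0.0
    while t <= t_end:
        gamma = gamma_eq - (gamma_eq - gamma_min) * math.exp(-t / tau)
        if mu * u / gamma > ca_crit:
            times.append(t)
        t += dt
    return (times[0], times[-1]) if times else None
```

With the assumed values γ_min = 0.1 mN/m, γ_eq = 10 mN/m, τ = 5 s, μ = 1 mPa·s, u = 1 mm/s and Ca_crit = 1e-3, the window opens immediately and closes after roughly half a second, after which the rising tension rounds the daughter droplets up, as in Figure 1.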
Figure 1. Self-dividing droplet. A representational temporal progression of the droplet transformation from a single droplet to several daughter droplets. As the system approaches equilibrium, the initially spherical droplet reaches very low interfacial tension, where it can distort and divide. At equilibrium the interfacial tension rises and the daughter droplets round up. For the original droplet fission video, see [15].
4. Self-Propelled Oil Droplet
Droplets formed in the presence of surfactants can also form organized fluid dynamics and self-motion [20,21,22]. For a recent review of self-moving droplets, as well as vesicles and other particles see [23]. There are three main configurations for self-moving droplets:
no reactive chemistry
onboard fuel
onboard catalyst
The link to the environment is established differently for each configuration, and this affects the degree of autonomy and the lifetime of movement.
For the first type, a droplet is placed into an environment that is externally patterned to produce a tension gradient on the droplet. The gradient can be in the chemical composition of the solution [24,25] or on the underlying surface [26,27,28]. The droplet will continue to move as long as the gradient is in place. No chemical reaction is necessary. Rather, the droplet will move as long as the distribution of chemicals acts as a gradient that affects the interfacial tension of the droplet, or the patterned surface sustains the wettability difference between the droplet and the surface. The movement of the droplet in this case is closest in kind to a ball rolling down a hill. The droplet will not move in an environment where no external gradient is imposed. When a gradient is present, the droplet will move to the lowest energy state of the system and then stop unless another gradient is presented.
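This gradient-following behavior can be captured in a minimal toy model: the droplet simply drifts down an interfacial-tension profile and settles at the minimum, where it stops. The tension profile `gamma(x)` and the mobility value below are hypothetical placeholders chosen purely for illustration, not measured quantities.

```python
# Toy 1D model: the interfacial tension gamma(x) is set by an externally
# imposed chemical profile. The droplet drifts down the tension gradient
# and comes to rest at the minimum, like a ball rolling downhill.

def gamma(x):
    return (x - 3.0) ** 2 + 1.0   # hypothetical profile with a minimum at x = 3

def drift(x, mobility=0.05, dx=1e-4, steps=5000):
    for _ in range(steps):
        grad = (gamma(x + dx) - gamma(x - dx)) / (2 * dx)
        x -= mobility * grad      # velocity proportional to the local gradient
    return x

print(round(drift(0.0), 2))  # settles at the tension minimum near x = 3
```

Once the droplet reaches the minimum, the gradient vanishes and so does the motion, matching the observation that such droplets stop unless another gradient is presented.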
In the second type, the droplet movement itself creates the local chemical gradient by using onboard fuel [14]. The asymmetry of the droplet is sustained by the chemical reaction and allows for movement for minutes to hours. The droplet follows this self-created chemical gradient and will continue to move until the fuel is exhausted or the waste products build up to a degree that appreciably slows the reaction rate. We showed how a droplet of nitrobenzene that contains oleic anhydride as a source of chemical potential and surfactant can become a self-motile convective droplet, see Figure 2 [14]. Since the droplet system contains the monocarboxylic acid surfactant oleic acid, the system dynamics are sensitive to pH. The droplet therefore follows a pH gradient (internally generated or externally imposed) and is capable of chemotaxis, previously found only in living systems. It is notable that the droplet, when moving in such a system, has an emergent mechanism of self-movement that allows it to move directionally away from the waste that it produces. The fluid flow dynamics (in this case convective flow) become the feedback loop for sustained motion. The overall effect of sustaining the self through movement may also be an important motivator in living systems. We have been primarily investigating how reactive droplets with onboard fuel move through liquid systems. However, a reactive droplet can also produce self-movement through chemical modification of a surface. Bain [29], dos Santos and Ondarçuhu [30], and Yoshinaga et al. [20] have demonstrated fast droplet motion due to a gradient in wettability created dynamically by a chemical reaction between the droplet and the surface.
Figure 2. Self-moving droplets consisting of oleic anhydride and nitrobenzene. A micrograph shows the visual pattern in a spherical droplet that is moving by convective flow (left). The same micrograph overlaid with the direction of droplet movement as well as the convective flow structures (right). Size bar: 100 μm.
In the third type of self-moving droplet, an encapsulated catalyst can fuel the movement of the droplet with the fuel source supplied in the external environment [31]. The catalyst in the droplet processes the external fuel source to power its movement. If the fuel source is continually supplied (with the removal of waste), the droplet could in principle move indefinitely, as the catalyst localized within the droplet is not consumed.
The potential utility of such self-propelled droplets has been demonstrated in some interesting contexts. The self-moving droplet system with oleic anhydride has been developed into an alginate-based capsule robot where the chemical droplet acts as the motor [32]. The chemotactic property of such droplets has been exploited by Lagzi et al. [24], where droplets consisting of dichloromethane and 2-hexyldecanoic acid were able to follow pH gradients and effectively solve a 2D maze. Recently we developed a decanol droplet system that can follow salt gradients with the ability to reverse the direction of movement repeatedly. We used this system to navigate a topologically complex maze, to carry and release a chemically reactive cargo, to select the stronger concentration gradient from two options, and to initiate chemotaxis by an external temperature stimulus [25]. The maze-solving droplets represent self-moving droplets of type 1 (with no reactive chemistry). When presented with gradients of varying magnitude, a droplet of this type will follow the steepest gradient reproducibly to arrive at the lowest energy state of the system. We note that some living organisms follow gradients differently. For example, the slime mold Physarum polycephalum can also solve a maze, but it does so by initially exploring the many different paths. Once the correct path through the maze is established, this one path is then reinforced preferentially over the other options [33]. The ability of self-propelled droplets to act as sensors for environmental cues, to couple the sensorial information to directional movement, and to transport reactive chemical cargo may have applications in medicine, bioremediation, and soft-body robotics (e.g., [34]).
5. Group Dynamics and Higher Order
Through the chemotactic assays, we have shown that a droplet can sense cues from the chemical environment in the form of chemical gradients. But a droplet can also sense other droplets and change its behavior accordingly [31]. We analyzed the movement of single droplets and of two droplets as a function of droplet size (volume). We found that in the first 20 min of movement, the smaller droplets tended to stay close to each other while moving in the dish [35]. This kind of communication between droplets served to coordinate group dynamics and was highly dependent on both the scale of the system (with larger droplets not coordinating movement) and droplet shape. In addition, we note that the variation in a single droplet's behavior, analyzed as the stop-go interval, displays a power-law distribution, indicating some bias in behavior due to memory effects in the system [36]. This analysis is useful for determining whether these simple droplets, either individually or in populations, are capable of higher-order processes such as decision-making. The individuality of a droplet may be found in its chemical composition [37] coupled with internal flow structures. The memory effect we discovered was not necessarily due to the internal structure of the droplet but may be patterned in the immediate external chemical environment. Due to the close coupling between sensing of the environment and movement of the droplet, we have argued that the simple droplet system can form the basis for cognitive systems and smart materials [38].
If we accept that information can be structure, expressed by structures interacting through laws [39], we can show how information can be introduced to the droplet world in a simple programmable way. We have developed a custom ssDNA anchoring system that can be integrated on the surface of droplets. The anchoring system consists of a phospholipid, DSPE (1,2-distearoyl-sn-glycero-3-phosphoethanolamine), covalently linked to polyethylene glycol with a terminal biotin. This lipid can be complexed with a streptavidin protein and additional biotinylated ssDNA oligonucleotides. Otherwise generic oil droplets (consisting of diethyl phthalate) are decorated with different ssDNA information acting as barcodes, which can interact specifically with complementary ssDNA information on other oil droplets. Several distinct sets of complementary DNAs can be used to distinguish different droplet populations. By mixing together complementary droplet populations, the droplets can assemble together reversibly, forming large interlinked emulsions [40]. The nanoscale DNA information follows molecular pairing laws that can organize microscale droplets into interacting macroscale structures, programmably and reversibly. Therefore the change of state in droplet structure and organization can be programmed through molecular polymeric information.
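The pairing logic behind these DNA barcodes can be illustrated with a short sketch. The sequences below are hypothetical placeholders, not the oligonucleotides used in the published work; the only point is that two droplet populations link when, and only when, their barcodes are Watson-Crick complementary.

```python
# Illustrative sketch of barcode-directed droplet pairing (hypothetical sequences).
# A droplet carrying barcode A can link to a droplet carrying the
# reverse complement of A, and to no other population.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Watson-Crick reverse complement of an ssDNA barcode."""
    return seq.translate(COMP)[::-1]

def can_link(barcode_a, barcode_b):
    """Two droplets assemble only if their barcodes are complementary."""
    return barcode_b == reverse_complement(barcode_a)

population_a = "ATTGCCGA"                         # barcode on population A
population_b = reverse_complement(population_a)   # complementary population B
population_c = "ATTGCCGA"                         # same barcode: no pairing

print(can_link(population_a, population_b))  # True
print(can_link(population_a, population_c))  # False
```

Using several mutually non-complementary barcode pairs then lets distinct droplet populations assemble independently in the same emulsion, which is the programmable aspect described above.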
6. Prebiotic Droplets
A typical chemical droplet experiment involves the careful addition of a few choice and pure chemicals. However, it has been argued that pure chemical substances are not realistic components of origin-of-life protocells. In fact, all prebiotic synthesis scenarios produce a highly complex mixture of organic molecules when starting from simple precursors. It is from this huge molecular complexity that life had to organize into protocells and cells, see Figure 3. Mechanisms by which life can emerge from this prebiotic soup are of great interest. To this end we tested a droplet-based protocell fueled not by a pure propellant such as oleic anhydride [14] but by a prebiotic soup based on hydrogen cyanide self-polymerization and hydrolysis. Under these more realistic prebiotic conditions we were able to produce self-motile oil droplets that were also capable of chemotaxis [41]. It is noted that in these experiments the full molecular complexity of a prebiotic soup was added to the droplet system without fractionation or purification. There were likely molecules present in the soup that would both promote and inhibit droplet movement. Nevertheless, the system was able to organize itself into self-propelled droplets that emerged with an internal mechanism to avoid equilibrium.
Figure 3. A conceptual model for the origin of life that starts with a high degree of molecular complexity.
7. DIY Protocells
Most of the experiments presented here were performed in chemical laboratories with appropriate safety equipment and with chemicals of high purity. However, a simple protocell system can also be made DIY from easily available components. One of the first artificial cell models was developed by the zoologist Otto Bütschli and published in 1892 [42]. In this study Bütschli was interested in the shape and movement characteristics of protists such as the amoeba. He created an artificial amoeba with pseudopodia that mimicked, at least superficially, a living protist. He observed that his artificial system captured some of the lively dynamic properties of the natural living system [43].
Such protocells, as created by Bütschli, can be made simply by adding potash to olive oil, an old practice followed to make soap. Potash, from which the element potassium takes its name [44], is made by soaking the ashes of burned trees and plants in water. There are many ways to make potash, but I have had success using the ashes from burned hardwoods. I added enough pure spring water to completely cover the ashes in a one-liter plastic container and let the mixture soak for one week with daily stirring. The ashes settle to the bottom and a clear, slightly yellow liquid is left at the top. The top clear layer can be extracted and passed through a cloth to remove debris. One can use a fresh egg to check whether the density of the resulting potash liquid is high enough: an egg that does not sink into the liquid but floats at the top indicates that the leaching of salts from the ashes was successful. When making potash solution, one should be careful, as the resulting solution (also called lye) is rich in potassium hydroxide, is caustic, and can cause chemical burns.
When a few droplets of the potash solution are added to fresh olive oil, the aqueous droplets react through a saponification reaction: they start to migrate through the oil and eventually break into smaller droplets. One can examine the behavior of these chemical droplets by eye and also with a microscope. An example of the microscopic group dynamics of such droplets is shown in Figure 4. Here the aqueous droplets group together while leaving long visible trails of fatty-acid soap precipitate. When a small group of droplets invades and joins the larger group, the droplets quickly disperse as if reaching a threshold. Such dynamics are easily observable with this DIY system. If the reaction is slow or terminates quickly, the oil phase may be too old; for the best lively movement, fresh olive oil must be used. Other oils with high triglyceride content, such as canola, may also be used. If the system is still reacting slowly, the potash solution may be too dilute; it can be concentrated by carefully evaporating the excess water. To color the aqueous droplets for macroscopic visualization, the juice from blueberries (fresh or frozen) can be added to the potash solution.
Figure 4. Bütschli aqueous droplets reacting in olive oil. The arrow in the first panel shows the initial group of droplets. The arrow in the next panel shows the invading droplets. Time course: 30, 60, 100, 180 s. Visualized under low magnification with an inverted light microscope; for more technical details see [43]. Size bar: 1 mm. For the complete video, see [45].
8. Conclusions
Over the past 130 years or so, several protocell models have been created and developed. Starting with Bütschli's amoeboid-like droplets in the 1880s, coacervates, proteinoid microspheres, and vesicles have all been established as simplified experimental models of natural biological cells [9]. Here we focus on chemical droplets as protocell models. When far from equilibrium, these droplets are dynamic and display life-like behaviors. So far we have set the initial conditions of each experiment far from equilibrium and then let the system remain closed. In order to further exploit the non-equilibrium dynamics of droplets, we have developed a robotic workstation that not only performs droplet experiments but also monitors them and can modify the course of an experiment in real time. We have already demonstrated that such a robotic platform can maintain the non-equilibrium state of droplets and can effect state change in selected droplets in a population, essentially creating a controlled but more thermodynamically open system [46].
Because chemical droplets are easy to construct, several instances of self-moving, self-dividing, and chemotactic droplets have already been realized. In other self-propelled systems, such as the classic camphor in water [47,48], tethered nanospheres [49], molecular motors embedded on surfaces [50], peroxide-activated gold-platinum nanorods [51], Janus particles [52], and catalytic microtubular jet engines [53], it is difficult to imagine how to develop the system into a customizable, generalizable dynamic technical platform. Droplets, on the other hand, are robust, economical, and easily customizable. Directed responses to light [54], heat [55,56,57], pH [14,18,24,58], redox-active surfactants and electrochemistry [59], magnetic fields [60], and salt stimuli [25] have been demonstrated.
As protocell models, it is argued that oil droplets present simple self-organized systems that realize emergent movement to avoid equilibrium. This is analogous to the motivation for movement in living biological systems. Although biological systems use much more sophisticated mechanisms for movement, such as rotating flagella or the actin-myosin cytoskeleton, the organization of matter into sensing, self-moving matter might have been an easily accessible solution for simple prebiotic systems to sustain themselves and avoid equilibrium. With regard to synthesizing life from protocells simply by self-assembly of components, no one has successfully demonstrated a living protocell based on the principle of self-assembly despite more than 100 years of experimentation. This is one of the main motivations for producing a different type of protocell based on self-organizing self-propulsion [61]. From the first analysis of self-movement and chemotaxis, we were able to establish the link between environmental sensing and the direction and mode of motion. This intimate link between environment and protocell behavior may form the basis for higher-order functionality in such simple nonliving protocell systems [38]. The idea of dynamic droplets as protocells promotes more open thinking about how non-living matter might self-organize into evolving matter that adapts over time to a changing environment. Perhaps such simple dynamics in simple physical systems will in the future constitute the first true embodiment of artificial life, an orthogonal departure from the one familiar type of biological life based on DNA and protein enzymes. This lifts the constraint of searching only for liquid-water-based life in a universe where such conditions are often not feasible [62].
I would like to offer my thanks to the many collaborators and researchers who have been inspired to make a simple droplet into something much more fascinating. This work was supported in part by the Center for Fundamental Living Technology (FLinT), the Danish National Science Foundation, the European Commission FP7 Future and Emerging Technologies Proactive: 249032 (MATCHIT) and 611640 (EVOBLISS).
Conflicts of Interest
The author declares no conflict of interest.
1. Duncan, P.B.; David, N. Microdroplet dissolution into a second-phase solvent using a micropipet technique: Test of the Epstein-Plesset model for an aniline-water system. Langmuir 2006, 22, 4190–4197.
2. Suzuki, H.; Tatsuyuki, K. Convective instability and electric potential oscillation in a water-oil-water system. Biophys. Chem. 1992, 45, 153–159.
3. Ikezoe, Y.; Ishizaki, S.; Yui, H.; Fujinami, M.; Sawada, T. Direct observation of chemical oscillation at a water/nitrobenzene interface with a sodium-alkyl-sulfate system. Anal. Sci. 2004, 20, 435–440.
4. Ikezoe, Y.; Ishizaki, S.; Yui, H.; Fujinami, M.; Sawada, T. Chemical oscillation with periodic adsorption and desorption of surfactant ions at a water/nitrobenzene interface. Anal. Sci. 2004, 20, 1509–1514.
5. Thomson, J. On certain curious motions observable at the surfaces of wine and other alcoholic liquors. Philos. Mag. Ser. 1855, 10, 330–333.
6. Agble, D.; Mendes-Tatsis, M.A. The effect of surfactants on interfacial mass transfer in binary liquid-liquid systems. Int. J. Heat Mass Transfer 2000, 43, 1025–1034.
7. Dos Santos, D.J.; Gomes, J.A. Molecular Dynamics Study of the Calcium Ion Transfer across the Water/Nitrobenzene Interface. ChemPhysChem 2002, 3, 946–951.
8. Leduc, S. The Mechanism of Life; William Heinemann: London, UK, 1914.
9. Rasmussen, S.; Mark, B.; Liaohai, H.; David, D.; David, C.K.; Norman, H.P.; Peter, F.S. Protocells: Bridging Nonliving and Living Matter; MIT Press: Cambridge, MA, USA, 2008.
10. Pereira de Souza, T.; Steiniger, F.; Stano, P.; Fahr, A.; Luisi, P.L. Spontaneous crowding of ribosomes and proteins inside vesicles: A possible mechanism for the origin of cell metabolism. ChemBioChem 2011, 12, 2325–2330.
11. Nishimura, K.; Tsuru, S.; Suzuki, H.; Yomo, T. Stochasticity in gene expression in a cell-sized compartment. ACS Synth. Biol. 2014.
12. Weitz, M.; Kim, J.; Kapsner, K.; Winfree, E.; Franco, E.; Simmel, F.C. Diversity in the dynamical behaviour of a compartmentalized programmable biochemical oscillator. Nat. Chem. 2014, 6, 295–302.
13. Hrdlička, A. Normal Variation; American Philosophical Society: Philadelphia, PA, USA, 1934; Volume 74, pp. 226–253.
14. Hanczyc, M.M.; Toyota, T.; Ikegami, T.; Packard, N.; Sugawara, T. Fatty acid chemistry at the oil-water interface: Self-propelled oil droplets. J. Am. Chem. Soc. 2007, 129, 9386–9391.
15. Caschera, F.; Steen, R.; Martin, M.H. An Oil Droplet Division-Fusion Cycle. ChemPlusChem 2013, 78, 52–54.
16. Banno, T.; Shingo, M.; Rie, K.; Toyota, T. Mode Changes Associated with Oil Droplet Movement in Solutions of Gemini Cationic Surfactants. Langmuir 2013, 29, 7689–7696.
17. Browne, K.P.; Walker, D.A.; Bishop, K.J.M.; Grzybowski, B.A. Self-Division of Macroscopic Droplets: Partitioning of Nanosized Cargo into Nanoscale Micelles. Angew. Chem. 2010, 122, 6908–6911.
18. Derényi, I.; Lagzi, I. Fatty acid droplet self-division driven by a chemical reaction. Phys. Chem. Chem. Phys. 2014, 16, 4639–4641.
19. Droplet self-division. Available online: (accessed on 12 December 2014).
20. Yoshinaga, N.; Ken, H.N.; Yutaka, S.; Hiroyuki, K. Drift instability in the motion of a fluid droplet with a chemically reactive surface driven by Marangoni flow. Phys. Rev. E 2012, 86, 016108.
21. Izri, Z.; van der Linden, M.N.; Olivier, D. Self-propulsion of pure water droplets by spontaneous Marangoni stress driven motion. 2014; arXiv:1406.5950.
22. Sumino, Y.; Yoshikawa, K. Amoeba-like motion of an oil droplet. Eur. Phys. J. Spec. Top. 2014, 223, 1345–1352.
23. Shioi, A.; Takahiko, B.; Youichi, M. Autonomously moving colloidal objects that resemble living matter. Entropy 2010, 12, 2308–2332.
24. Lagzi, I.; Siowling, S.; Paul, J.W.; Kevin, P.B.; Bartosz, A.G. Maze solving by chemotactic droplets. J. Am. Chem. Soc. 2010, 132, 1198–1199.
25. Cejkova, J.; Novák, M.; Štěpánek, F.; Hanczyc, M.M. Dynamics of chemotactic droplets in salt concentration gradients. Langmuir 2014, 30, 11937–11944.
26. Chaudhury, M.K.; George, M.W. How to make water run uphill. Science 1992, 256, 1539–1541.
27. Quéré, D.; Armand, A. Liquid drops: Surfing the hot spot. Nat. Mater. 2006, 5, 429–430.
28. Daniel, S.; Sanjoy, S.; Jill, G.; Manoj, K.C. Ratcheting motion of liquid drops on gradient surfaces. Langmuir 2004, 20, 4085–4092.
29. Bain, C.D.; Graham, D.B.-H.; Richard, R. Montgomerie. Rapid motion of liquid drops. Nature 1994, 372, 414–415.
30. Dos Santos, F.D.; Ondarçuhu, T. Free-running droplets. Phys. Rev. Lett. 1995, 75.
31. Toyota, T.; Naoto, M.; Martin, M.H.; Takashi, I.; Tadashi, S. Self-propelled oil droplets consuming “fuel” surfactant. J. Am. Chem. Soc. 2009, 131, 5012–5013.
32. Suzuki, A.; Maeda, S.; Hara, Y.; Hashimoto, S. Capsule gel robot driven by self-propelled oil droplet. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012.
33. Nakagaki, T.; Hiroyasu, Y.; Ágota, T. Intelligence: Maze-solving by an amoeboid organism. Nature 2000, 407.
34. Seah, T.H.; Guanjia, Z.; Martin, P. Surfactant Capsules Propel Interfacial Oil Droplets: An Environmental Cleanup Strategy. ChemPlusChem 2013, 78, 395–397.
35. Horibe, N.; Martin, M.H.; Takashi, I. Mode switching and collective behavior in chemical oil droplets. Entropy 2011, 13, 709–719.
36. Horibe, N.; Hanczyc, M.M.; Ikegami, T. Shape and motion dynamics in self-moving oil droplets. In Proceedings of the 3rd International Conference on Mobiligence, Awaji, Japan, 19–21 November 2009.
37. Segré, D.; Dafna, B.-E.; Doron, L. Compositional genomes: Prebiotic information transfer in mutually catalytic noncovalent assemblies. Proc. Natl. Acad. Sci. USA 2000, 97, 4112–4117.
38. Hanczyc, M.M.; Takashi, I. Chemical basis for minimal cognition. Artif. Life 2010, 16, 233–243.
39. Bogdan, R.J. Information and semantic cognition: An ontological account. Mind Lang. 1988, 3, 81–122.
40. Hadorn, M.; Eva, B.; Kristian, T.S.; Harold, F.; Peter, E.H.; Martin, M.H. Specific and reversible DNA-directed self-assembly of oil-in-water emulsion droplets. Proc. Natl. Acad. Sci. USA 2012, 109, 20320–20325.
41. Hanczyc, M.M. Metabolism and motility in prebiotic structures. Philos. Trans. R. Soc. B 2011, 366, 2885–2893.
42. Bütschli, O. Untersuchungen über Mikroskopische Schäume und das Protoplasma; Engelmann: Leipzig, Germany, 1892; Volume 1.
43. Armstrong, R.; Martin, H. Bütschli dynamic droplet system. Artif. Life 2013, 19, 331–346.
44. Davy, H. The Bakerian Lecture: On Some New Phenomena of Chemical Changes Produced by Electricity, Particularly the Decomposition of the Fixed Alkalies, and the Exhibition of the New Substances Which Constitute Their Bases; and on the General Nature of Alkaline Bodies. In Philosophical Transactions of the Royal Society of London; The Royal Society: London, UK, 1808; pp. 1–44.
45. Protocell Phase Transition Colony. Available online: (accessed on 12 December 2014).
46. Hanczyc, M.M.; Parrilla, J.M.; Nicholson, A.; Yanev, K.; Stoy, K. Creating and maintaining chemical artificial life by robotic symbiosis. Artif. Life 2014, 21, 1–8.
47. Kohira, M.I.; Hayashima, Y.; Nagayama, M.; Nakata, S. Synchronized self-motion of two camphor boats. Langmuir 2001, 17, 7124–7129.
48. Soh, S.; Kyle, J.M.B.; Bartosz, A.G. Dynamic self-assembly in ensembles of camphor boats. J. Phys. Chem. B 2008, 112, 10848–10853.
49. Tao, Y.-G.; Raymond, K. Design of chemically propelled nanodimer motors. J. Chem. Phys. 2008, 128, 164518.
50. Eelkema, R.; Pollard, M.M.; Vicario, J.; Katsonis, N.; Ramon, B.S.; Bastiaansen, C.W.M.; Broer, D.J.; Feringa, B.L. Molecular machines: Nanomotor rotates microscale objects. Nature 2006, 440.
51. Paxton, W.F.; Kistler, K.C.; Olmeda, C.C.; Sen, A.; St. Angelo, S.K.; Cao, Y.; Mallouk, T.E.; Lammert, P.E.; Crespi, V.H. Catalytic nanomotors: Autonomous movement of striped nanorods. J. Am. Chem. Soc. 2004, 126, 13424–13431.
52. Walther, A.; Axel, H.E.M. Janus particles: Synthesis, self-assembly, physical properties, and applications. Chem. Rev. 2013, 113, 5194–5261.
53. Sanchez, S.; Solovev, A.A.; Harazim, S.M.; Deneke, C.; Mei, Y.F.; Schmidt, O.G. The smallest man-made jet engine. Chem. Rec. 2011, 11, 367–370.
54. Florea, L.; Wagner, K.; Wagner, P.; Wallace, G.G.; Benito-Lopez, F.; Officer, D.L.; Diamond, D. Photo-Chemopropulsion-Light-Stimulated Movement of Microdroplets. Adv. Mater. 2014, 26, 7339–7345.
55. Brochard, F. Motions of droplets on solid surfaces induced by chemical or thermal gradients. Langmuir 1989, 5, 432–438.
56. Kotz, K.T.; Noble, K.A.; Faris, G.W. Optical microfluidics. Appl. Phys. Lett. 2004, 85, 2658–2660.
57. Rybalko, S.; Nobuyuki, M.; Kenichi, Y. Forward and backward laser-guided motion of an oil droplet. Phys. Rev. E 2004, 70, 046301.
58. Miura, S.; Taisuke, B.; Taishi, T.; Toshihisa, O.; Shoji, T.; Taro, T. pH-Induced Motion Control of Self-Propelled Oil Droplets Using a Hydrolyzable Gemini Cationic Surfactant. Langmuir 2014, 30, 7977–7985.
59. Gallardo, B.S.; Vinay, K.G.; Franklin, D.E.; Lana, I.J.; Vincent, S.C.; Rahul, R.S.; Nicholas, L.A. Electrochemical principles for active control of liquids on submillimeter scales. Science 1999, 283, 57–60.
60. Dorvee, J.R.; Austin, M.D.; Sangeeta, N.B.; Michael, J.S. Manipulation of liquid droplets using amphiphilic, magnetic one-dimensional photonic crystal chaperones. Nat. Mater. 2004, 3, 896–899.
61. Hanczyc, M.M. Structure and the synthesis of life. Archit. Des. 2011, 81, 26–33.
62. Committee on the Limits of Organic Life in Planetary Systems; Committee on the Origins and Evolution of Life; National Research Council. The Limits of Organic Life in Planetary Systems; The National Academies Press: Washington, DC, USA, 2007.
Are you for reel? How the Compact Cassette struck a chord for millions
From fuss-free audio tape recording to Walkmans, it's all in your head
The birth of a standard
Although established design principles for magnetic sound recording were observed, there’s quite a bit of give and take in the analogue world. This kept the dream of a handheld audio recorder alive, but Philips still had to make some important decisions regarding frequency response and acceptable levels of distortion. The battleground for these arguments lay in the bias and equalisation configurations.
As established decades earlier, adding an inaudible high-frequency signal to audible sounds during recording dramatically improves the audio quality, thanks to the way magnetic tape heads work. This so-called AC bias dealt with the coercivity issues inherent in magnetic media; rather than go into the physics of it all here, we'll note that you can find plenty of material online explaining hysteresis loops and tape saturation.
Compact Cassette EQ curve: M shows the variation of tape magnetisation across the frequency range
P shows corresponding playback characteristics. Source: Philips Technical Review
That said, choosing the level of bias current determines certain aspects of the frequency response, and Philips had to judge what was appropriate for its intended audience. A high bias current favoured lower frequencies whereas a small bias current suited higher frequencies. It's all to do with where the signal gets recorded in the tape layer: low frequencies are recorded deeper into it, whereas high frequencies sit nearer the surface.
How a tape head works. Source: Georgia State University
Philips worked out a suitable compromise with a low bias current that favoured the high frequencies and utilised equalisation circuitry to boost treble when recording and conversely boost low frequencies on playback. Using EQ in this way was common in magnetic recording, yet the configurations Philips had devised needed to become standardised. The bias and EQ circuitry for every cassette recorder worldwide had to follow the company’s sound reproduction recipe. Apart from the cassette specifications, reference tapes of frequencies at precise levels would be used for calibration.
Micro management
Another factor in all this was the tape head, which is effectively an electromagnet whose poles are positioned just above the moving media. The tiny gap of the head (see the diagram above) determines the highest frequency that can be recorded. A gap of 2µm was chosen to achieve a theoretical maximum frequency of about 12kHz. For Philips, up to 10kHz was good enough, and it worked out all its bias and record-playback EQ settings around that gap dimension. A larger head gap would lower the high-frequency response but boost the recording strength, so the gap was another critical part of the standard to be adhered to.
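As a sanity check of these figures, a common rule of thumb holds that the shortest usable recorded wavelength is roughly twice the gap length, giving f_max ≈ v / (2 × gap). Plugging in the standard Compact Cassette tape speed of 4.76 cm/s (1⅞ ips, a fact not stated in this passage) together with the 2 µm gap quoted above reproduces the ~12 kHz figure:

```python
# Rule-of-thumb head-gap estimate: the shortest usable recorded wavelength
# is assumed to be about twice the gap length, so f_max = v / (2 * gap).
tape_speed = 4.76e-2   # m/s, standard Compact Cassette speed (1 7/8 ips)
head_gap = 2e-6        # m, the 2 µm gap quoted in the text

f_max = tape_speed / (2 * head_gap)
print(f"{f_max / 1000:.1f} kHz")  # 11.9 kHz, close to the ~12 kHz quoted
```

The estimate also shows why gap size is a trade-off: halving the gap would double the theoretical top frequency, at the cost of the weaker flux noted above.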
If everyone started customising the specs, tapes recorded on one machine would potentially sound dreadful when replayed on another. And in some respects, Dolby's noise reduction tech, which would eventually find its way into Compact Cassette recorders and dominated pre-recorded Compact Cassette production, has a lot to answer for.
Dolby B noise reduction circuitry on a Signetics IC from 1973
As the tape needed to be flipped over, details of the recording track area were essential too, and as stereo recording had also been implicit during development, these track widths were crucial. The tape was capable of four tracks – a pair in each direction for two track heads – and these needed to be separated to avoid crosstalk.
In a mono arrangement, each track was 1.5mm per side across the 3.8mm tape width. For stereo, the left and right tracks were only 0.6mm apiece, with 0.3mm separation to avoid crosstalk. Needless to say, the smaller track sizes did impact the output of the left and right channels, but combined, the output was satisfactory, and Philips claimed cassettes reproduced better stereo separation than vinyl records.
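The track dimensions quoted above can be checked against the 3.8mm tape width. A small sketch; note that the 0.8mm guard band between the two directions is inferred from the arithmetic, not stated in the text:

```python
# Track layout check for the 3.8 mm tape width (stereo, both directions).
# Per direction: left 0.6 + separation 0.3 + right 0.6 = 1.5 mm, the same
# width as one mono track; the remainder is the guard band between the
# two playing directions (inferred, not stated in the article).

track = 0.6        # mm, one stereo track
separation = 0.3   # mm, gap between left and right tracks
tape_width = 3.8   # mm

per_direction = 2 * track + separation          # 1.5 mm per direction
centre_guard = tape_width - 2 * per_direction   # ~0.8 mm between directions

print(round(per_direction, 2), round(centre_guard, 2))  # 1.5 0.8
```

This shows why a stereo recording plays back cleanly on a mono machine: the combined stereo pair occupies the same 1.5mm band as a single mono track.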
Yet where a record turntable only had to rotate at a regular speed and the tone arm would do the rest, tape needed to be dragged from one spool onto another and across the erase, record-playback heads along the way and all at a constant speed.
Compact Cassette meets RCA Sound Tape Cartridge: note the exposed tape on the RCA media
Source: Wikimedia, Creative Commons
RCA’s approach included the tiny tension arms used by reel-to-reel systems – you can see the slots for them on the sides of the cartridge. These variable levers were needed to even out any slack in the tape transport and maintain good contact with the tape head array.
Single-spool cartridge formats, such as the Fidelipac, had no tension arms but made firm contact with the head by having pressure pads behind the tape. So when the cartridge was engaged, the tape was sandwiched between the head and a fibre pad.
Playing for reel
Despite being originally intended as a dictation machine, the free licensing of the Compact Cassette standard sparked widespread adoption by electronics manufacturers, particularly in Japan. In a relatively short time, technical advances in the recorder components and magnetic media led to a steady improvement in the performance of the format.
Consequently, the Musicassette - cassette tapes prerecorded with music - increased in popularity as the sound reproduction improved. Admittedly, some companies with interests in other formats held off mass production of Musicassettes of their artists’ catalogues, but they would be won over in the end.
Philips Musicassettes and other tape media from 1965
Source: Philips Company Archives
The actual production of Musicassettes was done on machines running 32 times faster than normal playback. Cassette tape would be reeled over four heads, recording what would become both sides at once at 60 IPS. The master tape that was the source of the original music had been recorded at 7.5 IPS, and this too ran 32 times faster, clocking up a playback speed of 240 IPS for duplication purposes.
A 1,500m reel of cassette tape was used for each run, from which multiple Musicassettes would be made. Tones separating the programme material were used to identify the beginning and end of each completed Musicassette album to aid splicing and packaging.
This super-fast tape transport also required the circuitry to follow suit. So instead of the bias frequency being around 80kHz, it was now 2.4MHz; the amplifiers also needed to work over a frequency range of 200kHz to 500kHz. The head gap was also enlarged to 4µm. This fast tape copying was the only way to knock out cassettes to production deadlines.
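The speed figures in the duplication process above can be verified with a couple of lines of arithmetic (the normal cassette playback speed of 1-7/8 IPS is assumed, as it is not stated explicitly in the text):

```python
# High-speed duplication arithmetic from the article: both the cassette
# slave and the 7.5 IPS master run at 32x their normal speed.

NORMAL_CASSETTE_IPS = 1.875   # 1-7/8 inches per second (assumed standard)
MASTER_IPS = 7.5              # master tape recording speed
SPEEDUP = 32                  # duplication speed multiplier

slave_speed = NORMAL_CASSETTE_IPS * SPEEDUP   # cassette tape over the heads
master_speed = MASTER_IPS * SPEEDUP           # master playback speed

print(slave_speed, master_speed)  # 60.0 240.0
```

Both results match the article’s 60 IPS and 240 IPS figures, confirming the 32x multiplier is applied consistently to slave and master.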
The Palestinian Holocaust
The word holocaust comes from the Greek “holokauston”, which means “sacrifice by fire to gods” or “burnt offering”. But the word has since expanded to the broader sense of genocide against a race or a group of people because of their religion, colour or any other, less significant, reason.
The Jews kept the term “holocaust” to themselves alone to describe the Nazi crimes against them during World War II. Therefore, Nazis’ gas chambers and concentration camps in European countries under their control, especially in Poland, were solid evidence of those crimes which was well traded on by the Jews.
According to the Zionist viewpoint, the holocaust was an exceptional crime committed against exceptional people. Israel and the Zionists characterised the holocaust as unique, as if the Jews were the only group or race that has suffered from this heinous crime.
After World War II Zionism used this crime to blackmail the whole world. Germany in particular was subject to extortion and was forced to pay huge compensation to the descendants of the victims and to Israel. The Jewish state claims it represents all the Jews in the world, so it is entitled to get the compensation as well.
Now the Jews who were once victims have become the worst culprits. They imitated the Nazis’ appalling crimes when they carried out their aggression against Gaza. Their assault on Gaza is still under way, putting to death children, women and old people in cold-blooded massacres that have sparked widespread condemnation by the international community.
The aggression on Gaza is not an exception in Israel’s history. Israel has been responsible for many massacres since its establishment in 1948, and even before that date. Clear examples of the Jewish state’s violence were the massacres committed against Palestinians in Kafr Qasim and Deir Yaseen and others carried out against nearby Arab states.
The question that should be dealt with now is whether the Palestinians have the right to appeal to the international community to apply the concept of the holocaust to them. The Palestinians should stand up for their right to call upon the world to view them as the victims, while dealing with the Jews as the executioners who are cast in the same mould as the Nazis. Palestinians should bear in mind that the EU parliament recognised the holocaust against the Armenians in Turkey in 1915 in response to increasing demands pressed by the Armenians.
Civil society and human rights organisations can pave the way for that particular Palestinian demand through the recording of Israeli war crimes. This can be done in cooperation with the Palestinian civil society organisations and human rights advocates all over the world to support the legal, humanitarian and political struggle of the Palestinian people.
What are the rules of love?
The Rules of Love
• Always tell the truth.
• Love, goodwill, wisdom and understanding are absolutely required.
• A sense of humor is quite necessary.
• Respect each other and each other’s desire for privacy.
• Be tolerant.
• Be patient; it is foolish to fuss over small things.
• Never let the sun set on your anger.
• Avoid self-consciousness and false pride.
What’s the difference between hope and expectations?
So what’s the difference between hope and expect? While hope is a desire for something to happen (you want it to happen), expect refers to something you think will happen, even if you don’t want it to happen.
What is self expectations?
Self-expectations and the expectations placed upon us can be realistic or unrealistic, helpful or hurtful. Our “shoulds” of ourselves reflect expectations that we feel we are not meeting. When we tell ourselves that we “should” be doing something, we are reinforcing the idea that we are not doing it.
How do I make reasonable expectations?
How Do I Set Realistic Expectations?
1. Change Your Mentality. If you are the type of person who does everything all at once in the hope of quickly improving how you feel, you are cheating yourself of the chance to appreciate each step of your progress.
2. Know Your Limits.
3. State Your Truth.
4. Keep Your Objective.
5. Stay Your Course.
6. Do Not Get Distracted.
What are your expectations for me as a teacher?
You might expect that your teacher has mastery over the material; that your teacher is skilled enough to provide different learning experiences for different kinds of learners; that he/she is conscientious enough to prepare lessons that not only engage but also promote learning, mastery and independence; that …
Are expectations good or bad?
The positive effects of expectations
Holding expectations can sometimes lead to self-fulfilling prophecies. If you have a belief about a certain feature, in yourself or others, actions that are consistent with the initial belief can be evoked. This is especially true in social situations.
Why do teachers have expectations?
Students are influenced by the expectations of their teachers because teachers communicate those expectations to their students. Teachers with high expectations provide their students with a variety of learning opportunities, high-quality praise and continuous encouragement.
What is the rule for dating someone younger?
What does love without expectations mean?
Loving without expectations is loving unconditionally; that is without expecting anything back. It is a form of love which is highly underrated and often misunderstood. When we love without expectation, we don’t expect anything back nor ruminate on the fact that our affection is not being reciprocated.
What is expected of a teacher?
Today’s teachers are expected to have advanced knowledge and skills and high academic and ethical standards. Teachers are expected to promote student’s academic progress as well as further students’ social, emotional, and moral development and to safeguard students’ health and well-being.
Post Author: alisa
Forms of Fiction
Types of Novels
The word “novel” derives from the French word “nouvelle”, which means new. The novel is, in a sense, a literary form of fiction, and fiction can be defined as the creative art or work of inventing, in written text, representations of real life or other subjects. Novels can be classified according to genre, plot and theme, as well as format (which may include mass-market print, large print and eBook formats). There are also genres within the novel itself. For instance, historical novels tell a story of past events, while fantasy novels deal with the imaginary.
There are various recognised forms of creative writing, including narration, the short story, the screenplay, the graphic novel and others. In narrative novels, the main characters develop within the plot, while the supporting characters remain on the back burner, their actions serving to support or advance the protagonist’s development. Narrative novels are noted for their depth, since they involve complex social interactions and a realistic setting, making them more appealing to an audience. Short stories, on the other hand, can be just as successful, though they are usually much shorter than a full-length novel. Such works, for the most part, require the writer to narrow their focus to a single character, thereby simplifying the task of building a plotline.
Evidently, in the modern era the term “novel” carries a much wider meaning than it did in years gone by. Modern readers attach varied definitions to the term, depending on the medium used to deliver the work (e.g., newspaper, television, film, book or audiobook). Some use the word possessively; many use it with a more detached attitude. Broadly, then, there are two kinds of novels today: those with a clearly defined beginning, middle and end, and those that lack one.
As technology continues to advance, readers may find that their favourite fiction shifts the meaning of the word “novel”. While fiction may be defined as a story that presents significant events that are lucid, plausible and meaningful in their own right, new types of fiction keep stretching the boundaries of the novel. Indeed, one of the biggest trends in today’s books is the expansion of the term into various forms, including non-traditional genres such as science fiction, fantasy, mystery, horror and comedy.
Works of fiction that tell a story based on a given premise, whether that premise is classical epic or conventional, are regarded as narrative novels. That said, there are also genres of non-narrative fiction, such as the short story, which are known to attract readers who prefer a plot-driven rather than a character-driven story. In addition, many readers of fiction enjoy following characters placed in a variety of settings, including locations that mirror real life (i.e., present-day England, wartime, or a period relevant to the story).
Novels written as a mirror or portrait of real life are classed as literary fiction, since they typically feature invented elements such as character development and carefully constructed settings, with a minimal amount of narration. Most of the best-known works in this vein fall into the category of fiction.
Pacing is another factor that can influence a novel’s success. It refers to the time that elapses between the moment the action begins and the end of the story. One of the most enduring forms of narrative writing is the epic, which dates back to Homer’s Iliad and Odyssey; the word “epic” was in fact lifted from Greek storytelling and applied to this category of fiction. Epic fantasy novels typically deal with gods, mythic creatures, heroes, enemies, warfare and more.
There are two broad ways to classify a novel concept: the traditional and the modern. The traditional novel starts with a plotline and has a beginning, a middle and an end; the modern novel concept, by contrast, need not. If you intend to start a book from a plot line, be aware that it will generally be easier to sell a dramatic, full-length story to a publisher than a short one. Even so, a narrative with little drama and a less obvious arc can still sell if its subject or concept holds the reader’s attention. Whether you take a traditional or a modern approach to categorising your idea, the process of writing a novel will be no harder than writing any other kind of story.
There are many styles of fictional books, and each lends itself to certain kinds of novels. Some of the most popular include fantasy, science fiction, mystery, horror and comedy. To find your own type of novel, consider which genres interest you the most, and start reading different styles of fictional works to see which appeal to you. This will help you write a book that is distinct from every other novel written by people who share your interests.
What can go wrong before lambing/kidding?
The list is longer than we'd prefer, but can be mostly prevented with good nutrition and management. Nutritional requirements increase during late gestation, especially for ewes/does carrying multiple births. There is an increase in the need for both energy (TDN) and calcium (for ewes) in the diet. Protein requirements don't increase substantially until after lambing/kidding (lactation). Good nutrition means meeting but not exceeding the nutritional requirements of the animals. There are consequences to overfeeding.
Besides diet, it's important not to stress ewes/does during late gestation. They should not be handled too close to term, especially if they are not used to being mustered. Ewes should not be sheared too close to lambing. Vaccinations and any deworming should be done about a month before the first offspring are expected. Groups should not be mixed. Exercise should be encouraged but not excessive. Rams/bucks should be housed away from pregnant females. Weather can be a huge stress.
Pregnancy toxemia
One of the most common problems in late gestation is pregnancy toxemia (or ketosis). This disease is also called twin lamb disease because it mostly affects females carrying multiple births. Pregnancy toxemia is caused by insufficient energy in the late gestation diet, resulting in low blood sugar. Besides females carrying multiples, fat and thin ewes/does are the most likely candidates. The first sign of pregnancy toxemia is a ewe/doe that lags behind the flock or is slow to come to the feed trough. She is lethargic. She may grind her teeth. She may have poor muscle control. In latter stages of the disease, the female goes down and is unable to rise. There is sometimes a mucous discharge (or foam) from the nose. Diagnosis can be confirmed with blood or urine tests.
Early stages of pregnancy toxemia can be treated with propylene glycol or other quick sources of energy (e.g., molasses, corn oil). Affected females should be drenched several times a day with the energy solution. Later stages of the disease require glucose via an IV. In some cases, induced lambing/kidding or a caesarian section is necessary. Like most diseases, early recognition and treatment is key. Pregnancy toxemia should be suspected any time a late pregnant ewe/doe appears sick.
Pregnancy toxemia is prevented by providing sufficient energy in the late gestation diet. To meet the increased needs for energy, it is customary to feed some grain. Grain is a more concentrated source of energy than hay or other forages. The amount of grain needed depends on the size (weight) of the female and the number of fetuses she is carrying, as well as the quality of the forage in the diet. Having ample feeder space is also important, as grain is usually limit fed. Older, younger, or more timid females may not get enough to eat unless there is sufficient bunk space.
Milk fever (hypocalcemia)
Milk fever occurs mostly in late gestation and is usually the result of a diet deficient in calcium. Milk fever can also occur after parturition as the result of a diet too rich in calcium. The signs of milk fever are similar to those of pregnancy toxemia. It's not uncommon to treat females simultaneously for both pregnancy toxemia and milk fever. Response to treatment confirms the diagnosis of milk fever. A blood test can confirm low calcium.
Females are usually given calcium borogluconate via an IV. The calcium should be injected very slowly. The response is usually dramatic. More marginal cases of milk fever can be treated with calcium via sub-cutaneous injections. There are also oral drenches and gels that can be given to provide additional calcium.
Milk fever is prevented by providing the proper amount of calcium in the late gestation diet. Legumes are a good source of calcium. In some cases, they contain too much calcium and should be saved for the lactation diet. Poor quality grass hays may be deficient in calcium. Grains and oilseeds are poor sources of calcium. Limestone is the best source of calcium. Kelp is a good source, too. Free choice minerals do not ensure adequate calcium intake. It's better if the calcium is incorporated into the ration.
Vaginal prolapse
Some females (sheep more than goats) prolapse their vaginas in late pregnancy. There are many contributing factors. Ewes that prolapse their vaginas should probably not be retained, as there is a strong likelihood they will do it again. Because it is a heritable condition, their offspring should probably not be kept (or sold) for breeding.
Prolapses need to be replaced as soon as possible to prevent more serious problems. The exposed part needs to be cleaned with warm, soapy water and pushed back into the female. The prolapse can be kept in place by using a bearing retainer (or "spoon") and/or prolapse harness. Homemade harnesses can be made from baling twine and tied to the sheep's wool. Sometimes, the prolapse is kept in with a suture. Ewes/does can push their lambs/kids through the retainers and harnesses, but the suture needs to be removed before lambing/kidding. Once the ewe/doe has birthed her offspring, the prolapse usually stays in. Problem solved.
Abortion
Some ewes/does abort their offspring before term. There can be many causes, both infectious and non-infectious. Early term abortions are usually not noticed, and the female usually breeds back. This is how we get some of our late lambs/kids. Late term abortions may result in the birth of premature or full term fetuses (sometimes deformed) or weak lambs/kids that die shortly after birth. Abortion due to infectious causes is a big deal. Veterinary advice should be sought. Treatment and prevention will depend on the cause.
Additional reading
Pregnancy toxemia (ketosis) in ewes and does - Colorado State University
Pregnancy toxemia in ewes and does - UC Davis
Risk factors of vaginal prolapse (infographic)
Abortion in sheep - Merck Veterinary Manual | In goats
The Physics of Sound: How and Why Sound and Frequency Heals
Jonathan and Andi Goldman, Guests
Waking Times
An examination of the majority of spiritual paths and religions on this planet reveals an overriding belief that sound was the primary force of creation. Examples of this come from the Old Testament (“And God said, ‘Let there be Light’”) and the New Testament (“In the beginning was the Word”). It comes from many other traditions—Egyptian, Hopi, Mayan, Polynesian, and more—which all have creation myths that invoke the power of sound. It is said in the Hindu spiritual path that “Nada Brahman”—everything is sound. Even from a Western scientific perspective, we talk about the “Big Bang,” signifying that the creation of the universe was somehow sonic in origin.
Frequency
Sound is energy that travels as a wave. The wave enters our ears and travels through our auditory pathways into our brain, ultimately affecting our breathing, heart rate, and nervous system. We experience this wavelike energy primarily as a phenomenon that we hear. However, these waves also pass into our body, affecting us on a cellular level.
One of the most extraordinary demonstrations of the effects of sound and resonance was conducted by a visionary Swiss doctor named Hans Jenny. Dr. Jenny’s seminal work titled Cymatics (a Greek word that means “wave form”), whose first volume was published in 1967, showed the effects that sound waves have upon different types of material, including water, pastes, and other liquids. Dr. Jenny placed these substances on a steel plate and vibrated the plate with a crystal oscillator, which produced an exact frequency, and then he photographed the effects. He photographed liquid plastic (a material similar to Silly Putty) that formed into an object resembling a sea anemone, and lycopodium dust (a material similar to talcum powder) that took on shapes resembling the cells of the body. Some of his most amazing photos were of water, which took on astonishing geometric forms, depending upon the vibrational frequency that was used.
Dr. Jenny’s work demonstrates the extraordinary power of vibration—that is, sound—to create form. While the structures and forms he created with sound were not living creatures, many of them certainly look as though they were. You can almost imagine that, with a “divine” sound coming from a sacred source, in the beginning the Word could indeed create life.
Several people have carried on Dr. Jenny’s work in the twenty-first century, including Alexander Lauterwasser of Germany and John Stuart Reid of England. The following are two photos taken by Reid showing the beautiful geometric forms that water took on when vibrated by two diverse frequencies on a CymaScope, a device similar to Dr. Jenny’s. When you think about the fact that the human body is mostly made of water, it’s easy to realize how powerful the effects of sound can be upon us.
Fig. 2.1. The voice of author Jonathan Goldman rendered as a visual pattern by a CymaScope.
Fig. 2.2. A female voice mandala made visible by a CymaScope
How Sound Can Heal: Method One
The Humming Effect. Available at Amazon.
Although we have been introduced to many different types of instruments designed for sound healing, our preferred instrument is one that doesn’t require electricity or batteries, has an owner’s manual that’s really simple to use, and is free. This instrument is, of course, our own voice. And it’s what we’ve been teaching for dozens of years.
Please note: we are not talking about using the voice in a musical fashion such as singing—that’s entertainment. We’re not talking about getting up in front of an audience and singing “Strangers in the Night” or whatever song turns you on. Here, in relation to sound specifically utilized for healing, we’re talking about the concept of entrainment.
There are many ways to use entrainment for sound healing. At a basic level, you can use vibration—humming—to restore the natural resonance of an organ or system. When an organ begins to vibrate out of its natural frequency, its energy becomes blocked. The organ then becomes vulnerable to potential imbalances—deterioration, disease, viruses, bacteria, and so on. When we reinforce its natural frequency, the organ’s resonance is restored, it regains its energy, the intruder energy that was causing damage ceases to exist, and the organ is restored to health.
You might also use different rhythms to influence bodily pulses, such as our heartbeat, respiration, and brainwaves. Or you can use sounds that are slightly out of tune, called “beat frequencies,” which can be applied in a specific manner to influence brain waves. We will discuss some of these uses of entrainment later in this book.
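The “beat frequencies” mentioned above have a simple physical definition: two tones slightly out of tune produce an audible beat at the difference of their frequencies. A minimal sketch; the example tone values are illustrative, not taken from the text:

```python
# Beat frequency of two slightly detuned tones: f_beat = |f1 - f2|.
# Example values are illustrative only.

def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    """Difference frequency perceived when two close tones sound together."""
    return abs(f1_hz - f2_hz)

# Two tones 6 Hz apart produce a 6 Hz beat -- the low-frequency range
# that entrainment practitioners associate with slower brainwave bands.
print(beat_frequency(200.0, 206.0))  # 6.0
```

The beat itself is far below the audible range, which is why it is perceived as a pulsing of loudness rather than as a separate tone.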
How Sound Can Heal: Method Two
You can find videos of people shattering wine glasses with their voices on YouTube. Even such reputable shows as the Discovery Channel’s MythBusters have recorded the phenomenon. When a singer is able to match the resonance of the wine glass with his or her voice with great amplitude (volume), the glass begins to vibrate and ultimately shatters. This ability of sound to disintegrate matter has been known for a long time. Remember the story of Joshua and the walls of Jericho in the Old Testament? Joshua and his men blew horns and gave a great shout and the walls crumbled.
With sufficient amplitude, sound is powerful enough to cause any object to shatter when its resonant frequency matches that of the object. This second approach to sound healing focuses on using sound to disintegrate whatever pathogen, malignancy, or energy is causing harm to the body.
Recently, we were sent a link to a video for a TEDx Talk by Professor Anthony Holland, a musician-scientist at Skidmore College, discussing his work shattering cancer cells and bacteria with high-frequency vibrations. Among other things, the video has some astonishing visuals of this phenomenon occurring. It truly demonstrates the power of sound to heal.
In another case, covered in a news report, Dr. Elias was treating an elderly woman with such bad shaking from tremors that she could not write her name with any sort of legibility. Using an fMRI to view her brain, Dr. Elias trained a beam of highly focused sound energy on the area of the brain where damage had occurred. The procedure lasted less than a minute. About ten minutes later, the woman was asked to sign her name, and her script was lovely and legible.
Any surgery to address the woman’s tremors would have been extensive, and recovery would have taken close to six months. In contrast, the ultrasound treatment was brief and noninvasive, and the woman’s recovery time was almost instantaneous. It was astonishing. The reporter covering the story asked Dr. Elias what other conditions might be treated with sound. He suggested that there were many different possibilities. What is perhaps most significant about the work of Professor Holland and Dr. Elias is that it has been filmed, which vividly validates the power of sound to heal and transform.
About the Authors
Jonathan Goldman, M.A., is an award-winning musician, composer, writer, teacher, and chant master. An authority on sound healing and a pioneer in the field of harmonics, he is the author of several books, including Healing Sounds, and the founder and director of the Sound Healers Association. Andi Goldman, M.A., L.P.C., is a licensed psychotherapist specializing in holistic counseling and sound therapy, the director of the Healing Sounds Seminars, co-director of the Sound Healers Association, and coauthor, with Jonathan Goldman, of Chakra Frequencies. The authors live in Boulder, Colorado.
This article is excerpted from The Humming Effect: Sound Healing for Health and Happiness by Jonathan Goldman and Andi Goldman, published by Inner Traditions.
Alcohol and the liver
The liver is a robust organ that can usually process small amounts of alcohol, but heavy drinking can cause damage.
The damage caused by heavy drinking can range from fatty liver to cirrhosis. Sticking to the recommended weekly drinking guidelines can lower the risk of developing alcohol-related liver problems.
The liver is an organ with over 500 different functions, including processing digested food, combating infections and dealing with toxins like alcohol.
In the UK around 7,700 people die each year from alcohol-related liver disease, most of them between 40 and 65 years old [1]. It is one of the few preventable conditions that is continuing to increase in the UK while it falls across most of Europe. Although alcoholic liver disease can be treated if caught early, symptoms often do not appear for a long time.
Liver disease causes approximately 2% of all deaths in the UK.
How alcohol affects the liver
When we drink alcohol, it is absorbed into the bloodstream from the stomach and intestines. This blood first passes through the liver before circulating to the rest of the body. Liver cells process the alcohol at a rate of about one unit per hour, breaking it down into other chemicals which are then in turn broken down into water and carbon dioxide, before being passed out in the urine and from the lungs.
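The stated processing rate of about one unit per hour allows a rough clearance estimate. This is a sketch only, assuming a constant rate; real metabolism varies with body size, sex and liver health:

```python
# Rough clearance estimate from the stated rate of about one unit per
# hour. A sketch only -- actual metabolism varies between individuals
# and is not strictly linear.

def hours_to_clear(units: float, rate_units_per_hour: float = 1.0) -> float:
    """Approximate hours for the liver to process a given number of units."""
    return units / rate_units_per_hour

# e.g. a drink containing 3 units takes roughly 3 hours to process
print(hours_to_clear(3))  # 3.0
```

The linearity is the key point: unlike many toxins, alcohol is cleared at a roughly fixed rate, so drinking faster than about one unit per hour causes blood alcohol to accumulate.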
However, if the liver has to break down too much alcohol, its other functions are negatively affected. Drinking too much alcohol can lead to three stages of damage: fatty liver, alcoholic hepatitis and alcoholic cirrhosis, which can lead to liver failure and increases the risk of liver cancer.
Fatty liver
A build-up of fat occurs within the liver cells of most heavy drinkers, but it may also be found in those drinking just above the weekly low-risk drinking guidelines. Fatty liver may not progress to more severe damage and can be reversed by stopping or reducing drinking. However, it is an indicator that more permanent damage may occur in the future.
If the liver has to break down too much alcohol, its other functions are negatively affected, and it can be damaged.
Alcoholic hepatitis (inflammation)
About a third of people with fatty liver will develop alcoholic hepatitis. Mild hepatitis may not cause any symptoms; more severe cases tend to cause symptoms such as loss of appetite, vomiting, abdominal pain and jaundice (yellowing of the skin). At its severest, alcoholic hepatitis can quickly lead to liver failure and death.
Alcoholic cirrhosis
Cirrhosis is the result of continuous liver damage. Normally when the liver is damaged it can regenerate itself. In cirrhosis, the process of healing fails and scar tissue develops, preventing the liver from being able to carry out its normal functions.
Cirrhosis is found in about 20% of heavy drinkers. In some instances, cirrhosis has no obvious symptoms, but where symptoms are visible, they usually include general ill health, flatulence, lack of appetite, sallow skin, jaundice, itching, anaemia, vomiting of blood, lower back pain and abdominal swelling.
There is no cure for cirrhosis, but sufferers who stop drinking completely have a much stronger chance of survival. Those who continue to drink will go on to develop complete liver failure, and a further 10% of sufferers develop liver cancer, which is usually fatal within about six months.
Liver cancer
People suffering with cirrhosis of the liver are at a higher risk of developing liver cancer (hepatocellular carcinoma). It is not well understood how cirrhosis increases cancer risk. It could be that as the scar tissue grows, the liver attempts to heal itself by creating new cells. However, the more cells the liver creates, the higher the chances that a change, or mutation, will take place – resulting in cancerous tumours. It could also be related to the extent of the damage the liver has already endured in reaching alcoholic cirrhosis. Having cirrhosis does not mean you will definitely get liver cancer but certain causes of cirrhosis do have a particularly strong link. This includes excessive alcohol consumption.
Managing the risk
Liver disease causes approximately 2% of all deaths in the UK, and over a third of these are thought to be related to alcohol. Whilst the liver is a very resilient organ that can regenerate itself, prolonged heavy drinking over many years reduces this ability to regenerate, which can result in long-term damage and, in the severest cases, premature death.
Anyone drinking heavily should seek advice from their GP. It is possible that you may be damaging your liver without being aware of it. Your GP should be able to refer you for tests to check, and to support you in cutting down your intake. You can also get advice and support from the British Liver Trust.
The most effective way to reverse or prevent alcohol-related liver problems is to cut back our consumption, or stop drinking altogether. The UK’s Chief Medical Officers recommend limiting your weekly alcohol intake to 14 units, which means about six pints of lager or one and a half bottles of wine, and spreading your intake over three days or more. Try to have a few alcohol-free days too.
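The 14-unit guideline can be sanity-checked: in the UK, one unit is defined as 10 ml of pure alcohol, so units = strength (ABV %) x volume (ml) / 1,000. A short sketch follows; the 4% lager and 13% wine strengths are illustrative assumptions, since actual strengths vary by drink.

```python
# UK alcohol units: one unit = 10 ml (about 8 g) of pure alcohol.
# units = strength (ABV %) x volume (ml) / 1000

def units(abv_percent, volume_ml):
    """Units of alcohol in a single drink."""
    return abv_percent * volume_ml / 1000

pint_of_lager = units(4.0, 568)       # a 568 ml pint of 4% lager
bottle_of_wine = units(13.0, 750)     # a 750 ml bottle of 13% wine

print(round(pint_of_lager, 1))        # 2.3 units per pint
print(round(6 * pint_of_lager, 1))    # 13.6 -- six pints is about 14 units
print(round(1.5 * bottle_of_wine, 1)) # 14.6 -- 1.5 bottles is about 14 units
```

This is why the guidance equates 14 units with roughly six pints of lager or one and a half bottles of wine.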
Stopping drinking suddenly after a long period of heavy use can also trigger withdrawal symptoms, including:
• seizures (fits)
• hand tremors (‘the shakes’)
• sweating
• seeing things that are not actually real (visual hallucinations)
• depression
• anxiety
• difficulty sleeping (insomnia)
[1] British Liver Trust (2019). The alarming impact of liver disease in the UK.
|
Discover the World’s Best Science and Technology Academies: Where to Find the Best Academies for Students
By Jennifer Beaudoin and David Levene, Publishers Weekly/The Wall Street Journal. A year after the US Supreme Court ruled that science teachers should be allowed to use public education dollars for classroom instruction, an industry group says it’s finally looking for a new way to reach students.
Science teachers have long complained that public schools have failed to adequately prepare them for their work, and that schools and colleges have not been responsive to their needs.
The Association of Science Education Professionals (ASEP), an industry trade group, has called on education leaders to focus on improving science instruction, including creating more hands-on learning opportunities and improving teachers’ abilities to teach science.
In a report released Wednesday, ASEP called for a national science curriculum that “provides students with an immersive and meaningful science education experience that enables them to understand the universe and explore its diverse nature, as well as develop critical thinking skills that can help them learn from their own mistakes, and help them make decisions based on scientific evidence”.
As the US science curriculum has expanded to include more than 100 disciplines, the association estimates that about 10% of American students who took part in science-based learning in 2016 would now be in STEM (science, technology, engineering, and mathematics) careers.
While there are about 30,000 teaching and technical programs in the US, only about 5,000 are certified by the National Science Teachers Association, which is part of the Association of Professional Teachers of Science and Engineering.
The National Science Board, which advises the president on science and technology education, does not require schools to have a science curriculum.
But ASEP has said it wants more schools to offer such a curriculum, and the group has launched an online survey of teachers in the field.
The survey found that about three-quarters of teachers who responded said that their science curricula were not ready for primary-school students, and nearly half said they had seen students take the first few months of a science-related course without understanding the material.
“We need to start thinking of how we can help teachers prepare students for the future and develop their skills, rather than relying on teachers to do it,” ASEP president Susanne Kallmann said in a statement.
Some states have been pushing to change the way science instruction is delivered in schools, and some states are now looking to charter schools to provide science instruction.
The American Association of State Colleges and Universities (AASCU) last year proposed to set up a new science education professional association, the American Society for Applied Physics, to develop “a broad set of best practices and guidelines for education of physics, mathematics, and related disciplines” and “provide the foundation for a new generation of science education professionals”.
In the same year, the National Association of Elementary School Principals (NASEP) also called for new standards to be established for teaching science in schools. NASEP’s board has been working to create new standards and guidance for science instruction for years, but has not yet released recommendations.
But the association is calling for the government to set national standards to guide teacher preparation for science education, as is the case in other countries, such as Japan and Australia.
“Our job is to help schools make sure that students can understand and apply the science to solve problems, but they need to be prepared to be able to communicate it in a meaningful way,” said David Leighton, the group’s president.
“It’s not enough to just teach about how the world works, you have to be capable of explaining how science works in terms of how it relates to other sciences.”
Teachers have long warned that science education is not up to par with the needs of students, even though they now study more than 90% of the world’s published scientific research.
The US has a reputation for teaching a weak science curriculum, in part because the US government and private school system have been reluctant to fund more high-quality science instruction at public schools, said Julie Pimentel, an associate professor of science at the University of Massachusetts, Boston.
Many American children learn most of their science on iPads, and many US states now require that students take standardized tests, Pimentel said.
While the US has some of the lowest rates of college enrollment in the developed world, its public schools still offer an impressive amount of science instruction and, for students who choose to study in those schools, science is seen as an integral part of life.
“Students don’t have a sense of how to prepare for the sciences,” said Leighton.
“They learn about physics, they learn about biology, they go to chemistry, they do physics in their high school, but their understanding of biology is very poor.” |
Unity has a built-in physics engine which allows you to simulate physics for gravity, collisions, and other needs.
To get physics working you need colliders on your objects and at least one needs a rigidbody component. It is the main component that enables physical behaviors of your game objects.
When a rigidbody is attached to your game object, it will use gravity unless the component’s Use Gravity option is turned off.
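Unity performs this integration internally once a Rigidbody is attached, but the effect of the Use Gravity flag on each fixed physics step can be sketched conceptually. This is illustrative Python, not Unity’s actual C# API; the gravity value and the 0.02 s step mirror Unity’s defaults.

```python
# Conceptual sketch of what a physics engine does for gravity each
# fixed timestep (semi-implicit Euler, the scheme many engines use).
# Illustrative only -- in Unity you get this via the Rigidbody component.

GRAVITY = -9.81    # m/s^2 along the y axis (Unity's default Physics.gravity)
FIXED_DT = 0.02    # Unity's default fixed timestep, in seconds

def step(position_y, velocity_y, use_gravity=True, dt=FIXED_DT):
    """Advance one body by one fixed physics step."""
    if use_gravity:
        velocity_y += GRAVITY * dt   # integrate acceleration first...
    position_y += velocity_y * dt    # ...then position (semi-implicit Euler)
    return position_y, velocity_y

# A body dropped from 10 m with gravity on falls as expected.
y, v = 10.0, 0.0
for _ in range(50):                  # 50 steps = 1 simulated second
    y, v = step(y, v)
print(round(y, 2))                   # 5.0 -- close to 10 - (1/2) g t^2

# The same body with gravity turned off (useGravity = false) stays put.
y2, v2 = 10.0, 0.0
for _ in range(50):
    y2, v2 = step(y2, v2, use_gravity=False)
print(y2)                            # 10.0
```

Turning Use Gravity off in the Inspector corresponds to the `use_gravity=False` path: the body keeps whatever velocity it has, but gravity no longer accelerates it.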
The collider is the component that allows a collision to be noticed by the physics system.
Many of the Unity primitives come equipped with an appropriate collider when we create them.
It is important to note that for a collision to occur, both objects involved in the collision must have a collider attached, while only one of the objects involved requires a rigidbody. (This changes when Is Trigger is checked; I will explain that in a different article.) |
Flooded street in Beledweyne in Somalia.
The severe floods in Beledweyne, Somalia, have affected up to 500,000 people. Photo: Tobin Jones / AMISOM
The CNN effect leaves Beledweyne to its fate
There are numerous civil wars, revolutions and natural disasters around the world. Many of these events have received media attention. What happens to the disasters that are not noticed? Are those people left to their own fate? It is often said that media reporting is affected by the CNN effect.
Nicklas Håkansson, who researches political communication and journalism at the University of Gothenburg, explains to SVT that the CNN effect forces politicians to react to media reporting. The CNN effect is a research theory based on the American news channel's influence over political decisions.
For the second year in a row, the city of Beledweyne in Somalia has been hit by severe flooding. It has affected up to 500,000 people, many of them children. Schools have been forced to close, roads have been destroyed, and access to clean water and food has declined. The UN reports that more and more people will suffer from malnutrition and disease if no humanitarian efforts are carried out.
The media debate and political agenda revolve primarily around the entry of Turkish troops into northern Syria and the Iraqi civil war. As a result, other disasters do not receive as much media attention. Nicklas Håkansson describes how the CNN effect was important during the Kuwait War and how the media, by showing emotional images, could influence political decisions. We also see this in today's wars and disasters. When many news articles about a certain disaster are published, politicians are forced to have an opinion about the event. Many times it can lead to a political change.
The CNN effect can have positive outcomes, but if the research theory is correct, the lack of news reporting can also lead to fewer humanitarian efforts in certain disasters. The flood in Beledweyne, which has been overlooked by the major media channels, is an example of such a natural disaster.
All wars, natural disasters and revolutions deserve to be in the spotlight, but some disasters will always be seen more than others. It is therefore important that we, the civilian population, share posts and disseminate information about disasters on social media so that they too can be heard in the public debate. After all, social media is a powerful tool that can contribute to change. This was evident last summer during the protests in Sudan, when millions of people on Instagram changed their profile picture to a blue dot to show solidarity with the protesters.
Is there something in the article that is not true? Contact us at opinion@fuf.se
|
University of Wisconsin–Madison
Testing humans for MAP
Human Para’s inaugural MAP testing study is nearing completion. A total of 201 participants donated a blood sample at locations in Orlando, Philadelphia and New York City between May and September 2018. The final samples were drawn on September 10, 2018.
Since this was a blinded study comprised of both IBD patients and control subjects, each sample was assigned a number and sent to 6 participating researchers. The laboratories of Dr. Saleh Naser, Dr. Tim Bull and Dr. Irene Grant received buffy coats (the white blood cell layer) which were extracted at the Temple University laboratory. They tested all positive culture samples for two PCR markers unique to MAP: IS900 and f57. More detail about the testing methodologies can be found at the Human Para Foundation website.
Comment: Zoonotic pathogens typically have their “preferred” normal host. When they infect a different, abnormal host, they often change their behavior. When MAP finds itself inside a host that is not the normal one, i.e. not a ruminant animal, it may behave in unpredictable ways. Some experts suggest that it stops making its typical thick waxy cell wall, rendering it difficult to stain using normal acid-fast stains for visualizing mycobacteria, and that it no longer looks like a rod-shaped bacterial cell. This may explain why pathologists cannot see typical acid-fast stained (red) rod-shaped bacteria as shown below in a tissue section from a cow.
Diagnostic tests for MAP in humans may therefore necessarily differ in design from the tests used in animals. Hopefully, among the 6 different testing methods being evaluated, at least one emerges that can effectively identify MAP infections in humans. An accurate diagnostic test for MAP in humans, combined with the effective anti-MAP therapy being pioneered by RedHill Biopharma, could revolutionize diagnosis and treatment of Crohn’s disease and other diseases of humans linked to MAP. |
Water-based filters are one of the newest additions to the consumer water filter market, and they’re getting more popular with the public.
They come in several flavors, but the basic idea is the same: you add a water filter, and the water is filtered to remove any harmful bacteria and harmful substances.
But when it comes to making these filters, you have to get the water filter right.
And, if you’re going to buy a water purification device, you want to make sure it’s water-soluble.
That’s because you want it to be able to filter out bacteria and solids.
Here’s how to do that.
First, you need to know how much water you’re using.
For the purposes of this article, we’ll use a 5 gallon tank, so that’s the amount of water we’ll be using to make the filter.
To make sure you’re getting the correct amount of filter water, you’ll need to check your water bill, and it’s important to note that the water that you use for water purifications comes from a municipal water supply, and most water systems don’t have a way to make that water clean.
If you’re concerned about your water, go to your water provider to see what the proper water intake is.
The water in a municipal supply is usually filtered through a device called a water separator, which has a metal ball with a cap on it.
This separator is attached to a filter that is installed in the water line, and that filter can filter out all the contaminants.
The idea behind this is to remove the harmful chemicals that can build up in the environment, like the bacteria that can cause skin irritation.
To find out if you have a municipal source of water, head to your local water provider.
If your water source doesn’t have one, ask your provider to contact a municipal filtration company, which will be able to tell you how much you should use and what to add to your tap.
To do this, head online to the city’s website or call your local utility, and you’ll be directed to a page with a QR code, which can help you download the software.
Follow the instructions on that page to scan the QR code and install the water purifier you just purchased.
You’ll need an Internet connection, so head to a computer or tablet, and type in the code you downloaded earlier.
It should look something like this: 1G-PX-4-0-1-0.zip 1G.PX.4.0.1.jpg Once you’ve downloaded the software, open it up on your computer and follow the instructions to get your filter ready to go.
Make sure that the filter isn’t in the filter bag, because the bag can block the water from flowing through the filter, causing it to not work properly.
Now, start filling up the water with filtered water.
Put a couple tablespoons of water in the top of the water separators bag, and then add a teaspoon of water to the bottom of the bag, about half the volume of the separators.
You should end up with about a half cup of water.
When the water in your water separator is nearly full, you can fill the bag to about one third full, but be careful not to let the water get so high that you spill your filter water.
Once the water has reached a quarter-cup level, close the bag and put it in the freezer.
Next, start adding water to your filter bag.
Add about one cup of filtered water to each half-cup size of bag, filling the bag up to about three-quarters full.
To get your water purifying device to work properly, you will need to put the water into a container that has been completely frozen, and open the lid to let out all of the cold air.
You may have to add a couple inches of ice to help hold the water to a certain temperature.
After about two to three hours, the water should have frozen solid, and your water filter should be ready to use.
If not, you may have more than enough water to go through the water filters, and this will result in your filter freezing completely.
After the water freezes solid, it will need about an hour to get to a proper temperature.
Once it does, you should open the filter and check it to make certain that the bag has fully filled.
This can take up to 24 hours.
Once you get the filter to this temperature, you’re ready to start purifying the water.
Before you start purification, you might want to take a look at the instructions for the water you will be purifying.
If the instructions tell you to use an ice-filled container, you shouldn’t, because ice will freeze and break the water’s plastic shell, which is the part of the filter that can seal the water when it’s cold.
You also shouldn’t use a water treatment system that’s designed to be used in the winter, because
|
Strategy of the attack
Portrait of Wolfram von Richthofen in military garb sitting in a chair
Wolfram von Richthofen, the architect of the bombing of Gernika
Gernika was the first town to be bombarded:
1. As a large-scale military experiment through the massive use of bombs (more than 41 tons of explosives).
2. Using a plan of attack that would later be employed elsewhere in Europe during World War II (such as in Warsaw and also by the Allies in Dresden), a combination of carpet bombing, “Koppelwurf” or corral bombing and shuttle bombing.
This bombardment was a perfect testing ground for the newly created Luftwaffe, the German air force.
Several authors still maintain that the target of the bombing was the destruction of the small Errenteria Bridge, aiming to prevent the withdrawal of the Basque troops toward Bilbao. However, the disproportionate force of the attack, and the activity of the fighters machine-gunning civilians for three and a half hours, indicate that it was a terror bombing operation with the intention of completely destroying Gernika. The destruction of the “holy city of the Basques” would demoralize the troops, discourage the civilian population, and precipitate the surrender of the Basque government, prompting the subsequent fall of Bilbao and its heavy industry.
The bombing of Gernika took place on Monday, April 26, 1937, market day. Even though the population was aware of a certain danger of attack, no one expected that Gernika would become the target of an attack on such a massive scale as the one that took place that day. Therefore, the market was held as usual and, although it is difficult to calculate, the number of people in Gernika at that time was between 10,000 and 12,000, mostly civilians.
At 4:20 p.m., the bells of the church of Andra Mari alerted the population of the arrival of enemy planes. Most of these aircraft belonged to the German Luftwaffe (renamed “Condor Legion” in 1937), and to a lesser extent to the Italian “Aviazione Legionaria” (Legionary Air Force).
The airfields that served as bases for the bombing were those of Gasteiz for the fighter planes, and those of Burgos and Soria for the bombers.
After the warning, the citizens of Gernika ran to protect themselves in the different shelters that had been built, where they stayed for almost four hours until the bombardment ended. It was an incessant attack with hardly any intervals between the different waves, and was planned using the following tactics:
1. First, a single Heinkel He 51 flew over Gernika from the east. The planes did not come unseen from the north (from the sea); this was premeditated. That first and sole Heinkel He 51 flew for about fifteen minutes east of Gernika and triggered the alarm system: the flags from Kosnoaga alerted the two watchmen at Andra Mari, who then rang the bells. Consequently, people ran to the shelters. This is precisely what Richthofen wanted: the victims did not know that the shelters were going to become death traps. As part of this first wave of the attack, minutes after the sole Heinkel He 51 dropped its bombs in the city center, three bomber planes bombed Gernika’s water deposit to ensure that after the bombing there was no water left for the firefighters.
2. One of the consequences of this first wave of attacks was that people thought the bombing was over. The emergency services began to act: firefighters, nurses, doctors, and other first aid personnel went to the city center to help the first victims. The rebel air command knew that it would take about 30 to 45 minutes for this workforce to reach the city center and start assisting the victims. They therefore waited until the second wave of bombers to attack the city center, surprising medics, firefighters, nurses, and other assistants out in the open (this tactic was later employed in other bombings, such as that of Rotterdam).
For forty minutes, between the first and second wave, the fighters and the ground attack planes flew in a circle, preventing anyone from escaping from the urban nucleus, in order to keep everyone within the “circle of fire” of downtown Gernika.
3. Forty minutes later, 21 heavy Junkers Ju 52 bombers bombed Gernika flying from the north and, thus, did so without being detected by the watchmen until it was too late.
The first bombs dropped were breaker bombs, weighing between 50 and 250 kilograms, to destroy buildings. The bombs broke the roofs and detonated two seconds after falling to the ground, which, in the case of the 250 kg bombs, caused the complete collapse of the buildings. Thus, they exposed the entire wooden structure of the houses.
Next, the bombers dropped the one-kilogram incendiary bombs. These bombs contained an alloy of magnesium, aluminum, and zinc that, when in contact with other metals, reacted and caused an uncontrollable fire and temperatures of more than 1,500 degrees Celsius. Consequently, a huge fire broke out in Gernika, which could be seen from villages many kilometers away.
Shelters became a deadly trap. After receiving two direct hits with 250-kilogram bombs, the shelter of Andra Mari street collapsed, immediately killing most of the 450 to 500 people who were there and burying the rest alive, who later died asphyxiated or burned. Numerous witnesses have made reference to the screams that were heard for hours, as the fire advanced in the direction of the shelter. Only four survivors have been registered, all of them placed near the entrances of the shelter.
4. Finally, the survivors who were trying to escape from the urban center were once again machine-gunned by fighters and ground attack planes for nearly two more hours (100 minutes). These planes flew in “chains” of three planes diving to less than 50 meters off the ground. Each plane was equipped with two machine guns capable of firing 20 bullets per second. They acted in the surroundings of Gernika, flying in circles to keep the population within the perimeter of fire. The city center was made up of very narrow streets and houses linked together, which facilitated the spread of fire.
If the town had been a military objective, and the civilians had been troops defending a fortress or a military stronghold, the machine-gunning would have kept them neutralized within the village's “circle of fire,” which would have allowed the infantry to advance without resistance and quickly take the ruins of the town by storm.
But it was a war experiment and, as Richthofen recorded in his diary, it was a great "technical success," since Gernika was completely destroyed and the population was kept immobile, within the perimeter of the town, during the three long hours that the attack lasted.
Following the logic of the terror bombing, the three regimes of the coup coalition asked within 24 hours of the bombing for the surrender of the Basque government and the Basque troops. However, the Basques did not surrender. |
Hopkinton, Rhode Island: A Delightful Place to Visit
Let's Have A Look At Chaco Canyon Park In NM, USA
Let's visit Chaco Canyon in New Mexico from Hopkinton, Rhode Island. Judging from the use of similar rooms by present-day Puebloan peoples, these rooms were probably common areas for rites and gatherings, with a fireplace in the middle and room access provided by a ladder extending through a smoke hole in the ceiling. Large kivas, or "great kivas," could accommodate hundreds of people and stood alone when not integrated into a large housing complex, frequently serving as a central place for surrounding villages made of (relatively) small buildings. To sustain large multi-story buildings that held rooms with floor areas and ceiling heights far greater than those of pre-existing houses, Chacoans erected gigantic walls employing a variant of the "core-and-veneer" method. An inner core of sandstone laid in mud mortar formed the base to which thinner facing stones were joined to produce a veneer. These walls were approximately one meter thick at the base, tapering as they ascended to conserve weight, an indication that the builders planned the upper stories during the original construction. While these mosaic-style veneers remain evident today, adding to these structures' remarkable beauty, Chacoans applied plaster to many interior and exterior walls after construction was complete to protect the mud mortar from water damage. Starting with the construction of Chetro Ketl in Chaco Canyon, projects of this magnitude required huge quantities of three vital materials: sandstone, water, and lumber. Using stone tools, Chacoans quarried, then shaped and faced, sandstone from the canyon walls, choosing hard, dark-colored tabular stone at the top of the cliffs during initial construction and, as styles changed during later construction, moving to softer, larger tan-colored stone lower down the cliffs.
Water, essential for making the mud mortar and plaster when combined with sand, silt, and clay, was scarce and available mainly during short and typically heavy summer storms. Besides natural rainfall, water was captured in wells and dammed areas in the arroyo (an intermittently running stream) that had carved the canyon, Chaco Wash, and channeled by a series of ditches. Timber, essential for building the roofs and upper levels, was formerly abundant in the canyon but vanished during the Chacoan florescence owing to deforestation and drought. For that reason, Chacoans trekked 80 kilometers on foot to coniferous forests to the south and west, chopping down trees, then peeling them and letting them dry for a long time before returning and transporting them back to the canyon. That was no minor undertaking: hauling each tree took a team of workers many days, and over the three centuries of construction the building and upkeep of the roughly twelve great house and great kiva sites in the canyon consumed about 200,000 trees. The Chaco Canyon's designed landscape: although Chaco Canyon contained an architectural density never seen before in the region, the canyon was a small part at the heart of a wide, interconnected territory that formed the Chacoan civilization. Almost 200 settlements with great houses and kivas of the same characteristic style and architecture as those in the canyon existed beyond it, but on a lesser scale. While those sites were most frequent in the San Juan Basin, they covered an area of the Colorado Plateau larger than England. To help connect these settlements to the canyon and to each other, Chacoans built an extensive system of roadways by digging and leveling the ground below, some reinforced with stone or earthen berms for support. These roads radiated in strikingly straight lines from large houses in the canyon and beyond.
Some sites may have served as observatories, permitting Chacoans to track the position of the sun before each solstice or equinox, information that could have been used in agricultural and ceremonial planning. One of the most well-known of these is the "Sun Dagger," a set of rock carvings on Fajada Butte. Two spiral petroglyphs are located near the summit; at the solstices, the equinoxes, and certain positions of the moon, they were bisected or framed by shafts of sunlight ("daggers") passing between three stone slabs in front of the spirals. Pictographs, rock images created by painting or similar means and found on parts of the canyon walls, provide further evidence of the Chacoans' celestial knowledge. One pictograph depicts a bright star, which could be a symbol of the supernova of 1054 CE. This event would have been visible for a long time and was therefore easily seen from the canyon. A pictograph showing a crescent Moon in proximity to the star supports this argument: the moon was in its waning crescent phase at the time the supernova reached its peak brightness.
The average family unit size in Hopkinton, RI is 2.97 family members, with 83.9% of residents owning their homes. The average home value is $265,939. Those renting pay an average of $1,015 per month. 61.5% of households have two incomes, and the median household income is $90,134. Average individual income is $38,117. 5% of inhabitants live at or beneath the poverty line, and 9.5% are disabled. 6.3% of residents are veterans of the US military.
The work force participation rate in Hopkinton is 71.8%, with an unemployment rate of 4.5%. For those in the work force, the average commute time is 30.4 minutes. 10.1% of Hopkinton’s population have a master's degree, and 20.6% have a bachelor's degree. Among those without a college degree, 33.6% attended at least some college, 31.5% have a high school diploma, and just 4.3% have less than a high school education. 3% are not covered by medical insurance. |
Helpful Tips from Professionals on Eggs
Eggshells are constructed from extremely hard proteins that protect the yolk and egg white from being pierced. Female animals of all species of birds and reptiles lay eggs, which typically contain albumen, a protective covering, a chorion, and a vitelline membrane, within various thin-shelled membranes. In some species, eggs are fertilized internally; in other species, both the egg and the embryo develop externally; in still others, both develop simultaneously.
The shell and albumen are hollow, though this is not visible from the eggshell itself. The eggshell (often called the "germ") is comprised of several layers. The innermost layer is a thin film of keratin, while the outermost layer is made up of shed skin cells. Eggshells differ in size and density depending on species and reproductive capacity. They are normally not smooth; some are semi-round or oval in shape, or have small bumps or ridges on their surface. In chickens, eggshells may be red, brown, or yellow.
Chickens lay about one egg every two days, which can seem surprisingly slow when you consider that the typical person consumes around two eggs per day. Of course, chickens are not always able to keep all of their eggs; some are culled during early production and others may die soon after hatching. Nonetheless, because they are so effective at producing healthy eggs, commercial egg farmers consider all chickens productive, even those that do not lay an egg for weeks or months at a time. In fact, chickens are fairly hardy animals, with few of the health problems common in wild birds. Still, more modern farming methods such as battery rearing, mass feed, antibiotics, and other chemicals can pose risks to a hen's health, making it important to choose healthy, natural eggs over the cheaper options.
After the eggs are collected, the chicken is processed and its head is often discarded. The remaining parts of the chicken are then cleaned and handled according to local practice. The most nourishing part of the chicken is the white meat, which is often ground into flour to make buns and is the most popular source of protein among consumers. The best quality chicken meat is very lean, with almost no fat. The white meat should be marinated in olive oil, which helps preserve its natural sheen and flavor. Chicken breeders often add dyes and seasonings to the marinade to make it more appealing to customers.
After the egg is cleaned and any marinating or added spices have been applied, the yolk is incubated in an incubator and then separated from the egg white using a fine-tooth mill. The resulting egg white and yolk are then cooked on a rotisserie, in an oven, or on a hot grill until done. After cooking, the eggs are placed in canning jars and kept until their maximum expiry date. There are numerous options available for preserving your chickens' eggs, such as canning, drying, freezing, dehydrating, or smoking.
The albumen is what we call the "hard" inner egg white, and it is typically sold to consumers in small pieces. It is a highly valued and sought-after product because of its rich, creamy texture and taste. Most of the albumen is removed from the chicken at the time of its death, which means it is kept in the refrigerator until it can be released for sale; this practice of refrigerating the hen's albumen is called "cold storage." There are now several techniques for preserving the albumen, but one of the most commonly used is a process called the "germinal disc."
This process, which is still being refined by experts, enables chickens to be kept healthier for longer periods of time. Several things still need to be developed before it is introduced to the general market, but one thing is certain: the world will always need eggs, so it will probably happen. For more information on how to properly preserve your chicken eggs, visit the website listed below.
If you are looking for the best products to help preserve your hens' fresh eggs, you can find them in our shop. We have all kinds of options, including cleaning solutions developed to clean and sanitize without harming the birds themselves. There are also various cleansers designed specifically for cleaning and disinfecting nesting boxes, providing excellent protection against contamination and disease. So, if you are looking for ways to keep your flock healthy and happy over the long haul, you should definitely take a look at our website. For full details, see our Kassandra Smith January article on the topic.
Many people recognize that eggs are a basic source of nourishment, but not everyone knows that there are a number of bird species that lay eggs. The most notable among these are the Scooks, Thysanura, Eclectus, Lesser Jacana, and the Black-capped Chickadee. All of these species have both males and females, but the only species with which humans are truly familiar are the Scolds. The other egg-laying species are more familiar to us, such as the Lories, Echidnas, Carp, Ring-necked Parakeet, Macaw, Lechura, etc.
Most eggs produced by these species of birds come with a protective covering of some kind. Eggshells are usually a mix of calcium carbonate and albumen. Eggshells provide an egg's hardness and protection against cracking. They also act as a kind of shock absorber, which is very important in egg production.
There are several breeds of chicken that will lay eggs, but they are all closely related to the hen. The breeds that will normally lay eggs are the Rhode Island White Hen, the Rhode Island Red Hat, the Jacket Red Neck, the Rhode Island Lobster Back, the Eastern White Poultry, the Maine Coonback, and the Canada Goose. All of these breeds ovulate during the same period, which has led many people to call them "similar." They are even called "genetic twins," since there are usually close resemblances between any two breeds of chicken. That is why many people will buy two of the same breed of chicken: they are so alike.
Some hens will not ovulate at all, or will not ovulate properly. This is rare, but it can happen. Most of the time, though, the hens will still produce viable eggs, and they tend to have a slightly higher propensity to produce larger quantities of viable eggs. These larger eggs will normally have higher protein content as well.
July 18, 2017 Reading Time: 5 minutes
Developing countries continue to struggle with accessible financial services, a key indicator of a healthy economy. Without important banking services, poverty-stricken nations find themselves stuck with stagnant economies. Blockchain technology introduces a portfolio of options that may enable critical steps to a more developed and stable country. With blockchain’s algorithms, a user can transfer money, equities, bonds, titles, or other important assets from peer to peer in a secure, private, and more cost-effective way. With the iron grip of financial intermediaries loosening because of blockchain technology, people without accessible financial services in developing countries could find themselves on a more even playing field with other, more fortunate parts of the globe.
Most of the people that reside within developing countries don’t have stable credit history, proper identification, or access to banking facilities. These roadblocks have slowed the economic growth of developing countries, which have sometimes resorted to extreme alternative options. Land and livestock are common signs of wealth and often used as currency where financial institutions are non-existent, but problems clearly arise on a global scale. It may sometimes be feasible to trade chickens for corn in a Togolese village, or lard for beans in a Honduran community, but global transactions require financial intermediaries and currency. Financial accessibility through the blockchain might be a path to greater financial equality.
Remittances Reimagined
Blockchain technology and cryptocurrencies such as Bitcoin enable a consumer with an internet connection to send and receive money without requirements that many in the developing world cannot meet. With the ability to keep money secure and accessible, new opportunities immediately become reachable. For example, microloans, payday loans, cash advances, and business loans all become options for consumers within developing countries. Wayniloans, a Bitcoin-lending platform in Latin America, is the trailblazer for this newly developed concept. Its portfolio of different lending services uses the Bitcoin blockchain and offers a much lower interest rate than traditional lending companies.
Fostering such investment opportunities can lead to increases in growth and employment within developing countries. The amount of remittances coming into some countries can be a frightening reminder of how lacking in employment opportunities those countries can be. Blockchain technology can make the remittance process more cost-effective as well.
Blockchain can offer substantial cost savings for the poor who send money home to their families in other countries. Mark Van Rijmenam, an expert in Blockchain development, says it best: “A blockchain based remittance service uses a crypto coin to transfer money instantly across the globe for a fraction of the costs and uses local agents to exchange the crypto coin into the local fiat money that can be used by the receiver. Instead of days and costing a fortune, it takes minutes and is nearly free.”
In many poverty-stricken countries, such remittances are used for education, food, clothing, and medicine. According to the World Bank, developing countries received over $410 billion in remittances in 2013, which grew to $441 billion in 2016. These transactions are currently facilitated for a hefty cost by powerful financial institutions such as Western Union, Money Gram, TransferWise, and Ria. On average, the cost for a transaction can be anywhere from 8.4 percent to 12 percent. In most cases, blockchain can eliminate or significantly lower these transaction costs. With the extreme poor living on $1.25 a day, every cent counts.
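The scale of those fees is easy to check with back-of-the-envelope arithmetic. The sketch below simply applies the quoted fee range to the 2016 remittance total from the World Bank figures cited above; it assumes the fee percentages apply uniformly, since the article does not break fees down by corridor.

```python
# Rough estimate of annual remittance fees paid out of money sent to
# developing countries, using the figures cited above:
# $441B received in 2016, intermediary fees of 8.4%-12%.

total_remittances_2016 = 441e9      # USD received in 2016
fee_low, fee_high = 0.084, 0.12     # typical transaction-fee range

fees_low = total_remittances_2016 * fee_low    # lower-bound total fees
fees_high = total_remittances_2016 * fee_high  # upper-bound total fees

print(f"Estimated annual fees: ${fees_low / 1e9:.1f}B to ${fees_high / 1e9:.1f}B")
# prints "Estimated annual fees: $37.0B to $52.9B"
```

Even at the low end, tens of billions of dollars a year go to intermediaries, which is the margin that blockchain-based remittance services aim to compress.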
The challenges for widespread implementation may seem daunting, but various start-ups and organizations have increased their efforts to meet the challenges of blockchain and Bitcoin remittance. Aircoinz of Argentina, Beam Remit of Ghana, BitPesa of Kenya, Coins.ph of the Philippines, and Coin Batch of Mexico are all leaders in this innovative process. In addition to the private sector, the United Nations is considering applications in microfinance, remittances, and other areas with planned blockchain-focused projects.
Today, Nigeria is the nucleus for such instant, reliable, cheap money transfers. With the largest economy and population in Africa, it seems an ideal proving ground for this technology. WorldRemit, a global money-transfer company operating in Nigeria, has 140 cash-pickup locations and processes 400,000 transactions every month. With 68 percent of the population lacking access to financial institutions, blockchain brings hope for greater financial inclusion.
Assets Turned Legal
Although remittances are a lifeline for many families, people must begin building and investing in local economies for developing countries to have significant growth. In many cases, this can only happen with investments from banks or other lenders. Insufficient collateral is a major roadblock for many businesses in developing countries because nothing is officially documented, saved, or updated. A blockchain ledger, a digital recordkeeping system running on millions of devices capable of recording anything, can help in this regard as well.
Rule of law and protection of property rights are often very weak in developing countries. Although much of the traditional legal system has never been put in place, let alone enforced, blockchain technology might be able to help in a country’s pursuit of financial equality. Land is often never documented with legal deeds or contracts, thus sparking conflicts over ownership. One Ghanaian company, Bitland, plans to use the blockchain to allow individuals and groups to survey land and record title deeds on the Bitland Blockchain. With nearly 78 percent of the land unregistered in Ghana, the Bitland Blockchain can resolve land disputes quickly and correctly.
The experience of Haiti after the 2010 earthquake shows the potential importance of blockchain-based land registries. The quake left much of the country’s infrastructure in ruins, including municipal buildings in which land deeds and trusts were stored. To this day, the earthquake has caused major hurdles for proving the ownership of land among many of the citizens in Haiti.
Recording land ownership on a blockchain is ideal because the data are irreversible. Blockchains cannot be tampered with, stolen, or privately changed. It also reduces manual errors, while improving security processes for transferring documents including mortgages and contracts. In addition, land ownership and documentation of it could make a community or region more attractive to financial institutions, because prospective borrowers have their own land as collateral.
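The tamper-evidence described above comes from chaining records together with cryptographic hashes. The toy sketch below is a deliberate simplification (no network, no consensus, and the parcel records are invented for illustration), but it shows the core property: editing any past record invalidates every hash after it.

```python
import hashlib

def record_hash(prev_hash: str, data: str) -> str:
    """Hash a record together with the previous record's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain of hypothetical land-title records.
records = ["parcel 17 -> Alice", "parcel 17 -> Bob", "parcel 17 -> Carol"]
hashes = []
prev = "0" * 64  # genesis value
for data in records:
    prev = record_hash(prev, data)
    hashes.append(prev)

def verify(records: list, hashes: list) -> bool:
    """Recompute the chain; any edited record breaks every later hash."""
    prev = "0" * 64
    for data, h in zip(records, hashes):
        prev = record_hash(prev, data)
        if prev != h:
            return False
    return True

print(verify(records, hashes))       # True: the chain is intact
records[1] = "parcel 17 -> Mallory"  # retroactively alter a past deed
print(verify(records, hashes))       # False: the tampering is detected
```

A real blockchain adds distributed replication and consensus on top of this chaining, so that no single party holds the only copy of the registry.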
Businesses Better Equipped
When small businesses in developing countries look to developed countries for customers, the process is often complicated and expensive, but the use of a blockchain platform with a digital currency can streamline it.
Farmers and small business owners who remain confined to their communities can now have access to global customers without paying high transaction fees. Bitcoin and other digital currencies have potential to facilitate small-scale global commerce. A producer of fresh papayas from Mexico or a hand-woven basket maker in India can sell their products to customers in the United States in exchange for digital-currency tokens, which can then be redeemed for local currency. With the use of Bitcoin or other digital currencies, small businesses in developing countries can immediately have access to a secure bank account and transaction platform, and can receive payments instantly. In addition, they would not have to pay transaction fees to financial intermediaries such as Visa, Chase, or PayPal.
The Path to Prosperity
Introducing and implementing blockchain technology might be the start of a domino effect that could potentially bring financial improvement to many regions within the developing world. Family members working abroad could send money more securely and at a lower cost. At home, citizens in developing countries could better protect and secure title to assets. With their land becoming a legal asset, financial institutions would be more willing to loan them money. With this loan, a business could expand and hire more employees, or a family could send its children to schools. Both outcomes would improve the local economy and clear the dark cloud that inhibits economic growth. In addition, businesses in developing countries could tap into customers all over the world without the iron grip of large financial intermediaries. Many developing countries face an uphill climb to achieve real prosperity, but blockchain technology may bring such a goal within range.
Max Gulker
AIER - American Institute for Economic Research
Confederates in Congress: Heritage or Hate?
January 2022
13min read
Our research reveals that 19 artworks in the U.S. Capitol honor men who were Confederate officers or officials. What many of them said, and did, is truly despicable.
Confederates honored with statues in the U.S. Congress include CSA President Jefferson Davis, Vice President Alexander Stephens, and Gen. Robert E. Lee, Gen. Wade Hampton, Col. Zebulon Vance, and Gen. Edmund Kirby Smith. Photos courtesy of the Architect of the Capitol.
The Civil War ended more than a century and a half ago, but it still casts a long shadow. In recent protests during which statues of Confederate heroes were torn down or defaced, many people have made it clear they will no longer tolerate public memorials to men who fought to keep Americans of African descent enslaved, and who caused the deaths of some 365,000 members of the U.S. military. The Civil War caused pain beyond measure.
“The Arkansas constitution should preserve ‘WHITE MAN’S government in a WHITE MAN’S COUNTRY,’” said Uriah Rose, whose statue represents his state in Congress.
American Heritage has pored over lists of artworks in the U.S. Capitol and discovered that there are 19 statues, busts, and paintings of Confederates, not 11 as has frequently been reported. Below is a list of these artworks with short bios of the men they depict.
“The halls of Congress are the very heart of our democracy,” Speaker Pelosi said in a letter requesting the removal of the statues. “The statues in the Capitol should embody our highest ideals as Americans, expressing who we are and who we aspire to be as a nation. Monuments to men who advocated cruelty and barbarism to achieve such a plainly racist end are a grotesque affront to these ideals. Their statues pay homage to hate, not heritage. They must be removed.”
Alexander Stephens is one of the great villains in American history.
Alexander Stephens served as Vice President of the Confederacy, and was perhaps the most eloquent speaker urging the dissolution of the United States before the Civil War. Library of Congress.
Each state is allowed to be represented by two statues, and at this point in our history every state has many admirable figures to choose from. Last year, for example, Arkansas voted to replace two figures from the Civil War with statues of music legend Johnny Cash and civil rights icon Daisy Lee Gatson Bates, although the swap hasn't happened yet.
The statue of Gen. Robert E. Lee in the Capitol's crypt memorializes a man who led Confederate troops in 14 battles causing an estimated 126,000 U.S. Army casualties. Photo by Jamie Stiehm.
Research into Confederate statues in the Capitol can be difficult because, for some reason, they may not be included on the Architect of the Capitol's lists of artworks representing their states. (For example, Robert E. Lee is not on the list of artworks of people from Virginia, though he is on the AOC website, and Zebulon Vance is not on the North Carolina list.)
Democrats in the House have introduced a bill to remove the statues, but Senate Majority Leader Mitch McConnell has insisted that a decision on statues in the Capitol should be left up to states.
"Every state is allowed two statues, they can trade them out at any time ... a number of states are trading them out now. But I think that's the appropriate way to deal with the statue issue. The states make that decision," McConnell told reporters.
It is good to remember that in the decades after the Civil War, men in Congress who only a few years before had fought desperately to kill each other, often in very brutal and personal ways, were able to work together to pass legislation that helped to build the great nation that emerged on the world stage in the 20th Century.
At least there is an important lesson in that history.
Here are short profiles of the Confederates honored in the Capitol.
1. Gen. Joseph Wheeler, Statue, National Statuary Hall
Gen. Joe Wheeler
"Fighting Joe" Wheeler was a graduate of the U.S. Military Academy who joined the Confederate Army in 1861 as a cavalry officer. He rose to lieutenant general and fought in numerous battles including Shiloh, Corinth, Stones River, Chickamauga, Chattanooga, Knoxville, Atlanta, and numerous other campaigns and engagements.
After the war, he became a planter and lawyer, and served briefly in the House of Representatives, where he was said to have worked to heal the differences between Northern and Southern interests.
Wheeler volunteered at the start of the Spanish-American War and was given command of a cavalry division that included Teddy Roosevelt's Rough Riders. He was senior officer present at the Battle of San Juan Hill.
2. Uriah Milton Rose, Marble Statue, Statuary Hall
Uriah Rose
One of the two statues representing Arkansas in Statuary Hall is of Uriah Rose, a prominent judge and lawyer who helped form the American Bar Association and served as its president. “Judge Rose was one of the great lawyers not only of Arkansas but of the United States,” wrote a fellow judge.
Rose was initially opposed to secession because he felt the South couldn’t win a war against the North. But Rose actively backed the Confederacy throughout the War, serving as a state judge. When captured by Union forces, he refused to swear allegiance to the Federal government. After the War, Rose fought hard against the ratification of the Reconstruction Constitution.
During the drafting of the state’s 1874 constitution ending Reconstruction, Rose stated that the document should “preserve ‘WHITE MAN’S government in a WHITE MAN’S COUNTRY’”
3. Gen. Edmund Kirby Smith, Statue, Capitol Visitor Center
Gen. Edmund Kirby Smith
Smith graduated from the U.S. Military Academy in 1845, and served in the Mexican War under General Zachary Taylor and General Winfield Scott. After the war he taught math at the Military Academy and fought in the cavalry on the frontier.
Smith joined the Confederate forces and served as chief of staff to General Joseph E. Johnston at Harper's Ferry and helped organize the Army of the Shenandoah. While commanding a brigade in the army, he was severely wounded at Manassas.
From 1863 until the end of the war Smith commanded the Trans-Mississippi department. He surrendered the last military force of the Confederacy and considered, but abandoned, a plan to settle in Mexico to form a slavery-based republic.
Smith later served as president of the Atlantic and Pacific Telegraph Company, chancellor of the University of Nashville from 1870 to 1875, and professor of mathematics at the University of the South in Sewanee, Tennessee.
Smith died on March 28, 1893, at Sewanee, the last surviving full general of either army.
4. Dr. Crawford W. Long, Statue, Crypt
Crawford Long
Best known for discovering ether as an anesthetic, Crawford Long never fought in the Confederacy, but he did join a Confederate militia unit in Athens as a doctor. He was born in Georgia and attended the University of Georgia in Athens, where he was the roommate of future Confederate Vice President Alexander Stephens.
After obtaining his M.D. in Pennsylvania and studying in New York, Long returned to Georgia to practice medicine. In 1842, he used sulfuric ether as an anesthetic for the first time while surgically removing a tumor from a young boy. Long was a Whig, following family friend and statesman Henry Clay, and opposed Georgia’s secession from the Union. Still, after the state joined the Confederacy, Long accepted a post as physician to the University Campus Hospital in Athens, where he provided medical care to Confederate soldiers.
Following the war, Long applied for and received a presidential pardon for his service on behalf of the Confederate government. He has since had many statues, museums, and hospitals named in his honor.
5. Howell Cobb, Painting, Speaker’s Lobby (Removed)
House Speaker Howell Cobb
Cobb was a five-term member of the House of Representatives and elected Speaker in 1849 at the age of 34. After serving as Speaker for one term, he resigned to become governor of Georgia. He also served as Secretary of the Treasury in the Buchanan Cabinet.
Cobb was one of the founders of the Confederacy. He served as the President of the Provisional Congress of the Confederate States, as the delegates of the Southern slave states voted to declare that they had seceded from the United States and created the Confederate States of America.
“If slaves seem good soldiers, then our whole theory of slavery is wrong.”
As an infantry general in the Confederate Army, Cobb served the entire war and fought in numerous campaigns. He suggested the creation of Andersonville Prison, where 13,000 Union POWs died.
Cobb also argued against the idea of enlisting slaves into the army. “You cannot make soldiers of slaves, or slaves of soldiers,” said Cobb. “If slaves seem good soldiers, then our whole theory of slavery is wrong.”
6. Charles Crisp, Painting, Speaker’s Lobby (Removed)
Charles Crisp
At the outbreak of the Civil War, Crisp enlisted in the 10th Virginia Infantry and was commissioned a lieutenant. He served with that regiment until May 12, 1864, when he was taken prisoner at the Battle of Spotsylvania Court House.
After the war Crisp studied law and passed the bar. He was elected to Congress from Georgia in 1882, and served until his death in 1896.
From 1890 until his death, he was leader of the Democratic Party in the House, as either the House Minority Leader or the Speaker of the House.
7. Alexander H. Stephens, Marble Statue, Statuary Hall
Alexander Stephens
As a Congressman from Georgia from 1843 to 1859, Stephens actively participated in the most important slavery debates of the period including the Compromise of 1850, the Kansas-Nebraska and Fugitive Slave Acts. He was on the wrong side of history on all those important debates.
Alexander Stephens is often considered one of the great villains in American history for his “Cornerstone Speech.”
Stephens is often cast as one of the great villains in American history. After leaving Congress in 1859, he delivered one of the most famous speeches in our nation’s history in March 1861 – an eloquent rallying cry calling for Southerners to secede.
It's now known as the “Cornerstone Speech” because Stephens insisted that white supremacy and slavery were the “cornerstone” upon which secession was based. There were other reasons, too: states rights, tariffs, funding of public works, etc. But slavery was the foundation. Stephens' words helped to inflame the South and bring on the bombardment of Fort Sumter a few weeks later.
After all his efforts to end the United States of America, Alexander Stephens now sits serenely in its Capitol.
8. John C. Breckinridge, Bust, Senate Chamber
John C. Breckinridge
Breckinridge served as Vice President of the United States from 1857 to 1861. Still in the U.S. Senate at the outbreak of the Civil War, he was expelled after joining the Confederate Army.
Jefferson Davis made Breckinridge a major general and corps commander, and he fought in such important battles as Shiloh, Vicksburg, Murfreesboro, Stone's River, Chickamauga, Chattanooga, New Market, Cold Harbor, Monocacy, Winchester, and Bull's Gap.
At the battle of Saltville, his troops massacred African-American troops although Breckinridge had not ordered the action.
Breckinridge was widely respected by his fellow generals, although he often disagreed with his superior, Gen. Braxton Bragg.
Breckinridge resigned from the CSA in 1865 and was appointed Secretary of War by President Davis on January 19, 1865. After the war, he fled to Canada until Pres. Andrew Johnson declared an amnesty on December 25, 1868.
9. Edward D. White, Statue, Capitol Visitor Center
Edward D. White
A native of Louisiana, White left Georgetown University in 1861 to enlist in the Confederate Army. He served for most of the war.
After the war, White practiced law in Louisiana and served in the state's Senate and Supreme Court. He was elected to the U.S. Senate in 1890 and served until 1894, when he was appointed to the U.S. Supreme Court by President Cleveland.
White was named Chief Justice by President Taft in 1910 and served until his death in 1921.
Roger B. Taney, Bust, Old Supreme Court Chamber
Chief Justice Roger B. Taney, by Matthew Brady
Taney is not counted as one of the Confederates on this list, but is included because of the pivotal role he played before and during the Civil War as Chief Justice of the Supreme Court with unabashed pro-slavery leanings who opposed many of Lincoln's policies.
Taney owned a large plantation in Maryland, a slave state until the end of the Civil War. Although he did free his slaves, Taney wrote in his opinion for the controversial 1857 Dred Scott case that black people “are not included, and were not intended to be included, under the word ‘citizens’ in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States.”
As the Civil War unfolded, Taney sympathized with the seceding states, but he did not resign from the Supreme Court. He actively opposed many of Lincoln's actions.
10. Jefferson Davis, Bronze statue, Statuary Hall
Jefferson Davis
Jefferson Davis was the Confederacy's first and only president, serving from 1861 to 1865. Born in Kentucky but raised in Mississippi, he attended West Point and fought briefly in the Black Hawk War of 1832 before returning to Mississippi to run a cotton plantation.
Davis was active in politics well before the Civil War. He was elected to the U.S. House of Representatives in 1845, but resigned a year later to fight in the Mexican-American war. He entered the U.S. Senate the following year, and in 1853 was appointed U.S. Secretary of War by President Franklin Pierce, serving with distinction.
Throughout, Davis was an ardent supporter of states' rights and the institution of slavery, though he did caution against secession in the lead-up to the war. In 1861, despite his own misgivings – he had wanted a military command – the Confederate Congress elected Davis president.
As president, Davis presided over a number of significant battles and events. It was Davis who ordered the first attack on Fort Sumter, triggering all-out war with the North. He appointed Robert E. Lee to command the Army of Virginia in 1862, approving several offensives – many of them failed – throughout the conflict. At one point, he also appealed to Europe for assistance in the South’s effort.
In 1865, Davis was captured by Union forces in Georgia and imprisoned in Virginia for two years. He was never tried for treason.
See "Was Jefferson Davis Captured In A Dress?" by James L. Swanson.
11. Col. James Zachariah George, Bronze Statue, Statuary Hall
James Z. George
As a member of the Mississippi Secession Convention, George signed the Ordinance of Secession. He joined the Confederate Army during the Civil War and served as a colonel. He was captured twice and spent two years in prison. After the war, he returned to the practice of law.
George represented Mississippi in the Senate from 1881 until his death in 1897. He also served as a member of the Constitutional Convention of 1890 that created a new constitution to prevent the state's African-American citizens from voting, and successfully defended it before the Senate and Supreme Court.
North Carolina
12. Zebulon Baird Vance, Bronze Statue, Statuary Hall
Zebulon Vance
Zebulon Baird Vance was a lawyer who enlisted in the Fourteenth North Carolina Regiment before the war even started and was promoted to colonel. He fought in a few engagements, but resigned after he was elected governor of North Carolina in September 1862.
In 1879 he was elected to the U.S. Senate where he served until his death in 1894.
A prolific writer, Vance became an influential leader in the postbellum era. As a leader of the “New South”, Vance favored the rapid modernization of the Southern economy, railroad expansion, school construction, and reconciliation with the North.
South Carolina
13. James Lawrence Orr, Painting, Speaker's Gallery (Removed)
James L. Orr
James Orr served five terms in Congress, including one term as Speaker of the House.
At the beginning of the Civil War, Orr organized and commanded Orr's Regiment of South Carolina Rifles. He resigned from the CSA in 1862 and entered the Confederate Senate, where he served as chairman of the influential Foreign Affairs and Rules committees.
After the war, he served as governor of South Carolina. In 1872 President Ulysses S. Grant appointed him to be Minister to Russia in a gesture of post-Civil War reconciliation.
14. Gen. Wade Hampton, Marble Statue, Statuary Hall
Gen. Wade Hampton
Born in South Carolina, Wade Hampton III was descended from a long line of prominent Southern politicians and plantation owners. His family was one of the wealthiest in the Southeast, and also owned one of the largest populations of slaves. Hampton III was elected to the South Carolina General Assembly in 1852 and served as a state Senator from 1858 to 1861, at which point he resigned to join the Confederate Army.
During the war, Hampton distinguished himself as a capable military leader. He led a force known as “Hampton’s Legion,” which he personally financed and raised using his cotton fortune, and was present at many major battles, including Gettysburg.
Following the war, Hampton returned to South Carolina politics, becoming a leader of the “Lost Cause” movement as well as of the “Redeemers,” a political coalition formed to restore white rule following Reconstruction. The faction helped get him elected governor in 1876, a position he held until entering the U.S. Senate in 1879.
15. John Bell, Portrait, Speaker's Lobby
John Bell
John Bell was one of the most prominent American politicians before the Civil War. He represented Tennessee in the House from 1827 to 1841, and was elected Speaker for the 23rd Congress. Bell served in the Senate from 1847 to 1859, and briefly as Secretary of War during the administration of William Henry Harrison (1841).
Although a slaveholder, Bell opposed the expansion of slavery in the 1850s and campaigned vigorously against secession before the Civil War. In the race for President in 1860, often called “America's oddest election,” Bell played a critical role as the candidate for the Constitutional Union Party, a third party which took a neutral stance on slavery. The four major candidates split votes allowing Lincoln to win with only 39.8% of the votes cast.
But after Fort Sumter in April 1861, Bell abandoned the Union cause, returned to Tennessee, and called for the state to join the Confederacy.
The move angered many of Bell's former friends. Horace Greeley said that he had brought an “ignominious close” to his public career and the editor of the Louisville Journal wrote that the decision to support the Confederacy brought “unspeakable mortification, and disgust, and indignation” to Bell's former supporters.
When the Union Army occupied Tennessee in 1862, Bell fled to Alabama, and later to Georgia. He died in 1869.
16. Robert M.T. Hunter, Painting, Speaker's Gallery (Removed)
Robert Hunter
Robert M.T. Hunter was a Virginia lawyer and plantation owner who served eight years in the House of Representatives, including as Speaker (1839-41). He was also a U.S. Senator from 1847 to 1861. He owned an estimated 120 slaves on his plantation northeast of Richmond.
Hunter was expelled from the U.S. Senate in 1861 and became Secretary of State of the Confederacy, and later a Confederate Senator (1862–1865). Like numerous other high-ranking Confederates, he was a frequent critic of President Jefferson Davis.
Confederate $10 bill
A portrait of Virginia plantation owner Robert Hunter graces both the U.S. Capitol and a Confederate $10 bill, printed during his service in the Confederate government.
17. Gen. Robert E. Lee, Bronze Statue, Crypt (removed)
Robert E. Lee
Although it is not listed on the Architect of the Capitol’s website, a statue of Gen. Robert E. Lee stands in the building’s crypt. It honors a man who led Confederate troops in 14 battles, causing an estimated 126,000 U.S. Army casualties.
Lee’s role as Rebel Commander in the Civil War has made him one of the most–if not the most–divisive of Confederate figures. Born to Revolutionary War hero Henry “Light-Horse Harry” Lee in Virginia, Lee attended West Point before serving as an officer in the Corps of Engineers. He fought in the Mexican-American War in 1846, then served as superintendent of West Point, establishing himself as one of the highest-profile officers in the U.S. Army. His standing was such that Abraham Lincoln, upon the Civil War’s outbreak in 1861, offered him command of federal forces–an opportunity Lee turned down to join the Confederacy when his home state seceded that same year (Lee, whose positions on slavery have been hotly debated, argued that he could not fight against “his own people”).
After serving as military adviser to President Jefferson Davis, Lee was tapped to relieve Joseph E. Johnston of his command of the Army of Virginia in 1862. His fighting force became one of the Confederacy’s most successful, leading bloody battles at places like Antietam, Fredericksburg, and Chancellorsville. A victory in the latter battle convinced him to invade the North a second time, but he soon suffered a decisive defeat at the Battle of Gettysburg. He was forced to surrender to the Union Army at Appomattox Court House in 1865, just two months after being named General-in-Chief of all Confederate forces.
Following the war, Lee returned home to Virginia and took a position as president of Washington College. He died five years later.
18. President John Tyler, Marble Bust, Senate Chamber
John Tyler
Although he had served as President of the United States, John Tyler headed the committee that negotiated the terms for Virginia's entry into the Confederate States of America and helped set the pay rate for CSA military officers.
On June 14, 1861, Tyler signed the Ordinance of Secession. One week later the convention unanimously elected him to the Provisional Confederate Congress. Tyler was seated in the Confederate Congress on August 1, 1861, and served until just before his death in 1862.
West Virginia
19. John Kenna, Marble Statue, Hall of Columns
John Kenna
Less well known for his Confederate service than other figures on this list, John Kenna was born in Virginia to a lockmaster and sawmill owner. At 16, he joined the Confederate Army under General Joseph O. Shelby and served in the Iron Brigade, a cavalry force that participated in several major raids into Missouri (Shelby later fled with the brigade to Mexico rather than surrender at the war's end).
Wounded in battle, Kenna returned home to practice law and was admitted to the bar in 1870. He rose from prosecuting attorney of Kanawha County in 1872 to Justice pro tempore of the county circuit in 1875, and to the U.S. House of Representatives in 1876. He was a major proponent of the railroad, and in 1883 was elected to the U.S. Senate, where he became Democratic minority leader. His marble statue was donated to the National Statuary Collection by West Virginia in 1901.
We hope you enjoyed this essay.
In 2020, the world has been faced with an unprecedented challenge: COVID-19. In a world that is now all too familiar with social distancing, lockdowns, and increased levels of hygiene management and control, it’s to be expected that infection prevention and control (IPC) has played a significant role.
Infection prevention and control (IPC) has a long-established and vital role in preventing healthcare-acquired infections. However, the majority of the guidance relates to acute general hospitals and is often difficult to translate to mental health settings.
Managing and controlling infections in mental health inpatient settings is a challenge as by their very design, mental health units aim to achieve the polar opposite of social isolation. As such, the COVID-19 pandemic has presented such environments with profound challenges.
Facilities within mental health units promote social interaction, group activity and as far as possible, freedom of movement within the environment. In terms of infection control, the characteristics of the mental health environment, in concert with the model of care, pose significant challenges.
Managing a highly infectious virus whilst at the same time preserving the unit’s primary functions, creates a unique set of circumstances. With the rapid spread of COVID-19, these challenges have never been greater and are likely to persist long into the future.
Mental health settings present a discrete set of challenges for IPC, including environmental, practice and clinical issues. For example, patients with a disordered mental state may be unable to understand or easily accept the infection control measures put in place.
Therefore, these measures need to be carefully balanced with mental health treatment and infection control needs. Mental health situations need careful management with an acknowledgment that there is the risk that the infection could spread if the patient cannot be effectively isolated.
There are strategies that can be implemented to assist with this thus reducing the risk of spread of infection. Mental health colleagues within the Trust have embraced the changes that we have had to implement to prevent the spread of Covid-19.
Personally, I never imagined that I would see mental health staff wearing scrubs and surgical face masks. IPC colleagues and mental health colleagues are working together to continuously improve guidance for mental health services that reflects the challenges, especially in regard to PPE, which we will keep all affected parties updated about.
Written by Louise Forrister, Lead Nurse for Infection Control for Mental Health & Learning Disability Nursing Projects
Kickboxing is a group of stand-up combat sports based on kicking and punching, historically developed from karate, Muay Thai, Taekwon-Do and Western boxing. Kickboxing is practiced for self-defence, general fitness, or as a contact sport.
Japanese kickboxing originated in the 1960s, with competitions being held since then. American kickboxing originated in the 1970s and was brought to prominence in September 1974, when the Professional Karate Association (PKA) held the first World Championships. Historically, kickboxing can be considered a hybrid martial art formed from the combination of elements of various traditional styles. This approach became increasingly popular since the 1970s, and since the 1990s, kickboxing has contributed to the emergence of mixed martial arts via further hybridisation with ground fighting techniques from Brazilian jiu-jitsu and Folk wrestling.
There are many different styles of kickboxing due to its vast mixed background. Our Kickboxing class is practiced purely as a combat sport and is based on a Taekwon-Do style with western boxing incorporated too. The classes are focused on developing skill through three levels: beginner, intermediate and advanced.
When first starting, the student will begin at novice level (white to yellow belt ranks), then progress to intermediate (green to blue belt ranks) before moving on to advanced (red and black belt ranks). Once reaching Black Belt, the practitioner no longer receives a new belt colour; instead, their grade is represented in Dans (numbered ranks), usually shown in Roman numerals on the end of the belt. The highest rank that can be achieved in the combat sport style of Kickboxing is an 8th Dan Black Belt.
It is considered to be calming and relaxing. It requires a high degree of concentration and focus to be able to deliver the multiple movements, punches and kicks. Focus is often on helping body and spirit feel centred and connected to each other.
If practiced regularly it will provide a cardiovascular workout. It will help develop muscle tone, strength, flexibility, balance and concentration, and it is considered to be an effective total-body workout.
It can be practiced alone or with others, and group introductions are widely available. It is accessible to people of widely ranging ages and fitness levels. Equipment in the initial stages is minimal and mostly inexpensive.
By far the most popular discipline in terms of amateur kickboxing, this form of competition is run on a matted area. The objective of this event is to penetrate your opponent’s defences and deliver an effective and controlled technique with your hands or legs in the form of kicks, punches and sweeps. When a point is scored, the match is paused temporarily for the judges to record the score quickly before the action begins again.
Although this style is very stop-start, it is still a very fast-paced form of competition and exciting to watch once you understand the rules and tactics involved.
The competitors usually wear club t-shirts or comfortable loose fitting clothing with a pair of kicking pants or shorts (except in full contact where males are topless and females usually wear a sports bra).
Smartphone users have discovered a new tracker has been installed on their devices - even though they didn't download it.
Google and Apple have automatically installed a coronavirus tracker on smartphones to assist the government with contact-tracing.
The new feature, called 'COVID-19 Exposure Logging', helps track the spread of the virus and is available on most phones.
Although it has been automatically installed - it is currently turned off and not available to use.
That is because it must be used in conjunction with a contact-tracing app, which is still under construction.
Despite spending millions of pounds and months on its own technology, the UK government has reportedly abandoned its official app before it even left the testing stage.
Instead, Matt Hancock has said the NHS would use an alternative app designed by Apple and Google - which is reportedly still months away from being ready.
The Health Secretary said he was unable to put a date on when the app would be launched but it is understood to be in the autumn or winter.
The app's aim is to trace anyone that a person with coronavirus symptoms came into contact with and alert them to self-isolate.
So what has been installed on my phone?
It's not actually an app, it is the technology to assist a future app - the Exposure Notification API.
And even when the app is available, it won't be mandatory to download it.
The NHS has said "people will always have the choice of whether or not to download the app".
The Exposure Notification API is automatically turned off and even after downloading the app, users can switch it off whenever they want in their phone's settings.
Where do I find this API?
iPhone users can find it in their settings. Simply go to Privacy, then select Health and it'll come up as the first tab.
A description underneath reads: "When enabled, iPhone can exchange random IDs with other devices using Bluetooth.
"This enables an app to notify you if you may have been exposed to COVID-19. Exposure Logging cannot access any data in, or add any data to, the Health app."
On Android phones, the tracker is kept under 'Settings' and then 'Google Settings'.
A description on Android reads: "Your phone uses Bluetooth to securely collect and share random IDs with other phones that are nearby.
"Random IDs are automatically deleted after 14 days.
"If you have COVID-19 you can choose to share your phone's random IDs with the app so that it can notify others anonymously.
"Device location needs to be on to detect Bluetooth devices near you. However COVID-19 exposure notifications don't use device location."
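The rolling-ID scheme the descriptions above outline can be sketched in a few lines of Python. This is a toy illustration only, not the real Apple/Google Exposure Notification API: the class and method names are invented, but it shows the core ideas of short-lived random IDs, the 14-day retention window, and local matching against IDs voluntarily shared by someone who tested positive.

```python
from datetime import date, timedelta
import secrets

ID_RETENTION_DAYS = 14  # random IDs are automatically deleted after 14 days


class ExposureLog:
    """Toy model of exposure logging (names are illustrative, not the real API)."""

    def __init__(self):
        self.heard = {}  # random ID -> date it was heard over Bluetooth

    def new_broadcast_id(self):
        # A fresh random ID reveals nothing about the device or its owner.
        return secrets.token_hex(16)

    def record(self, rolling_id, seen_on):
        # Store an ID heard from a nearby phone, with the date it was seen.
        self.heard[rolling_id] = seen_on

    def prune(self, today):
        # Drop anything older than the 14-day retention window.
        cutoff = today - timedelta(days=ID_RETENTION_DAYS)
        self.heard = {i: d for i, d in self.heard.items() if d >= cutoff}

    def check_exposure(self, shared_ids):
        # IDs shared by a person who tested positive are matched locally,
        # so no location or identity ever leaves the phone.
        shared = set(shared_ids)
        return [i for i in self.heard if i in shared]
```

The key privacy property is visible in the sketch: matching happens on the device against anonymous random tokens, which is why the API can work without using device location even though Bluetooth scanning requires location to be enabled.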