6 Forms of Arts That Differently-Abled Artists Can Try Out Besides Drawing
Many people speculate that there are limited opportunities for differently-abled artists. This school of thought is not in any way reliable; discard it. There is no limit to the forms of art a differently-abled artist can exploit. Art exists as a world of its own, full of possibilities and opportunities.
Art offers many engaging activities you can involve yourself in, and these are all beautiful in their own way.
Arts Shouldn’t Be Limited to Drawing Alone
The image imprinted on the minds of many people when it comes to the choice of craftsmanship for differently-abled individuals is “drawing.” That’s pretty shallow, if you ask me. Art goes beyond inscribing an image on a piece of paper. Drawing is simply one expression of the big “world” of arts. There are many more expressions, including crafting, modeling, graphic design, and even writing. Before exploring any form of art, it is essential to keep all of this in mind.
As a differently-abled artist, you can exploit one or more of the many forms of art today. Technology has increasingly made it easier for artists to become versatile. In the world we live in today, you can be skilled in drawing and in many other art forms at once.
Other Forms of Arts That Differently-Abled Artists Can Exploit
Sculpting
One exquisite form of art is carving materials such as wood, metal, or rock into different shapes and textures. Doing this is very possible today thanks to technology. For example, wood sculpting involves cutting large planks of wood into smaller boards. Differently-abled wood sculptors have technological advancements to thank: sophisticated equipment such as jointer-planer machines, which remove a thin strip of wood or smooth out boards in a fraction of the time, can help them achieve exceptional results.
Music
Music is a significant and popular form of art. It involves making sounds with your voice (singing) or with instruments. Research shows that music has a therapeutic effect on humans and can be soothing to the soul and the body. Music is a worthwhile form of art to explore.
Writing
Art is truly one big world to explore, and writing is as powerful as any craft when it is done well. With this craft, you can create a world of your own, free of evil, full of darkness, or however you so wish. You can also express real-life situations that will engage the minds of readers.
Drama
Drama is a performing art that involves acting out written scripts. It is a fictional genre of literature, and it requires your whole being. In ancient times, people performed in front of small groups. Today, the scope of this art has dramatically expanded, no pun intended. Drama works hand in hand with writing and is a widely accepted form of art to explore.
Oral Literature
Also known as spoken word, oral literature has existed since the time of the famous ancient Greek author Homer. Homer, a blind man, produced one of the most outstanding books of all time – the “Odyssey” – through oral literature. Like drama, this genre works hand in hand with writing. Oral literature, as the name implies, involves narrating a written text aloud – a poem, a novel, a script, and more. Individuals with good diction, strong vocabulary skills, and fluent speech are most likely to succeed in it.
Graphic or Interior Design
This form of art is the most similar to drawing. It involves imprinting images on an object, or shaping and crafting items to make them look more attractive. Almost everything in the world today involves design: fabrics, kitchen utensils, flyers and posters, walls, you name it. Many more forms of art are suitable for differently-abled artists. As stated earlier, the world of creativity is full of thousands upon thousands of expressions. All you need to do is discover what you do best and tailor your talents to any of these expressions.
Tools and Techniques That Are Useful for Differently-Abled Artists
In the world we live in today, things are becoming vastly more favorable for differently-abled artists. New inventions appear daily, and they go a long way toward enhancing creativity and, in the long run, productivity. Some categories of newly invented tools and techniques for differently-abled artists include:
• Assistive and Adaptive Technology which involves the use of adaptive and rehabilitative devices to help differently-abled artists improve their work rate
This technology dramatically enhances an artist’s independence; you hardly need to rely on anyone else’s skills. Adaptive technology is a subset of assistive technology. It ensures mobility and better performance in your craft.
• Studio Assistants, who are most useful for large-scale artistry. These are people who perform administrative duties and other functions such as scheduling appointments, answering emails, and more. They handle the daily operations of the studio while you devote more time to your artistry.
Some studio assistants are also creative, and having them work with you promotes collaborations. Collaboration is significant in the arts. Having humans as helping hands usually improves productivity.
Famous and Successful Differently-Abled Artists
Michelangelo
Michelangelo was a differently-abled artist, yet he was famous for his impressive sculptures. He developed kidney stones and had significant trouble using his hands.
Paul Klee
Paul Klee was a painter, a poet, and a philosopher. He made a large number of widely recognized paintings and sketches despite suffering from “Scleroderma.”
Francisco Goya
Francisco Goya is one of the best portrait painters in history. He suffered from neurological problems, headaches, hearing loss, dizziness, and more. Still, his visual art did not deteriorate.
The world awaits men and women who will rise above their challenges and make positive impacts in the world. Art is vast, and it is for all.
Kids Shows
Children usually learn new things by seeing them with their own eyes. Wherever kids go, they may pick up new things, both good and bad. When kids, and even grownups, like a specific person from TV or real life, they often begin behaving like that person, and this is especially true of kids. When kids go to a movie or the theatre, the family members or other adults accompanying them are responsible for choosing the film or the play.
In the old days, most plays carried good lessons at the beginning, in the middle, and at the end. Unfortunately, these days most plays (not all) merely have a good ending, in which the good or kind person beats the bad one. Kids’ plays should model good behaviour at the beginning, middle, and end of the play because, most of the time, most of the audience are kids.
As most people know, kids learn quickly from people they like, such as celebrities, which is why actors should always behave in good manners on stage, even when they play the bad guys. Being an actor in kids’ plays is similar to having a child of your own, but only for a few hours.
FROM GETHSEMANI, Jesus’ captors brought Him back to Jerusalem and led Him through the dark streets of the city to the home of Annas, the former high priest. He had been deposed fifteen years before, but was still consulted occasionally in an unofficial capacity. He had apparently taken some interest in the plot against Jesus and now interrogated Him briefly in the hope of proving the case against Him before any official proceedings were begun.
Annas first questioned Jesus about His disciples and His teachings. Jesus replied:
“I have spoken openly to the world; I have always taught in the synagogue and in the temple, where all the Jews gather, and in secret I have said nothing. Why dost thou question Me? Question those who have heard what I spoke to them; behold, these know what I have said.”
Hoping, perhaps, to win favor with his master, one of Annas’ attendants now struck a blow at Jesus, and said, “Is that the way Thou dost answer the high priest?” But Jesus said, calmly and with dignity:
“If I have spoken ill, bear witness to the evil; but if well, why dost thou strike Me?”
Achieving no success in his inquisition of the prisoner, Annas returned Jesus to the guards and sent Him off under their escort to the house of Caiphas, his son-in-law, who had been appointed high priest by the procurator Valerius Gratus twelve years before.
The chief priests and leaders of the Jews had been gathered for some time at the house of Caiphas. They had waited long for this opportunity and were determined not to lose it. However, the occasion could hardly have been less propitious; Rabbinical precept opposed a night trial; the trial was to begin with testimony of witnesses favorable to the accused (which the Sanhedrin, naturally, was not interested in hearing); and, even if a guilty verdict were secured, a death sentence could not be carried out for some days: the Law forbade executions before the day after the trial, but in this instance the day following was the Sabbath, which was also a day on which a death penalty could not be executed.
Nevertheless, the members of the Sanhedrin felt that their position was strong enough for them to violate these niceties of the Rabbinical code, and they began marshalling their evidence against Jesus. To their surprise, however, they found a real difficulty in producing satisfactory witnesses. For months their spies had been following Jesus about, listening to His preaching, and collecting statements which might be used as evidence against Him. But at this critical moment, when they were brought forth to testify, they obviously contradicted each other; their testimony was worse than useless.
Two witnesses (the absolute minimum permitted by the Law of Moses) were found who testified that Jesus had blasphemed against the temple, though they were actually misrepresenting the meaning of Jesus’ words. Two years before, while in Jerusalem for the Passover, He had said to some Jews who were asking for some sign of His authority:
“Destroy this temple, and in three days I will raise it up.”
He was referring to His death and resurrection, but His hearers had taken His words literally and had argued that He could not possibly erect in three days a temple like that of Jerusalem, which had been in the course of construction for forty-six years. Evidently, the witnesses who now came forward had been present that day, but they had found His words significant from a quite different point of view. For they testified as follows: “We ourselves have heard Him say, ‘I will destroy this temple built by hands, and after three days I will build another, not built by hands.'”
The judges themselves were dissatisfied with this testimony, but they called on Jesus to reply, saying, “Dost Thou make no answer to the things that these men prefer against Thee?” When He remained silent, the high priest asked a direct question which he was sure would lead to the destruction of the prisoner. In the most solemn tones he could command, he said, “I adjure Thee by the living God that Thou tell us whether Thou art the Christ, the Son of God.” Jesus answered:
“Thou hast said it. Nevertheless, I say to you, hereafter you shall see the Son of Man sitting at the right hand of the Power and coming upon the clouds of heaven.”
The features of the high priest registered horror and utter dismay. Rending his garments, he exclaimed, “He has blasphemed; what further need have we of witnesses? Behold, now you have heard the blasphemy. What do you think?” And the Sanhedrin, moved by the high priest’s histrionics, condemned Jesus, saying, “He is liable to death.”
They were aware, certainly, that their decision had nothing of the character of a legal verdict: besides all the other irregularities of their procedure, they had just violated the injunction of Scripture that an accused person is to be judged on the testimony of other witnesses (not on his own testimony). They regarded their decision in this preliminary session as a kind of temporary indictment, a charge on which a definitive judgment could be made when they convened formally later that morning.
When the guards in charge of Jesus learned that the Sanhedrin had declared against Him, they began to buffet Him and to spit in His face. Then they blindfolded Him and struck Him in the face with the flat of their hands, saying, “Prophesy to us, O Christ, who is it that struck Thee?” And they continued to insult and revile Him until dawn.
Matthew 26:57-68 | Mark 14:53-65 | Luke 22:54-55 | Luke 22:63-65 | John 18:13 | John 18:19-24 | Isaiah 53:1-8 | 1 Peter 2:21-25
Meditation: Christ Who could have immediately stopped the whole procedure against Himself, Who could have escaped from or struck dead His accusers, allowed the evil-plotting Annas and Caiphas to carry out their plans. He was not only willing to die for us, He was willing to allow these devious characters to plot and to scheme against Him. Divine though He was, He submitted to the crude devices of unscrupulous men. At the slightest imagined injustice to ourselves, we, unlike Christ, protest indignantly.
Life of Christ
Understanding Kirtan (Nada) Yoga
Life is fast, and sometimes our busy day-to-day activities prevent us from catching a calm breath, smelling the roses, and reflecting on life. There comes a point when we feel we need to do something to break away from our stressful daily routine. Everyone has an untapped, distinctive potential that will only be realized and recognized when we try to access and explore a deeper level of consciousness within ourselves.
For those who are ready to begin their journey toward living free of stress and anxiety, Kirtan Yoga is recommended. We all know that yoga exercises and postures are performed to relax and strengthen the body and clear the mind of stress. You may also have heard the word mantra, which refers to a form of sound vibration used in the meditation process. It also helps revitalize the mind and provide it with positive energy.
Kirtan originated in India and can be traced back 2,500 years. It is a form of devotional chanting (written in the Sanskrit language) in which participants sing sacred mantras and the names of Hindu gods and goddesses (Kali, Rama, Shiva, Krishna, Lakshmi, and Durga). This creates a livelier form of yoga practice in which chanting is accompanied by classical Indian instruments. These instruments include drums (among them Indian tablas), the harmonium (a freestanding keyboard that sounds like an accordion), finger cymbals, and bells.
This practice is typically referred to as a call-and-response style of singing, one that involves the audience and encourages everyone to join in. Kirtan begins with the leader singing a line of a chant, which is answered by a response from the guests or audience.
The traditional practice of Kirtan can be compared to group singing. The group is composed of a chant leader (“kirtan walla”) and the audience. The chant leader sings out words or phrases and the audience echoes them back. Each chant usually lasts from 10 to 30 minutes, with periods of silence between songs. Unlike a musical performance, there is no applause, and the chant leader can modify the speed and length of each chant. It is an ancient tradition known as an art of relaxing the mind, unlocking the heart, and awakening and connecting the spiritual self to the Divine consciousness.
Kirtan offers a better form of yoga for those who get bored, have little interest, or find it difficult to concentrate during the meditation process. The style is different, but the benefits and effects of Kirtan Yoga are basically the same. It gives one the opportunity to find the inner path, explore, and taste what lies beyond the realm of body and mind through the direct experience of Kirtan Yoga.
The Water, the Rock, and the Maginot Line
After World War I, France built a massive line of fixed fortified concrete bunkers and walls. It was called the Maginot Line and it’s gone down as one of history’s great blunders. The goal was to prevent future invasions by Germany like the one in 1914.
The problem was the French were fighting the previous war and the Germans were preparing themselves to fight the next one. In a World War I framework, an adversary would have been crazy to conduct a frontal assault against the Maginot Line.
When the Germans quickly overran France in 1940, they simply flew their planes over the line and drove their tanks around it. The massive fortification did nothing to defend France, and reliance upon it caused the French not to prepare in other ways.
The stark imagery of this example has gone down in history as a metaphor of stolid, dogmatic thinking in the face of change.
Wisdom is adaptation to the moment. Fixed thinking, like the Maginot Line, is not up to facing new challenges.
A similar metaphor is that of the rock and the water. A rock is solid, strong, immovable. Water is soft and pliant. It goes over and around the rock. Watched for a day or a week, the rock appears to be winning the battle. It’s holding its ground and standing firm. However, come back in a year, or five, or fifty. Over time, the water will wear the rock into nothing.
New challenges require us to be agile and adaptive. We live in a world, especially now, that is not only experiencing massive change in this moment, but is nearing the birth of a very different world where many old paradigms are going to be tested.
If we don’t adapt, we can look forward to the fate of the rock and the Maginot Line.
If we work to understand the difference between immutable principles and merely habitual thinking, a very important distinction, we will find ways to navigate and thrive in this new world.
Better days are ahead for humanity, if we’re smart, flexible, and principled.
My grandmother lived from 1904 to 1988. When she was five, her family got into a horse-drawn wagon and rode outside of town to observe the 1910 passing of Halley’s Comet. When she was 64, she witnessed a man walking on the moon.
Wagon thinking would never have gotten us to the moon in six decades. New thinking was required.
It’s very likely that in two or three decades our world will be more advanced and changed relative to today than 1969 was to 1910.
That kind of change holds both peril and opportunity. Who will we be? Will our technology run us or will we use our technology to benefit humanity in ways never before imagined? Will power and wealth be held in fewer and fewer hands or will we finally create an egalitarian society where our genius meets our basic needs and we have the freedom to explore our highest aspirations?
I don’t and I won’t let this moment get me down. We’re going through the growing pains of massive transformation. Whether it is to the benefit or detriment of most human beings is being decided, as tomorrow always is, by our thoughts, words, and actions today.
Stay safe. Stay well. Amazing things are on the way! Oh, and by the way, just in case no one else has reminded you today, you are awesome!
Interest Rates for Beginners
For a complete list of Beginners articles, see the Financial Crisis for Beginners page.
One of our regular readers and commenters (and a quite knowledgeable one at that) suggested that we provide an overview of interest rates and the relationship between the Federal Reserve and mortgage rates. So here goes.
An interest rate is the price of money. If you buy a 5-year CD from your bank, it will pay you something like 3% annual interest. You are selling the bank the use of your money for 5 years; in exchange, they are paying you 3% of the money each year. I’m guessing everyone knew that already.
The other basic point you need to understand is how a bond works. A traditional bond is a security with a face value, a coupon, and a maturity. Let’s take the 10-year U.S. Treasury bond issued on November 17, 2008 as an example. It had a face value of $100, a coupon of 3.75%, and a maturity of 10 years (a maturity date of 11/15/2018). If you hold this security, this means that you will get the face value ($100) back on 11/15/2018, and during the intervening 10 years you will earn 3.75% annual interest on the $100, or $3.75 per year. (Treasury bonds pay every 6 months, so you would get $1.875 every 6 months.) Note however that the price to buy this bond is not necessarily $100. Treasuries are initially sold at auction, and in this case the 10-year bond sold for $99.727098. This means that investors valued that bond’s stream of payments ($1.875 every 6 months for 10 years, then a flat $100) at about $99.73, not $100. The implicit yield is 3.783%, not 3.75%; that means that if you pay $99.73 and you get that stream of payments, you are earning 3.783% annually on your investment.
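The price–yield arithmetic in that example can be verified with a few lines of code. This is a sketch using the figures quoted above, with real-world day-count conventions simplified away:

```python
# Price a bond as the present value of its cash flows, discounting
# at a given annual yield (compounded semiannually, like Treasuries).
def bond_price(face, coupon_rate, years, annual_yield, freq=2):
    coupon = face * coupon_rate / freq     # payment per period ($1.875 here)
    r = annual_yield / freq                # per-period discount rate
    n = years * freq                       # number of payment periods
    pv_coupons = coupon * (1 - (1 + r) ** -n) / r   # coupon annuity
    pv_face = face * (1 + r) ** -n                  # face value at maturity
    return pv_coupons + pv_face

# The 10-year Treasury above: $100 face, 3.75% coupon, 3.783% yield.
print(round(bond_price(100, 0.0375, 10, 0.03783), 2))  # 99.73
```

Note the sanity check built into the formula: when the yield equals the coupon rate, the price is exactly the $100 face value; a yield above the coupon, as at this auction, prices the bond slightly below face.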
Treasury bonds are highly liquid securities, which means that you don’t have to wait 10 years to cash out if you need the money. Instead, you can sell the bond on the secondary market. Right now this bond costs about $114-10/32, or $114.31, and the implicit yield is 2.13%. This means that the investor who buys your bond on the secondary market thinks that $114.31 is the right price for the bond’s stream of payments, and that he will earn a 2.13% yield on his investment. ($3.75 is more than 2.13% of $114.31, but after 10 years he will only get $100 back, not $114.31.) In the news, you would read that the yield on 10-year Treasuries has fallen over the last month. But this doesn’t affect the Treasury department directly, because Treasury got its money on the day it auctioned the bonds off (11/17/08). However, the next time Treasury issues a 10-year bond, it will probably earn a yield that is pretty close to the yield on the most recent 10-year bond, so changes in yields on the secondary market affect the price at which Treasury can raise money in the future.
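Going the other way – from a secondary-market price back to the implied yield – has no closed form, but bisection works because price falls as yield rises. A sketch, which assumes a full 10 years remain and ignores accrued interest (which is why it lands near, rather than exactly on, the quoted 2.13%):

```python
# Present value of a bond's cash flows at a given annual yield.
def bond_price(face, coupon_rate, years, annual_yield, freq=2):
    coupon = face * coupon_rate / freq
    r = annual_yield / freq
    n = years * freq
    return coupon * (1 - (1 + r) ** -n) / r + face * (1 + r) ** -n

# Bisect for the yield that makes the computed price match the market.
def implied_yield(price, face, coupon_rate, years, lo=1e-6, hi=1.0):
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid) > price:
            lo = mid    # computed price too high -> yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

y = implied_yield(114.31, 100, 0.0375, 10)
print(f"{y:.2%}")  # about 2.15%, close to the quoted 2.13%
```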
In general, the price of a bond (and therefore its yield) depends on three factors: the maturity, or the length of time that you are lending money for; the degree of credit risk, or the risk that you won’t get paid back; and the supply of and demand for money.
OK, that was the introduction. For discussion, I’m going to divide interest rates into three categories: (1) the Federal funds rate; (2) U.S. Treasury yields; and (3) everything else. Within category (3), I’ll spend an extra minute on mortgage rates.
The Federal funds rate
The Federal funds rate is the rate at which U.S. banks lend money to each other overnight. The money in question is the reserves that sit in their bank accounts in the Federal Reserve system. If Bank A has excess reserves at the end of the day and Bank B has a reserve deficit at the end of the day (reserves are the money they have to keep on hand – electronically, at least – in case people ask for it; reserve requirements are set by the Federal Reserve), Bank A will loan the money to Bank B for a period of one day. The rate of interest Bank A will charge is the Federal funds rate.
The Federal funds rate is almost the lowest rate of interest in the economy. (Right now the target for the Federal funds rate is 0.00-0.25%.) This is because the party borrowing the money is a bank that is regulated by the Federal Reserve, and hence unlikely to go bankrupt (put the last few months out of your mind for the moment), especially not in the next 24 hours. Also, there isn’t a lot else Bank A can do with the money, so the opportunity cost is low.
In ordinary times the Federal funds rate is the only rate that is set by the Federal Reserve, and the Fed doesn’t even set it directly; notice that the loan in question is a private transaction between two private entities. Instead, the Fed influences the Federal funds rate by controlling the amount of money in the system (by buying and selling Treasury securities); the more money available, the lower the interest rates that banks will charge each other. Over the last decade or so, the Fed was able to keep the actual Federal funds rate quite close to its target rate, which is the one that gets announced every six weeks. (This has broken down recently, for reasons I won’t get into.)
For more on the Federal funds rate, see Federal Reserve for Beginners.
U.S. Treasury yields
When the Federal Reserve changes the Federal funds rate, its effects ripple out through the economy, but with all sorts of lags and dampening effects. Broadly speaking, interest rates can differ from the Fed funds rate for two reasons: maturity (the amount of time you are lending money for) and credit risk (the risk that you won’t get paid back). We’ll talk first about U.S. Treasuries, because “by definition” they involve no credit risk.
The Treasury Department raises money by issuing bonds that range in maturity from a few days to 30 years. At the low end, there is virtually no risk of any sort, so the yield is purely a function of supply and demand; if a lot of people have money and nothing else to do with it, yields will be low. There was an auction today for 4-week Treasury bills, and the yield was exactly zero; people are lending money to the government for free.
With a longer maturity, however, there is risk, even when lending to the U.S. government. The main risk is inflation. Because the entire payment stream of a bond is fixed in nominal terms, the higher inflation is over the maturity of the bond, the less that stream will be worth to you in real terms. What matters here is not the current rate of inflation, but investors’ expectations of what inflation will be over the maturity of the bond. If investors expect inflation to go up, they will demand higher yields to compensate; even if they expect inflation to remain steady, they will still demand a higher yield for a longer-maturity bond, because the longer maturity means there is more time in which inflation could increase. There may also be some question of whether, over a longer time horizon, the U.S. government is more likely to default on its debt; however, I don’t want to get into this, because it starts raising some complex issues (like, if the U.S. government defaults on its debt, what kind of world would we be living in?).
Right now, yields range from zero on the 4-week T-bill to 2.60% on the 30-year bond. (These are all at or near historic lows.)
Everything else
In the world of economics and finance, Treasury securities are generally considered risk-free. So for any maturity you want to invest in, you always have the option of buying a Treasury bill or bond. In order to be able to borrow money, entities other than the U.S. government have to offer higher yields. The yield of anything other than the U.S. government can be thought of as having two components: the Treasury yield (with a similar maturity) and the spread over the Treasury yield, which is the risk premium (the additional yield that investors demand to compensate for the additional risk of the borrower).
That spread is determined by a few major factors, of which I’ll mention three: (a) the creditworthiness of the borrower; (b) whether the loan is secured; and (c) the general state of the economy.
(a) The less creditworthy the borrower, the higher the interest rate, since lenders require additional yield to compensate for the risk of default. For bonds issued by governments and businesses, creditworthiness is generally determined by the bond rating agencies, who look at fundamental factors like projected cash flows and debt burdens to estimate the likelihood of a default. Each agency has a scale of ratings that it uses; the top few rungs are considered “investment grade,” and everything else is “junk,” which was recently euphemised into “high yield.” For individuals, creditworthiness is determined based on your credit score (calculated based on factors such as your past payment history, current debt outstanding, current credit available, etc.) and other attributes of your financial situation, such as your income and assets.
(b) A secured loan is one where the borrower pledges collateral to the lender, as in a home mortgage or a car loan. Lenders will accept lower interest rates for these loans than for unsecured loans, such as credit cards.
(c) The same borrower who pays a low interest rate during good economic times will pay a higher interest rate, or will be unable to get a loan at all, during a recession. In a recession, everyone’s risk of default goes up. This is why all sorts of spreads go up in an economic downturn. For example, the spreads on high-yield (junk) corporate debt are far above their previous record levels at over 20 percentage points. That means that if the yield on a 10-year Treasury is about 2%, the yield on a 10-year junk bond is over 22%.
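In code, the spread is just addition on top of the risk-free yield, but its effect on prices is dramatic. A sketch using the numbers above (the 7% coupon is an illustrative assumption, not from the text):

```python
# Present value of a bond's cash flows at a given annual yield.
def bond_price(face, coupon_rate, years, annual_yield, freq=2):
    coupon = face * coupon_rate / freq
    r = annual_yield / freq
    n = years * freq
    return coupon * (1 - (1 + r) ** -n) / r + face * (1 + r) ** -n

treasury = 0.02              # 10-year Treasury yield, about 2%
for spread in (0.05, 0.20):  # a calmer spread vs. the crisis-level one
    y = treasury + spread    # corporate yield = risk-free yield + spread
    p = bond_price(100, 0.07, 10, y)  # hypothetical 7%-coupon junk bond
    print(f"spread {spread:.0%}: yield {y:.0%}, price ${p:.2f}")
```

At a 5-point spread this bond trades at par; at the 20-point crisis spread its price collapses to roughly $40, which is why widening spreads are so painful for existing bondholders.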
Mortgage rates
For current purposes, I’m just going to talk about traditional, 30-year fixed-rate mortgages.
Even when it has a nominal 30-year maturity, the average mortgage only lives for about 7 years. For every mortgage that is paid off month after month over 30 years, there are many more mortgages that are prepaid, usually because the mortgage holder refinances or sells the house. So when a bank loans money to homeowners, or an investor buys mortgages or mortgage-backed securities, he is thinking that the maturity will be about 7 years.
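That 7-year figure can be reproduced with a toy prepayment model. Assume, purely for illustration, a constant 14% chance each year that the loan prepays; the expected life of a 30-year mortgage then comes out near 7 years:

```python
# Expected life of a mortgage under a constant annual prepayment
# probability p, capped at the stated term.
def expected_life(p, term_years=30):
    alive = 1.0   # probability the loan is still outstanding
    total = 0.0
    for _ in range(term_years):
        total += alive      # loan survives this year with prob `alive`
        alive *= (1 - p)    # and prepays before next year with prob p
    return total

print(round(expected_life(0.14), 1))  # about 7.1 years
```

The 14% hazard rate is a made-up round number chosen to land near the 7-year figure; real prepayment speeds vary with interest rates, since refinancing surges when rates fall.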
As a result, people generally think of mortgage rates as the spread over the 10-year Treasury yield. That is, people investing in mortgages, which have some default risk, have the option of buying 10-year Treasury bonds instead, so mortgage rates contain a spread to compensate for that risk.
If you look at this chart comparing 30-year fixed mortgage rates to 10-year Treasury yields (among other things), you’ll notice two things. First, on a month-to-month basis, the two seem to move together. Second, however, over longer periods of time, the spread can change. In 2006 and the first half of 2007 the spread was a little less than 2 percentage points, but by early 2008 it had widened to a little over 3 percentage points, where it is today. (Some people argue that this is proof that mortgage rates are not related to 10-year Treasury yields. I think that’s just a product of how you look at things. Because the spread can change, the two are obviously not rigidly linked. But conceptually, I think it still makes sense to think of the mortgage rate as being composed of the Treasury yield plus a changing spread.) The spread has gone up for the reasons we’re all familiar with; after a long period of thinking that mortgages were absolutely safe, now lenders and investors think they are risky again, so they are demanding higher yields in exchange for their money.
You’ll note that that chart was intended to make a different point: that the Federal funds target rate does not affect mortgage rates. That’s because the Fed funds rate has only a limited impact on the 10-year Treasury. Remember, the 10-year Treasury yield is primarily determined by inflation expectations, and a lower Fed funds target rate is not going to by itself reduce inflation expectations (arguably it would increase them). This is why the conventional wisdom is that the Fed has limited ability to affect long-term interest rates. Recently, however, Bernanke has started talking about the Fed buying hundreds of billions of dollars’ worth of mortgage-backed securities in an effort to push mortgage rates down. This isn’t guaranteed to work, because the Fed is only a small part of the global market for U.S. mortgage-backed securities, but simply announcing the intention has already brought mortgage rates down significantly.
Mortgage rates are an unusual case because the government has another lever it can use to influence them. Fannie Mae and Freddie Mac make up a large part of the secondary market for home mortgages, in two ways. First, they buy mortgages from lenders. Second, they bundle together mortgages from lenders into mortgage-backed securities, which they then issue back to the lenders (who typically sell the securities to investors). Therefore, the price that Fannie and Freddie are willing to pay for mortgages plays a large role in setting the interest rates that lenders charge borrowers. The Hubbard-Mayer mortgage proposal that I reviewed a while back is predicated on the observation that the mortgage spread is unusually high (as mentioned above), and it’s high because the spread for Fannie and Freddie bonds (their cost of money) is unusually high.
Clarification: Fannie and Freddie want to make profits, which means that the interest rate they charge on mortgages (I know they don’t lend directly, but by purchasing mortgages they are effectively doing the same thing as far as interest rates are concerned) has to be higher than the interest rate they pay on their own bonds. Since the credit crisis began, but especially since July, there has been a tremendous “flight to quality” in the bond markets: that is, investors have been selling everything that has even the slightest risk, and buying Treasuries instead. This pushes the yields of Treasuries down and the yields of everything else – including Fannie/Freddie debt – up, widening the spread.
If the Treasury Department can bring down the Fannie/Freddie spread to where it should be, given that Fannie and Freddie are more or less backed by the government anyway, then they will be able to pay more for mortgages, lowering the interest rates that lenders have to charge borrowers.
Clarification: There are at least two ways that Treasury can bring down the Fannie/Freddie spread. The first, which Krugman recommends, is simply to announce that debt issued by Fannie and Freddie is backed by the “full faith and credit” of the U.S. government. That will make it equivalent to Treasuries from a risk perspective. Right now, although Fannie and Freddie are government-chartered and in a government conservatorship (meaning the government is calling the shots), their debt is still not explicitly guaranteed by the government. The second, which Hubbard and Mayer recommend, would be for Treasury to issue additional debt themselves, and then lend the proceeds to Fannie/Freddie at a lower interest rate than they currently have to pay on the open market. Note that either one of these would reduce the spread, but not solely by bringing down the yields for Fannie/Freddie; Treasury yields would also go up somewhat. First, by increasing demand for Fannie/Freddie debt, this would reduce demand for Treasuries. Second, because Fannie/Freddie debt would be explicitly guaranteed, some people would think that this increases the overall riskiness of the U.S. government as a borrower. Some people would also think that it increases the risk that the government will choose to print money to pay off the debt, which would create inflation – and higher inflation expectations mean higher Treasury yields.
As always, if you see any mistakes I made, please point them out.
Update: Krugman has a nice chart with the recent spread between mortgages and 10-year Treasuries. He thinks that the spread is too high and that the government can bring it down.
Update: Thanks to the corrections by Jim W and Durable Investor, I changed my incorrect usage of “duration” to “maturity.”
Update: Simon Johnson talked to the Planet Money guys about the Fed funds rate and other interest rates. The segment starts about 2 minutes in.
20 thoughts on “Interest Rates for Beginners”
1. Hi James, Thanks for the primer. I’m one of the beginners for whom you wrote it, and there were just a couple of things I didn’t follow. If you can help me, I’d appreciate it.
In your example of how a bond works, the table shown calls the 10-year security a “note”; the only ones identified as “bonds” have a 29-year, 6- or 9-month term (which raises a second question: why not an even 30 years?). What’s the distinction between a “note” and a “bond”?
Then, in the section on mortgage spread, in the last paragraph, you say “the mortgage spread is unusually high (as mentioned above), and it’s high because the spread for Fannie and Freddie bonds (their cost of money) is unusually high.” Why is that?
Then you go on to say, “If the Treasury Department can bring down the Fannie/Freddie spread to where it should be…”. How do they do that?
Thanks, read your blog every day, and have learned a lot and hope to learn a lot more.
2. Offhand I’m not sure what a note is. Short-duration Treasury securities are called “bills,” and long-duration Treasuries are called “bonds,” for no good reason that I know of.
I’ve addressed your other questions in the body of the post. Thanks for asking them.
3. James, what do you mean when you say that Fannie and Freddie are essentially backed by the government? I was under the impression that the government has fully taken over both entities. Thanks for all the good info.
4. 10 year and shorter debt maturities are defined as notes. Greater than 10 years are technically bonds.
Functionally, as debt instruments, there are no differences except their duration.
5. From “”
The difference between bills, notes, and bonds is the length until maturity:
Treasury bills are issued for terms less than a year.
Treasury notes are issued in terms of 2, 3, 5, and 10 years.
Treasury bonds are issued in terms of 30 years, and have just recently been reintroduced in February 2006.
Probably, bonds with a maturity of 29.x years are still considered “bonds”.
Anyway, thank you for your great posting, Mr. Kwak.
6. Very nice intro, but please do not use “duration” for maturity or tenor of the bond. Duration is the name for measures of bond’s price sensitivity to changes in interest rates. A bond with a duration of 3 years would see its price change by about 3 percent if rates move by 100 basis points. As might be expected, duration is a function of a bond’s maturity (or time to reset for variable rate bonds) and other factors.
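A rough numerical check of that rule of thumb (a Python sketch, not from the original post; annual coupons and the 10-year, 3.75% example bond from the post are assumed):

```python
def bond_price(face, coupon, years, y):
    """Price of an annual-coupon bond: discounted coupons plus face value."""
    pv = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv + face / (1 + y) ** years

def modified_duration(face, coupon, years, y, dy=1e-6):
    """Numerical duration: -(1/P) * dP/dy, the percent price change per unit yield change."""
    p0 = bond_price(face, coupon, years, y)
    p1 = bond_price(face, coupon, years, y + dy)
    return -(p1 - p0) / (dy * p0)

# The 10-year, 3.75% coupon note from the post, priced at a 2.13% yield:
duration = modified_duration(100.0, 3.75, 10, 0.0213)
print(round(duration, 1))  # roughly 8.5, so a 100bp move shifts the price about 8.5%
```

As the comment says, duration falls as maturity shortens: the same bond with 3 years to run comes out near 2.8.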
7. James, this is excellent. I now have a much better understanding of the relationship between 10 year Treasury rates and mortgage rates.
You, Simon, and Peter are to be commended. You have provided a great service for the public with this website.
8. Anything with a maturity longer than 10 years is a bond, regardless of the length of maturity from when it was issued. EX: the Treasury issued a 30-year bond in 1986, maturing in 2016. It is now a note since it matures in less than 10 years.
Bills, by definition, have no coupon. They are issued at a discount and mature at par; the difference is considered “interest.” Therefore, a note or a bond could never be considered a bill, since notes and bonds pay regular coupon interest payments.
9. Thanks for the post. I just want to ask a quick question. If the Fed has been quite successful in keeping the federal funds rate close to its target, why is the TED Spread so high? As the interest rate on the 3-month Treasury bill is around 0.01 and the federal funds rate is somewhere between 0.00 and 0.25, I think the TED Spread should be lower than it actually is. Please explain or correct me if I misunderstand something about these rates. They’re kind of confusing.
10. Pete – You invested in a “callable” bond. In exchange for a slightly higher coupon rate, you gave Freddie the right to call the bond. They will do so when interest rates fall. Freddie uses the bond to fund their portfolio. When rates decline, they can call existing callable bonds and issue new bonds at the now lower rate. This reduces their funding costs. The call decision had nothing to do with being in a govt conservatorship.
11. I want to echo Jim W’s comment: duration and maturity are not the same thing. There is a very important difference for investors. The post consistently uses “duration” when maturity is what is being discussed. Otherwise, excellent info as usual.
12. Trung,
The TED spread is a different beast – it is the spread between T-bills and Libor (the “ED” comes from the ticker symbol for eurodollar futures on the CME, which track Libor). Libor is the interest rate paid on dollar deposits by non-U.S. banks (hence the name: “London inter-bank offered rate”). Libor is a dollar interest rate of the exact same maturity as a T-bill, but one that is not guaranteed by the government. Thus, the spread between the two rates is a measure of the “fear factor” in the markets.
As to the main post: good overall, but you miss the main determinant of T-note yields: it’s not inflation expectations, etc., but simply projections about what the Fed will do. Since the U.S. is a sovereign currency issuer, it (through its central bank) can set interest rates on its debt at any level it chooses (in point of fact, the only real purpose for a currency issuer to sell debt is to support non-zero interest rates). Thus, since in our current arrangement the Fed closely controls short-term rates while allowing long rates to float, longer rates must necessarily be set by expectations about future Fed action. If the Fed set the FF rate at 1% and announced its intention to keep it there for 30 years, long-term rates would have to come down.
13. Thanks to Jim W and Durable Investor for catching my mistake with duration and maturity. I just fixed it. And thanks to Jim W and Jimbo for answering other readers’ questions.
To Tyler’s question: Fannie and Freddie are in a government conservatorship, which means that the government is running them. But they have not been absorbed into the Treasury department, so their debt is not equivalent to Treasuries.
14. On the secondary market, how is the $114.31 calculated, and not a different number? Is there a formula that determines this number? Also, how is the 2.13% yield calculated? What happens to the spread between $100, the original value, and $114.31?
Please explain
15. $114.31 is not calculated, it is determined by the market itself – it is just the amount that people are willing to pay for the bond at that moment.
If you pay $114.31, and then you get $3.75 per year for 10 years, and $100 at the end of 10 years, your yield is 2.13%. That is, if you plug those numbers into a spreadsheet, and discount it at 2.13% per year, you will get back $114.31.
If some investor bought the bond for $99.73 on 11/17/08 and later sold it for $114.31, he could put the difference ($14.58) and put it in his pocket as a capital gain. That’s what I meant when I said that changes in prices (and yields) of existing Treasury bonds don’t affect the Treasury Department, except insofar as they show the cost of money for Treasury when it issues new bonds.
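That present-value arithmetic can be checked directly. A minimal Python sketch, assuming simple annual compounding, so it lands near, but not exactly on, the quoted $114.31 (actual Treasury pricing uses semiannual coupons and day-count conventions):

```python
coupon, face, years, y = 3.75, 100.0, 10, 0.0213

# Discount each annual coupon, and the face value repaid at maturity,
# back to the present at the 2.13% yield.
price = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
price += face / (1 + y) ** years

print(round(price, 2))  # lands within a few tenths of the quoted $114.31
```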
16. Could anyone please tell me what the difference is between LIBOR and the federal funds rate? I still don’t get it. If possible, could anyone please also explain a little bit about the TED Spread? Thanks
Comments are closed.
The Ancient of Days Sermon
The Ancient of Days
Daniel 7
The study of end times or the end of the world is called eschatology; it comes from the Greek word eschaton and means “end.”
There are two warnings here.
First, don’t become obsessed with Bible prophecy.
Warning number two: Don’t be afraid of Bible prophecy.
God’s definition of prophecy: (Isaiah 46:10)
“I make known the end from the beginning, from ancient times, what is still to come. I say: my purpose will stand, and I will do all that I please.”
First, Daniel dreams of four world kingdoms or modern governments.
Daniel 7:1-3. “In the first year of Belshazzar [the king who saw the writing on the wall], Daniel had a dream, and visions passed through his mind as he was lying on his bed. He wrote down the substance of the dream. Daniel said: ‘In my vision at night I looked, and there before me were four winds of heaven churning up the great sea. Four great beasts, each different from the other came up out of the sea.”
The wind symbolizes the constant turmoil that surrounds military conflicts.
We aren’t going to look at the details of each animal in verses 4-7, but what a nightmare!
Thankfully God explains it.
verse 16, “So he told me and gave me the interpretation of these things: The four great beasts are four kingdoms that will rise from the earth. But the saints [that’s us] of the Most High will receive the kingdom and will possess it forever–yes, for ever and ever.”
1. Lion/Eagle
2. Bear
3. Leopard
Now some scholars disagree with what these animals represent. Some say they represent ancient empires and some say modern governments that will lead up to the end times.
Some have identified the lion as Britain, the bear as Russia, and the leopard as Israel or even China.
1. Scary Beast: Roman
The fourth beast is identified as having iron teeth (military power) and conquering all the territory of the previous Gentile powers.
There is a second stage to this kingdom and the ten nations are symbolized here as ten horns.
But Daniel notices there is a “little horn” that rises up and takes control of all ten.
Daniel 7:8, 11, 21-25. “While I was thinking about the horns, there before me was another horn, a little one which came up among them; and three of the first horns were uprooted before it. This horn had the eyes of a man and a mouth that spoke boastfully…Then I continued to watch because of the boastful words the horn was speaking. I kept looking until the beast was slain and its body destroyed and thrown into the blazing fire…As I watched, this horn was waging war against the saints and defeating them, until the Ancient of Days came and pronounced judgement in favor of the saints of the Most High, and the time came when they possessed the kingdom. He gave me this explanation: ‘The fourth beast is a fourth kingdom that will appear on earth. It will be different from all the other kingdoms and will devour the whole earth, trampling it down and crushing it. The ten horns are ten kings who will arise, different from the earlier ones; he [now the “it” has become a “he”] will subdue three kings. He will speak against the Most High and oppress his saints and try to change the set times and the laws. The saints will be handed over to him for a time, times and half a time.’”
That means “one year, add two years, add half a year; the result: three and a half years.” That’s exactly 42 months.
This is exactly the same person spoken of in Revelation 13, the Antichrist.
Keep that three-and-a-half years (or 42 months) in mind as I read a portion of it from
Revelation 13:5. “The beast [Antichrist] was given a mouth to utter proud words and blasphemies and to exercise his authority for forty-two months.”
Verse 7 says, “He was given power to make war against the saints to conquer them.”
Jesus is going to come to rapture those of us who are born again.
Arising out of the turmoil following that event, a one-world government will be established.
It will originate from the old Roman Empire area, the European community, which is roughly ten nations.
This time of political turbulence is often called the Seven-Year Tribulation.
I do not know if the Antichrist is actually alive today, but I do know this: 1 John 4:3 says, “The spirit of antichrist …even now already is in the world.”
That is also reported in verses 26 and 27, the interpretation.
“But the court will sit, and his power [that’s the little horn, the antichrist] will be taken away and completely destroyed forever. Then the sovereignty, power and greatness of the kingdoms under the whole heaven will be handed over to the saints [that’s us when we reign with Christ on the earth for 1,000 years], the people of the Most High. His Kingdom will be an everlasting kingdom, and all rulers will worship and obey him.”
Standing on this side of the New Testament, who do you think the son of man is? It’s Jesus!
Daniel 7 says the Son of Man will return in the clouds and will defeat the Antichrist.
This description of God sounds exactly like what the Apostle John saw in Revelation 1:14, “His head and hair were white like wool, as white as snow, and his eyes were like blazing fire.”
Keep reading in Daniel 7:10 to see what’s happening in this vision: “A river of fire was flowing, coming out from before him. Thousands upon thousands attended him [probably angels]; ten thousand times ten thousand stood before him. The court [or literally “judgement”] was seated, and the books were opened.”
1. Books of Works
1. The Bible
John 12:48 Jesus said, “There is a judge for the one who rejects me–the very words which I have spoken will condemn him at the last day.”
1. The Book of Life
“Not everyone who says ‘Lord, Lord,’ shall enter into the kingdom of heaven… Depart from me, I never knew you.” (From Matthew 7:21ff)
I fall to my knees and say, “Okay, I was wrong. I thought religion was enough. Now I confess that Jesus is Lord!”
But by then it’s too late, for the Bible says, “One day, every knee will bow and every tongue will confess that Jesus is Lord to the glory of God the Father.” (Philippians 2:10-11)
10 endangered animals that are being saved by Zoos
by Staff writer
28 Nov 2019 at 06:54hrs
In zoos all around the world, endangered species are being eased back from the brink of extinction. As the wildlife of earth continues to struggle more than ever in the fight to survive, some species are making a comeback. That's because zoos are one of the leading sources of conservation work, and when animals face threats from human overpopulation, poaching, and pollution, they need all the help they can get.
The good news is that there are now many species of animal that have been brought back from the edge of extinction, and are now returning to their natural habitats and helping to restore ecological balance. These 10 amazing animals might not still be here if it were not for the good work being done by zoos of all sizes.
Siberian Tigers
This iconic and elegantly beautiful cat has seen some incredible fluctuations in terms of population figures. At one point in the 1940s, there were only around 40 of these magnificent cats left in the world. Now, thanks to the work of conservationists and a ban on tiger hunting, the world has seen those numbers rise to around 540 Siberian Tigers still alive today.
Arabian Oryx
Hunted to extinction in the wild, the Arabian Oryx could have been gone forever. The good news for us and for them is that there remained a handful of them still alive in zoos. Working together, zoos prioritized conservation and breeding programs, and now the Arabian Oryx has been brought back from the edge. There are now over 1,000 of the adorable antelopes running around in the wild and plenty more in the safe hands of zoos.
California Condor
Some animals come closer to extinction than others, and the California Condor has come closer than most. Although there are now hundreds of these Condors alive, there was a point where there were only 27 left. Due to the hard work of zoos looking after all 27, the California Condor is growing in population numbers.
Southern White Rhino
In the early 1900s, there were fewer than 50 southern white rhinos in existence; but, thanks to the efforts of the Government of South Africa and conservationists, this number has grown to roughly 18,000 today. Sadly, due to poaching, that figure has dropped over the last two years. Today, conservation teams at zoos across the country are helping to make sure that population numbers continue to rise. The recent news of a baby rhino being born in the Gulf Breeze Zoo shows the result of the continuous work being done to keep population numbers moving in the right direction.
Grey Wolf
Once thriving all across North America, Asia, and Europe, the Grey Wolf saw a huge drop in population at the turn of the last century. Over the course of the 20th Century, Americans almost hunted the grey wolf to extinction, but a steady return to their habitats has been encouraged by the work of zoos and environmental groups. Washington and Oregon have both seen Grey Wolf pups born in the last few years, and that work continues to increase their numbers.
Przewalski's Horse
This is the last real wild horse to be seen anywhere in the world. Originally from the grasslands that make up the majority of central Asia, this once thriving species has been to the edge of extinction several times and has even been prematurely announced as extinct. However, zoos have been working together to save Przewalski's Horse, and there is now a stable and sustainable population level, and the beautiful beast is slowly being reintroduced to the natural habitats where they belong.
Corroboree Frog
A small and seemingly minor fungal infection nearly wiped out this cute little black and yellow frog. The residents of just a tiny part of the world in a sub-alpine part of Australia, the Corroboree Frog was very close to being completely wiped out. Luckily, Australian zoos worked hard on breeding programs, and now the frog has been slowly returned, disease-free, to where they came from. Australian frogs have proven particularly vulnerable to the threat of extinction, but this species is not going to join their ranks anytime soon.
The Bald Eagle
There was a point where an estimated 500,000 Bald Eagles were flying around the U.S. By the 1950s, there were only 412 nesting pairs left. In 1967, the Bald Eagle made the Endangered Species list. That resulted in a countrywide hunting ban, which has had amazing results. Thanks to that ban and the work of zoos all over the country to repopulate these amazing birds, numbers have risen to an estimated 70,000 bald eagles in the wild.
Sea Lions
If you've been to a zoo or aquarium lately, then you might be surprised to see sea lions on this list. A regular and common character to see, it can be hard to imagine that they were once so dangerously close to extinction. Hard work was needed to bring them back, and in 2013 they were finally removed from the Endangered Species list. Now, they continue to steadily grow in population numbers, which is why they are such a common sight to see in both zoos and aquariums around the world.
Eastern Bongo
This antelope is the resident of a very, very remote region in Kenya. Because it lives so remotely and is famously reclusive, it is no surprise that the Eastern Bongo was one of the last large mammal species ever discovered by humans. That elusive nature didn't save it from being hunted to near extinction, and their numbers are now shockingly low. The Eastern Bongo isn't rescued yet, but zoos are working hard to increase their numbers, and there are now more in zoos than there are in the wild.
Zoos have proven to be vital in the fight to protect and nurture the animals that are most under threat by humans and by an ever-changing world. As the planet starts looking more closely at sustainable living and being more in tune with the world around us, the fight to save endangered species goes on. With dedicated and passionate teams working hard in the zoos of the world, perhaps more endangered species can be brought back from the risk of extinction and we can find a way to live more in harmony with the planet that is home to us all.
Source - Byo24News
Sample Scholarship Essays
Leukemia
Leukemia is a disease characterized by the formation of abnormal numbers of white blood cells, for which no certain cure has been found. Leukemia also describes conditions characterized by the transformation of normal blood-forming cells into abnormal white blood cells whose unrestrained growth overwhelms and replaces normal bone marrow and blood cells. Leukemias are named according to the normal cell from which they originate, such as Lymphocytic Leukemia, in which a Lymphocyte cell is transformed into a Leukemia cell. Another example of Leukemia is Myelocytic (or Granulocytic) Leukemia. This forms when a Myelocytic cell is changed or transformed into a Leukemia cell.
Different Leukemias are identified under the microscope and by how much protein they contain. These Leukemias are usually very severe and need treatment right away. The present incidence of new cases per year in the United States is about 25 for every 100,000 persons. The danger to the patient lies in the growth of these abnormal white cells, which interfere with the growth of the red blood cells, normal white blood cells, and the blood platelets. The uncontrolled growth of the abnormal white cells produces a tendency toward nonstop bleeding, the risk of getting serious infections in wounds, and a very small possibility of obstruction of the blood vessels. Treatment of these Leukemias includes chemotherapy with alkylating agents, or antimetabolites that suppress the growth of abnormal white cells. Another treatment would be x-rays or the administration of radioactive substances, such as radiophosphorus.
After treatment these diseases may last for many years. The age of the person diagnosed with Leukemia plays an important part in how that individual responds to any treatment: the older the person, the less response he may have to treatment. Leukemia in animals is much less common than Leukemia in humans. Today’s treatment mostly includes chemotherapy and/or bone marrow transplantation, along with supportive care, in which transfusions of blood components and prompt treatment of complicating infections are very important.
Ninety percent of children with Acute Lymphocytic Leukemia have received chemotherapy, and fifty percent of these children have been fully cured of Leukemia. Treatment of AML, or Acute Myelocytic Leukemia, is not as successful but has been improving more and more throughout the 1990s. Scientists who study the cause of Leukemia have not had very much success lately. Very large doses of x-rays can increase the risk of developing Leukemia. Chemicals such as Benzene also may increase the risk of getting Leukemia. Scientists have tried experiments on Leukemia in animals by transmitting RNA into the body of the animal.
Interpretation of these results in relation to human Leukemia is very cautious at this time. Studies have also suggested that family history, race, genetic factors, and geography may all play some part in determining the rates of growth of these Leukemias. Stewart Alsop is an example of Acute Myeloblastic Leukemia, or AML. On the day of July 21, 1971, Stewart was made aware of some of the doctors’ suspicions due to his bone marrow test. He was told by his doctor in Georgetown that his marrow slides looked so unusual that he had brought in other doctors to view the test, and they could not come to an agreement, so they all suggested that he take another bone marrow exam.
The second test was known to be “hypocellular,” meaning that it had very few cells of any sort, normal or abnormal. The Georgetown doctors counted about forty-four percent of his cells as abnormal, and he added, with a candor that he later discovered was characteristic: “They were ugly-looking cells.” Most of them looked like Acute Myeloblastic Leukemia cells, but not all; some of them looked like the cells of another kind of Leukemia, Acute Lymphoblastic Leukemia, and some of them looked like the cells of still another kind of bone marrow cancer, not a Leukemia, called Dysproteinemia. And even the Myeloblastic cells didn’t look exactly like Myeloblastic cells should look. Stewart has been treated with chemotherapy and is still living today, but he doesn’t have very much longer to live.
Sadako Sasaki was born in Japan in 1943; she died twelve years later, in 1955, of Leukemia. She was in Hiroshima when the United States Air Force dropped an atomic bomb on that city in an attempt to end World War II. Sadako Sasaki was only two years old when all this happened. Ten years later, Sadako was diagnosed with Leukemia as a result of the radiation from the bomb. She was only a twelve-year-old girl when she died.
Every day Sadako grew weaker and weaker, thinking about her death, and the day finally came. Sadako died on October 25, 1955. Sadako was very much loved by all of her classmates. At the time of her death, her classmates folded 356 paper cranes to be buried with her. This is a symbol of thoughtfulness in Japan. In summary of what I have learned about Leukemia, it is a very painful disease. People with Leukemia suffer very much throughout the disease and the treatment of the disease, even if they are eventually cured.
The treatment it took to get there was very painful. The studies of Leukemia have helped a lot of people to be cured, but there are still a lot of people suffering because no cure has been found to help them. I’m sure that, as with all other needed cures, money is short for the research that costs so very much. Maybe someday soon, we hope, they will find a cure for all kinds of cancer.
This type of leukemia can cause heart attacks and strokes by blocking the arteries: “It is treated by removing large numbers of white cells from the patient’s blood and increasing the intensity of the chemotherapy” (453). Over 50 percent of the patients are found to have abnormalities in the chromosomes: “Evidence strongly suggests that each patient’s individual chromosomal makeup has a strong direct bearing on prognosis” (453). Patients that have abnormal genes in their leukemia cells usually have the disease. Chronic granulocytic leukemia occurs in people aged forty to sixty. The disease starts out very slowly. Patients will not notice anything wrong until after three to six months.
Many organs, such as the liver, spleen, and lymph nodes, will enlarge in over half of the patients. The study of chromosomes is important in this type of leukemia: “The so-called Philadelphia chromosome, the first abnormal chromosome found in the leukemias, occurs in over 90 percent of patients” (454). Applying therapy may reduce the number of Philadelphia chromosomes in the white blood cells. In the Cancer Book, the author explained that the basic cause of leukemia is still unknown. Factors such as exposure to radiation, chemicals, and certain drugs may cause the disease: “Certain chemicals, such as benzene, have long been known to cause damage to the bone marrow cells which form the blood, and it is logical to conclude they can also cause a cancer in those cells” (378).
This shows that genetics plays an important role in the disease. But whether heredity is also involved in all cases is still an unanswered question. According to the Personal Health Report, leukemia may be caused by other types of disease that damage the bone marrow, or by anticancer drugs used to treat other varieties of cancer: “Diseases that cause severe depression of the marrow, such as aplastic anemia, are associated with a high incidence of leukemia.” (356) Patients that …
[a4] The Rise and Fall of Power.
In this problem (A4),
I have to handle the value of n^n (1 ≤ n ≤ 10^9).
I used long long int for that, but it’s not good enough to handle such a big number.
Any idea/suggestion on how to handle that?
look at the constraints…
Each test case consists of one line containing two numbers n and k (1 ≤ n ≤ 10^9, 1 ≤ k ≤ 9).
The maximum value of k is 9, so you need at most 18 digits (the first k and the last k); you can think about getting those 18 digits without really evaluating the complete n^n expression.
On the other hand, if you really want to deal with big numbers, use arrays, as in
int num[NUM_DIGITS]
and store a digit (0-9) in each element of the int array; but then you would need to write your own multiply, add, subtract, and division routines if you need them.
You don’t need to calculate all that. At most you need to calculate pow(10, k). You only need the first and last k digits. Use some logarithm for the first k digits. If in doubt, go through the solution of any user; solutions are public on CodeChef. Use repeated squaring for calculating the last k digits as follows:
long long power_mod(long long n, long long e, long long mod)
{
    // returns n^e mod 10^k (with mod = pow(10, k)) by repeated squaring
    if (e == 0) return 1 % mod;
    long long half = power_mod(n % mod, e / 2, mod);
    long long product = (half * half) % mod;
    if (e % 2 == 1) product = (product * (n % mod)) % mod;
    return product;
}
Diaphragmatic hernia
Diaphragmatic hernia is a defect or hole in the diaphragm that allows the abdominal contents to move into the chest cavity. Treatment is usually surgical.
This is a photo of a peritoneopericardial diaphragmatic hernia in a cat. The photo was taken during necropsy from the right side of the cat. To the left is the abdomen, where part of the liver and the gall bladder can be seen. The diaphragm is in the middle. To the right is the thorax. The largest object seen in the thorax is the rest of the liver. Just to the right of that is the heart. The liver was connected to itself through a small hole in the diaphragm (not seen).
Specialty: Gastroenterology
Signs and symptoms
A scaphoid abdomen (sucked inwards) may be the presenting symptom in a newborn.[1]
A right-sided diaphragmatic hernia with the stomach in the chest (left side of image, marked by the arrow). Note the air-fluid level in the stomach.
Diagnosis can be made by either CT or X-ray.
Treatment for a diaphragmatic hernia usually involves surgery, with acute injuries often repaired with monofilament permanent sutures.[2]
1. ^ Durward, Heather; Baston, Helen (2001). Examination of the newborn: a practical guide. New York: Routledge. p. 134. ISBN 0-415-19184-X.
2. ^ Turhan, Kutsal; Makay, Ozer; Cakan, Alpaslan; Samancilar, Ozgur; Firat, Ozgur; Icoz, Gokhan; Cagirici, Ufuk (June 2008). "Traumatic diaphragmatic rupture". European Journal of Cardio-Thoracic Surgery. 33 (6): 1082–1085. doi:10.1016/j.ejcts.2008.01.029. ISSN 1010-7940. PMID 18299201.
Whakapapa (Māori pronunciation: [ˈfakapapa], Māori pronunciation: ['ɸa-]), or genealogy, is a fundamental principle in Māori culture. A person reciting their whakapapa proclaims their Māori identity, places themselves in a wider context, and links themselves to land and tribal groupings and the mana of those.[1]
Experts in whakapapa can trace and recite a lineage not only through the many generations in a linear sense, but also between such generations in a lateral sense.
Link with ancestry
Raymond Firth, an acclaimed New Zealand economist and anthropologist of the early 20th century, asserted that there are four different levels of Māori kinship terminology, as follows:[2]
Te reo Māori term | Literal translation | Kin-group term
whānau | to give birth | extended family
hapū | pregnancy | ramage
iwi | bones; people | clan
waka | canoe | phratry
Some scholars have described this type of genealogical activity as tantamount to ancestor worship; most Māori would probably call it ancestor reverence. Tribes and sub-tribes are mostly named after an ancestor (either male or female): for example, Ngati Kahungunu means 'descendants of Kahungunu' (a famous chief who lived mostly in what is now called the Hawke's Bay region).
Word associations
Many physiological terms are also genealogical in nature. For example, the terms 'iwi', 'hapū', and 'whānau' (as noted above) can also be translated, in order, as 'bones', 'pregnancy', and 'to give birth'. The prize-winning Māori author Keri Hulme titled her best-known novel The Bone People, a title linked directly to the dual meaning of the word 'iwi' as both 'bone' and '[tribal] people'.
Most formal orations (or whaikōrero) begin with the nasal expression Tihei Mauriora!, translated as the 'Sneeze of Life'. In effect, the orator (whose 'sneeze' recalls a newborn clearing his or her airways to take the first breath of life) is announcing that his speech has now begun, and that his 'airways' are clear enough to give a suitable oration.
Whakapapa and its role in the mental health system
Whakapapa is defined as the "genealogical descent of all living things from the gods to the present time".[3] Since all living things, including rocks and mountains, are believed to possess whakapapa, it is further defined as "a basis for the organisation of knowledge in respect of the creation and development of all things".[3]
Hence, whakapapa also implies a deep connection to land and the roots of one's ancestry. In order to trace one's whakapapa it is essential to identify the location where one's ancestral heritage began; "you can’t trace it back any further".[4] "Whakapapa links all people back to the land and sea and sky and outer universe, therefore, the obligations of whanaungatanga extend to the physical world and all being in it".[5]
While some family and community health organisations may require details of whakapapa as part of client assessment, it is generally better if whakapapa is disclosed voluntarily by whanau, if they are comfortable with this.[4] Usually details of a client's whakapapa are not required since sufficient information can be obtained through their iwi identification. Cases where whakapapa may be required include adoption cases or situations where whakapapa information may be of benefit to the client's health and well-being.
Whakapapa is also believed to determine an individual's intrinsic tapu.[6] "Sharing whakapapa enables the identification of obligations...and gaining trust of participants".[7] Additionally, since whakapapa is believed to be "inextricably linked to the physical gene",[8] concepts of tapu would still apply. Therefore, it is essential to ensure that appropriate cultural protocols are adhered to.
Misuse of such private and privileged information is of great concern to Māori.[4] While whakapapa information may be disclosed to a kaimatai hinengaro in confidence, this information may be stored in databases that could be accessed by others. While most health professions are embracing technological advances of data storage, this may be an area of further investigation so that confidential information pertaining to a client's whakapapa cannot be disclosed to others.
Additionally, it may be beneficial to find out if the client is comfortable with whakapapa information being stored in ways that have the potential to be disclosed to others. To combat such issues, a Māori Code of Ethics has been suggested.[9] A Māori Code of Ethics may prevent "the mismanagement or manipulation of either the information or the informants".[10]
The New Zealand Māori rugby union team (in black) playing England Saxons in the 2007 Churchill Cup. Players now have to have their whakapapa verified.
Although this rule was not rigorously applied in the past, people today have to prove their whakapapa to qualify for the international Māori All Blacks rugby union team, the New Zealand Māori rugby league team, and the New Zealand Māori cricket team.
1. ^ "Mihi - Introductions". Māori ki Te Whare Wānanga o Ōtākou / Māori at the University of Otago. University of Otago. Retrieved 11 November 2018.
2. ^ van Meijl, Toon (June 1995). "Maori Socio-Political Organization in Pre- and Proto-History". Oceania. 65 (4): 304–322. doi:10.1002/j.1834-4461.1995.tb02518.x. ISSN 0029-8077.
3. ^ a b Barlow (1994), p. 173.
4. ^ a b c Russell (2004).
5. ^ Glover (2002), p. 14.
6. ^ Glover (2002).
7. ^ Glover (2002), p. 31.
8. ^ Glover (2002), p. 32.
9. ^ Pomare (1992) as cited in Glover (2002).
10. ^ Te Awekotuku (1991, p. 13) as cited in Glover (2002), p. 30.
• Barlow, C. (1994). Tikanga whakaaro: key concepts in Māori culture. Auckland, New Zealand: Oxford University Press.
• Glover, M. (2002). Kaupapa Māori health research methodology: a literature review and commentary on the use of a kaupapa Māori approach within a doctoral study of Māori smoking cessation. Applied Behavioural Science, University of Auckland. Auckland, New Zealand.
• Russell, K. (2004). Hui: A hui to discuss how to create and maintain a relationship with Māori organisations. Dunedin, New Zealand: Department of Community and Family Studies, University of Otago.
Ermak Travel Guide
The World at your fingertips
Location: Istria County
Description of Porec
The city of Parenzo or Poreč (Croatian: Grad Poreč; Italian: Città di Parenzo; Latin: Parens or Parentium; archaic German: Parenz; Ancient Greek: Πάρενθος, Pàrenthos) is a city and municipality on the west coast of the Istrian peninsula, in the county of Istria (Croatia). Poreč is a small, charming ancient city on the shores of the Adriatic Sea. It was founded over 2,000 years ago, when it was known by its Latin name Castrum, and many of its streets still keep the original orientation of the narrow ancient Roman roads. The main sight of Poreč is the Basilica of Saint Euphrasius, declared a UNESCO World Heritage Site in 1997.
Its mayor is Edi Štifanić, of the Democratic Assembly of Istria (IDS). The municipality has an area of 139 km², with a coastline 37 kilometres long that runs from the river Mirna near Novigrad to Funtana and Vrsar in the south. Its population (in 2001) was 17,460 inhabitants, who live mostly in the outskirts; the population density is 126 inhabitants per km². Its main monument is the 6th-century Euphrasian Basilica, a World Heritage Site since 1997. Poreč-Parenzo is almost 2,000 years old and spreads around a bay protected from the sea by the small island of Sveti Nikola (San Nicolás).
Poreč is located on the western coast of Istria and is cooled by sea breezes, so the local climate is relatively mild and free from the oppressive heat of summer. July is the warmest month, with a maximum air temperature of 30 °C in conditions of low humidity, while January is the coldest with an average temperature of 6 °C. There are more than 2,400 hours of sunshine per year, an average of more than 10 hours of sunshine during summer days. Sea temperatures can reach 28 °C, higher than might be expected compared with the coast of southern Croatia. The annual rainfall of 920 mm is distributed more or less evenly throughout the year, although July and August are very dry. The winds here are the bora, which brings cold, clear weather from the north, and the warm Mediterranean sirocco from the south, which brings rain. The summer breeze that blows from the land to the sea is called the maestral.
Travel Destinations in Porec
In 1844 the Austrian shipping company Lloyd opened a tourist line serving Poreč-Parenzo. The first tourist guide describing the city was printed as early as 1845. The oldest hotel is the Cote d'Azur, built in 1910; later came the Porestina and others. Nowadays, the tourist infrastructure is spread along the 37 km (23 miles) of coastline between the Mirna River and the deep Lim Valley. The south houses self-contained resorts such as Laguna Plava (Blue Lagoon), Laguna Zelena (Green Lagoon), Tivat Uvala (White Cove) and Brulo. To the north, the centres of interest are Materada, Červar, Porat, Ulika and Lanterna. In high season, the area has a temporary population exceeding 120,000. The heritage of Parenzo can be seen in the historic centre of the city and in the museums and galleries housed in its houses and palaces, many of them still private homes. In the off-season, weekend visitors from Croatia, Slovenia, Austria and Italy visit the area. Sports complexes are developed and used throughout the year.
Basilica of Saint Euphrasius (Porec)
Embankment of Porec
The embankment of Poreč on the Adriatic Sea is one of the most beautiful and picturesque places in the city. It is long enough for morning runs as well as leisurely walks during the amazing hours of sunset. Historically this part of the town was home to numerous fishermen, traders and sailors, so it was naturally a rough side of town. In the 19th century, however, all that changed as the city government decided to transfer this land to public use. Wooden shacks and huts were torn down, a wide embankment was built, and large new buildings were constructed facing the Adriatic Sea. The embankment is popular among local residents and international tourists alike. You can rent a local boat for a short trip around the city, and in the morning you can buy fresh fish from the local fishermen.
Tags: Diseases + Dogs
All Categories (1-20 of 161)
1. Endoparasite infection hotspots in Estonian urban areas
2. Dog-assisted therapy in the dental clinic: Part A-Hazards and assessment of potential risks to the health and safety of humans
Contributor(s):: Gussgard, A. M., Weese, J. S., Hensten, A., Jokstad, A.
3. How we're using dogs to sniff out malaria | James Logan
Full-text: Available
| Contributor(s):: James Logan
What if we could diagnose some of the world's deadliest diseases by the smells our bodies give off? In a fascinating talk and live demo, biologist James Logan introduces Freya, a malaria-sniffing dog, to show how we can harness the awesome powers of animal scent to detect chemical...
4. Wild Health: Dogs and Bats and Chickens, Oh My!
| Contributor(s):: Voelker, R.
5. Prevalence of Disorders Recorded in Dogs Attending Primary-Care Veterinary Practices in England
Full-text: Available
7. VetCompass Australia: A National Big Data Collection System for Veterinary Science
8. Canine Detection of the Volatilome: A Review of Implications for Pathogen and Disease Detection
11. Side effects of antineoplastic chemotherapy and their impact on quality of life in dogs and cats with oncological disease [Efeitos secundários da quimioterapia antineoplásica e seu impacto na qualidade de vida em cães e gatos com doença oncológica]
| Contributor(s):: Pedro de Sousa Dâmaso da Silveira
In veterinary medicine, the growing prevalence of neoplastic diseases has driven an exponential increase in the use of antineoplastic chemotherapy in dogs and cats. The use of antineoplastic drugs is,...
12. Reconstructive techniques in oncological surgery of canids and felids [Técnicas reconstrutivas em cirurgia oncológica de canídeos e felídeos]
| Contributor(s):: Alexandre Margarido Pargana
Currently, oncological disease is one of the main causes of death in canids and felids. Surgery is the oldest and, still today, the most successful method of treating neoplasms, and can also be used as a method...
13. Sero-epidemiological and haematological studies on toxoplasmosis in cats,dogs and their owners in Lahore, Pakistan
| Contributor(s):: Azeem Shahzad, Muhammad Sarwar Khan, Kamran Ashraf, Muhammad Avais, Khalid Pervez, Jawaria Ali Khan
The current study was conducted to find out the epidemiological status of toxoplasmosis in cats, dogs and human population in Lahore city of Pakistan and to determine the possibility of transmission of toxoplasmosis from cats and dogs to their owners. Overall 56% cats were seropositive for...
14. Sniffing Out Cancer: Pina De Rosa & Adriana LaCorte at TEDxWilmington
| Contributor(s):: Pina De Rosa, Adriana LaCorte
15. The Epidemiology and Impact of Dog-Associated Zoonoses in Developed and Developing Countries
| Contributor(s):: Glenda Bingham
Dogs are an important part of human societies throughout the world. Although people derive many economic and health benefits from their relationships with dogs, these relationships are not without risk. Interaction with dogs can lead to an increased risk for zoonotic diseases, if the appropriate...
16. On the role of pets in Germany [Zur Rolle von Kleintieren in Deutschland]
| Contributor(s):: Schwarz, S.
This article discusses the number and presence of pets in the German household, especially dogs and cats; essentiality and importance of pets to the well-being of German owners; ability of pets to decrease the risk of heart disease; and function of dogs in rescue, animal assisted therapy and...
17. Detection of Verticillium dahliae in olive groves using canine detection units
| Contributor(s):: Pons Anglada, L., Calvo Torras, M. dels A.
Verticillium wilt is one of the most significant agricultural diseases in the world, in view of the fact that not only does it affect olive groves but also a wide variety of fruit, vegetable and ornamental plants. Currently, the most efficient and economical method of control involves the use of...
18. Magnolia Paws for Compassion
Magnolia Paws for Compassion is a special program, created by Eisai Inc., which seeks to increase access to animal assistance and raise awareness of the many benefits that interaction with animals can provide to those coping with illness.As one part of this effort, Eisai is partnering with the...
19. A Walk in the Park: Zoonotic Risks Associated with Dogs that Frequent Dog Parks in Southern Ontario
| Contributor(s):: Theresa D. Procter
A cross-sectional study investigated the shedding of zoonotic organisms (Campylobacter, Giardia, and Salmonella) and antimicrobial resistant generic E. coli in dogs that visited dog parks in southern Ontario. Logistic regression models were constructed to identify risk factors. Factors for the...
20. Human-related factors regulate the presence of domestic dogs in protected areas
| Contributor(s):: Soto, C. A., Palomares, F.
|
Dimensional venturing, Part 2 – Twirling in 4-space
Last week we introduced the tesseract, which is to a cube what a cube is to a square — an extension into one more dimension. That’s why it’s also called a hypercube. The first tesseract diagrams I ever saw were so confusing — they looked like lots of overlapped squares tied together with lines that didn’t make much sense. I wondered, “Wouldn’t it be easier to understand a tesseract if I could see it rotating?”
Years later computers and I had both moved ahead to where I could generate the pictures you see in this post. What I learned while doing that was that 4-D figures have two equators. In four dimensions, it’s possible for something to rotate in two perpendicular directions at the same time. Read on and please don’t mind my doggerel — it doesn’t bite.
The LINE is just a single stroke,
a path from here to there.
Stretch it out beside itself
and you will have a SQUARE.
Where’s its face when it turns around?
Gone, ’cause its back’s not there.
The CUBE's a square
made thick, you see.
Length, breadth and depth
comprise a full 3-D.
Add yet a thickness more,
crosswise all to X, Y, Z.
A TESSERACT on a corner spins
but an XY-slice is all we see.
But the axis, too, can rotate through
a path that’s drawn invisibly.
Four faces grow and shrink in place —
it’s hard to do that physically.
This tesseract is tumbling ’bout
two equators perpendicular.
Were I in such a state, I vow,
I’d be giddy, even sickular.
In the 4-D views, when one of the tesseract's cubical faces appears to disappear into an adjacent face, what's actually happening is that the face is sliding past the other face along that fourth dimension (which I called W because why not?).
You’re looking at a two-dimensional picture of the three-dimensional projection of a four-dimensional object as it moves in 5-space (X, Y, Z, W, and time — if it didn’t move in time then it couldn’t be spinning).
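For the curious, here is a minimal numeric sketch of that double rotation (my own code, not the program that generated these pictures): rotate in the XY plane and the ZW plane independently, then project down to 3-D by dividing by distance along W.

```cpp
#include <array>
#include <cmath>

using Vec4 = std::array<double, 4>;

// Rotate a 4-D point by angle a in the XY plane and angle b in the ZW plane.
// The two rotations touch disjoint coordinate pairs, so they commute:
// those are the two perpendicular equators of the tumbling tesseract.
Vec4 double_rotate(const Vec4& p, double a, double b) {
    return {
        p[0] * std::cos(a) - p[1] * std::sin(a),
        p[0] * std::sin(a) + p[1] * std::cos(a),
        p[2] * std::cos(b) - p[3] * std::sin(b),
        p[2] * std::sin(b) + p[3] * std::cos(b),
    };
}

// Perspective projection from 4-D to 3-D: scale by distance from a "camera" on the W axis.
std::array<double, 3> project(const Vec4& p, double camera_w = 3.0) {
    double s = 1.0 / (camera_w - p[3]);
    return { p[0] * s, p[1] * s, p[2] * s };
}
```

Apply this to the 16 tesseract corners (±1, ±1, ±1, ±1) at a sequence of angle pairs and you get frames like the ones in this post.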
Next week — Herr Klein’s bottle, or rather flask, or rather surface.
~~ Rich Olcott
Dimensional venturing, Part 1 – What’s 4-D?
Whenever a science reporter uses the phrase “string theory,” it’s invariably accompanied by a sentence about tiny strings vibrating in 10 or 11 dimensions. Huh? How can you have more than three? And what does it really mean to say that that comix villain comes from the 4th dimension? Actually, we live in many dimensions, though it’s not easy to visualize them all at once. Let’s get some practice.
Right now, you’re reading along a line, a one-dimensional path from left to right. Imagine a point drawing a straight line about a foot in front of you. Let that line just hang out there in the air, glowing a gentle green color, with one “edge” (the line itself) and two “corners” (its ends).
As you read down the page, you traverse a series of lines laid out next to each other in the two-dimensional plane of the page. Imagine your green line moving upward, leaving a plane of yellow sparkles behind it. Stop when you’ve got a sparkly yellow square in front of you showing its one face, four edges (one green, three yellow) and four corners (two green, two yellow). Let’s put some red paint on one of those yellow edges.
Stack up enough printed pages and you’ve got a 3-dimensional book. Imagine that nice yellow square moving away from you until you’ve got a friendly cube hanging out in the air. Our original line, the green edge, has produced a green face going into the distance. The red edge has built a pink face. All together, the cube has 8 corners, 12 edges and 6 faces. OK, now make your cube disappear.
But we’re not done yet. Time is a dimension. Consider that cube. Before you dreamed it up – nothing. Then suddenly a cube. Then nothing again. During the interval the cube was floating in front of you, the green line was tracing out a green face in time. The pink face was drawing a pink cube. The whole cube, from when it started to exist until it went away, traced out a four-dimensional figure called a tesseract, also called a 4-cube or hypercube. The tesseract was bounded by a cube at the beginning, six cubes while it existed (one from each face of the initial cube), and a cube at the end of its time, for a total of eight.
Just for grins, count up the faces, edges and corners for yourself.
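If you'd rather let arithmetic do the counting: an n-dimensional cube has C(n,k)·2^(n−k) faces of dimension k (choose which k axes the face extends along, then pick a side on each remaining axis). A quick sketch:

```cpp
#include <cstdint>

// Number of k-dimensional faces of an n-cube: C(n, k) * 2^(n - k).
std::uint64_t cube_faces(unsigned n, unsigned k) {
    std::uint64_t binom = 1;
    for (unsigned i = 1; i <= k; ++i)
        binom = binom * (n - k + i) / i;  // builds C(n, k) exactly, step by step
    return binom << (n - k);              // times 2^(n - k) choices of side
}
```

For the tesseract (n = 4) this gives 16 corners, 32 edges, 24 square faces and 8 cubical cells, matching the eight bounding cubes counted above.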
But wait, there’s more. The tesseract doesn’t just sit there, it can spin. Being four-dimensional, it can spin in a surprising way. We’ll get to that next week.
~~ Rich Olcott |
Anti-HIV using Nanorobots
By | April 21, 2015
There is currently no specific technology for the treatment of AIDS. Drugs of specific composition are given to patients, but they can extend life by only a few years. To make the treatment more specific we can use nanotechnology, which has biomedical applications. A nanorobot is about 100 times smaller than an animal cell, so it can easily monitor the behaviour of cells inside the body. Nanorobots use nanosensors to sense AIDS-infected WBCs and convert them back into original WBCs. They operate at specific sites and have no side effects. Thus the AIDS patient is provided with an immune system with which to defend against AIDS.
A. Nanorobots
Nanorobotics is the technology of creating machines or robots at or close to the scale of nanometres (10^-9 metres). Nanorobots would typically be devices ranging in size from 0.1 to 10 micrometres; they could work at the atomic, molecular and cellular level. Nanorobots are likely to be constructed of carbon atoms, generally in a diamond structure because of its inertness and strength; glucose or natural body sugars and oxygen might be the source of propulsion; and nanorobots will respond to acoustic signals.
HIV stands for Human Immunodeficiency Virus. Like all viruses, HIV cannot grow or reproduce on its own. In order to make new copies of itself it must infect the cells of a living organism. HIV belongs to a special class of viruses called retroviruses; within this class, HIV is placed in the subgroup of lentiviruses. Outside of a human cell, HIV exists as roughly spherical particles (sometimes called virions). The surface of each particle is studded with lots of little spikes. An HIV particle is around 100-150 billionths of a metre in diameter. That's about 0.1 microns, one twentieth of the length of an E. coli bacterium, or one seventieth of the diameter of a human CD4+ white blood cell. Unlike most bacteria, HIV particles are much too small to be seen through an ordinary microscope; however, they can be seen clearly with an electron microscope, as shown in Fig. 1.
Anatomy of AIDS virus
Fig.1 -Anatomy of AIDS virus
Zidovudine is the latest known drug used for the treatment of AIDS. It has an affinity for the HIV genome (an RNA molecule): it binds to it before reverse transcriptase starts working, and as a result DNA cannot be synthesized. But the drug can lose its efficiency at any time, as mutations at codons 67, 70 and 215 change the conformation, reducing the affinity of zidovudine for the viral genome; reverse transcriptase (RT) then starts its action, and the viral genome is replicated and integrates with the host genome.
Zidovudine can be used to resist HIV, but it cannot destroy the virus. Destruction of the viral genome is possible using nanorobots. This type of nanorobot will consist of a nano-biosensor developed by nanoelectronics engineers, a data converter, and a container holding a high concentration (say 20 U/microlitre) of DNase and RNase enzymes.
Most animal cells are 10,000 to 20,000 nanometers in diameter. This means that nanoscale devices (having at least one dimension less than 100 nanometers) can enter cells and the organelles inside them to interact with DNA and proteins. Tools developed through nanotechnology may be able to detect disease in a very small amount of cells or tissue. They may also be able to enter and monitor cells within a living body. Nanotechnology could make it possible to run many diagnostic tests simultaneously as well as with more sensitivity. In general, nanotechnology may offer a faster and more efficient means for us to do much of what we do now.
A. Nanobiosensor
Antibodies (Ab) for the antigens (Ag) gp41 and gp120 will be tagged on its surface. Whenever it comes into contact with an infected cell, the antibody will react with it through an immunochemical reaction and so identify it.
B. Nanochip
It's a chip which receives the signal from the nanobiosensor and performs its job.
C. Nanotube
It's a tube at the nanoscale. On receiving a positive signal, the nanochip injects the nanotube into the nucleus of the cell.
D. Nanocontainer
A nanocontainer will hold highly concentrated DNase and RNase enzymes, which will be delivered into the infected cell and will cleave the whole genomic DNA into single nucleotides.
The function of the biosensor is to identify a particular compound; in this case the biosensor will carry a particular antibody. gp41 and gp120 are two unique HIV envelope proteins found in the cell membrane of an infected cell. The antigen (gp41 and gp120) and antibody reaction gives the proper signal. This reaction takes place only in the case of an infected cell, as those viral proteins are found only in the cell membrane of infected cells. On getting the positive signal, the nanorobot injects its nanotube into the nucleus of the infected cell and releases the DNase and RNase enzymes into the cell. The DNase enzyme is not sequence specific, so it cleaves the whole genomic DNA, including the viral genome, into single nucleotides. Once the viral genome loses its sequence it loses its viral effect, and after the digestion of the whole genomic DNA the cell undergoes normal programmed cell death, called apoptosis. Thus the infected cells of the diseased body can be destroyed to finish off the viral genome in the body, as shown in Fig. 2.
Nanorobot performing operations on blood cells
Fig. 2- Nanorobot performing operations on blood cells
• More than a million people in this world are affected by this dreaded disease. Currently there is no permanent vaccine or medicine available to cure it.
• The currently available drugs can extend the patient's life by only a few years, so the invention of this nanorobot would let patients get rid of the disease.
• As the nanorobot does not generate any harmful activity, there are no side effects. It operates at specific sites only.
• Only the initial cost of development is high; manufacturing by batch processing reduces the cost.
• The initial design cost is very high.
• The design of this nanorobot is a very complicated one.
This paper is only a theoretical justification, but recent advances in the field of nanotechnology give hope for the effective use of this technology in medicine. This is the beginning of the nano era, and we can expect further improvements, such as a medicine for AIDS using nanotechnology.
Is it possible to make a Michelson-Morley experiment (MME) with sound waves? Suppose we travel on a rail platform with two "mirrors" that reflect sound; what parameters would we need to simulate the light experiment? Would it work if we went at a proportional speed, that is 300/1000 m/s? How far apart should the mirrors be, etc.?
Can you figure out what the result of such an experiment would be? Is the speed of propagation of sound influenced by motion through the air?
In the interesting link provided by Farcher they say:
The results confirm the hypothesis that the two-way velocity of sound is isotropic in a moving system, as in the case of the optical MME
Does this mean that the speed of propagation of sound is not affected by motion? Does this prove that relativity does not apply only to light?
$\begingroup$ The problem with audible sound waves is that their wavelength is rather long but is can be done with ultrasound and here is an example worldnpa.org/abstracts/abstracts_5338.pdf $\endgroup$ – Farcher Feb 24 '16 at 15:35
• $\begingroup$ To me it's not even clear why they are calling that experiment a Michelson-Morley. Half the setup is missing and there is no interference part. Since air also can't flow trough the mirrors (while space and the aether can!), one can never even compare the two cases. $\endgroup$ – CuriousOne Feb 24 '16 at 20:56
I think it should be possible. Essentially you could construct a large L-shaped frame (each side being the same length) with two small flat surfaces at the ends of the L, and a source of a short sound -- a cap gun for instance -- and a microphone at the corner. Then you fire the gun and listen for the echoes, timing their arrival. How big the apparatus needs to be depends on how short your impulses are and how accurately you can measure: arms 10m long should be fine.
Now you put the whole thing on wheels, and find a very large flat surface (salt flat?) and, on a flat calm day, you drive it along in various directions, and do the experiment repeatedly.
There will be two problems to overcome:
1. there will be reflections from the ground, wheels and so on, but if you dimension things properly you can arrange life so that these arrive at times comfortably different (earlier) from the ones you care about;
2. there will be some dragging of the air by the structure itself, which is unavoidable but which you can work to minimise by making it have as small an effective cross-section as possible and by streamlining suitable bits of it (you may need to do wind-tunnel tests).
What you will find is two things (remember this is all being done on a flat calm day):
• by rotating the device and repeating the experiment at lots of angles but while stationary you will find that sound travels at a speed which is not direction-dependent, or that its speed is isotropic;
• by moving the device along with it turned at various angles to the direction of motion you will find that sound moves at a speed which is constant relative to the air, not the device, or in other words that there is 'aether' for sound, and it is, in this case, air.
What you will not get is a null result, unlike the Michelson-Morley experiment.
• $\begingroup$ I may not have been clear, but sound is affected by directional motion, in exactly the way you would expect. Its speed is the same in any direction (when the apparatus is stationary relative to the air), but that's a different thing. $\endgroup$ – tfb Feb 25 '16 at 11:02
• $\begingroup$ Propagation speed is the same in every direction relative to the air, not relative to you. There is no Doppler effect (there is only a Doppler effect when the source and detector are moving relative to each other, which they are not here). $\endgroup$ – tfb Feb 25 '16 at 11:58
• $\begingroup$ It's not null for sound! That's why it was a big deal when it was null for light. If I have done my sums right (not guaranteed before I have had coffee) then, if the frame is moving with one arm in the direction of movement and one perpendicular, the time for sound to travel the parallel arm and back is $t_\parallel = 2LV/(V^2-v^2)$, and for the perpendicular arm it is $t_\perp = 2L/\sqrt{V^2-v^2}$, where $L$ is arm length, $V$ is speed of sound in air and $v$ is how fast the frame is moving. And you can see that $t_\parallel > t_\perp$. $\endgroup$ – tfb Feb 25 '16 at 12:10
• $\begingroup$ I think $t_\parallel - t_\perp = Lv^2/V^3 + O(v^4)$. $\endgroup$ – tfb Feb 25 '16 at 12:23
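The transit-time formulas quoted in the comments are easy to check numerically. A quick sketch (the arm length follows the 10 m suggested in the answer; the platform speed is an assumed value):

```python
import math

L = 10.0   # arm length in metres (as suggested in the answer)
V = 343.0  # speed of sound in air, m/s
v = 20.0   # speed of the moving platform, m/s (assumed)

# Round-trip times for the arm parallel and perpendicular to the motion
t_par = 2 * L * V / (V**2 - v**2)
t_perp = 2 * L / math.sqrt(V**2 - v**2)

# Leading-order approximation of the difference: L v^2 / V^3
approx = L * v**2 / V**3

print(t_par - t_perp > 0)                       # the parallel round trip takes longer
print(abs((t_par - t_perp) - approx) / approx)  # small: relative error is O(v^2/V^2)
```

For these numbers the time difference is on the order of a tenth of a millisecond, which sets the timing resolution the apparatus would need.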
I don't think it's practically possible. Here are some of the reasons why:
• It is difficult to create a focused sound beam, simply because of the typical wavelengths and the impedance matching required.
• To run it at a sound speed significantly greater than ca. 350 m/s, you would need to conduct the experiment in a liquid or (and that would be funny) in a solid substance.
• It would be really tricky to make a "half transparent" obstacle.
• In classical linear theory, the speed of sound in a perfect gas is constant, depending only on ambient temperature and very weakly on humidity - not on the pressure disturbances. However, in nonlinear theory, which you would probably need in order to track such tiny differences, there is a dependence... and then the math suddenly becomes really ugly. For a planar progressive wave in a diatomic perfect gas, the "nonlinear speed of sound" is usually given by:
$$ c = c_0 + 0.2\,u $$
where $c_0$ is the "linear sound speed" and $u$ is the acoustic particle velocity. There is also a "semilinear" approach for the convective speed, in which the Mach number measures the departure from the steady case.
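As a rough sketch of the magnitudes involved, here are the standard perfect-gas relations for a diatomic gas (all numerical values assumed): the local sound speed shifts by $(\gamma-1)/2 \cdot u$, while a point on the waveform travels at $c_0 + (\gamma+1)/2 \cdot u$.

```python
# Sketch: nonlinear acoustics in a diatomic perfect gas (gamma = 1.4).
# For a plane progressive wave with acoustic particle velocity u, the local
# sound speed is c = c0 + (gamma - 1)/2 * u, and a point on the waveform
# travels at u + c = c0 + (gamma + 1)/2 * u.
gamma = 1.4
c0 = 343.0  # linear sound speed in air at ~20 degC, m/s (assumed)

def local_sound_speed(u):
    return c0 + 0.5 * (gamma - 1) * u

def wavelet_speed(u):
    return local_sound_speed(u) + u

# Even for an extremely loud wave (u ~ 0.1 m/s, well above 120 dB SPL),
# the corrections are tiny, which is why they are so hard to resolve.
print(local_sound_speed(0.1) - c0)   # ~0.02 m/s
print(wavelet_speed(0.1) - c0)       # ~0.12 m/s
```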
A discussion based on the characteristic wavelength $\lambda$ will get you closer to answering "how far apart should the mirrors be", etc.
There might be some effects of nonlinear wave steepening due to the motion, but detecting them would require a very good anechoic room, very quiet servo drives, and very precise signal processing.
• $\begingroup$ I do not think it is totally out of the question. If you use an ultrasonic transmitter and receiver with exponential horns there is no need to focus. It might be fun doing the experiment travelling on the back of a lorry? A piece of hardboard is a very good half silvered mirror and sheets of aluminium are good reflectors. $\endgroup$ – Farcher Feb 24 '16 at 15:47
• $\begingroup$ I agree, that is doable. The question is whether you want just to "see something" or qualitatively evaluate the MME. I am not sure about the latter case. $\endgroup$ – Victor Pira Feb 24 '16 at 16:08
• $\begingroup$ There is a standard interferometer set up in schools with ultrasonic TX & RX. The wavelength of the ultrasonic waves is measured by moving one of the aluminium mirrors and looking for maxima and minima on a meter which is monitoring the output of the RX. What one would need to do is to mount the apparatus on a platform which could be rotated. Then the whole arrangement needs to be put on something which can move at a relatively constant speed, a lorry or a boat. Whilst moving the apparatus would be rotated and the output meter monitored. The path lengths are about 100 cm. $\endgroup$ – Farcher Feb 24 '16 at 16:18
• $\begingroup$ Well, I think you should put this as an answer as well. I would certainly vote it up. $\endgroup$ – Victor Pira Feb 24 '16 at 16:54
• $\begingroup$ Did you have a look at the reference I mentioned in a comment after your question? $\endgroup$ – Farcher Feb 24 '16 at 17:12
|
Bill Gates is the founder of Microsoft Corporation and one of the richest and most influential people in the world. According to a January 2017 estimate, his wealth of roughly 84 billion US dollars was equivalent to the combined GDP of several African economies.
In 2006, Bill Gates announced his transition from full-time work at Microsoft, deciding to devote most of his time to giving back to society through the charitable foundation he had started with his wife Melinda, the Bill and Melinda Gates Foundation.
Bill was born in Seattle, Washington, on October 28, 1955. His father, William Gates, was a senior lawyer, and his mother was an executive for a major bank. Despite being wealthy, they encouraged their children to work hard and take nothing for granted.
Bill’s parents were supportive and encouraged his interest in computers from an early age. He attended the private Lakeside School, where at the age of 13 he was first introduced to computers. After learning the basics, he wrote a simple game, “Tic-Tac-Toe.” He enjoyed his time working with computers so much that he arranged with the Computer Center Corporation to study programming languages such as Fortran, Lisp, and machine code.
Bill enrolled at Harvard in 1973, where he studied mathematics and computer science. However, he was more interested in pursuing his own coding, so when the opportunity came he dropped out of Harvard without completing his studies and started his own company.
Bill started Microsoft with his childhood friend Paul Allen in 1975 with the vision of ‘a computer on every desktop and in every home’. That vision seemed far-fetched to most people at the time, but today Microsoft and other companies have made it a reality.
Microsoft got its big break in 1980, when IBM asked it to supply software for its new personal computers. IBM was by far the leading PC maker in the early ’80s, and as many IBM-compatible clones appeared, Microsoft worked hard to sell its operating system to the clone manufacturers as well. As the personal computer market boomed, Microsoft gained a dominant position as a software maker, and other companies struggled to displace it. Programs like MS Word and Excel changed the industry and have become standards.
Microsoft released Windows 3.0 in 1990, a breakthrough in operating-system software. It soon became a best seller and captured the majority of the operating system market.
Windows 95, released in 1995, introduced new operating system features and standards. It is also regarded as the backbone of all future Windows releases, such as Windows 2000, XP, and Vista.
For many years, Microsoft’s Internet Explorer was the dominant web browser, although this was primarily because it came pre-installed on most new computers. Its market share has since slipped.
Search has been one of Microsoft’s weakest areas: despite enormous efforts, MSN’s Live Search struggled to gain more than 5% of the market. Microsoft has also faced several antitrust cases over its business practices.
In the early 2000s, a US antitrust ruling nearly broke Microsoft into separate companies, but the company ultimately survived as a single firm. Today it faces intense competition from swiftly growing rivals such as Google and Apple. |
Oil Spill Threatens Argentine Capital
An oil spill 20 kilometers long is heading towards Buenos Aires and may have greater catastrophic effects if it reaches land.
The fuel was released into coastal waters after a Greek merchant ship named Syros and the Maltese-registered merchant ship Sea Bird collided. According to officials, the collision was probably caused by a miscalculated maneuver by the Greek vessel. Luckily, no one was injured. Disastrously, however, Syros’ fuel tank was cut open, dumping 14,000 cubic meters of oil into the sea.
Environmental workers are trying to prevent the oil from reaching land by using floating barriers. Authorities say a southeast wind will probably prevent it from reaching Uruguay’s coast.
The collision took place near Montevideo, southern Uruguay, and the oil slick is heading straight for Buenos Aires. Sources: 1, 2, 3, 4 |
Evidence Based: this post has 104 references
Artificial Blue Light: Negative / Positive Health Effects
Medically reviewed by Puya Yazdi
Blue light chart
Nighttime blue light exposure can derange the circadian rhythm. It may be a culprit behind many modern chronic health issues, including obesity, metabolic syndrome, and cancer. On the other hand, blue light exposure during the day is very beneficial. Read this post to learn how to hack your blue light exposure for your benefit.
What Is Blue Light?
Light is electromagnetic radiation of a variety of wavelengths within the electromagnetic spectrum.
There is both visible and non-visible light. Non-visible light includes ultraviolet (UV) and infrared light, while visible light includes the whole spectrum of the rainbow, including blue light. Within the visible light spectrum, each wavelength is represented by a color.
Of all the colors of the visible light spectrum, blue light (wavelength 446-477 nm) has the strongest impact on physiology and the circadian rhythm because our pigments react to this wavelength [1, 2, 3].
Humans have evolved to rise with the sun and go to sleep after sunset. Before the advent of technology, humans only used sources that emitted yellow, orange, and red light, such as fire, candles, and lamps. These lights have much less impact on our circadian rhythm and sleep-wake cycles than blue light [4, 5].
Nowadays, advances in lighting and other technologies like television, computers, and digital clocks among others, have introduced more blue-white light to our environment. In addition, these lights are available at all hours, which can affect our health and wellbeing [5].
The higher-efficiency “green” light bulbs are not necessarily as beneficial for humans as they supposedly are for the Earth. Compact fluorescent light (CFL) bulbs contain about 25% of blue light, while this percentage reaches 35% in LEDs [5].
Generally, blue light is beneficial during the day, but harmful at night.
What Are the Physiological Effects of Blue Light?
Light exposure anchors human body functions to the rise and fall of the sun. When sunlight, which contains blue light, hits the retina, the photoreceptor cells transmit nerve impulses to the hypothalamus. The hypothalamus is the command central for hunger, thirst, temperature regulation, hormone secretion, and sleep patterns, which fluctuate according to our circadian rhythm [6].
Photoreceptor cells in the retina of the eye (retinal cells, rod and cone cells). The arrangement of retinal cells is shown in a cross-section.
When light hits the eye, it hits the retina. This light-sensitive tissue is actually considered part of the central nervous system (CNS) and is connected to the brain via the optic nerve. There are several layers of nerve cells in the retina, but the ones that are sensitive to light are those with photoreceptor cells. Rods and cones are the light-sensing cells that allow us to see, while the retinal ganglion cells are important for circadian rhythm entrainment [7].
The suprachiasmatic nucleus (SCN) in the hypothalamus coordinates light exposure with bodily functions through hormonal, autonomic (involuntary nerve impulses), and feeding-related cues [7].
Source: [8]
1) Potential Negative Effects of Blue Light Exposure During the Night
The following adverse effects are commonly associated with increased exposure to blue light during the night. Note, however, that the majority of studies covered in this section deal with associations only, which means that a cause-and-effect relationship hasn’t been established. Other environmental and genetic factors may play a more important role.
1) Disrupting Circadian Rhythm
Normally, darkness at night allows a normal output of melatonin between 2:00 and 4:00 am. Then bright daylight follows, resetting the clock and beginning a new 24-hour day [9].
Intrinsically photosensitive retinal ganglion cells contain melanopsin (a light sensor protein), whose job is to synchronize the body’s circadian clock to light [10].
Exposure to blue light at night signals the body that it is daytime, consequently disrupting the circadian rhythm, which is crucial to many body processes. To entrain the circadian rhythm, it is important to both get sunlight in the morning and avoid artificial light at night [11].
1) May Lower Melatonin Production
Melatonin is an important hormone that controls the sleep-wake cycle. Light at night, especially blue light, suppresses melatonin production [12, 9, 13].
Even dim light can interfere with a person’s circadian rhythm and melatonin secretion. A mere 8 lux (2x the brightness of a night light) can inhibit melatonin. The brighter the light and the longer the exposure time, the less melatonin the eye cells produce [14, 15].
A theoretical model of different types of light and exposures indicates the following time needed for melatonin suppression [16]:
• Monochromatic red light at 100 lux, a reasonable living room lighting level, would take 403 hours of exposure to suppress melatonin by 50%
• Candle: 66 min suppresses melatonin by 50%
• 60-watt incandescent bulb: 39 min suppresses melatonin by 50%
• 58-watt deluxe daylight fluorescent light: 15 min suppresses melatonin by 50%
• Pure white high-output LED: 13 min suppresses melatonin by 50%
The extent to which light at night suppresses melatonin depends on both the wavelength (blue being the worst) and the intensity of the light [13, 9].
For example, after 1 hr of light at midnight, melatonin could be suppressed up to 71%, 67%, 44%, 38%, and 16% with intensities of 3,000, 1,000, 500, 350, and 200 lux, respectively [17].
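To get a feel for these numbers, the intensity-suppression figures quoted above can be turned into a rough estimator. This is purely illustrative: the data points are those reported for 1 hour of midnight light, and log-linear interpolation between them is an assumption of this sketch, not part of the study.

```python
import math

# % melatonin suppression after 1 h of midnight light, from the figures
# quoted in the text: (intensity in lux, % suppression).
DATA = [(200, 16), (350, 38), (500, 44), (1000, 67), (3000, 71)]

def suppression(lux):
    """Estimate % melatonin suppression by piecewise-linear interpolation
    in log10(lux); clamps to the endpoints outside the measured range."""
    if lux <= DATA[0][0]:
        return DATA[0][1]
    if lux >= DATA[-1][0]:
        return DATA[-1][1]
    for (x0, y0), (x1, y1) in zip(DATA, DATA[1:]):
        if x0 <= lux <= x1:
            t = (math.log10(lux) - math.log10(x0)) / (math.log10(x1) - math.log10(x0))
            return y0 + t * (y1 - y0)

print(round(suppression(700)))  # an interpolated estimate for 700 lux
```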
Melatonin levels are reduced most with dilated pupils exposed to 90 min of monochromatic blue light from 2:00 to 3:30 AM at a brightness of 0.1 lux, which is equivalent to the light of a full moon [9].
Blue light, even at a dim, moonlight level (0.1 lux), suppressed melatonin production more than any other wavelength in a study in rats [9].
People with light-colored eyes (blue or green, for example) are more susceptible to melatonin suppression by blue light than those with darker eyes [18].
2) May Raise Cortisol
Exposure to either short-wave blue light or long-wave red light significantly increased cortisol levels at night, with little effect on cortisol levels during the day, in a small trial on 12 people [19].
3) May Suppress alpha-MSH Release
In an animal study, constant light exposure reduced the melanocyte-stimulating hormone alpha (a-MSH) peak, which can cause weight gain, increase inflammation, and increase autoimmunity [20].
4) May Worsen Sleep Quality
In clinical trials, nighttime blue light exposure:
• Increased the time it takes to fall asleep [21]
• Reduced total and deep sleep duration and overall sleep quality [22]
• Reduced melatonin effects, such as increased deep sleep in elderly insomniacs and increased REM sleep in people with reduced REM [23]
2) Worsening Metabolic Health
1) May Cause Weight Gain
Source: [8]
In a study on almost 55,000 female nurses, the number of night-shifts was associated with an increased incidence of obesity [24].
In a questionnaire-based study on 100,000 women in the UK, obesity parameters (BMI, waist:hip ratio, waist:height ratio, and waist circumference) were significantly associated with increased exposure to light at night [25].
Mice exposed to dim (5 lux) light at night had altered core circadian clock rhythms in the hypothalamus, which resulted in increased body mass compared to mice that were not exposed to light, even though they ate the same amount of food [26].
The hypothalamus contains high concentrations of receptors for both leptin (the satiating hormone) and ghrelin (the appetite-stimulating hormone). Exposure to light has been found to alter the secretions of these hormones, which can lead to weight gain by increasing hunger and decreasing satiety [27].
2) May Raise the Risk for Diabetes
Artificial blue light may alter blood sugar metabolism, as seen in a small trial on 19 healthy volunteers [28].
A study on a working population of over 27,000 people found that shift workers tend to have higher rates of obesity, higher triglycerides, and lower HDL cholesterol than non-shift workers [29].
Light at night disrupted the circadian clock of pancreatic islet cells and β-cell function in cell studies. This impaired circadian clock may be responsible for altered islet function and survival and, consequently, an increased risk of type 2 diabetes [30].
In mice, dim light at night brought on metabolic syndrome symptoms, which could be reversed by returning to complete darkness at night [31].
3) May Raise the Risk for Heart Disease
Blue light disrupted the circadian rhythm in heart muscle cells, possibly putting the heart out of synch and increasing the risk of high blood pressure, insulin resistance, and cardiovascular disease [32].
Its inhibition of melatonin synthesis may lead to increased inflammation, increased blood pressure, high LDL and total cholesterol, decreased antioxidant capacity, and overall elevated risk of cardiovascular disease [32].
3) May Damage the Eyes
1) May Contribute to Age-related Macular Degeneration
Blue light exposure may lead to macular degeneration. In several animal studies, this light:
• caused cell death and destruction of the fatty acid DHA in the retina [33, 34].
• reduced melanin (not to be confused with melatonin, melanin is a pigment with antioxidant properties that can prevent eye damage) in the retina [35].
• increased free radicals in the presence of oxygen around the retina, leading to the breakdown of the retinal lining [36].
In eyes that are undergoing age-related macular degeneration, the damaged cells release VEGF, which can induce new blood vessel formation. These new blood vessels are fragile and can easily burst, which leads to blood leaking out and damaging the back of the eye where light is transformed into a picture [37, 38].
2) May Contribute to Glaucoma
Mice studies suggest that the suppression of melatonin by blue light may cause glaucoma, while filtering out this wavelength may help prevent it [39, 40].
In glaucoma patients, functional retinal ganglion cells are more susceptible to the damaging effects of blue light. The function of these cells worsens as the disease progresses [41, 42].
4) May Contribute to Other Chronic Diseases
1) May Contribute to Depression
A study in hamsters found that chronic dim light at night led to depression-like behaviors that may be caused by brain inflammation due to increased levels of TNF-alpha [43].
Similarly, dim light at night increased depressive behaviors in an animal model of obstructive sleep apnea [44].
A study of 500 elderly individuals (average age 72.8) associated exposure to light at night with depression [45].
2) May Increase Cancer Risk
Melatonin protects against oxidative stress by increasing glutathione and antioxidant enzymes, such as glutathione peroxidase and superoxide dismutase [46, 47].
Melatonin may protect against some types of cancer, particularly breast and colon cancer, based on some preliminary research in animals and cells and association studies in humans [48, 49].
As having lowered melatonin levels has been associated with an increased risk of cancer and blue light prevents melatonin production, blue light exposure at night may increase the risk of cancer, and blocking blue light may be helpful in preventing it [50].
However, studies on the effect of shift work – with the associated nighttime exposure to light – on various cancers (breast, prostate, colorectal, and lymphoma) showed inconsistent results [51].
In a study of over 50,000 women, long-term shift work, which lowers melatonin levels, was significantly associated with an increased risk of endometrial cancer [52].
Numerous studies show that lowered melatonin due to nighttime light exposure is a risk factor for breast cancer, with blind women being half as likely to develop this type of cancer as non-blind women [53].
In animal studies, nightlight exposure increased the growth of breast cancer cells and induced their resistance to conventional tamoxifen (an estrogen analog) and doxorubicin treatment [54, 55].
A global study on artificial light at night and cancer in 158 countries found a significant association with several forms of cancer, including lung, breast, colorectal, and prostate cancer [56].
2) Potential Benefits of Blue Light Exposure During the Day
During the day, exposure to blue light may have some health benefits that allow its therapeutic use. Below is a summary of the existing research.
Likely Effective
Actinic keratosis
Actinic keratoses are rough, scaly patches on the skin that may develop into skin cancers such as squamous cell carcinomas. Photodynamic therapy combining aminolevulinic acid (a photosensitizing agent) with light therapy (especially blue light) is approved for the removal of actinic keratosis. Upon activation by blue light, aminolevulinic acid damages the cells of these lesions.
Multiple clinical trials attest to its effectiveness [57, 58, 59, 60].
This photodynamic therapy was also effective for similar lesions on the lower lips (actinic cheilitis) in a clinical trial on 15 people and against a condition that increases the risk of developing basal cell cancer (Gorlin syndrome) in a small pilot study on 3 people, although more research is needed to support these uses [61, 62].
Possibly Effective
1) Acne
Blue light inhibits the growth of Propionibacterium acnes, a bacteria that causes acne. Daily blue light treatments for mild-to-moderate inflammatory acne significantly reduced the number of acne lesions in 6 clinical trials on over 200 people [63, 64, 65, 66, 67, 68].
In another study on 30 patients using blue light LED treatments for acne at home, the blue light therapy required less time to improve the condition than the sham treatment [65].
The combination of blue and red light worked better than either used alone in 2 clinical trials on almost 150 people. Broad-spectrum continuous-wave visible light therapy can be used to harness the power of both blue and red light [69, 70, 71].
Similarly, blue light alone was only effective at improving inflammatory lesions, but its combination with infrared light also improved non-inflammatory lesions and excess sebum production (seborrhea) in a trial on 20 people [72].
A system comprised of LED blue-light phototherapy and photo-converter chromophores was safe and effective in a 12-week clinical trial on 90 people with acne and a 12-week extension of this study [73, 74].
All in all, the evidence suggests that blue light therapy may help with acne. You may discuss with your doctor if it may work as a complementary approach in your case.
2) Sleep Quality
In a clinical trial on 33 male students, combining a breakfast rich in tryptophan with blue light exposure during the day induced melatonin secretion and improved sleep quality at night [75].
In another trial on 30 people over 60 years old, bright light therapy during the day improved insomnia. Forty-five minutes of light therapy was more effective than 20 minutes [76].
Similarly, blue light-enriched white light during daytime work hours improved sleep quality while reducing daytime sleepiness in a clinical trial on 94 people [77].
Another study on 51 people didn’t find significant changes in older adults with primary insomnia. However, it did find that bright light was able to shift the circadian rhythm [78].
Although limited, the evidence suggests that daytime blue light may improve sleep quality. You may use blue light therapy for this purpose if your doctor determines it may be helpful.
3) Seasonal Affective Disorder
Narrow bandwidth blue light out-performed dimmer red light in reversing the symptoms of seasonal affective disorder (SAD) in 5 clinical trials on 176 people suffering from this condition [79, 80, 81, 82].
Blue light was effective at lower intensity than traditional fluorescent sources in a small trial on 18 depressed people [82].
Again, limited evidence suggests that blue light therapy may help with seasonal affective disorder. Discuss this approach with your doctor and never use it in place of what your doctor recommends or prescribes.
Insufficient Evidence
The following purported benefits are only supported by limited, often low-quality clinical studies. There is insufficient evidence to support the use of blue light for any of the below-listed uses. Remember to speak with a doctor before using blue light for any of these purposes. Blue light should never be used as a replacement for approved medical therapies.
1) Cognitive Performance
In a clinical trial on 35 healthy people, exposure to blue light during daytime improved activation of a brain region (prefrontal cortex) during a working memory task [83].
In another trial on 17 people, morning blue light improved cognitive performance, mood, and general well-being [84].
Similarly, exposure to blue light during memory consolidation improved long-delay verbal memory performance in a trial on 30 people [85].
In a clinical trial on 20 preschool children, exposure to blue light-enriched light improved task-switching performance [86].
Blue light improved cognitive performance better than red light and placebo in a clinical trial on 18 elderly people in long-term care [87].
In another trial on 29 healthy elderly people, blue light from 8 to 9 AM improved cognitive function but only in women [82].
Exposure to bright light after lunch had the same benefits on cognitive flexibility as a short nap in a clinical trial on 25 people [88].
However, blue light didn’t improve reaction time in a clinical trial on 72 male athletes [89].
Although the results are overall promising, all the trials were very small and include a study with negative results. Larger, more robust clinical trials are needed to validate this potential use of blue light therapy.
2) Cognitive Impairment
Bright light combined with melatonin improved mood and attenuated cognitive decline in those with dementia in a clinical trial on 189 elderly people [90].
In a clinical trial on 20 people with traumatic brain injury, blue light therapy alleviated fatigue and daytime sleepiness [91].
Two clinical trials cannot be considered sufficient evidence to claim that blue light therapy may help with cognitive impairment. Further clinical research is needed.
3) Skin Inflammation
Blue light improved scaling, inflammation, and redness in 2 clinical trials on 57 people with psoriasis. In one of them, it was more effective than red light [92, 93].
Similarly, full-body irradiation with blue light reduced itching, recurrence, and need for corticosteroids, as well as improving sleep and quality of life, in a clinical trial on 36 people with severe eczema [94].
Again, the results are promising but the evidence comes from only 3 small clinical trials. More clinical studies on larger populations are needed to verify these preliminary results.
4) Reproductive Hormones in Women
In a small trial on 16 healthy women, exposure to blue-enriched light in the morning slightly increased follicle-stimulating hormone (FSH), compared to red light [95].
A single clinical trial is clearly insufficient to back this potential health benefit until more research is conducted.
Animal and Cell Research (Lack of Evidence)
No clinical evidence supports the use of blue light for any of the conditions listed in this section. Below is a summary of the existing animal and cell-based research, which should guide further investigational efforts. However, the studies should not be interpreted as supportive of any health benefit.
Wound Healing
In animal studies, a blue LED version of the low-level light therapy (LLLT) improved wound healing [96, 97].
In animals, daytime blue light exposure was helpful for raising melatonin at night, which significantly slowed down prostate cancer growth [98].
Note, however, that these effects may not be the same in humans. Never use blue light as a replacement for proven anticancer therapies.
How to Use Blue Light Exposure to Your Advantage
The following strategies may help you exploit the full potential of daytime exposure to blue light while avoiding the negative effects associated with this light in the evening. Discuss with your doctor if any of them may be helpful in your case.
1) Get Plenty of Sunlight During the Day
Getting more bright light during the day may lessen your sensitivity to light in the evening as well as melatonin suppression by nighttime light exposure [99].
If you work in an office and can’t get bright light naturally, bright light devices are available.
2) Use Blue-Blocking Glasses After Sunset
Red- or amber-colored glasses block blue light from entering your eyes; some, such as UVEX glasses, block only blue light. Users and manufacturers recommend wearing these glasses for 4 hours before going to sleep.
3) Use Blue Blocking or Red Light Bulbs
Switching from regular light bulbs to blue blocking or red light bulbs can reduce blue light exposure.
4) Install the F.lux program on Your Computer and Tablet
Computers, iPads, smartphones and other digital devices emit blue light. You can reduce their blue light by installing F.lux and using a blue light filter app on your smartphone. You can also use blue-blocking red sheets to cover your iPad or other screens.
In addition to cutting blue light out at night, dimming all your light-emitting devices to the lowest setting may help.
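Filters of this kind essentially shift the screen’s color balance toward red. As a crude illustration (not F.lux’s actual algorithm; the scale factors here are arbitrary), a “night mode” can be approximated by attenuating the blue and, slightly, the green channel of each pixel:

```python
def warm_filter(rgb, blue_scale=0.4, green_scale=0.8):
    """Crude 'night mode' approximation (not F.lux's actual algorithm):
    attenuate the blue and, slightly, the green channel of an (R, G, B)
    pixel with 0-255 components, shifting the color toward amber."""
    r, g, b = rgb
    return (r, int(g * green_scale), int(b * blue_scale))

print(warm_filter((255, 255, 255)))  # white becomes a warm amber tone
```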
5) Sleep in Total Darkness
Sleeping in total darkness will reduce your exposure to blue light. You can buy blackout curtains for your windows and black tape for blue-emitting electronics.
6) Remove Glasses or Contacts When You’re Outside
Glasses and contact lenses filter out UV light. For UV light to have an effect on your circadian rhythm, it needs to reach your eyes directly.
7) Supplement with Lutein and Zeaxanthin
You can take supplements with lutein and zeaxanthin, which may reduce the oxidative effects of blue light [100].
Genes that Affect Susceptibility to Blue Light at Night
Cells that detect photons produce melatonin in a circadian manner and under the influence of Bmal1 (also known as ARNTL) and CLOCK genes [101].
In a study on almost 1,200 Norwegian shift-working women, TT carriers for CLOCK allele rs3749474 had a lower incidence of breast cancer compared to the other shift workers, indicating that they were less negatively impacted by nightlight. In the same study, GG carriers for CLOCK rs11133373 had an increased incidence of breast cancer [102].
Again, in the study of Norwegian shift-working women, TT carriers for BMAL1/ARNTL rs2278749 had a reduced incidence of breast cancer compared to the other shift workers, indicating they were less negatively impacted by nightlight [102].
Women with the highest number of successive night shifts and carrying at least one variant allele of SNPs in the two core circadian genes BMAL1/ARNTL (rs2290035, rs969485) and ROR-b (rs3903529, rs3750420) had an increased incidence of breast cancer [102].
Regarding AANAT, a gene controlling melatonin production, CC carriers for rs4238989 had an increased incidence of breast cancer associated with light exposure at night, whereas GG carriers for rs3760138 had a higher incidence of breast cancer (40%) at the highest light exposure (4 shifts) [102].
A study on 18 people found that those homozygous for the PER3 5/5 allele are particularly sensitive to blue-enriched light, which suppresses melatonin production [103].
Figure sources: https://www.ncbi.nlm.nih.gov/pubmed/17160354; [104]
The SCN in the hypothalamus is a central command for sleep patterns or circadian rhythm entrainment. It is connected to the retina via the retinohypothalamic (RHT) tract. You can read more about the SCN’s involvement in the circadian rhythm here. The photopigment melanopsin, contained in the intrinsically photosensitive retinal ganglion cells of the eye, acts as an information gatherer/communicator. Melanopsin determines the body’s reflexive response to light such as pupil size change and release of melatonin from the pineal gland.
Internal body clocks are created through regulation of the genes CLOCK, ARNTL, and PER3 in neurons. These are adjusted by light input from the photoreceptors. Read How Your Genetics Affect Sleep for a more detailed description.
About the Author
Carlos Tello
PhD (Molecular Biology)
Carlos received his PhD and MS from the Universidad de Sevilla.
|
What is macrosomia?
Macrosomia means "large body" and is used to describe a newborn who's much larger than average. (The average newborn weighs about 7 pounds 3 ounces.)
There isn't a single definition of macrosomia. Historically, it has been defined as a birth weight of more than 4,000 grams (8 pounds, 13 ounces) or more than 4,500 grams (9 pounds, 15 ounces). Macrosomic babies are more likely to have a difficult delivery, but the risk of complications is significantly greater when a baby is born weighing more than 4,500 grams.
The Centers for Disease Control and Prevention estimates that 7 percent of infants born in 2018 weighed at least 4,000 grams at birth, and 1 percent weighed 4,500 grams or more.
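As a sketch, the weight cutoffs described above can be encoded as a simple classifier. This is illustrative only, not clinical guidance; the function name and labels are invented:

```python
# Hypothetical sketch: label a birth weight using the cutoffs above
# (4,000 g and 4,500 g). Not a clinical tool.

def classify_birth_weight(grams: float) -> str:
    if grams > 4500:
        # complication risk is significantly greater above 4,500 g
        return "macrosomia (higher complication risk)"
    if grams > 4000:
        return "macrosomia"
    return "not macrosomic"

print(classify_birth_weight(3260))  # average newborn, about 7 lb 3 oz
print(classify_birth_weight(4200))
print(classify_birth_weight(4700))
```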
How will I know if my baby is macrosomic?
It's difficult to tell how big your baby is while she's still in the womb, but your healthcare provider may suspect macrosomia if you're measuring large for dates. Also, because bigger babies produce more amniotic fluid, excessive amniotic fluid (polyhydramnios) might be a sign.
What causes macrosomia?
Some women are just genetically predisposed to have larger babies, and birth weight also tends to increase with each successive pregnancy.
Most women who have a baby weighing more than 4,500 grams have no risk factors, but macrosomia may be more likely if:
• You have unmanaged high blood sugar levels from diabetes or gestational diabetes
• You're obese
• You're tall
• You've gained an excessive amount of weight during pregnancy
• You've already had a large baby. If you previously delivered a macrosomic baby, you're five to 10 times more likely to have another large baby.
• You're more than two weeks past your due date
• You were large for gestational age (LGA) yourself
• You're over age 35
• You have certain genetic abnormalities or syndromes (such as Sotos syndrome or Beckwith-Wiedemann syndrome)
Also, male babies are more often macrosomic than females. And mothers who are white, American Indian, or Samoan are more likely to have macrosomic babies than women of other ethnicities. A study of mothers with gestational diabetes found that Latino women had macrosomic babies more often than black women.
How does a big baby affect delivery?
With a big baby, you have a greater chance of a difficult vaginal delivery. You may also have an increased risk of preterm birth, perineal tearing, and blood loss.
Also, if you've had a previous c-section or major uterine surgery, a large baby would increase your risk of uterine rupture, a rare but dangerous complication.
A large baby also means you're more likely to have an assisted vaginal delivery or a cesarean. Although it's difficult to determine a baby's exact size before birth, your doctor may want to schedule a c-section if you're measuring large or have other risk factors for macrosomia.
The American College of Obstetricians and Gynecologists does not recommend inducing labor early for suspected macrosomia, as it has no proven benefit.
Can macrosomia cause problems for my baby?
If your baby is macrosomic, there's a higher risk for birth injury and some complications, but most of the possible complications usually resolve with no long-term consequences.
There's a small chance of shoulder dystocia, a rare but potentially serious complication in which the baby's shoulder gets caught behind your pubic bone, causing the baby to get stuck in the birth canal during delivery.
This situation is a medical emergency. Your healthcare provider will need to do some maneuvering or perform an episiotomy to get your baby out safely.
In rare cases, your baby could end up with a broken collarbone or upper arm bone. (The treatment is to immobilize the arm as much as possible until the fracture heals.) A more serious complication of shoulder dystocia is nerve damage to the arm on the side where the shoulder was trapped.
A macrosomic baby is also at higher risk for:
• Low blood sugar
• Lower Apgar score
• Childhood obesity
• Metabolic syndrome in childhood, which can increase the risk of heart disease, diabetes, and stroke.
• Breathing problems immediately after birth
What is recovery like after giving birth to a large baby?
If you had a perineal tear or an episiotomy, be sure to follow your provider's instructions for perineal care, and watch for signs of infection.
If you had gestational diabetes, your blood glucose levels should return to normal after birth. But you still have an increased risk of developing diabetes in the future, so within a few months of your baby's birth, schedule a follow-up appointment with your provider to be tested for postpartum diabetes or other problems with glucose metabolism.
Can I prevent macrosomia?
There are things you can do to reduce the risk:
• Start pregnancy at a healthy weight. Lose weight before becoming pregnant if you're obese.
• Maintain a healthy pregnancy weight.
• If you have diabetes or develop gestational diabetes, do what's necessary to control your blood sugar. Follow your caregiver's guidelines.
Learn more:
Measuring large or small for gestational age
Birth complications
Postpartum: Normal bleeding and discharge
Pregnancy weight gain calculator |
About Engine Oils
All engine oils come in varying viscosities. Viscosity is an oil's resistance to flow, so a lower number means an easier-flowing oil. Higher-viscosity oils are thicker than lower-viscosity oils, and all oils flow more slowly at colder temperatures.
Ratings come in two basic types: single-grade and multi-grade. Single grades are called straight-weight motor oils; a typical label reads SAE30. Straight-weight oil is not commonly used in today's vehicles because of cold-weather conditions.
Multi-grade oils are typically labeled as all-season oils; an example is 5w-30. The first number is the viscosity of the oil when cold, and the second at normal operating temperature. That may sound confusing at first, since all oils flow more slowly when cold yet the first number is smaller. Let me explain...
A 5w-30 oil, when cold, has a viscosity rating similar to SAE5 cold (cold being the key word). SAE5 cold still flows more slowly than SAE30 does hot; temperature has that large an effect on oil. Once the 5w-30 oil warms up, it flows at the rate of SAE30 hot. This lets a multi-grade oil provide easier starting in cold winters while still offering thicker protection once the engine is hot.
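To make the labeling concrete, here is a small sketch that splits a multi-grade label such as "5w-30" into its cold ("winter") and hot grades. The function name is invented; it only parses the notation discussed above:

```python
# Illustrative sketch: parse an oil grade label. A multi-grade label
# like "5w-30" yields a cold and a hot grade; a single number such as
# "SAE30" is treated as a straight-weight oil (no cold grade).

def parse_oil_grade(label: str):
    label = label.strip().upper().removeprefix("SAE")
    if "W-" in label:
        cold, hot = label.split("W-")
        return {"cold_grade": int(cold), "hot_grade": int(hot)}
    return {"cold_grade": None, "hot_grade": int(label)}

print(parse_oil_grade("5w-30"))   # {'cold_grade': 5, 'hot_grade': 30}
print(parse_oil_grade("SAE30"))   # {'cold_grade': None, 'hot_grade': 30}
```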
People always ask me whether they should use 10w-30 or 5w-30. Obviously 5w-30 is better for cold climates, but otherwise it makes little difference, which is why most vehicles now simply recommend 5w-30. Mobil 1 now offers a 0w-30 oil that is perfectly fine and healthy for use in most cars.
Typical weights used today.
0w-20 5w-20
0w-30 5w-30 10w-30
5w-40 10w-40
20w-50 - Diesel or old cars/trucks
You can actually put thicker oil in an older engine to help quiet ticking noises and reduce oil consumption. If you have ever used Lucas Oil, you know how a thick oil can slow oil leaks. As an engine ages, the clearance gaps between engine parts can become enlarged, and a thicker oil may be needed to lubricate the engine properly.
Thinner oils are being used in today's vehicles because thinner oil is easier to pump and easier for the mechanical parts to move through. Thinner oils can free up horsepower and increase gas mileage. I DO NOT recommend putting a thinner oil than required into your vehicle, as engine damage may occur.
So, in conclusion, the choice is yours when it comes to using synthetic oils, conventional oils, or something in between. Just keep in mind: always change your oil regularly as advised by the vehicle manufacturer. Some recommend oil changes every 3,000 miles, while others may recommend them only every 30,000 miles. Consult your owner's manual.
There are many different types of oils on the market. They all offer certain amounts of protection, varying price, variable viscosity, and different levels of efficiency. Their main purpose is to be used as a lubricant, but also as a way to cool engine components. I am going to try and explain as simply as possible the current types of engine oil that get used in the automotive field.
Synthetics are man-made oils derived from modified petroleum components or other raw ingredients. They offer superior protection in all climates and engine operating conditions. Synthetic oils have several advantages and only one disadvantage: cost. They are extremely pricey compared with conventional or blended products.
Synthetics generally last longer and do not break down as quickly as regular oil. Their molecules are also all exactly the same size, while conventional oil's are not. That is the theory behind the claim that switching an old vehicle to synthetic will cause excessive leaks, since the uniform molecules won't clog up existing gaps - but recent studies have found little effect. I do not endorse this claim, but rumor has it that "synthetics can last for several thousand miles (12,000, for example) as long as you change the filter."
Regardless of the type of oil used, always change it regularly based on the manufacturer's recommendations. The additive packages in oil are what fail after a short while. |
The coronavirus has almost put 2020 on hold with the amount of change required in such a short amount of time, and it’s creating a lot of stress. We’re here to tell you there’s no reason to panic, but we must follow some self-imposed rules to make sure our families are safe while taking precautions to prevent this virus from spreading even further.
Above all, we must think of others in a time like this. Keep in mind: infants and the elderly are far more vulnerable to this virus, and despite being very similar to the common flu, it is far more lethal. In both Italy and China the death toll has already surpassed 3,000, and in the US we’ve had almost 200 deaths at the time of publication.
But keep in mind that panic only brings unnecessary confusion. Let’s instead arm ourselves with information and knowledge. The restrictive measures we’re taking are to protect ourselves but also to prevent the virus from spreading while a vaccine is underway.
What is the coronavirus?
The coronavirus (COVID-19) is a virus similar to the common flu that causes respiratory problems. The family of coronaviruses isn’t new, but this new strain was discovered in late 2019 in China, when the Chinese government warned global authorities to take precautions in order to avoid a pandemic.
Unfortunately, the virus has now spread to over 70 countries and caused more than 10,000 deaths worldwide. For comparison’s sake, WebMD states that over 56,000 people die from the common flu each year. This might make the coronavirus seem less serious, but keep in mind: not only is the common flu widely known and treatable, the coronavirus doesn’t yet have a vaccine, and it has only been a couple of months since it began spreading widely. The fact that it has caused over 10,000 deaths this quickly is nothing to scoff at.
The more we take precautions, the more we can help in preventing more casualties.
What are the main coronavirus symptoms?
Most notably fever and continuous coughing, but suspect any new symptoms that affect your respiratory system. Also watch out for:
• runny nose
• sore throat
• trouble breathing
• diarrhea (in some cases)
Although this seems awfully similar to a common flu, remember that the coronavirus is new and far more lethal than influenza. People over 70 years of age, pregnant women, and people with long-term respiratory problems or weakened immune systems are particularly vulnerable to this new disease.
How to protect yourself from coronavirus infection?
The main problem comes from places where lots of people gather.
• Any sort of social gathering such as meetings and parties
• Places with lots of people like markets
• Public transportation and shared rides, for now
And remember to wash your hands all the time. We’re always touching stuff and bringing our hands to our eyes, nose and mouth, and this is one of the main ways the virus can spread. Wash your hands with soap and keep a hand sanitizer ready.
Remember to clean your computer keyboard and your cellphone as well, since these are devices we use daily and they carry a lot of germs.
What to do if you or someone you know has coronavirus symptoms?
If you suspect you or someone in your family has these symptoms, do not leave the house – most people with mild symptoms can recover by themselves at home in about a week.
In general, it is agreed that the coronavirus’ risk is low for most people. The best course of action is to stay in and wait it out – leaving the house would just risk spreading the disease further.
As awkward as it might be, distance yourself from every other person in your household. Do not share cups, use your own bathroom if possible, and avoid direct contact. This is to prevent anyone else from getting sick.
However, if the symptoms worsen, you might have to see a doctor. In that case, call ahead to say you’ll be coming so the staff can prepare for your arrival. If at all possible, wear a mask to protect others on the way; if you don’t have one, keep a safe distance from others and always cover your coughs.
In the event of an emergency, call 911 and let the operator know you suspect a case of COVID-19, so the medical team can come prepared.
How soon can I leave the house after self-isolation?
If you’ve had no fever for at least 72 hours, all other symptoms have improved, and at least seven full days have passed since the first symptoms, then you’re probably fine to leave the house without risk of spreading the virus – though for the time being, staying at home is still recommended!
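The rule of thumb above can be written as a simple check. This is illustrative only, not medical advice; the function name and parameters are invented:

```python
# Sketch of the rule of thumb above: fever-free for at least 72 hours,
# all other symptoms improved, and at least 7 full days since the first
# symptoms. Illustrative only - not medical advice.

def may_end_isolation(hours_fever_free: float,
                      symptoms_improved: bool,
                      days_since_onset: float) -> bool:
    return (hours_fever_free >= 72
            and symptoms_improved
            and days_since_onset >= 7)

print(may_end_isolation(80, True, 8))   # True
print(may_end_isolation(48, True, 10))  # False: fever too recent
```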
Common misconceptions and doubts about the coronavirus
Using a mask doesn’t prevent you from getting sick
The mask is actually meant to keep people who are already sick from spreading the disease to others. While the mask helps a bit, you can still accidentally touch your own eyes and get infected regardless. It helps, but it won’t do much if you don’t take all the other precautions.
Conspiracy theories
Some people have been saying that COVID-19 was created in a laboratory, but so far there is no proof whatsoever. The family of coronaviruses has been known for a long time, and while this iteration of the virus was first documented in China, the Chinese government alerted world leaders as early as December 2019 to take precautions in order to prevent a global pandemic.
“I don’t have any symptoms, why do I need to self-isolate?”
Firstly, because there have already been several cases of people who tested positive for COVID-19 without showing any symptoms – every body reacts differently. You could end up spreading the virus even without symptoms. But most importantly, it’s good to self-isolate as a measure to protect you and your family from becoming sick in the first place.
“It’s just a common flu, so what’s the big deal?”
As stated before, though the symptoms are similar to a common flu, the coronavirus doesn’t yet have a vaccine and puts millions at risk. The virus has already caused over 10,000 deaths worldwide, and all of these preventive measures we’re taking are to buy us time to develop a vaccine and prevent further casualties. People who already suffer from respiratory problems, the elderly, children with weakened immune systems, and pregnant women cannot afford to become sick at a time like this.
Do you have any doubts or need help in determining coronavirus symptoms? Check out the CDC (Centers for Disease Control and Prevention) page for Coronavirus help and if you have a medical emergency, call 911 right away!
|
5 Interesting Facets of April's Birthstone - The Diamond
April 13, 2018
Global production of rough diamonds hit 134 million carats in 2017. Diamonds have many uses, but their best-known use is as gemstones.
Diamonds have been mined and traded since ancient times. Sought after for their beauty and other qualities, they quickly became popular around the world.
While you probably think of diamonds as being the gemstone used in engagement rings, did you know the diamond is also April's birthstone?
We're going to share some interesting tidbits about diamonds which might surprise you.
What You Didn't Know About April's Birthstone
Here are some facts about diamonds you might not know.
Diamonds Come in a Variety of Colors
While you probably think of diamonds as being clear and sparkly, some diamonds come in different colors.
Available in all colors of the rainbow, the various colors of diamond are given different meanings.
This means colored diamonds not only stand out but can convey a sentimental meaning.
Diamonds Occur in Space Too
Astronomers discovered a white dwarf star believed to be a giant diamond weighing in at 10 billion trillion trillion carats.
The star was given the name Lucy in honor of the Beatles hit song "Lucy in the Sky with Diamonds." And there are probably many more diamond stars in our universe.
In fact, some astronomers believe our sun may one day become a diamond star.
The Largest Diamond on Earth
Found in South Africa in 1905, the Cullinan diamond weighed 3,106 carats. Gifted to King Edward VII, the diamond was cut into pieces.
The remnants created nine large diamonds and 100 smaller ones. The three largest of these diamonds became part of the crown jewels and are on display in the Tower of London.
More Than Just Beauty
While people have valued the aesthetic qualities of diamonds for centuries, they were also valued for other reasons.
In the Middle Ages, diamonds were thought to have powerful healing properties which could cure diseases and other ailments. Some believed diamonds could ward off evil spirits and wore them as talismans.
Other cultures believed diamonds gave the wearer special courage and strength making the wearer invincible. For this reason, kings wore diamonds on their armor to protect them during battle.
Diamonds Form from a Single Element
Made from 100% carbon, diamonds form under the incredible heat and pressure far beneath the earth's crust. Most diamonds range in age from 1 to 3 billion years old.
The heat and pressure help the carbon atoms bond into the rare crystalline structure that characterizes a diamond. Formed 100 miles or more below the earth's surface, diamonds are carried to the surface by volcanic eruptions.
Less than 20% of all diamonds found worldwide are suitable for use as gemstones.
What's in a Name?
The word diamond originates from the Greek word "adamas," which means unconquerable and indestructible.
This is appropriate given diamonds are the hardest natural substance found on earth. Incredibly durable, they have a melting point of 6420 degrees Fahrenheit.
Beauty and History
Diamonds have been cherished for their beauty and other natural qualities for centuries. This makes April's birthstone one of the most popular gemstones.
While diamond jewelry styles change over time, the beauty of a diamond never goes out of style.
Want to know more about why a diamond is a perfect gemstone for you? Contact us today and we'll be happy to help answer your questions and help you find the perfect diamond. |
In 1905, streetcar-style service began for Blithedale Canyon residents. The Lee Street Local ran on a mile of mountain railway tracks from Lee Street to the depot where it connected with trains to Sausalito. There were stops at Bigelow (now Eldridge), the Blithedale Hotel, King, Marsh and Lee. The full ride took nine minutes and initially cost five cents, raised to ten cents in 1917. Service ended in 1927, two years before the disastrous fire of 1929 that led to the demise of the mountain railway. The original train was an open, six-passenger gasoline rail car dubbed the Black Maria. In 1906, it was replaced by a more economical steam engine pulling a wooden coach and dubbed the Dinky. It made 18 round-trips a day. In 1916 a gasoline-powered Kissel Kar with a passenger compartment specially built in San Francisco replaced the Dinky. The Kissel Kar was never as popular as the Dinky. It had a nasty temper on cold mornings and soon lost the confidence of commuters who preferred to walk rather than put up with its temperamental service. |
Americans today pause for Memorial Day.
Or perhaps they don't.
Far from being a somber day on which to remember the ultimate sacrifice that some Americans have made, Memorial Day has instead become almost a celebratory holiday, one that marks the unofficial beginning of summer - a day to barbecue and hit the lake.
Other Western countries have a similar holiday, except they call it Remembrance Day. In many of them it falls in the gray onset of winter, on November 11, an outgrowth of the Armistice Day that marked the end of the great horror of World War I.
We have a different date - a late spring date - for a very specific reason: This is when flowers are in bloom.
Our Memorial Day began as Decoration Day, a day set aside to decorate the graves of veterans.
Like many other things now part of American society (rural mail delivery and standardized sizes for clothing, among them), the custom of setting aside a specific day to decorate veterans' graves grew out of the mass slaughter of the Civil War.
It sprang up spontaneously enough that many communities lay claim to being the first to initiate the practice.
The town of Warrenton, Virginia, was astonished - and grief-stricken - when one of its sons was killed in a skirmish in June 1861. Captain John Quincy Marr was interred in the local cemetery amid much pomp and circumstance - "wept over by the old and young; flowers strewn on his grave," the Richmond Times-Dispatch later recalled.
It turns out that was only the beginning of the horror. The next month, a full-scale battle raged on nearby Bull Run - what we remember now as the first Battle of Manassas. Many of the Confederate dead were taken to nearby Warrenton for burial, and the women and children of the town sought to do for all what they had done for Captain Marr. At first, only crude headboards marked the graves and the names were soon weathered away. "A band of children, none of them over sixteen, determined to replace these boards," the Times-Dispatch reported.
Two years later, the correspondent recalled, "the Yankees made a raid through our town and camping near the graveyard, they burned the headboards to make their camp fires; but as soon as the spring flowers came, we placed the blossoms on these graves, and each year continued our memorial work."
Other communities claim different origin stories - Boalsburg, Pennsylvania; Waterloo, New York; Charleston, South Carolina; Carbondale, Illinois; and Columbus, Mississippi. All generally have one thing in common. They were Decoration Days, timed to when the flowers were in bloom. The exception was Columbus, Georgia, which presciently called its event Memorial Day.
The book "Decoration Day in the Mountains" says that in Appalachia, Decoration Day wasn't even veteran-specific, but grew out of a wider community custom. Authors Alan Jabbour and Karen Singer Jabbour write that in the mountains, funerals were usually conducted quickly, without much ceremony. There was a difference between "funeralizing" and "memorializing."
Once a year, though, the entire community gathered to decorate everyone's graves. This often happened "when the snowballs are in bloom," meaning the flowering snowball bushes.
Whatever the disparate popular origins, there is a single, more specific formal origin for the holiday we officially observe today: In 1868, the commander of the Union veterans' organization called for the annual observance of a national Decoration Day.
That commander was John Logan. His wife later wrote in her memoirs that the former Union general (and Illinois congressman) was specifically trying to emulate the South, where the practice was most prominent: "He said it was not too late for the Union men of the nation to follow the example of the people of the South."
The date May 30 was chosen for apparently two reasons: It did not coincide with the date of any specific battle and, yes, because flowers would be in bloom. Despite Logan's fraternal intentions of postwar harmony, some in the South felt the North was co-opting the Memorial Day that Columbus, Georgia, had started - that event was quickly appended to become Confederate Memorial Day.
Over time, though, the national Decoration Day started turning into a more generic Memorial Day in the late 1800s, although it was not given that official name by Congress until the surprisingly late date of 1967.
Today, the original name has faded away, and Logan's name is forgotten altogether - except in his native Illinois, where he is specifically mentioned in the official state song:
"On the record of thy years,
Abraham Lincoln's name appears,
Grant and Logan, and our tears,
Illinois, Illinois,
Grant and Logan, and our tears."
Today, before you put the burgers on the grill, you might want to take time to do what this day was originally set aside to do: Take some flowers and put them on the grave of someone who died in the country's service, and remember that they died for you. |
How Ethernet Can Secure The Connected Car
In-car networks could become the next favorite target for hackers. Ethernet offers many options to protect the connected car from malicious attacks.
In-car networks are increasingly being designed-in and deployed to connect systems such as infotainment, driver assist, autonomous driving and safety systems, often on shared, high-bandwidth infrastructures. These networks, and the devices that connect to them, require diagnostics and service through external interfaces. Additionally, more and more of today’s connected cars are equipped with Internet access, and oftentimes a WLAN, to communicate with devices inside and outside of the vehicle.
Consequently, the connected car could also become a prime target for hackers. Using just a laptop or tablet, hackers have the potential to take control of the electronics in your car. There is already research today that documents and demonstrates such attacks with alarming consequences.
In contrast to traditional IT networks, the in-car network is mass-produced and physically insecure. So, with access to a mass-produced vehicle and the appropriate time and resources, a hacker can develop a set of “attacks” against the vehicle and then distribute those attacks through an entire fleet. In other words, a single, well-engineered attack could have a wide impact.
Figure 1. The connected car is vulnerable to attacks at many different entry points into the network via firmware corruption or through an Ethernet on-board diagnostics port, Ethernet port access or gateway device. The types of attacks that can occur include network control (hackers install or corrupt a device on the network so they can control the operation of other devices), denial of service, and snooping (information theft).
Increasingly, Ethernet is being designed into in-car networks because of its high bandwidth, price-performance, ubiquity, and future technology roadmap, while new standards such as single twisted-pair and Audio Visual Bridging (AVB) are opening up many new automotive use cases. Ethernet’s already in some vehicles today.
By 2020, Frost & Sullivan estimates, most cars will have 50 to 60 Ethernet ports, with premium vehicles pushing that number toward 100. Even entry-level vehicles are expected to get in on the action with roughly 10 Ethernet ports.
Ethernet, particularly switched Ethernet, has been deployed in IT environments for several decades and has a long history of standards and solutions that can help secure the network.
To better understand how Ethernet can help secure the connected car, it’s important to first understand some basics about the technology. As shown in Figure 2, Ethernet uses a standard packet format that includes a source and destination address, a VLAN tag and a Frame Check. This provides a basic level of authentication, isolation and data integrity. The addresses can be globally unique or locally administered (given that the in-car network is mostly a closed network).
The Ethernet switches provide traffic isolation and filtering using a Filtering Database (FDB) or Multicast Forwarding Database (MFDB), and can act as management points for further network control. A rich set of statistics standards enable anomaly monitoring in software.
Figure 2. The Ethernet frame's header contains destination and source MAC addresses as its first two fields and a cyclic redundancy check (CRC) to verify packet integrity. It may also contain a VLAN tag, which defines a system and procedures to be used by bridges and switches to support VLANs.
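As a rough illustration of the frame layout in Figure 2, the sketch below parses the destination and source MAC addresses, an optional 802.1Q VLAN tag, and the EtherType from a raw frame. Python is an assumption here, the field offsets follow the standard Ethernet II / 802.1Q layout, and the sample frame bytes are invented:

```python
# Sketch: pull the header fields named in Figure 2 out of a raw
# Ethernet frame - destination MAC, source MAC, optional 802.1Q VLAN
# tag (TPID 0x8100), and EtherType. Assumes a well-formed frame.
import struct

def parse_ethernet_header(frame: bytes) -> dict:
    dst, src = frame[0:6], frame[6:12]
    (tpid,) = struct.unpack("!H", frame[12:14])
    vlan_id, offset = None, 14
    if tpid == 0x8100:                       # 802.1Q VLAN tag present
        (tci,) = struct.unpack("!H", frame[14:16])
        vlan_id = tci & 0x0FFF               # low 12 bits = VLAN ID
        (ethertype,) = struct.unpack("!H", frame[16:18])
        offset = 18
    else:
        ethertype = tpid
    return {"dst": dst.hex(":"), "src": src.hex(":"),
            "vlan_id": vlan_id, "ethertype": hex(ethertype),
            "payload": frame[offset:]}

# A minimal tagged frame: broadcast dst, invented src, VLAN 5, IPv4 type
frame = (bytes.fromhex("ffffffffffff") + bytes.fromhex("020000000001")
         + struct.pack("!HHH", 0x8100, 5, 0x0800) + b"payload")
print(parse_ethernet_header(frame)["vlan_id"])  # 5
```

(The frame check sequence at the end of a real frame is typically stripped by the NIC before software sees it, so it is not parsed here.)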
Switched Ethernet offers a base level of security protection, but more is needed, and many additional features have evolved and are widely supported in Ethernet standards and/or products. Because the in-car network is typically highly-engineered and static with predictable traffic characteristics, it offers the opportunity to tightly configure and constrain the network operation according to design intent.
For instance, there are several ways to control the scope of network traffic and in turn, the potential for snooping and attack. One approach uses VLANs to create multiple broadcast domains within the physical network (see Figure 3); this is already broadly deployed and supported by Ethernet switches. Using VLANs, you can isolate traffic of different types on the shared physical network such that devices can only talk to the other devices within their domain. For example, one VLAN can be configured for Infotainment while a separate one can be configured for driver assist and another for safety.
Network isolation between the two can be enforced by the Ethernet switches. Traffic isolation also can be achieved within each VLAN through the use of unknown unicast or multicast filtering. Rogue stations and MAC spoofing can still occur, but techniques such as static provisioning of the FDB, port MAC locking, and implementation of software learning limits can all be used to mitigate this risk.
Figure 3. VLANs can be used to limit the scope of traffic and mitigate the risk of attack. Note that no connectivity exists between the VLANs themselves without a router.
In addition, access control lists (ACLs) can reduce the scope of traffic and are particularly well suited for the in-car network because of the opportunity to design in knowledge of expected device and network behavior. ACLs provide precisely configured match-action rules for packet forwarding that define which stations can transmit and where the traffic is allowed to go.
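A match-action rule set of this kind can be expressed as simple data. In the sketch below, the MAC addresses, VLAN numbers and policy are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AclRule:
    src: str        # source MAC, or "*" wildcard
    dst: str        # destination MAC, or "*"
    vlan: int       # VLAN the rule applies to; -1 means any
    action: str     # "forward" or "drop"

def evaluate(rules, src, dst, vlan):
    """First matching rule wins; default-deny if nothing matches."""
    for r in rules:
        if (r.src in ("*", src) and r.dst in ("*", dst)
                and r.vlan in (-1, vlan)):
            return r.action
    return "drop"

# Hypothetical policy: one ECU may talk only to one other ECU, on VLAN 20
rules = [
    AclRule("02:00:00:00:00:10", "02:00:00:00:00:20", 20, "forward"),
]
```

Because the designer knows every legitimate traffic flow in advance, the rule list can enumerate exactly those flows and drop everything else.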
Speaking at a conference in Arizona, Microsoft’s COO Kevin Turner spoke quite frankly about the rapidly changing tides of the PC market, and how ultimately Microsoft has lost a big chunk of its money-making potential. “The first 39 years of our company, we had one of the greatest business models of all time built around … the Windows client operating system,” said Turner, and then spoke about how Microsoft is pivoting to become a cloud- and devices-oriented company.
Later, Turner was asked about whether Microsoft intends to use Windows 10 as a loss-leader to keep users within the Windows ecosystem, which prompted this very interesting response: We plan to "monetize the lifetime of that customer through services and different add-ons that we're (going) to be able to incorporate with that solution." That isn't quite confirmation that Microsoft is moving Windows 10 to a subscription-based model — but it certainly sounds like subscriptions will play a key role in developing new revenue streams. (Turner said more details about Windows 10 pricing will be available next year.)
How to Install a Security/CCTV System for Your Home
Step 5: Designate one location where your DVR and monitor will be stationed.
Step 10: It is recommended that the DVR be connected to your monitor before turning on the power supply.
Step 11: This can be an easy and entertaining project once you have the appropriate tools and information.
Install and configure Remote Desktop Service in windows server 2012
In this blog we will install and configure the Windows Server 2012 Remote Desktop Services role. We will configure the website for RDWEB access and also launch RemoteApp applications locally through the Remote Desktop client. Before we start, bear in mind that RDS is not supported on a domain controller; it may work, but you may come across lots of issues while installing. Also, if you plan to connect to applications through the Remote Desktop web site (RDWEB) and do not want an annoying certificate error for your users, then you will need a certificate which matches the A record you want to hit externally.
Windows 2012 Install Remote Desktop Services
As with all other roles, we first launch Server Manager so we can install the Remote Desktop Services role. Once launched, select "Manage" from the top right-hand corner and select Add Roles and Features, as seen below.
add roles and features server 2012
You will now see the standard welcome splash screen; click Next to continue. On the next screen you choose the installation type. Select Remote Desktop Services installation, then click Next.
remote desktop services installation
In my environment I will be running a single server. As you can see, there is a wizard for this called "Quick Start"; select this option to continue.
Single server remote desktop
In Remote Desktop Services 2012 you get the option of deploying full virtual desktops with their own applications, or traditional session-based desktops that can be published via a web page or via RemoteApp. Here we are deploying a session-based environment. Select this option and continue.
Remote desktop session based
The following screen states that it will install all of the required roles on one server. In a multi-server environment you create a pool and can select which role is installed on each server; you can also load balance if your environment is a large Remote Desktop environment. In this deployment all the roles are on one server. Click Next.
Remote desktop services pool
You will now see the summary screen. To start the installation you must tick the box to accept that the server will reboot; do so and click Deploy.
The server will now go away and install the roles. Once done, click Close; the server will reboot. Upon reboot, Remote Desktop Services will continue to install. Once done, close the screen.
VNC Server and its configuration in Linux
Installation of VNC server
# yum install tigervnc-server
The above command will install the VNC server on your system.
Setting up VNC session
To configure VNC for a user "sam" on display 2 (matching the examples below), insert the following line into the /etc/sysconfig/vncservers file:
VNCSERVERS="2:sam"
[root@sdc ~]# su sam
[sam@sdc root]$ vncpasswd
[sam@sdc root]$
# service vncserver start
Connecting to VNC server
$ vncviewer sdc.server:2
Connected to RFB server, using protocol version 3.8
Performing standard VNC authentication
Authentication successful
4.1. Connecting to VNC server via SSH tunnel
$ vncviewer -via user@sdc.server localhost:2
First, you will be prompted for a password:
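The -via option has vncviewer build the SSH tunnel for you. The manual equivalent relies on VNC's port convention (display :N listens on TCP port 5900 + N) and could look like the sketch below, where the user account is a placeholder:

```shell
# VNC display :N listens on TCP port 5900 + N
DISPLAY_NUM=2
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "display :${DISPLAY_NUM} -> port ${VNC_PORT}"
# Forward the port over SSH, then point the viewer at the local end:
#   ssh -f -N -L ${VNC_PORT}:localhost:${VNC_PORT} user@sdc.server
#   vncviewer localhost:${DISPLAY_NUM}
```

Tunnelling this way keeps the VNC traffic encrypted even though the RFB protocol itself only authenticates, rather than encrypts, the session.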
At a meeting last Thursday of the Ethernet Alliance, an industry group that promotes IEEE Ethernet standards, three major new projects were up for discussion.
To meet immediate demands in cloud data centers, there’s a standard in the works for 25Gbps (bits per second). For the kinds of traffic expected in those clouds a few years from now, experts are already discussing a 50Gbps specification. And for enterprises with new, fast Wi-Fi access points, there may soon be 2.5Gbps Ethernet. That’s in addition to the next top speed for carrier backbones and moves to adapt the technology for use in cars.
These efforts are all meant to serve a growing demand for Ethernet outside the traditional enterprise LANs for which it was originally designed. That means solving multiple problems instead of just how to get ever more bits onto a fiber or copper wire.
“What I’m hearing is lots of diversity. Lots of diversity in need, lots of diversity for the future,” Ethernet Alliance Chair John D’Ambrosia said part way into the daylong meeting in Santa Clara, California. “We’re moving away from an ‘Ethernet everywhere’ with essentially the same sort of flavor.”
The EA’s annual Technology Exploration Forum is a venue for discussing the kinds of technical details that many participants will go on to debate in various task groups of the IEEE 802.3 Working Group, which sets the official standards for Ethernet. Optical and electrical signaling, fiber strands and copper wires, processing power, energy consumption, heat, cost, and other issues all come into play in determining what to build and how.
Without diving too deep into those details, here are some of the new technologies brewing in Ethernet.
1. 25-Gigabit
A 25Gbps standard may seem like a step backward, because 40-Gigabit and 100-Gigabit Ethernet already exist. But in fact, it’s all about the need for more speed, specifically from servers in cloud data centers. Google and Microsoft are the biggest buyers of Ethernet now, largely because their cloud operations require so much data exchange between servers, according to Dell’Oro Group analyst Alan Weckel.
The key to 25-Gigabit Ethernet is that many of the components that could go into it are already developed: The 100-Gigabit standard is made up of four “lanes” of 25Gbps, so many of the same parts go into that high-end gear. That should mean higher production volumes for parts that go into both technologies, driving prices down.
Rallying around 25Gbps also gives network architects a logical way to build their data centers, with servers linking to switches at 25Gbps and the switches aggregating those connections into 100-Gigabit uplinks, Weckel said. That four-to-one ratio is what they’re used to working with.
“Right now, all clouds are greenfield, but as the cloud matures, and actually has a real business model and has to actually talk to Wall Street and explain the billions of dollars that they spend on every data center, you’re going to see reuse become very important,” Weckel said.
By contrast, 40-Gigabit Ethernet is made up of four lanes of 10-Gigabit Ethernet, a technology that the cloud giants are now outgrowing, Ethernet Alliance’s D’Ambrosia said. They need more than 10Gbps for each server, even as average enterprises start to connect more servers at that speed.
Google, Microsoft and several prominent networking vendors formed a group in early July to promote standardization of 25Gbps and 50Gbps Ethernet, saying they couldn't wait for the IEEE to finish a standard. Later that month, the IEEE started its own 25Gbps task group and said it might be done in as little as 18 months. On Thursday, D'Ambrosia said he doesn't necessarily agree with that forecast but he's optimistic. "Consensus is forming quickly in the industry," he said.
2. 50-Gigabit
At Thursday’s event, attendees debated whether to seek a 50Gbps standard or go all the way to a single-lane system for 100Gbps. A 50Gbps specification is more within reach, said Chris Cole, director of transceiver engineering at Finisar. For a 100Gbps standard today, “you’re pushing the components,” Cole said. He expects to see standard 50Gbps products starting in 2016.
3. 2.5-Gigabit
It may not sound very fast, but 2.5-Gigabit Ethernet might help companies fill their buildings with very fast Wi-Fi. It’s being proposed specifically as a tool to help enterprises’ wired infrastructure keep up with wireless access points that increasingly form the edge of those networks.
Upgrading to 10-Gigabit Ethernet would give networks plenty of bandwidth, but most companies don’t have the right kind of cable to do that, Dalmia and other participants said. A 2.5Gbps version of Ethernet would work on commonly used Category 5e and Category 6 cable over the standard distance of 100 meters, so users could go beyond Gigabit Ethernet without the cost of pulling new cable.
Aquantia is already producing silicon for Ethernet gear that can run at 2.5Gbps or 5Gbps. The process of setting a 2.5Gbps Ethernet standard, which might also involve 5Gbps capability, is expected to begin at an IEEE meeting next month.
4. 400-Gigabit
Ethernet’s backers haven’t given up on reaching a new top speed, either. An IEEE task group is already working on a 400-Gigabit Ethernet standard, which is currently projected for completion in March 2017. The fast links might use multiple lanes of either 50Gbps or 100Gbps. Once finished, the superfast technology would be destined for the cores of service-provider networks.
How to secure your cloud database in an insecure world
There is a division of responsibility when you put your database to work in the cloud. An infrastructure as a service (IaaS) provider, such as IBM SoftLayer, secures the physical components while responsibility to secure information rests with the application developer. Of course, the software as a service (SaaS) vendor must provide the developers and technology to secure the application, and the service must run on a platform that supports security as a fully integrated stack and not as an add-on layer. IBM Bluemix is a platform as a service (PaaS) that provides functional, infrastructure, operational, network and physical security for the core platform.
By default, a database uses unencrypted connections between the client and the server. This means that someone with access to the network could watch all your traffic and look at the data being sent or received. They could even change the data while it is in transit between the client and the server.
When you need to move information over a network in a secure fashion, an unencrypted connection is unacceptable. Encryption is necessary to make any kind of data unreadable. Encryption algorithms must include security elements to resist many kinds of known attacks, such as attempts to change the order of encrypted messages or replay data twice.
The IBM Analytics Warehouse for Bluemix is already configured for a secure connection using a Secure Sockets Layer (SSL) certificate. SSL is a protocol that uses different encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect any data change, loss or replay. SSL also incorporates algorithms that provide identity verification using the X509 standard. X509 makes it possible to identify someone on the Internet. It is most commonly used in e-commerce applications.
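On the client side, Python's standard ssl module shows what establishing such a connection involves. The host and port in the sketch are placeholders, and "SSL" here refers in practice to its modern successor, TLS:

```python
import socket
import ssl

# Build a client-side TLS context that verifies the server certificate
# against the system trust store (a service-supplied CA bundle could be
# loaded instead with ctx.load_verify_locations).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols

def open_secure_connection(host: str, port: int):
    """Return a socket whose traffic is encrypted and whose peer is verified."""
    raw = socket.create_connection((host, port))
    # server_hostname enables the X.509 hostname check described below
    return ctx.wrap_socket(raw, server_hostname=host)
```

With verification enabled, a connection only succeeds if the server presents a certificate chaining to a trusted CA and matching the requested hostname.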
In basic terms, there should be a certificate authority (or CA) that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys, a public key and a secret key that is held by the owner. A certificate owner can show the certificate to another party as proof of identity. Any data encrypted with the public key can be decrypted only by using the corresponding secret key.
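The public/secret key relationship can be demonstrated with textbook RSA. The primes below are toy-sized, so this is strictly a classroom illustration of the asymmetry, never usable encryption:

```python
# Toy RSA with textbook-sized primes -- illustration only, never real security.
p, q = 61, 53
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # secret exponent: modular inverse (Python 3.8+)

encrypt = lambda m: pow(m, e, n)   # anyone holding (e, n) can encrypt
decrypt = lambda c: pow(c, d, n)   # only the holder of d can decrypt
```

Data encrypted with the public pair (e, n) round-trips only through the secret exponent d, which is exactly the property a certificate owner exploits to prove identity.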
In Bluemix, the Analytics Warehouse service provides a rich set of built-in security capabilities to help clients meet their security, privacy and compliance needs. They include:
• Encryption for data at rest: By default, the Analytics Warehouse service in Bluemix uses an encrypted database. The encryption uses the Advanced Encryption Standard (AES) in cipher block chaining (CBC) mode with a 256-bit key. Encryption and key management are totally transparent to applications and schemas. Additionally, the service administrator manages the master key rotation period. Database and tablespace backup images are automatically compressed and encrypted. As with online data, backup images are also encrypted using AES in CBC mode with 256-bit keys. Data is compressed first and then encrypted.
• Encryption for data in transit: SSL is supported for safeguarding both the database traffic as well as the web console traffic.
• Trusted contexts: This feature allows clients to further restrict when a user can exercise a particular privilege. For example, a client can easily implement a rule that permits connecting to the database only from a given IP address. Additionally, for three-tiered applications, trusted contexts allow the mid-tier application to assert the end user identity to the database for access control and auditing purposes.
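Trusted contexts themselves are defined inside the database, but the effect of the IP-address rule described above can be sketched with a simple allowlist check (the subnet is a documentation example, not a real deployment value):

```python
import ipaddress

# Hypothetical rule: connections may only originate from this client subnet.
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def connection_allowed(client_ip: str) -> bool:
    """Return True only if the client address falls inside an allowed network."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_NETWORKS)
```

The database enforces the equivalent rule server-side, so a stolen credential is useless from an address outside the trusted context.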
The Analytics Warehouse service is primarily used in two different ways.
• Application developers and data scientists launch the web-based console to develop a statistical and predictive analytic application using built-in R and R-studio features.
• Application developers and data scientists use their own machine learning algorithm to develop an application in the language of their choice and then use the Analytics Warehouse database to push that application to Bluemix.
SecureBLU is an application hosted on Bluemix that demonstrates the approaches an application developer can take to secure an application while accessing a database in the cloud.
You can’t afford to ignore the internet of things
Most companies have websites. If they didn’t, we’d question their viability. Fast forward 15 years from now, and we’ll be even more shocked when we hear of companies operating outside of the IoT. However, the concept of an IoT revolution is not just an extension of the internet revolution; this convergence of the physical and digital worlds has the potential to transform industries, and our lives, on a greater scale than the internet has.
The early concept of the IoT was a system that connected objects in the physical world to the internet via sensors that gathered and reported data to a central location. That was in 1999, when Kevin Ashton, co-founder of the Auto-ID Center at MIT, proposed the term “internet of things”. Since then we have seen the machine-to-machine era, where devices began to communicate with other devices. Today, the IoT is connecting businesses, people and technology in real time, all the time. It is reshaping businesses across every sector of the economy and every industry.
From connected jet engines that reduce unplanned downtime, to connected vending machines that ensure the most in-demand beverages are always perfectly chilled and stocked, IoT is changing business models, customer relationships and organizational structures. Interestingly enough, the value being created does not come from the jet engine or the vending machine, but from the experiences and benefits that those connected devices enable. In other words, setting aside the hype around the latest IoT gadgets, the internet of things isn’t about the “things”. It’s about service. And that idea is revolutionary.
The IoT service opportunity
Connected services are not just forward-looking business opportunities: they are imperative now. Companies can’t afford to sit back and wait. In fact, 95% of chief experience officers told The Economist Intelligence Unit that they expect to launch IoT businesses in the next three years. Becoming an IoT business benefits a company in three fundamental ways: it brings the company much closer to its customers, providing a deeper, richer understanding of their wants and needs; it automates manual processes, directing focus on the most valuable parts of the operation; it brings new revenue streams and pricing strategies and makes the company’s business model more efficient. The model evolves from individual, one-time product sales to connected services that generate recurring revenue.
The automotive industry is a prime example of how extending the digital world into the physical world can unlock added value for customers and lucrative new sources of revenue for enterprises. For instance, General Motors no longer just sells cars. The company is at the cutting edge of user experience and new business models that allow it to connect to its customers in real time. It offers services through its vehicles that immediately detect when you’ve been in an accident and connect to emergency services to dispatch help. The vehicle becomes a WiFi hotspot for internet access and streaming content. By 2015, all GM vehicles in the United States and Canada will have 4G LTE technology built in, allowing passengers to use in-car apps, stream music and more. GM’s connected car strategy includes – but is not limited to – a Chevy app store that will let car owners download applications to the centre screen of a vehicle dashboard, a music app called Slacker Radio that provides more than 13 million songs, and an app called Glympse that lets drivers share real-time movement with friends. That doesn’t include other exclusive apps in the works, like Vehicle Health, which offers detailed information about vehicle performance.
GM and other car companies have evolved into service providers. Innovators like these industry leaders realize that drivers and passengers not only want a reliable, comfortable vehicle, but also services to enhance their driving experience. By capitalizing on IoT, these original equipment manufacturers are able to provide remote diagnostics, maintenance, software updates, weather and traffic services, and much more. Not only do customers enjoy a connected experience, but the carmaker also improves its business.
This shift in business value from products to services is inspiring a wide variety of industries to redefine how they do business. For instance, Allstate, a US-based insurance company, is using connected devices to provide a usage-based insurance service called Drivewise®. In-vehicle connectivity enables Allstate to collect information on safe driving behaviour and reward drivers with preferred rates. These predictive insights replace guesswork and translate into higher customer acquisition and loyalty. Heineken, Europe’s largest brewer, has connected commercial kegs to deliver information that enables distributors and retailers to check the volume in kegs in real time. This provides the visibility necessary to make informed decisions regarding inventory planning and management. The system can also be used to report on product age and verify that kegs are being stored at the correct temperature, immediately alerting retailers and their suppliers to any issues that could compromise product quality. This capability gives establishments peace of mind, minimizes product waste and ensures that patrons receive the best possible experience.
Thousands of enterprises across dozens of industries are transforming their businesses into service businesses. Connecting a business to the IoT touches every part of the company and reshapes it for the better. The economic benefits of this transformation are profound. But how do enterprises get there?
Taking the first steps
Becoming an IoT service business unlocks incredible benefits, but it also comes with unique challenges. The IoT is a direct, always-on connection between your business and the rest of the world. When products are connected in real time, all the time, businesses are able to deliver an amazing array of new experiences to their customers. However, doing so will also fundamentally change how they operate, interact with those customers and make money. Companies must shift their focus from product-centric to service-centric business models.
For most businesses, the IoT is completely new territory, and the pace of innovation is incredible. Enterprises looking to capitalize on the IoT can’t afford to waste any time. They can learn from and emulate the handful of IoT success stories that have emerged recently, but if they want to lead in their own industries, they’ll need to move quickly to deploy their own IoT initiatives.
Navigating this kind of transition requires new business models and operating structures. Enterprises will also need to develop resources, expertise and alliances that enable them to manage and monetize these new services and relationships. They will also need capabilities that are critical to all successful IoT businesses, such as remote service management, customer engagement, support diagnostics and billing. And finally, they will need a way to automate these actions in real time and at scale, in order to thrive and grow in the IoT space.
Meet the new best friend of IoT businesses: automation
Arguably one of the most valuable differentiators for a connected enterprise is automation. It gives businesses the ability to not only gather information but to convert that information into insights and then use those insights to take action in real time. Imagine you’re running a connected ice cream vending machine company. Think of what you would want to monitor and control: inventory, temperature, coin jams, maintenance, etc. If the temperature rises too high, the ice cream melts, the quality of your service is compromised and you risk losing not only sales but your reputation as well. However, with automation, you can anticipate these types of risks and programme responses to immediately address the issues before they become problems. Temperature outside of acceptable standards? That information is immediately conveyed and the system automatically triggers necessary responses (e.g. in-machine temperature adjustment or a service call).
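The programmed response described above amounts to a threshold rule plus an action queue. A sketch follows, with the temperature band and escalation margin invented for the example:

```python
# Hypothetical automation rule for the connected vending machine example.
ACCEPTABLE = (-25.0, -15.0)   # assumed safe range for ice cream, in degrees C

def check_temperature(reading_c, alerts):
    """Append automated responses when a reading leaves the acceptable band."""
    low, high = ACCEPTABLE
    if not (low <= reading_c <= high):
        alerts.append(("adjust_cooling", reading_c))   # in-machine adjustment
        if reading_c > high + 5:                       # badly out of range:
            alerts.append(("dispatch_service_call", reading_c))
    return alerts
```

A reading inside the band produces no action; a mild excursion triggers an in-machine adjustment; a severe one also dispatches a service call, all without a human in the loop.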
Whether you can get your favourite ice cream flavour is not a life or death situation. But with medical care it often is. In the world of healthcare, every second matters. Getting information in real time and responding equally fast is crucial. Take Boston Scientific, producer of a connected pacemaker, for example. The remote patient management system used with these devices showed a 33% relative reduction in the risk of death in patients who were remotely monitored compared to patients who were not. Additionally, these patients experienced a 19% relative reduction in hospitalizations for any cause.
The top 10 Internet and technology trends for 2014
1. The Internet of things
Little by little, all electronic devices are connecting to the Internet. It started with personal computers, notebooks, tablets and mobile phones. Then it was TVs, cars, glasses and watches. What’s next? Our homes. Fridges, keys, the heating, the electric meter and hoovers will all be connected. These new smart devices will not only be connected, but will also offer contextual relevance and a user-friendly experience, so are expected to have higher adoption rates, connecting every part of our lives.
2. Wearable devices
Wearable technologies will be everywhere in 2014. From activity bands to smart helmets and from smart clothing to Google Glass, we will be fully equipped with interconnected devices powered by the Internet. Wearable watches are already on the market, despite the limited functionality of the first-generation technology. Google Glass will get closer to the consumer, initiating a new trend where Web-based information is closely tied to the user’s daily habits, work and activities. Technology will also merge with clothing. Smart shoes from Adidas, for example, will have an integrated accelerometer, a gyroscope and Bluetooth, with the aim of motivating the user to exercise.
3. Augmented reality
Technologies that augment reality – connecting the physical and the virtual world – seemed super futuristic yesterday, but will soon become reality. In doing so, they will open up great opportunities for user engagement. We will basically be able to turn the whole world into a digital space where we can use the power of technologies to discover new digital horizons. The market for augmented reality mobile apps is predicted to grow this year and, according to Juniper Research, revenues will reach $5.2 billion by 2017.
The launch of Google Glass in 2014 will lead to even further growth in the market. It will shorten the distance between users and technology, allowing them to augment their everyday activities, such as watching a video on the cover of a newspaper or buying items by simply scanning them from a magazine.
4. Big data and machine learning algorithms
Traditional analytics will become obsolete. The future of technology will be shaped by machine learning algorithms – algorithms that are able to learn from the data they process and can be trained to improve as they process more data.
Machine learning is already a huge part of our lives, from filtering spam e-mails to providing relevant searches on the first page of our Google search. With the exponential growth of data, simple data analysis will no longer provide value. Real value will come with the application of machine learning algorithms that not only analyse but also predict and suggest, leading to tremendous opportunities for real-time engagement. Personalized e-commerce and mobile shopping, personalized information and business-to-business intranet portals will make information easily accessible. Machine learning technologies will provide the information you need when it is needed.
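The idea that an algorithm improves as it processes more labelled data can be shown with a deliberately tiny word-frequency filter; the training sentences below are made up for the example:

```python
from collections import Counter

class TinySpamFilter:
    """A minimal word-frequency learner: more labelled data -> better guesses."""
    def __init__(self):
        self.spam, self.ham = Counter(), Counter()

    def train(self, text, is_spam):
        (self.spam if is_spam else self.ham).update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        spam_score = sum(self.spam[w] for w in words)
        ham_score = sum(self.ham[w] for w in words)
        return spam_score > ham_score   # True means "looks like spam"

f = TinySpamFilter()
f.train("win free money now", True)
f.train("meeting agenda for monday", False)
```

Every additional labelled message shifts the counts and therefore the predictions, which is the learning loop in miniature; production spam filters apply the same principle with probabilistic models and far more features.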
5. mHealth technologies
Our mobile phones have turned into our personal assistants. They navigate us through the day, giving recommendations on where to eat, what to watch and what to read. The active healthcare consumer is now also equipped with apps that monitor health, give advice on what to eat and encourage exercise. With the proliferation of low-cost mobile devices in emerging economies, mHealth technologies have the potential to improve the lives of millions of people and make healthcare more personalized and efficient. Analysts expect the global mHealth market to be worth $11.8 billion by 2018. What we can anticipate next is the personalization of the healthcare industry through technology and data.
6. 3D printing
The substantial decrease in the cost of 3D printing in the last year has made it affordable not only for companies but also for private users. The creative dimensions of the new technological possibilities are huge, ranging from architecture to home design to art and education. We will see various applications, particularly in healthcare. Affordable 3D-printed prosthetic devices, for example, will solve a real problem that will improve many people’s lives. The technology also holds great potential for manufacturing. Spare parts don’t have to be stored in big warehouses and sent over thousands of kilometres. Instead, they can be printed when and where they are actually needed. However, there will be negative consequences. Although some industries will make large cost savings, others might collapse.
7. Intraday delivery
E-commerce players with high-end warehouses, fast enterprise resource planning and supply chain management systems, and their own fleet of vehicles will change the way we buy online.
The competitive advantage of the bricks and mortar businesses will continue to crumble. The old way of delivery by outsourcing the logistics to companies such as traditional post, FedEx or DHL is neither fast nor innovative enough for leaders like Amazon and eBay. One-day delivery services will completely disrupt the business models of traditional retailers and transport and logistics companies. We saw the beginning with the testing of the first drones by Amazon. Although air traffic control and drone safety and usage rules will prevent quick proliferation, the one-day delivery service model will have important consequences.
8. Mobile payment and virtual currencies
With the increase in the number of mobile devices, there will be more and more new payment methods. NFC-enabled devices, digital wallets and Beacon, PayPal’s new wireless payment solution, are steadily reaching the mass market, allowing consumers to pay for things without a wallet or cash. Google will also be pushing the Google Wallet app for Android that allows users to send money using only a mobile phone. Apple’s iBeacon technology will unlock unlimited opportunities not only for mobile payments but also for indoor mapping and personalization. Finally, virtual currencies like bitcoin are the future, even if there are still a few issues to resolve.
9. Electric cars
This year, many vendors, even traditional car manufacturers, will finally launch full electric models. Unlike older vehicles, the electric cars of tomorrow will be fully equipped with computers, sensors and wireless connections, allowing cars to know more about the driver – information that could be very useful for car manufacturers. Although issues with batteries and charging remain, improvements and adaptations can be expected in the near future.
10. E-learning
Several new online learning platforms and portals such as university online portals or YouTube channels have been disrupting traditional education models. The initial idea was to provide high-quality education but the consequences have been much larger. As a result, the classroom is no longer simply a mentoring space, but is now an interactive and inspirational learning environment. The ease of access and certification offered by online learning models are key to making knowledge accessible to all.
Brain-computer Interfaces
The ability to control a computer using only the power of the mind is closer than one might think. Brain-computer interfaces, where computers can read and interpret signals directly from the brain, have already achieved clinical success in allowing quadriplegics, those suffering “locked-in syndrome” or people who have had a stroke to move their own wheelchairs or even drink coffee from a cup by controlling the action of a robotic arm with their brain waves. In addition, direct brain implants have helped restore partial vision to people who have lost their sight.
Recent research has focused on the possibility of using brain-computer interfaces to connect different brains together directly. Researchers at Duke University last year reported successfully connecting the brains of two mice over the Internet (into what was termed a “brain net”) where mice in different countries were able to cooperate to perform simple tasks to generate a reward. Also in 2013, scientists at Harvard University reported that they were able to establish a functional link between the brains of a rat and a human with a non-invasive, computer-to-brain interface.
Other research projects have focused on manipulating or directly implanting memories from a computer into the brain. In mid-2013, MIT researchers reported having successfully implanted a false memory into the brain of a mouse. In humans, the ability to directly manipulate memories might have an application in the treatment of post-traumatic stress disorder, while in the longer term, information may be uploaded into human brains in the manner of a computer file. Of course, numerous ethical issues are also clearly raised by this rapidly advancing field.
Model selection and cross validation in additive main effect and multiplicative interaction models. (Crop Breeding, Genetics & Cytology).
Most of the data collected in agricultural experiments are multivariate in nature because several attributes are measured on each of the individuals included in the experiments, i.e., genotypes, agronomic treatments, etc. Such data can be arranged in a matrix X, where the (i,j)th element represents the value observed for the jth attribute measured on the ith individual (case) in the sample. Common multivariate techniques used to analyze such data include principal component analysis (PCA) if there is no a priori grouping of either individuals or variables; canonical variate or discriminant analysis if the individuals in the sample form a priori groups; canonical correlation analysis if the variables form a priori groups; and cluster analysis if some partitioning of the sample is sought.
In plant breeding, multienvironment trials (MET) are important for testing general and specific cultivar adaptation. A cultivar grown in different environments will frequently show significant fluctuation in yield performance relative to other cultivars. These changes are influenced by the different environmental conditions and are referred to as GEI. A typical example of a matrix X arises in the analysis of MET, in which the rows of X are the genotypes and the columns are the environments where the genotypes are tested. Presence of GEI rules out simple interpretative models that have only additive main effects of genotypes and environments (Mandel, 1971; Crossa, 1990; Kang and Magari, 1996). On the other hand, the specific adaptation of genotypes to subsets of environments is a fundamental issue to be studied in plant breeding, because one genotype may perform well under specific environmental conditions and poorly under others.
Crossa et al. (2002) give a comprehensive review of the early approaches for analyzing GEI that include the conventional fixed two-way analysis of variance model, the linear regression approach, and the multiplicative models. The empirical mean response, [[bar]y.sub.ij], of the ith genotype in the jth environment with n replicates in each of the i x j cells is expressed as [[bar]y.sub.ij] = [mu] + [g.sub.i] + [e.sub.j] + [(ge).sub.ij] + [[epsilon].sub.ij] where [mu] is the grand mean across all genotypes and environments, [g.sub.i] is the additive effect of the ith genotype, [e.sub.j] is the additive effect of the jth environment, [(ge).sub.ij] is the GEI component for the ith genotype in the jth environment, and [[epsilon].sub.ij] is the error assumed to be NID (0, [[sigma].sup.2]/n) (where [[sigma].sup.2] is the within-environment error variance, assumed to be constant). This model is not parsimonious, because each GEI cell has its own interaction parameter, and uninformative, because the independent interaction parameters are complicated and difficult to interpret.
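The additive part of this decomposition can be computed directly from a table of cell means. The following numpy sketch, with made-up numbers (the values and array names are ours, purely for illustration), recovers [mu], the main effects, and the interaction residuals, and checks the usual zero-sum constraints:

```python
import numpy as np

# Hypothetical 4 genotypes x 3 environments table of cell means (made-up data).
Y = np.array([[4.2, 5.1, 3.8],
              [4.8, 5.9, 4.1],
              [3.9, 4.7, 4.4],
              [5.0, 6.2, 4.6]])

mu = Y.mean()                          # grand mean
g = Y.mean(axis=1) - mu                # additive genotype effects g_i
e = Y.mean(axis=0) - mu                # additive environment effects e_j
GE = Y - mu - g[:, None] - e[None, :]  # interaction residuals (ge)_ij

# Effects and each row/column of GE sum to zero, as the model constraints require,
# and the four pieces reassemble the observed cell means exactly.
assert np.allclose([g.sum(), e.sum()], 0)
assert np.allclose(GE.sum(axis=0), 0) and np.allclose(GE.sum(axis=1), 0)
```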
Yates and Cochran (1938) suggested treating the GEI term as being linearly related to the environmental effect, that is setting [(ge).sub.ij] = [[xi].sub.i][e.sub.j] + [d.sub.ij], where [[xi].sub.i] is the linear regression coefficient of the ith genotype on the environmental mean and [d.sub.ij] is a deviation. This approach was later used by Finlay and Wilkinson (1963) and slightly modified by Eberhart and Russell (1966). Tukey (1949) proposed a test for the GEI using [(ge).sub.ij] = K[g.sub.i][e.sub.j] (where K is a constant). Mandel (1961) generalized Tukey's model by letting [(ge).sub.ij] = [lambda][[alpha].sub.i][e.sub.j] for genotypes or [(ge).sub.ij] = [lambda][g.sub.i][[gamma].sub.j] for environments and thus obtaining a "bundle of straight lines" that may be tested for concurrence (i.e., whether the [[alpha].sub.i] or the [[gamma].sub.j] are all the same) or nonconcurrence.
Gollob (1968) and Mandel (1969, 1971) proposed a bilinear GEI term [(ge).sub.ij] = [[summation of].sup.s.sub.k=1][[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk] in which [[lambda].sub.1] [greater than or equal to] [[lambda].sub.2] [greater than or equal to] ... [greater than or equal to] [[lambda].sub.s] and [[alpha].sub.ik], [[gamma].sub.jk] satisfy the ortho-normalization constraints [[summation of].sub.i][[alpha].sub.ik][[alpha].sub.ik'] = [[summation of].sub.j][[gamma].sub.jk][[gamma].sub.jk'] = 0 for k [not equal to] k' and [[summation of].sub.i][[alpha].sup.2.sub.ik] = [[summation of].sub.j][[gamma].sup.2.sub.jk] = 1. This leads to the linear-bilinear model [[bar]y.sub.ij] = [mu] + [g.sub.i] + [e.sub.j] + [[summation of].sup.s.sub.k=1] [[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk] + [[bar][epsilon].sub.ij], which is a generalization of the regression-on-the-mean model, with more flexibility for describing GEI because more than one genotypic and environmental dimension is considered. Zobel et al. (1988) and Gauch (1988) called this the Additive Main Effects and Multiplicative Interaction (AMMI) model.
A family of multiplicative models can then be generated by dropping the main effect of genotypes (Site Regression Model, SREG), the main effect of sites (Genotype Regression Model, GREG), or both main effects (Complete Multiplicative Model, COMM). Another multiplicative model, the Shifted Multiplicative Model (SHMM) (Seyedsadr and Cornelius, 1992), is useful for studying crossover GEI (Crossa et al., 2002).
However, one aspect that has not yet been fully resolved concerns the determination of the number of multiplicative components to be retained in the model to adequately explain the pattern in the interaction. Some proposals have been put forward by, among others, Gollob (1968), Mandel (1971), Gauch and Zobel (1988), Cornelius (1993), and Piepho (1994, 1995). All take into consideration the proportion of the variance accumulated by the components (Duarte and Vencovsky, 1999), and the more recent ones focus on cross validation as a predictive data-based methodology. However, some problems still remain, notably in optimizing the cross-validation process.
In this paper, we first summarize the AMMI model and analysis for genotype-environmental data, and sketch out the available methodology for selecting the number of multiplicative components in the model. We then describe two methods based on a full leave-one-out procedure that optimizes the cross-validation process. Both methods are illustrated on some unstructured multivariate data. Their application to analysis of GEI is then exemplified on some experimental data, and a comparison of all available methods is made on data from five multienvironment cultivar trials.
The AMMI Model
Suppose that a set of g genotypes has been tested experimentally in e environments. The mean of each combination of genotype and environment, obtained from n replications of an experiment (a balanced set of data), can be represented by the array of means [Y.sub.(g x e)] = [[Y.sub.ij]].
The AMMI model postulates additive components for the main effects of genotypes ([g.sub.i]) and environments ([e.sub.j]) and multiplicative components for the effect of the interaction [(ge).sub.ij]. Thus, the mean response of genotype i in an environment j is modeled by:
[Y.sub.ij] = [mu] + [g.sub.i] + [e.sub.j] + [[summation of].sup.m.sub.k=1][[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk] + [[rho].sub.ij] + [[epsilon].sub.ij]
in which [(ge).sub.ij] is represented by [[summation of].sup.m.sub.k=1][[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk] + [[rho].sub.ij], under the restrictions:
[[summation of].sub.i][g.sub.i] = [[summation of].sub.j][e.sub.j] = [[summation of].sub.i][(ge).sub.ij] = [[summation of].sub.j][(ge).sub.ij] = 0.
Estimates of the overall mean ([mu]) and the main effects ([g.sub.i] and [e.sub.j]) are obtained from a simple two-way ANOVA of the array of means [Y.sub.(g x e)] = [[Y.sub.ij]]. The residuals from this array then constitute the array of interactions:
G[E.sub.(g x e)] = [[(ge).sub.ij]]
and the multiplicative interaction terms are estimated from the singular value decomposition (SVD) of this array. Thus, [[lambda].sub.k] is estimated by the kth singular value of GE, [[alpha].sub.ik] is estimated by the ith element of the left singular vector [[alpha].sub.k(g x 1)], and [[gamma].sub.jk] is estimated by the jth element of the right singular vector [[gamma]'.sub.k(1 x e)] associated with [[lambda].sub.k] (Good, 1969; Mandel, 1971; Piepho, 1995). Correspondences between SVD and PCA are as follows: [[lambda].sub.k] is the kth singular value or the square root of the kth largest eigenvalue of the arrays (GE)[(GE).sup.T] and [(GE).sup.T](GE), which have equal nonnull eigenvalues; [[alpha].sub.ik] is the ith element of the eigenvector of (GE)[(GE).sup.T] associated with [[lambda].sup.2.sub.k]; [[gamma].sub.jk] is the jth element of the eigenvector of [(GE).sup.T](GE) associated with [[lambda].sup.2.sub.k].
The GEI in this model is thus expressed as a sum of components, each the product of a singular value [[lambda].sub.k], a genotypic effect ([[alpha].sub.ik]), and an environmental effect ([[gamma].sub.jk]). The term [[lambda].sup.2.sub.k] gives the amount of the GEI variation captured by the kth component. The effects [[alpha].sub.ik] and [[gamma].sub.jk] represent weights for genotype i and environment j in that component of the interaction. The rank of GE is s = min{g - 1, e - 1}, so the index k in the sum of multiplicative components can run from 1 to s. Use of all s components regains all the variation: SS(GEI) = [[summation of].sup.s.sub.k=1][[lambda].sup.2.sub.k], and the model is saturated, so it produces an exact fit to the data, with no residual error term against which to test effects (except in the situation when an independent error is estimated). When m < s, the model is said to be truncated. However, for AMMI, one does not try to recoup the whole SS(GEI) but only the components most strongly determined by genotypes and environments. Consequently, the index is generally set to run to m < s, so the estimates are obtained from the first m terms of the SVD of the GE array (Good, 1969; Gabriel, 1978). This is a least-squares analysis that leaves an additional residual denoted by [[rho].sub.ij]. Thus, the interaction of genotype i with environment j is described by [[summation of].sup.m.sub.k=1][[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk], discarding the noise given by [[summation of].sup.s.sub.k=m+1][[lambda].sub.k][[alpha].sub.ik][[gamma].sub.jk]. Here, as in PCA, the components account, successively, for decreasing proportions of the variation present in the GE array ([[lambda].sup.2.sub.1] [greater than or equal to] [[lambda].sup.2.sub.2] [greater than or equal to] ... [greater than or equal to] [[lambda].sup.2.sub.s]).
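Estimating a truncated AMMI model is, in effect, a two-way ANOVA on the cell means followed by an SVD of the interaction array. A minimal numpy sketch (the function name ammi_fit is ours):

```python
import numpy as np

def ammi_fit(Y, m):
    """Fit an AMMI model with m multiplicative terms to a g x e table of means.

    Returns mu + g_i + e_j + sum_{k=1..m} lambda_k * alpha_ik * gamma_jk.
    """
    mu = Y.mean()
    g = Y.mean(axis=1) - mu                     # genotype main effects
    e = Y.mean(axis=0) - mu                     # environment main effects
    GE = Y - mu - g[:, None] - e[None, :]       # interaction array
    # numpy returns singular values in decreasing order: lam[k-1] = lambda_k
    U, lam, Vt = np.linalg.svd(GE, full_matrices=False)
    GE_m = (U[:, :m] * lam[:m]) @ Vt[:m, :]     # keep the first m terms only
    return mu + g[:, None] + e[None, :] + GE_m
```

With m = min(g - 1, e - 1) the model is saturated and reproduces Y exactly; smaller m discards the trailing "noise" components.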
Therefore, the AMMI method is seen as a procedure capable of separating signal and noise in the analysis of the GEI (Weber et al., 1996).
Determining the Optimal Number of Multiplicative Terms in the AMMI Model
The main objective is the prediction of the true trait response in the cell of the two-way table of genotypes and environments. To achieve this, a truncated AMMI model should be used and thus criteria for determining the number of components needed to explain the pattern in the GEI term have been the objects of some research (Gollob, 1968; Mandel, 1971; Gauch and Zobel, 1988; Piepho, 1994, 1995; Cornelius, 1993; Cornelius et al., 1996).
Two basic approaches have evolved to determine the optimal number of multiplicative terms to be retained in the GEI component. One approach uses a cross-validation method in which the data are randomly split into modeling data and validation data. AMMI is fitted to the modeling data and the mean squared errors of prediction (expressed as the root mean squared predictive difference, RMSPD) are determined from the validation data. The main criticism of this approach is that the best predictive model computed from a subset of data may not be the best model when all data are considered (Cornelius and Crossa, 1999); moreover, if cross-validation is used with MET data, the data must be adjusted for replicate differences within environments (Cornelius and Crossa, 1999). The other approach for determining the best predictive truncated model is to use tests of hypotheses about the kth component, [H.sub.0k]: [[lambda].sub.k] = 0, using the complete data set (and not a subset, as in the cross-validation approach). These tests are based on the sequential sum of squares explained by the multiplicative terms.
We will now briefly review these two approaches. It may be noted that shrinkage estimators of multiplicative models have been shown recently to be good predictors of cultivar performance in environments (Cornelius and Crossa, 1999), but these estimators require df estimates that the authors find problematic. Moreover, other classes of estimators can be better than shrinkage estimators (see Venter and Steel, 1993). So we do not consider them any further.
Tests of Significance of Multiplicative Terms
The sequential sum of squares of the AMMI model for the kth component, [S.sub.k], is given by n [[lambda].sup.2.sub.k] for k = 1,2, ..., rank(GE) (where GE = [[bar]y.sub.ij] - [[bar]y.sub.i.] - [[bar]y.sub..j] + [[bar]y.sub..]). As in PCA, all of the test criteria involve, at least indirectly, the ratio of the accumulated sum of squares for the first m components to the total SS(GEI), i.e., [[summation of].sup.m.sub.k=1][[lambda].sup.2.sub.k]/SS(GEI).
One of the usual procedures consists of determining the degrees of freedom associated with a particular component of SS(GEI) for each member of the family of AMMI models. This enables mean squares to be computed for each component, together with an error mean square. Since we have an orthogonal partition of the interaction sum of squares, the ratio of the mean square of any interaction component to the error mean square is then assumed to follow an F distribution with the corresponding degrees of freedom. This implicitly assumes a normal distribution for the original response variable, and enables individual interaction components to be subjected to significance tests. However, the validity of the F distribution in these circumstances is subject to considerable doubt. The eigenvalues [[lambda].sup.2.sub.k] of the matrix (GE)[(GE).sup.T] (or [(GE).sup.T](GE)) are distributed as eigenvalues of a Wishart matrix but do not have a chi-square distribution. Since the [S.sub.k] are not independent random variables following a chi-square distribution, an F test does not hold. Nonetheless, selection of the optimal model is often based on F tests for the successive terms of the interaction, the number of included terms corresponding to the number of significant components. The Gollob (1968) approximate F test assumes that n[[lambda].sup.2.sub.k]/[[sigma].sup.2] is distributed as chi-square and so obviously does not hold. Computer simulations by Cornelius (1993) showed that Gollob tests at the 0.05 level are very liberal, with a Type I error rate of 66% for testing [H.sub.01]: [[lambda].sub.1] = 0. The F-approximation tests [F.sub.GH1] and [F.sub.GH2] (Cornelius et al., 1992, 1993) effectively control Type I error rates and are generally more parsimonious than the Gollob test. However, these tests are conservative for testing multiplicative terms for which the previous term is small.
Simulation and iteration tests have greater power than the [F.sub.GH1] and [F.sub.GH2] tests with good control of Type I error rates. The residual AMMI, collected in the last term of SS(GEI), can also be tested to confirm its nonsignificance.
Turning to the question of degrees of freedom, Gauch and Zobel (1996) mention some methods for attributing degrees of freedom to components of an AMMI model; those of Gollob (1968) and Mandel (1971) are particularly popular. Unfortunately, as the authors warn, these methods disagree, and choosing between them requires both theoretical and practical considerations. The approach of Gollob (1968) is very easily applied, since the number of degrees of freedom for component m of the interaction is simply defined to be DF(IPCAm) = g + e - 1 - 2m, whereas most other approaches require extensive simulations before they can be used.
For instance, Mandel (1971) defines the number of degrees of freedom for component k to be DF(IPCAk) = E[[[lambda].sup.2.sub.k]]/[[sigma].sup.2], where [[sigma].sup.2] is the population variance. However, simulations then have to be conducted to evaluate the number of degrees of freedom in particular cases. Mandel gives some tables derived from such simulations for a limited set of conditions, whereas Krzanowski (1979) gives some exact versions. These tables, however, are not exhaustive and this reduces the practical utility of the method. By contrast with Gollob (1968), Mandel's (1971) system generally results in a nonlinear decrease in the degrees of freedom for the successive interaction terms, which can still be fractions.
For some years, the degrees of freedom have been obtained by Mandel's (1971) proposal, which was considered exact and therefore correct. However, this proposal has received much criticism recently (e.g., Gauch, 1992), and it is now felt to be less appropriate than the approach of Gollob (1968). The reason for this criticism centers on the assumptions made by Mandel in his simulations, that the matrix contains only noise and not signals, whereas the presence of signal affects the component patterns substantially.
Gauch (1992) discusses the question of obtaining the degrees of freedom for the multiplicative components of an AMMI model. He concludes that rigorous simulations seem unnecessary or impractical, and generally recommends the use of Gollob's system when one is using an F-test approach, bearing in mind that the procedure is an intuitive guide. In cases where there seems to be a clear division between the large components determining the systematic part and the small noise components, he suggests that assigning equal degrees of freedom, DF(IPCAk) = (g - 1)(e - 1)/s, is especially useful for the early components because normally there will be little interest in partitioning the noise components. Definitive research questions, by contrast, require the exact assignment of degrees of freedom to each multiplicative term. Under Gollob's system, therefore, the full joint analysis of variance (computed from means) has the structure shown in Table 1.
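Under Gollob's system the bookkeeping is simple enough to sketch. In the function below (name and argument names ours), S holds the sequential interaction sums of squares n*lambda_k**2; as the text notes, the resulting F test is known to be liberal:

```python
def gollob_f_ratios(S, g, e, error_ms):
    """Approximate F tests for successive AMMI components under Gollob's
    (1968) system: component k receives g + e - 1 - 2k degrees of freedom
    and its mean square is tested against the pooled error mean square.
    S[k-1] is the sequential interaction sum of squares n * lambda_k**2.
    Returns (df, mean square, F ratio) for each component."""
    rows = []
    for k, S_k in enumerate(S, start=1):
        df_k = g + e - 1 - 2 * k
        ms_k = S_k / df_k
        rows.append((df_k, ms_k, ms_k / error_ms))
    return rows
```

A quick sanity check on the system: the component degrees of freedom exhaust the interaction, since the sum of (g + e - 1 - 2k) over k = 1, ..., s equals (g - 1)(e - 1).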
Piepho (1995) investigated the robustness (to the assumptions of homogeneity and normality of the errors) of some alternative tests to select an AMMI model. He comments that F tests applied in accordance with Gollob's (1968) criterion are liberal, in that they select too many multiplicative terms. Of the four methods he studied, including that of Gollob (1968), the test proposed by Cornelius et al. (1992) was the most robust. The author thus recommends that preliminary evaluations should be conducted to verify the validity of the assumptions if one of the other tests is to be used.
The Cornelius test statistic with m multiplicative terms in the model is [F.sub.R] = [SS(GEI) - [[summation of].sup.m.sub.k=1][S.sub.k]]/([f.sub.2] x MS(Error)), with [f.sub.2] = (g - 1 - m)(e - 1 - m).
This is the [F.sub.R] test of Cornelius et al. (1992), which may turn out to be liberal as compared with [F.sub.GH1], [F.sub.GH2], or simulation and iteration tests. Under the null hypothesis that no more than m terms determine the interaction, the numerator (i.e., the residual SS(GEI) for the fitted AMMI model) is approximately a chi-square variable (Piepho, 1995), so the test statistic has an F distribution with [f.sub.2] and the error degrees of freedom.
Thus, a significant result for the test suggests that at least one more multiplicative term must be added to the m already included. It can therefore be seen as a test of significance of the first m + 1 terms of the interaction (similar to the test of lack of fit in linear regression). When m = 0, i.e., when no multiplicative term is included, the test is just equivalent to the F test for global GEI in the joint ANOVA. It is an exact test. One also notices that the number of degrees of freedom of the numerator of [F.sub.R] is equal to the degrees of freedom for the whole interaction minus the degrees of freedom attributed by Gollob (1968) for the m first terms. It is concluded, therefore, that the application of [F.sub.R] is equivalent to the test of residual AMMI for GEI, as suggested.
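The [F.sub.R] computation under these definitions can be sketched as follows (function and argument names ours; the returned ratio would be referred to an F table with f2 and the error degrees of freedom, and the p-value lookup is omitted):

```python
def cornelius_fr(S, ss_gei, m, g, e, error_ms):
    """F_R of Cornelius et al. (1992): the interaction sum of squares left
    after fitting m multiplicative terms, on f2 = (g-1-m)(e-1-m) degrees of
    freedom, tested against the pooled error mean square.  S[k-1] is the
    sequential sum of squares n * lambda_k**2 of the k-th term."""
    f2 = (g - 1 - m) * (e - 1 - m)
    residual = ss_gei - sum(S[:m])     # SS(GEI) not explained by the m terms
    return (residual / f2) / error_ms, f2
```

With m = 0 this reduces to the ordinary F test for the global GEI in the joint ANOVA, as noted in the text.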
Predictive Assessment Using Cross Validation
Gauch and Zobel (1988) comment that evaluations such as those above by means of distributional assumptions via the F test can be termed "postdictive," in that they search for a model to explain a great part of the variation in the observed data (with high coefficient of determination). Thus, they argue, such methods are not efficient for selecting parsimonious models and are liable to include noise. By contrast, "predictive" criteria of evaluation capitalize on the ability of a model to form predictions with data not included in the analysis, simulating future responses not yet measured, so it would be preferable to base the model choice on such criteria.
To make predictions, in general, it is necessary to use computationally intensive statistical procedures. The less the model choice or assessment of performance of a predictor is based on distributional assumptions, the more general is the result. Thus, methods that are essentially data-based and free of theoretical distributions will have the greatest generality. Such methods involve resampling the given data set, using techniques such as the jackknife, the bootstrap and cross validation. Gauch (1988) introduced the name "predictive evaluation" when it is based on cross validation (Stone, 1974; Wold, 1978), and this is the principle underlying his proposal for selection of number of components in AMMI models.
In his method, the replications for each combination of genotypes and environments are randomly divided into two subgroups: (i) data for the fit of the AMMI model and (ii) data for validation. The responses are predicted for a family of AMMI models (i.e., for different values of m) and these are compared with the respective validation data, calculating the differences between these values. Then, the sum of squares of these differences is obtained and the result is divided by the number of predicted responses. This method was developed further by Crossa et al. (1991). The authors call the square root of this result the root mean squared predictive difference (RMSPD) and suggest that the procedure be repeated about 10 times, averaging the results for each member of the family of models. A small value of RMSPD indicates predictive success of the model, so the best model is the one with smallest RMSPD. The chosen model is then used to analyze the data of all the n replications, jointly, in a definitive analysis.
Further modifications have been proposed in recent years. Piepho (1994) suggests obtaining the average value of RMSPD for 1000 different randomizations, instead of the 10 suggested by Crossa et al. (1991). The author considers a modification of the completely random partition of the data (modeling and validation) when the experiment is blocked. In this case, he recommends drawing entire blocks from the experiment and not making components for each combination of genotype and environment. Thus, the original block structure is preserved. However, despite the logical coherence of this type of proposal, studies confirming its effectiveness are still not available. Gauch and Zobel (1996) suggest that the validation data set should always be just one observation for each treatment. This is because it is most likely, from n - 1 data points, to find a model that is closest to the analysis of the full set of n data points. We take up this idea in the present contribution, and describe two methods that optimize the cross-validation process by validating the fit of the model on each data point in turn and then combining these validations into a single overall measure of fit.
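The splitting procedure can be sketched as follows, with the model fit passed in as a callable so that any member of the AMMI family can be plugged in. The function name, the replicate layout, and the n-1/1 split (following the Gauch and Zobel suggestion above) are our assumptions for a balanced trial:

```python
import numpy as np

def rmspd(data, fit, n_splits=10, seed=0):
    """Root mean squared predictive difference by data splitting.

    data : g x e x n array of replicate observations (balanced trial).
    fit  : callable mapping a g x e table of modeling means to fitted values
           (e.g., a truncated AMMI fit).
    Each split models on n-1 replicates and validates on the remaining one;
    results are averaged over n_splits random splits."""
    rng = np.random.default_rng(seed)
    g, e, n = data.shape
    sq_diffs = []
    for _ in range(n_splits):
        reps = rng.permutation(n)
        Y_model = data[:, :, reps[:-1]].mean(axis=2)  # modeling means
        y_valid = data[:, :, reps[-1]]                # held-out replicate
        sq_diffs.append((fit(Y_model) - y_valid) ** 2)
    return float(np.sqrt(np.mean(sq_diffs)))
```

The candidate model with the smallest RMSPD would then be refitted to all n replications for the definitive analysis.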
Cornelius and Crossa (1999) used cross-validation for comparing the performance of shrinkage estimators, truncated multiplicative models, and best linear unbiased predictors (BLUP) by computing the RMSPD as the square root of the mean squared difference between the predicted values and their corresponding validation data on replication-adjusted data from five MET. The authors used a stopping rule for the number of cross-validations that consisted of calculating the pooled mean square predictive difference on the mth execution of the loop (PMSP[D.sub.m]). The cross-validation was terminated if the maximum absolute value of PMSP[D.sub.m] - PMSP[D.sub.m-1] was less than 0.01. The maximum number of cross-validations required was 64 and the minimum was 39.
Leave-One-Out Methods
We now propose two methods based on a full leave-one-out procedure that optimizes the cross-validation process. In the following, we assume that we wish to predict the elements [x.sub.ij] of the matrix X by means of the model [x.sub.ij] = [[summation of].sup.m.sub.k=1][u.sub.ik][d.sub.k][v.sub.jk] + [[epsilon].sub.ij]. The methods are those outlined by Krzanowski (1987) and Gabriel (2002) respectively, in which we predict the value [x.sup.m.sub.ij] of [x.sub.ij] (i = 1, ..., g; j = 1, ..., e) for each possible choice of m (the number of components), and measure the discrepancy between actual and predicted values by the prediction sum of squares PRESS(m) = [[summation of].sup.g.sub.i=1][[summation of].sup.e.sub.j=1][([x.sup.m.sub.ij] - [x.sub.ij]).sup.2].
However, to avoid bias, the data point [x.sub.ij] must not be used in the calculation of [x.sup.m.sub.ij] for each i and j. Hence, appeal to some form of cross validation is indicated, and the two approaches differ in the way that they handle this. Both, however, assume that the SVD of X can be written as X = UD[V.sup.T].
The standard cross-validation procedure is to subdivide X into a number of groups, delete each group in turn from the data, evaluate the parameters of the predictor from the remaining data, and predict the deleted values (Wold, 1976, 1978). Krzanowski (1987) argued that the most precise prediction results when each deleted group is as small as possible, and in the present instance that means a single element of X. Denote by [X.sup.(-i)] the result of deleting the ith row of X and mean-centering the columns. Denote by [X.sub.(-j)] the result of deleting the jth column of X and mean centering the columns, following the scheme given by Eastment and Krzanowski (1982). Then we can write
[X.sup.(-i)] = [bar]U[bar]D[[bar]V.sup.T] with [bar]U = ([[bar]u.sub.st]), [bar]V = ([[bar]v.sub.st]), and [bar]D = diag([[bar]d.sub.1], ..., [[bar]d.sub.p]),
[X.sub.(-j)] = [tilde]U[tilde]D[[tilde]V.sup.T] with [tilde]U = ([[tilde]u.sub.st]), [tilde]V = ([[tilde]v.sub.st]), and [tilde]D = diag([[tilde]d.sub.1], ..., [[tilde]d.sub.(p-1)]).
Now consider the predictor [x.sup.m.sub.ij] = [[summation of].sup.m.sub.t=1]([[tilde]u.sub.it][square root of [[tilde]d.sub.t]])([square root of [[bar]d.sub.t]][[bar]v.sub.jt]).
Each element on the right-hand side of this equation is obtained from the SVD of X, mean-centered after omitting either the ith row or the jth column. Thus, the value [x.sub.ij] has nowhere been used in calculating the prediction, and maximum use has been made of the other elements of X. The calculations here are exact, so, unlike the expectation-maximization approaches that have also been applied to AMMI but are not guaranteed to converge, there is no problem with convergence.
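The row- and column-deletion scheme can be sketched in numpy. This is an illustrative sketch of the Eastment-Krzanowski style of combination, not a faithful implementation of the published procedure: the function name is ours, mean-centering is omitted for brevity, and singular-vector signs, which are arbitrary in any SVD, are aligned here against the SVD of the full matrix, one of several possible conventions:

```python
import numpy as np

def loo_predict(X, i, j, m):
    """Predict X[i, j] from a rank-m SVD model without ever using X[i, j]:
    right singular vectors and values come from X with row i deleted, left
    singular vectors and values from X with column j deleted, and each
    singular value enters through its square root."""
    U0, _, V0t = np.linalg.svd(X, full_matrices=False)   # sign reference only
    _, d_bar, Vbt = np.linalg.svd(np.delete(X, i, axis=0), full_matrices=False)
    Ut, d_til, _ = np.linalg.svd(np.delete(X, j, axis=1), full_matrices=False)
    pred = 0.0
    for t in range(m):
        # align arbitrary singular-vector signs with the full-matrix SVD
        u = Ut[:, t] if Ut[:, t] @ U0[:, t] >= 0 else -Ut[:, t]
        v = Vbt[t] if Vbt[t] @ V0t[t] >= 0 else -Vbt[t]
        pred += u[i] * np.sqrt(d_til[t]) * np.sqrt(d_bar[t]) * v[j]
    return pred
```

Because the deleted-row and deleted-column singular values are slightly smaller than those of the full matrix, the prediction is only approximate even for data of exactly rank m, with the approximation improving as the matrix grows.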
Gabriel (2002), on the other hand, takes a mixture of regression and lower-rank approximation of a matrix as the basis for his prediction. The algorithm for cross-validation of lower rank approximations proposed by the author is as follows:
For a given GEI matrix X, use the partition X = [[x.sub.11] [x.sup.T.sub.1.]; [x.sub..1] [X.sub.\11]], and approximate the submatrix [X.sub.\11] by its rank m fit [X.sub.\11] [approximately equal to] UD[V.sup.T] obtained from its SVD,
where U = [[u.sub.1], ..., [u.sub.m]], V = [[v.sub.1], ..., [v.sub.m]], and D = diag([d.sub.1], ..., [d.sub.m]).
Then predict [x.sub.11] by
[[^x].sub.11] = [x.sup.T.sub.1.]V[D.sup.-1][U.sup.T][x.sub..1]
and obtain the cross-validation residual [e.sub.11] = [x.sub.11] - [[^x].sub.11].
Similarly obtain the cross-validation fitted values [[^x].sub.ij] and residuals [e.sub.ij] = [x.sub.ij] - [[^x].sub.ij] for all other elements [x.sub.ij], i = 1, ..., g; j = 1, ..., e; (i,j) [not equal to] (1,1). Each will require a different partition of X.
These residuals and fitted values can be summarized by PRESS(m) = [[summation of].sub.i,j][e.sup.2.sub.ij] and PRECORR(m) = Corr([x.sub.ij], [[^x].sub.ij] | [for all]i, j), respectively.
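The partition-and-regress step above can be sketched in numpy (function name ours; it deletes row i and column j directly rather than permuting them to the leading position, which is equivalent, and it assumes the first m singular values of the submatrix are nonzero):

```python
import numpy as np

def gabriel_predict(X, i, j, m):
    """Gabriel's (2002) cross-validation step for element (i, j): delete row i
    and column j, fit a rank-m SVD to the remaining submatrix, and regress the
    deleted row and column fragments through that fit, i.e.
    x_hat = x_1.^T V D^{-1} U^T x_.1 in the notation of the text."""
    x_row = np.delete(X[i, :], j)              # x_1.: row i without element j
    x_col = np.delete(X[:, j], i)              # x_.1: column j without element i
    X11 = np.delete(np.delete(X, i, axis=0), j, axis=1)
    U, d, Vt = np.linalg.svd(X11, full_matrices=False)
    return x_row @ Vt[:m].T @ np.diag(1.0 / d[:m]) @ U[:, :m].T @ x_col
```

Repeating this over all g x e cells and accumulating the squared residuals gives PRESS(m); each cell needs its own partition of X.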
With either method, choice of m can be based on some suitable function of PRESS(m).
However, the features of this statistic differ for the two methods. Gabriel's approach yields values that first decrease and then (usually) increase with m. He therefore suggests that the optimum value of m is the one that yields the minimum of the function. The Eastment-Krzanowski approach produces (generally) a set of values that is monotonically decreasing with m. They therefore argue for the use of
[W.sub.m] = {[PRESS(m - 1) - PRESS(m)]/[D.sub.m]}/{PRESS(m)/[D.sub.r]},
where [D.sub.m] is the number of degrees of freedom required to fit the mth component and [D.sub.r] is the number of degrees of freedom remaining after fitting the mth component. Consideration of the number of parameters to be estimated, together with all the constraints on the eigenvectors at each stage, shows that [D.sub.m] = g + e - 2m. [D.sub.r] can be obtained by successive subtraction, given (g - 1)e degrees of freedom in the mean-centered matrix X, i.e., [D.sub.1] = (g - 1)e and [D.sub.r] = [D.sub.r-1] - [g + e - 2(r - 1)], r = 2, 3, ..., (g - 1) (Wold, 1978). [W.sub.m] represents the increase in predictive information supplied by the mth component, divided by the mean predictive information in each of the remaining components. Thus, "important" components should yield values of [W.sub.m] greater than unity. Basing the choice of m on [W.sub.m] in this way can thus be seen as a natural counterpart to the selection of a best set of orthogonal regressor variables in multiple regression analysis.
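The W_m sequence is easy to compute once PRESS(m) has been evaluated for each candidate m (function name and the list-based interface are ours):

```python
def w_statistics(press, g, e):
    """W_m for m = 1, 2, ...: the gain in predictive information from the
    m-th component against the mean information left in the remainder.
    press[m] is PRESS after fitting m components (press[0]: no components);
    D_m = g + e - 2m and D_r tracks what remains of the (g - 1)e total."""
    W, D_r = [], (g - 1) * e
    for m in range(1, len(press)):
        D_m = g + e - 2 * m
        D_r -= D_m                    # df remaining after the m-th component
        W.append(((press[m - 1] - press[m]) / D_m) / (press[m] / D_r))
    return W
```

Components with W_m greater than unity would be retained, mirroring the selection of orthogonal regressors described in the text.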
On a computational level, the best accuracy of prediction seems to be achieved when the entries ([x.sub.ij]) in different columns of X are comparable in size and there is relatively little variation among the [d.sub.i]. The most stable procedure is thus one in which the mean [[bar]x.sub.j] and standard deviation [s.sub.j] of column j (j = 1, ..., e) are first found from the values present in that column. Existing entries [x.sub.ij] of X are then standardized to [x'.sub.ij] = ([x.sub.ij] - [[bar]x.sub.j])/[s.sub.j], predictions [[^x]'.sub.ij] are found by applying [[^x]'.sub.ij] = [x'.sup.T.sub.i.]V[D.sup.-1][U.sup.T][x'.sub..j] to the standardized data, and then the final values are obtained from [[^x].sub.ij] = [[bar]x.sub.j] + [s.sub.j][[^x]'.sub.ij].
Turning to the case of genotype-environment data, it would appear that X should be the array of interactions previously denoted GE. However, since we are merely looking for the appropriate number of multiplicative terms in the model, and any additive constants can be absorbed into the [[epsilon].sub.ij] component of the model, we can apply the leave-one-out procedure directly to the data matrix Y. Indeed, this may often be preferable given the small values taken by most elements of GE.
Cornelius et al. (1993) compared results of cross validation with those obtained after computing the PRESS statistics in multiplicative models for a complete MET data set. The data splitting involved three replicates for modeling and one replicate for validation. They computed the RMSPD of PRESS by adjusting the value of PRESS as [[PRESS/ge + 3[s.sup.2]/4].sup.1/2], where g and e denote the number of genotypes and sites in the MET and [s.sup.2] is the pooled within-site error variance. The term in [s.sup.2] is an adjustment for the difference in variance of the validation data on cell means, to make results comparable to the RMSPD from 3-1 data splitting. Results on an MET with nine genotypes and twenty sites showed that PRESS is more sensitive to overfitting than is data splitting. Table 2 shows that PRESS differentiates the model forms more clearly than does data splitting. For some model forms (SHMM and SREG), the model with smallest PRESS predicted the data in a deleted cell better than they were predicted by three replicates of data with all cells present. On the other hand, many overfitted models gave very unreliable prediction of a deleted cell.
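The adjustment can be expressed directly as a short function (a sketch with hypothetical inputs; the function name is ours):

```python
import math

def rmspd_from_press(press, g, e, s2):
    """Adjusted RMSPD computed from PRESS as in the text:
    [PRESS/(ge) + 3*s^2/4]^(1/2), where s2 is the pooled within-site
    error variance.  The 3*s2/4 term compensates for the difference in
    variance of validation data on cell means, making the result
    comparable to the RMSPD from 3-1 data splitting."""
    return math.sqrt(press / (g * e) + 3.0 * s2 / 4.0)
```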
Illustrative Data Sets
Krzanowski (1988) considered a simple multivariate data set from Kendall (1980, p. 20), relating to 20 samples of soil and five variables: percentages of sand content, silt content, and clay content; organic matter; and pH. The author called attention to the fact that the three percentages added to 100 so that applying any regression-based technique to the raw data would incur multi-collinearity problems, but the singular-value approach could be applied directly without any computational drawbacks.
Jeffers (1967) described two detailed multivariate case studies, one of which concerned 19 variables measured on each of 40 winged adelgids that had been caught in a light trap. Of the 19 variables, 14 are length or width measurements, four are counts, and one (anal fold) is a presence/absence variable scored 0 or 1.
In Table 3, we show a comparison between the Eastment-Krzanowski and Gabriel methods for the soil data. In this case, when the data matrix was standardized, the rank of the resulting matrix was four and all submatrices associated with columns four and five had a singular matrix D. Thus we used a Moore-Penrose generalized inverse instead of the ordinary inverse when implementing Gabriel's method. We can see that the PRESS(m) values are much lower with the Eastment-Krzanowski method than with the Gabriel method up as far as m = 4.
This shows that the Gabriel criterion is sensitive to the appearance of a singular matrix D. On the other hand, the use of the Eastment-Krzanowski criterion W suggests two components in the model, whereas the Gabriel approach indicates that all four are needed. In this case, both methods yield similar PRECORR values for all components.
In Table 4, we show a comparison between both approaches using the Jeffers data. Now, with standardized matrix X, no singular submatrices appeared. We can see from PRESS(m) that the Gabriel values are lower until the third component. From the W statistic, we can see that the Eastment-Krzanowski method suggests that four components should be retained in the model, whereas the Gabriel criterion suggests two.
Genotype x Environment Examples
Vargas and Crossa (2000) present a complete data set from a wheat (Triticum aestivum L.) variety trial with eight genotypes tested during six years (1990-1995) in Cd. Obregon, Mexico. In each year, the genotypes were arranged in a complete block design with three replicates. The eight genotypes correspond to a historical series of cultivars released from 1960 to 1980. We divided the original data by 1000, and analyzed the mean grain yields (kg [ha.sup.-1]). Results of analysis of variance incorporating both the Gollob and Cornelius F tests are shown in Table 5.
This table shows that genotypes, years, and GEI are highly significant (P < 0.01) and account for 39.29, 45.20, and 15.51% of the treatment sum of squares, respectively. At the 1% significance level, both the Gollob and the Cornelius F tests indicate that the first two interaction components (IPCA1 and IPCA2) should be included in the model.
The result of a full cross validation across 1000 randomizations of the data can be seen in the first three columns of Table 6. For each randomization, one of the three observations at each treatment combination was randomly selected and used to create the interaction matrix, whereas the other two replicates were averaged to form the validation data. Also shown in the remaining columns of Table 6 are the Eastment-Krzanowski and Gabriel leave-one-out results, as obtained from the single matrix of averages across the three replicates. The full cross validation yields minimum RMSPD at four components, although the RMSPD values for three, four, and five components are very similar in size and any could be chosen to represent the optimum number of components for the model. However, both the Eastment-Krzanowski and the Gabriel methods suggest that one component should be retained in the model.
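One randomization of this 1-2 data splitting can be sketched as follows (an assumed layout in which Y is a g x e x 3 array holding the three replicates of each genotype-environment cell; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def split_replicates(Y):
    """One randomization of the data splitting described above: for each
    genotype-environment cell, draw one of the three replicates for
    modelling and average the other two for validation.

    Y has shape (g, e, r) with r = 3 replicates; returns two (g, e)
    arrays (modelling data, validation means)."""
    g, e, r = Y.shape
    pick = rng.integers(0, r, size=(g, e))             # replicate used for modelling
    model = np.take_along_axis(Y, pick[:, :, None], axis=2)[:, :, 0]
    total = Y.sum(axis=2)
    valid = (total - model) / (r - 1)                  # mean of remaining replicates
    return model, valid
```

Repeating this across many randomizations (1000 in the analysis above), fitting the multiplicative model to the modelling matrix and scoring predictions against the validation means, yields the RMSPD values shown in Table 6.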
To obtain a broader comparison of methods, we turn to the data sets in Cornelius and Crossa (1999) who describe five multienvironment international cultivar trials, all in randomized block designs. Trial 1 was a wheat trial with 19 durum wheat cultivars, one bread wheat cultivar, and 34 sites. Trials 2 to 5 were maize (Zea mays L.) trials with numbers of cultivars and sites equal to (16,24), (9,20), (18,30), and (8,59), respectively. For each trial, the number of multiplicative terms was obtained from a range of methods and the results are given in Table 7.
The three distinct approaches to selection of number of multiplicative interaction components have yielded different results on the data of Vargas and Crossa (2000). Distributional F tests indicate two components as optimum; randomization cross validation suggests three or four, whereas leave-one-out methods indicate just one important component. So how do we assess these differences?
The first point to make is that the F-test methods all rely heavily on distributional assumptions (normality of data and validity of F distributions for mean squares), which may not be appropriate in many cases. Also, it is documented that the different F tests can come up with conflicting recommendations on a particular data set (Duarte and Vencovsky, 1999), while Piepho (1995) has noted that some of the tests select too many interaction components. This feature can be seen clearly in the comparisons of Table 7 also. So, in general, it seems that a data-based cross-validation method should be more appropriate.
Turning then to the full cross-validation randomization approach, the weakness here is that a large portion of the data must be set aside for the validation set. This means that the model is fitted to only a relatively small part of the data. For example, in the analysis reported in Table 6, the fit was to just one observation at each genotype-environment combination, while the assessment was on the mean of the other two replicates. Between-replicate variation may generally be very high, which inflates assessment error sums of squares and has probably contributed to the high number of components selected by this method.
By contrast, the leave-one-out methods make the most efficient use of the data and result in the most parsimonious model (AMMI 1) for the example of Tables 5 and 6. This model has 23 df (5 for years plus 7 for genotypes plus 11 for interaction PCA component 1) and is twice as parsimonious as AMMI 5 (in the sense that AMMI 5 contains twice as many degrees of freedom as AMMI 1). Thus, we conclude that a final model may be constructed by applying AMMI 1 to all the data (i.e., all three replications). The first interaction component recovers 43 % of the GEI SS in only 31.4% of the interaction df (Table 5). The higher interaction components are judged by predictive assessment to be just noise for the purpose of yield prediction, and thus may be pooled with the residual.
The feature of parsimony is illustrated most clearly by the results of the Eastment-Krzanowski method in Table 7. Trials 1 through 4 are complex trials in which there is clear evidence of GEI, whereas Trial 5 is much simpler and probably free of interaction. From a practitioner's point of view, capturing the essence of any interaction in relatively few components is an attraction, as these components can be interpreted clearly, whereas fitting many components may create problems of interpretation. Most of the methods shown in Table 7 exhibit quite a large variability in the number of components selected, with large numbers in some data sets for each method. Such large numbers are undesirable in practice. At the other extreme, PRESS provides a maximum of one component for several complex trials in which the interaction structure is evidently not so straightforward, and no components for two trials, including one (Trial 4) in which there is clear evidence of interaction. By contrast, the Eastment-Krzanowski method provides a stable pattern of relatively low and hence interpretable numbers of components.
In summary, therefore, distributional F tests are often based on questionable assumptions, while full cross-validation randomization methods remove too much of the available data for validation purposes and hence lead to less reliable fitted models. Use of a leave-one-out method is therefore recommended in general. Of the two such methods investigated here, the Eastment-Krzanowski method has shown the greater parsimony and stability of fitted model.
Abbreviations: AMMI, additive main effects and multiplicative interaction model; COMM, completely multiplicative model; DF, degrees of freedom; GEI, genotype x environment interaction; GREG, genotype regression model; IPCA, interaction principal component analysis; MET, multi-environment trials; NID, normally and independently distributed; PCA, principal components analysis; PRESS, predictive sum of squares; PRECORR, predictive correlation; RMSPD, root mean square predictive difference; SHMM, shifted multiplicative model; SREG, sites regression model; SS, sum of squares; SVD, singular value decomposition.
Table 1. Full joint analysis of variance computed from averages
using Gollob and Cornelius's system.
Source of variation Degrees of freedom Sum of squares Gollob
Genotypes (G) g - 1 SS(G)
Environment (E) e - 1 SS(E)
Interaction (GEI) (g - 1)(e - 1) SS(GEI)
IPCA 1 g + e - 1 - (2x1) [[lambda].sup.2.sub.1]
IPCA 2 g + e - 1 - (2x2) [[lambda].sup.2.sub.2]
IPCA 3 g + e - 1 - (2x3) [[lambda].sup.2.sub.3]
IPCA s g + e - 1 - (2xs) [[lambda].sup.2.sub.s]
Error mean/n ge(n - 1) SS(Error mean)
Total gen - 1 SS(Total)
Source of variation DF ([dagger]) Cornelius
Genotypes (G) --
Environment (E) --
Interaction (GEI) --
IPCA 1 (g - 1 - 1)(e - 1 - 1)
IPCA 2 (g - 1 - 2)(e - 1 - 2)
IPCA 3 (g - 1 - 3)(e - 1 - 3)
IPCA s --
Error mean/n --
Total --
Source of variation Sum of squares Cornelius
Genotypes (G) --
Environment (E) --
Interaction (GEI) --
IPCA 1 [[summation of].sup.s.sub.k=2] [[lambda].sup.2.sub.k]
IPCA 2 [[summation of].sup.s.sub.k=3] [[lambda].sup.2.sub.k]
IPCA 3 [[summation of].sup.s.sub.k=4] [[lambda].sup.2.sub.k]
IPCA s --
Error mean/n --
Total --
([dagger]) Degrees of freedom.
Table 2. RMSPD from 3-1 crossvalidation and adjusted
RMSPD(PRESS) for models fitted to a multi-environment trial.
Model form ([dagger])
Terms AMMI GREG SREG COMM SHMM
Data splitting
0 980 -- -- -- --
1 915 954 908 962 947
2 934 907 935 911 906
3 951 926 947 930 924
4 955 946 949 951 944
5 963 957 967 957 959
Adjusted RMSPD(PRESS) ([double dagger])
0 970 -- -- -- --
1 956 939 892 942 925
2 2 725 912 9 155 935 886
3 30 708 994 7 071 1 557 925
4 19 030 1 071 14 670 2 682 2 246
5 30 540 3 251 8 165 8 688 5 094
([dagger]) AMMI: Additive main effect and multiplicative
interaction model; GREG: Genotype regression model; SREG:
Sites regression model; COMM: Completely multiplicative
model; SHMM: Shifted multiplicative model.
([double dagger]) RMSPD: Root mean square predictive
difference; PRESS: Predictive sum of squares.
Table 3. Data on twenty samples of soil and five variables
(from Kendall, 1980, p. 20, based on Krzanowski, 1988).
Eastment-Krzanowski
Rank m ([dagger]) PRESS_m PRECORR W
1 4.36 0.9963 27.78
2 2.23 0.9981 2.14
3 2.14 0.9982 0.05
4 2.13 0.9982 0.00
Gabriel
m PRESS_m PRECORR W
1 8.08 0.9932 13.60
2 7.45 0.9937 0.18
3 5.60 0.9952 0.45
4 0.21 0.9998 10.20
([dagger]) PRESS: Predictive sum of squares; PRECORR: Predictive
correlation; W: Eastment-Krzanowski criterion.
Table 4. Data on forty winged aphids and nineteen variables (from
Jeffers, 1967, based on Krzanowski, 1987).
Eastment-Krzanowski Gabriel
Rank m ([dagger]) PRECORR W PRESS_m PRECORR W
1 0.4500 0.9799 29.04 0.4240 0.9810 31.56
2 0.3391 0.9849 3.71 0.2883 0.9871 5.34
3 0.3389 0.9849 0.00 0.2934 0.9869 -0.18
4 0.2865 0.9874 1.85 0.2957 0.9868 -0.07
5 0.2823 0.9876 0.14 0.3031 0.9864 -0.23
6 0.2815 0.9876 0.02 0.3096 0.9862 -0.18
7 0.2760 0.9878 0.16 0.3117 0.9861 -0.05
8 0.2723 0.9880 0.10 0.3239 0.9855 -0.28
9 0.2679 0.9882 0.11 0.3668 0.9836 -0.80
10 0.2677 0.9882 0.00 0.3589 0.9839 0.13
11 0.2666 0.9883 0.02 0.3687 0.9835 -0.14
12 0.2651 0.9883 0.02 0.4222 0.9812 -0.59
13 0.2640 0.9884 0.01 0.4842 0.9786 -0.50
14 0.2622 0.9885 0.02 0.5039 0.9776 -0.12
15 0.2616 0.9885 0.00 0.4986 0.9778 0.02
16 0.2610 0.9885 0.00 0.5004 0.9778 -0.00
17 0.2604 0.9885 0.00 0.5443 0.9759 -0.03
18 0.2601 0.9886 -0.00 0.5778 0.9744 0.03
([dagger]) PRESS: Predictive sum of squares; PRECORR: Predictive
correlation; W: Eastment-Krzanowski criterion.
Table 5. Additive main effects and multiplicative interaction analysis
of the Vargas and Crossa (2000) data, up to the first five interaction
principal component analysis (IPCA).
Sum of DF [F.sub.
Source of variation squares ([dagger]) Gollob]
Block 0.2001 2 0.63
Treatment 108.8393 47 14.65 **
Genotypes (G) 42.7587 7 38.65 **
Years (E) 49.1997 5 62.27 **
Interaction (GEI) 16.8809 35 3.05 **
IPCA 1 7.2428 11 4.16 **
IPCA 2 5.4232 9 3.81 **
IPCA 3 2.9696 7 2.68 *
IPCA 4 1.1906 5 1.50
IPCA 5 0.0545 3 0.11
Error 14.8543 94
Corrected Total 123.8939 143
Sum of D[F.sub. [F.sub.
Source of variation squares Cornelius] Cornelius]
Block -- -- --
Treatment -- -- --
Genotypes (G) -- -- --
Years (E) -- -- --
Interaction (GEI) -- -- --
IPCA 1 9.6379 24 2.54 **
IPCA 2 4.2147 15 1.78 *
IPCA 3 1.2451 8 0.98
IPCA 4 0.0545 3 0.12
IPCA 5 -- -- --
Corrected Total
* Significant at the 0.05 probability level.
** Significant at the 0.01 probability level.
([dagger]) DF: degrees of freedom.
Table 6. Cross-validation data analysis and leave-one-out method
on the Vargas and Crossa (2000) data.
Randomization Eastment-
Cross-validation Krzanowski Gabriel
Rank m ([dagger]) RMSPD PRECORR PRESS_m W PRESS_m W
0 0.5040 0.8436 -- -- -- --
1 0.5149 0.8386 0.1861 2.8587 0.1886 2.7882
2 0.4968 0.8521 0.1989 -0.1029 0.2020 -0.1057
3 0.4830 0.8617 0.1721 0.1167 0.2610 -0.1695
4 0.4776 0.8655 0.1615 -0.0218 0.3543 0.0877
5 0.4812 0.8635 0.1394 -0.3171 0.5285 0.6592
([dagger]) RMSPD: Root mean square predictive difference;
PRECORR: Predictive correlation; PRESS: Predictive sum of
squares; W: Eastment-Krzanowski criterion.
Table 7. Number of AMMI multiplicative terms in five data sets
that are statistically significant for various tests, and using the
PRESS, crossvalidation, Eastment-Krzanowski, and the Gabriel criteria.
Test ([dagger]) Trial 1 Trial 2 Trial 3 Trial 4 Trial 5
JG 4 1 1 4 0
AL 4 4 4 7 0
[F.sub.GH1] 5 5 2 7 2
[F.sub.R] 5 6 2 8 3
PRESS 1 1 1 0 0
Crossvalidation 4 8 1 10 0
Eastment-Krzanowski 2 2 2 1 1
Gabriel 5 6 7 4 1
([dagger]) JG = Seyedsadr-Cornelius/Johnson-Graybill/
Schott-Marasinghe test.
AL = Anderson-Lawley test of equality of the last p - k + 1
principal components (Jackson, 1991, Section 4.4.1).
[F.sub.GH1] = approximate sequential tests against the pure error
based on the Goodman-Haberman theorem (Cornelius, 1993).
[F.sub.R] = test of the residual mean square.
The authors thank Dr. Jose Crossa for his very generous contributions that significantly improved the draft, and Dr. Joao Batista Duarte for his constructive criticism. This research was financially supported by FAPESP proc. 00/12292-1.
Cornelius, P.L. 1993. Statistical tests and retention of terms in the additive main effects and multiplicative interaction model for cultivar trials. Crop Sci. 33:1186-1193.
Cornelius, P.L., and J. Crossa. 1999. Prediction assessment of shrinkage estimators of multiplicative model for multi-environment cultivar trials. Crop Sci. 39:998-1009.
Cornelius, P.L., J. Crossa, and M.S. Seyedsadr. 1993. Tests and estimators of multiplicative models for variety trials, p. 156-166. In Proceedings of Annual Kansas State University Conference on Applied Statistics in Agriculture, 5th., Manhattan, KS. 25-27 Apr. 1993. Dep. of Statistics, Kansas State Univ., Manhattan, KS.
Cornelius, P.L., J. Crossa, and M.S. Seyedsadr. 1996. Statistical tests and estimators of multiplicative models for genotype-by-environment interaction, p. 199-234. In M.S. Kang and H.G. Gauch (ed.) Genotype-by-environment interaction. CRC Press, Boca Raton, FL.
Cornelius, P.L., M. Seyedsadr, and J. Crossa. 1992. Using the shifted multiplicative model to search for "separability" in crop cultivar trials. Theor. Appl. Genet. 84:161-172.
Crossa, J. 1990. Statistical analyses of multilocation trials. Adv. Agron. 44:55-85.
Crossa, J., P.L. Cornelius, and W. Yan. 2002. Biplot of linear-bilinear models for studying crossover genotype x environment interaction. Crop Sci. 42:619-633.
Crossa, J., P.N. Fox, W.H. Pfeifer, S. Rajaram, and H.G. Gauch. 1991. AMMI adjustment for statistical analysis of an international wheat yield trial. Theor. Appl. Genet. 81:27-37.
Duarte, J.B., and R. Vencovsky. 1999. Interacao genetipos x ambientes-uma introducao a analise "AMMI". Ribeirao Preto, S.P.
Eastment, H.T., and W.J. Krzanowski. 1982. Cross-validatory choice of the number of components from a principal component analysis. Technometrics 24:73-77.
Eberhart, S.A., and W.A. Russell. 1966. Stability parameters for comparing varieties. Crop Sci. 6:36-40.
Finlay, K.W., and G.N. Wilkinson. 1963. The analysis of adaptation in a plant-breeding programme. Austr. J. Agric. Res. 14:742-754.
Gabriel, K.R. 1978. Least squares approximation of matrices by additive and multiplicative models. J. Roy. Stat. Soc. Series B 40:186-196.
Gabriel, K.R. 2002. Le biplot-outil d'exploration de donnees multidimensionelles. Journal de la Societe Francaise de Statistique 143 (to appear).
Gauch, H.G. 1988. Model selection and validation for yield trials with interaction. Biometrics 44:705-715.
Gauch, H.G. 1992. Statistical analysis of regional yield trials; AMMI analysis of factorial designs. Elsevier Science, New York.
Gauch, H.G., and R.W. Zobel. 1988. Predictive and postdictive success of statistical analysis of yield trials. Theor. Appl. Genet. 76:1-10.
Gauch, H.G., and R.W. Zobel. 1996. AMMI analysis of yield trials. p. 85-122. In M.S. Kang and H.G. Gauch (ed.) Genotype by environment interaction. CRC Press, Boca Raton, FL.
Gollob, H.F. 1968. A statistical model which combines features of factor analytic and analysis of variance techniques. Psychometrika 33(1):73-115.
Good, I.J. 1969. Some applications of the singular decomposition of a matrix. Technometrics 11(4):823-831.
Jackson, J.E. 1991. A user's guide to principal components. Wiley and Sons. New York.
Jeffers, J.N.R. 1967. Two case studies in the application of principal component analysis. Appl. Stat. 16:225-236.
Kang, M.S., and R. Magari. 1996. New developments in selecting for phenotypic stability in crop breeding, p. 1-14. In M.S Kang and H.G. Gauch (ed.) Genotype by environment interaction. CRC Press, Boca Raton, FL.
Kendall, M.G. 1980. Multivariate analysis (2nd ed.). Charles Griffin & Co., London.
Krzanowski, W.J. 1979. Some exact percentage points of a statistic useful in analysis of variance and principal component analysis. Technometrics 21:261-263.
Krzanowski, W.J. 1987. Cross-validation in principal component analysis. Biometrics 43:575-584.
Krzanowski, W.J. 1988. Missing value imputation in multivariate data using the singular value decomposition of a matrix. Listy Biometryczne-Biometrical Letters XXV (1,2):31-39.
Mandel, J. 1961. Non-additivity in two-way analysis of variance. J. Am. Statist. Assoc. 56:878-888.
Mandel, J. 1969. The partitioning of interaction in analysis of variance. J. Res. Int. Bur. Stand. Sect. B 73:309-328.
Mandel, J. 1971. A new analysis of variance model for non-additive data. Technometrics 13(1):1-18.
Piepho, H.P. 1994. Best linear unbiased prediction (BLUP) for regional yield trials: a comparison to additive main effects and multiplicative interaction (AMMI) analysis. Theor. Appl. Genet. 89:647-654.
Piepho, H.P. 1995. Robustness of statistical test for multiplicative terms in additive main effects and multiplicative interaction model for cultivar trial. Theor. Appl. Genet. 90:438-443.
Seyedsadr, M., and P.L. Cornelius. 1992. Shifted multiplicative models for nonadditive two-way tables. Commun. Stat. B Simul. Comp. 21:807-832.
Stone, M. 1974. Cross-validatory choice and assessment of statistical predictions (with Discussion). J. Roy. Stat. Soc. Series B 36:111-148.
Tukey, J.W. 1949. One degree of freedom for non-additivity. Biometrics 5:232-242.
Vargas, M.V., and J. Crossa. 2000. The AMMI analysis and graphing the biplot. CIMMYT, INT., Mexico.
Venter, J.H., and S.J. Steel. 1993. Simultaneous selection and estimation for the some zeros family of normal models. J. Statist. Computation Simulation. 45:129-146.
Weber, W.E., G. Wricke, and T. Westermann. 1996. Selection of genotypes and predictions performance and analysing genotype-by-environment interactions, p. 353-371. In M.S. Kang and H.G. Gauch (ed.) Genotype by environment interaction. CRC Press, Boca Raton, FL.
Wold, S. 1976. Pattern recognition by means of disjoint principal component models. Pattern Recognition 8:127-139.
Wold, S. 1978. Cross-validatory estimation of the number of components in factor and principal component models. Technometrics 20:397-405.
Yates, F., and W.G. Cochran. 1938. The analysis of groups of experiments. J. Agric. Sci. 28:556-580.
Zobel, R.W., M.J. Wright, and H.G. Gauch, Jr. 1988. Statistical analysis of a yield trial. Agron. J. 80:388-393.
Carlos T. dos S. Dias * and Wojtek J. Krzanowski
C.T. dos S. Dias, Dep. of Ciencias Exatas, Univ. of Sao Paulo/ESALQ, Av. Padua Dias 11, Cx.P.09, 13418-900, Piracicaba-SP, Brazil; W.J. Krzanowski, School of Mathematical Sciences, Laver Building, North Park Road, Exeter, EX4 4QE, UK. Received 8 Apr. 2002. * Corresponding author (
COPYRIGHT 2003 Crop Science Society of America
Copyright 2003 Gale, Cengage Learning. All rights reserved.
Article Details
Author:Dias, Carlos T. dos S.; Krzanowski, Wojtek J.
Publication:Crop Science
Date:May 1, 2003
Previous Article:Clustering environments to minimize change in rank of cultivars. (Crop Breeding, Genetics & Cytology).
Next Article:Base temperatures for seedling growth and their correlation with chilling sensitivity for warm-season grasses. (Crop Physiology & Metabolism).
The Conservative's Guide to Socialism
August 31, 2018
Political words are notoriously slippery. The project of defining them can sometimes feel like stapling Jello to the wall. This is especially true of words that develop strong positive or negative connotations. Connotation can attach itself to the meaning of words if the connotative use becomes common enough.
The word "socialism" has taken on a life of its own over the last few months. The variety of uses of the term is making political dialogue terribly confused. Average Americans honestly discussing their political differences will often find themselves arguing over what does or does not qualify as "socialism." This is because words are weapons, and both sides have an interest in dulling their opponents' blades and sharpening their own.
I'm here to help. I have done my best to parse the common uses of the term "socialism" and subdivide them into their proper categories. Keep in mind that these categories will only reflect term usage in the aggregate. Individuals tend to equivocate.
The first thing to notice is a broad categorical distinction between "socialism" referring to a historical thing and "socialism" describing modern political feelings. There is debate within both categories, of course. But let's begin with the historical.
Properly defined, historical socialism includes fascism (national socialism) and communism (international socialism), both of which sought to empower the working class by giving it control over the means of production. National socialism gave workers control through nationalization of industry (accomplished either by state regulation of nominally private corporations or by direct state ownership). Communist socialism sought to empower workers by overthrowing the state on behalf of a global proletariat—a project which, ironically, always necessitates tyrannical use of state power.
Democrats unanimously desire to exclude the national socialists from historical "socialism," for good reason. The farther one goes toward the political Left, the fewer historically socialist regimes actually count as "socialist." For Democrats, there are countless ways to parse history based on which socialist regimes you want to identify with, and which ones you want to throw under the bus. For some, Soviet communism was a form of socialism that was too radical. Out on the fringe, the Soviets were not radical enough to be "true socialists."
For a number of historical reasons too complicated to get into, here, the word "socialism" developed a negative connotation in American politics in the 20th century. Because of this, modern Republicans had a political interest in broadening the term, using it to describe Social Security, Medicare, ObamaCare, and many other federal programs. Republicans hoped to smear some of the negative connotation of the word onto Democrat policies. And it worked! But, in doing so, Republicans gave birth to the modern, colloquial usage.
The way younger generations use the term "socialism" is determined by a memory that only extends over the last few decades. Millennials have largely been taught that history is a story of diverse people-groups standing up to oppression. And, in this story, America has no special place, and is more often than not the oppressor. What little they learn of the history of socialism is seen through this lens: socialism, being on the side of the people, is on the right side of history.
These Millennials have grown up listening to Republicans refer to basic social programs dismissively as "socialist policies." They have also heard Republicans defend income inequality on the basis of merit, which, to them, seems little different than a blanket assertion that successful races have more merit than less successful ones. For Millennials who lean to the Left, "socialism" became a reactionary badge of honor much like the word "deplorable" did for Trump supporters. The term more precisely describes what they are against (Reagan-era meritocracy) than what they are for.
The best definition I can come up with for this new "socialism" (often called "democratic socialism") that is sweeping the Democratic Party is this: "the prioritization of equal economic outcomes among social groups over meritocracy." Conservatives who want to be able to speak intelligibly to Millennials need to understand this.
©2020 by Think Outside Politics. |
Echo mapping the gravitational potential well of black holes
Black holes are the most extreme objects found in the universe. They provide a one-way passage to the unknown, places where our understanding of physics breaks down. Pioneering work over the last century has transformed black holes from theoretical curiosities, into the domain of the observational astronomer. These gravitational monsters reside in the centre of all galaxies in the universe, and are intimately linked to the formation and evolution of stars and galaxies we observe today. Despite their enormous size, our current telescopes are unable to spatially resolve them on the sky. We therefore resort to indirect methods to zoom in on the region directly around the black hole. In this talk, I will describe current efforts to spatially map the gas in the immediate vicinity of a black hole as it spirals down the deep gravitational potential well. These observations provide us with information about the two fundamental properties of black holes: their mass and spin.
This talk is part of the Darwin College Sciences Group series.
Protests for racial equality spread across the globe, and were prominent in Australia as well. Black Lives Matter rallies drew quite a number of participants, even amid the COVID-19 situation.
Protesters at a Black Lives Matter rally at Langley Park in Perth. The protest was organised to raise awareness of Aboriginal deaths in police custody
Thousands of people took part in Black Lives Matter and pro-refugee protests and marches across Australia. Moreover, refugee advocates in Sydney defied a court order to take to the city’s streets.
The protests came as Victoria recorded eight new coronavirus cases, including a GP who had worked at three medical clinics while he may have been infectious.
At Perth’s Black Lives Matter event, a larger than expected number of people came in support, and a torrential downpour midway through the rally was not enough to disperse them.
Black Lives Matter before coronavirus safety
Administrators had requested that people not come to the protests, fearing a rise in coronavirus cases. Organisers ignored the pleas of the West Australian premier, Mark McGowan, and Aboriginal affairs minister, Ben Wyatt, who had asked for the protest to be delayed until after the coronavirus pandemic was over. Social distancing requests were nonetheless largely followed by attendees, most of whom wore face masks and used available hand sanitiser.
Thousands protest in Sydney. Organisers urged attendees to try to observe social distancing
Human rights lawyer and activist Hannah McGlade asked for an independent investigation of the 432 Indigenous deaths in custody recorded in Australia in the past 30 years.
Prejudice in the tech industry is spilling into technologies
Technology, by nature, cannot be discriminatory. It can’t hold prejudiced beliefs against anyone, and won’t treat anyone differently because of their color, race, creed, sexual orientation, socioeconomic status, etc. However, many people have claimed that some technologies are discriminatory, and in some cases this has proven to be true. Facial recognition programs and dating apps have been shown to be built with algorithmic biases. However, could it be that the people making these technologies hold biased beliefs?
An important distinction needs to be made when people say that some technologies are discriminatory. The technology itself isn’t prejudiced; it’s the people programming these technologies who may hold racist and sexist preconceptions. Before this article gets started, it’s good to point out that the people who program these technologies are probably not actively racist. It is more plausible that the tech industry, which predominantly consists of white men, gears these technologies toward the way its members understand the world, without taking people of color into account when entering extensive data into predictive technologies such as AI, machine learning, and deep learning.
How can this be? Read below to find out some gender and racial disparities happening in the tech industry, and how it could explain racial and sexist connotations of some technologies.
The gender gap
You may be aware that there is a gender gap in the tech industry, but you may not know the magnitude of the situation. The tech industry has infamously adopted the term "tech bro culture," equating the goings-on in tech sectors to those of a frat house; women in the tech industry have endured immature behavior, bullying, sexism, and even reports of sexual harassment.
Many cases of unwanted sexual advances and other sexual harassment have been swept under the rug, perpetuating the volatile culture in the tech industry. However, women continue to fight back. One of the leading female figures battling against sexual harassment in the tech industry is Ellen Pao. Business Insider reported that:
“In 2012, VC Ellen Pao famously sued Kleiner Perkins alleging sexual discrimination, not harassment. But in the trial, she alleged that one of her co-workers tried to retaliate after she ended an affair with him. She ultimately lost the case. That partner, Ajit Nazre, left the job and was accused of sexual harassment by another female VC at the firm.”
Ellen Pao is just one of the many women who have been discouraged; many others have considered leaving the tech field for a new career. As a result, the tech industry is becoming more and more male-dominated. This gender gap is only the beginning of males isolating themselves in the tech sector — that is, until more tech companies like Alibaba start advocating for women in the tech field.
Racial disparities
Being a woman in the tech industry is tough, and being a non-white woman in the tech industry is even more so. Minorities — men and women — are finding that they are running into a brick wall when trying to make a name for themselves or simply even breaking into the tech field.
City Lab sheds light on disparities in the tech industry when they state, “white women were 31 percent more likely than Hispanic men to be executives, and 88 percent and 97 percent more likely than Asian and black men respectively. Meanwhile, for minority women, the ‘race-to-gender factor’ has only worsened since 2007.” Taking into account the adversities women face when falling victim to the gender gap, City Lab shows that the racial divide is even more significant.
If tech companies are willing, even if reluctantly, to hire white women, they will be even more hesitant to hire Hispanic, Asian, and Black men. Now imagine the hardships of a woman from one of these groups; it could prove almost impossible for her to break into tech.
The gender gap, combined with racial disparities, is why white males dominate the tech industry. A white male mindset may explain why when programming predictive technologies and algorithms, minorities aren’t taken into account. This mindset is making for some embarrassing racial assumptions of particular technologies.
Biased Technologies
Do a quick Google image search for the word "handsome." About 90 percent of the results will be white males. The results look like this because when a white man thinks of the word handsome, he thinks of it only within his own worldview — which happens to be populated predominantly by white people.
This same scenario plays out, on a much larger scale, when algorithms are executed, and this is when racial assumptions can be made through technology. Black people have been mistaken for gorillas by facial recognition technologies, and camera software has mistaken Asian people's eyes as shut. Again, this is likely not active racism; rather, the people who program these technologies simply aren't taking minorities into account when building data meant to encompass everyone.
Until the tech industry learns to build more diverse companies, this will continue to happen and may get worse. Different minds and perspectives are needed to develop more comprehensive algorithms. Without the inclusion of minorities, a white male perspective is all the data an algorithm will receive.
Ben Dickson goes more in-depth in a great article about algorithmic bias. We can combat algorithmic prejudices by fostering a more inclusive atmosphere in the tech industry. For machine and deep learning (predictive algorithms) to execute accurately, the more representative the data points you enter, the better the algorithm will be. So, for the sake of the best programming, the tech industry must include women and minorities — not only because it is the ethical thing to do but also to create the best programs possible.
the north side, at nearly an equal distance from the river and lakes on the south, and Hudson's Bay on the north. Canada is also bounded on the south by the great chain which runs through the United States, and which separates Canada from Maine.
902. Rivers. Lower Canada is penetrated by the great river St. Lawrence, which is the outlet of five of the largest lakes on the globe. From the sea to the isle of Orleans, that is, a distance of more than 300 miles, this river is from 12 to 15 miles wide. Above Orleans, it narrows to a mile in bredth, at Quebec.
903. Smaller Rivers. On the south, the Chaudiere runs from the mountains which divide Canada from Maine, and enters the St. Lawrence not far above Quebec. The St. Francis issues from lake Memfremagog, and falls into the same river. The Sorell, the outlet of Lake Champlain and Lake George, discharges the waters of those lakes into the St. Lawrence, below Montreal. On the north the St. Lawrence receives the Sagunau, a considerable river, with Bustard river, Black river, and some smaller ones, below Quebec. Above Quebec, the principal river is the Utawas, which comes from the north west and unites with the St. Lawrence just above Montreal.
904. Climate and Productions. The winters in Canada are long and cold; the rivers are covered with ice, and the earth with deep snow, for four months. But the heat of summer is sufficient to ripen all kinds of grain, even the smaller kind of maiz. Wheat is raised in great quantities, as well as all other grains and garden vegetables which are produced in New-England. Canada is also a good country for grass and timber. The animals are mostly the same as in the United States.
905. Chief Towns. Quebec. The chief town in Lower Canada, and the metropolis of the British colonies in North America, is Quebec. This city, whose name in the Algonkin language signifies a narrowing or strait, the St. Lawrence here being contracted from a broad estuary to a mile in bredth, stands at the confluence of the St. Lawrence and a small river called St. Charles, about 320 miles from the sea. Between the city and the isle
of Orleans is a large bason a league in length, which forms a spacious harbor. Quebec is in north latitude 46 degrees 47 minutes, and in 71 degrees 10 minutes west longitude.
906. Description of Quebec. Quebec is situated upon a rocky point, composed of marble and slate. It consists of the lower and upper town. The lower town is at the foot of a steep hill, near the water; and from this there is a passage to the upper town by steps. It contains some handsome squares and buildings; among which are the church, convents, and bishop's palace. The houses are mostly of stone, and the fortifications are strong. The inhabitants, about 10 or 12,000, are mostly French, and many of them well bred and intelligent. The vicinity of Quebec exhibits a variety of picturesk scenery; of which the fall of Montmorency, a beautiful sheet of water, of 40 feet high, is not the least romantic.
907. Montreal. Montreal, which name is a corruption of Mont Royal, royal mountain, is situated on the east side of a considerable island, 150 miles south west of Quebec, at the junction of the Utawas with the St. Lawrence. The Island of Montreal is about ten leagues in length, and 4 in its greatest bredth. The mountain from which it receives its name is about half a league from the south shore. On the declivity of this mountain, as it ascends from the shore, is built the city, which has its upper and lower town. It is of a quadrangular form, and contains 6 or 8000 inhabitants, with a regiment of British troops. Ships of 400 tuns may ascend with difficulty to this place, but here ends the navigation of large vessels.
908. Government. Canada is governed by the governor general of the British possessions, who resides at Quebec, a legislative council and assembly. The governor is appointed by the king; the legislative council consists of seven members, selected by the governor, and holding their offices for life. The Assembly consists of at least 50 members, chosen by the freeholders, once in four years. The governor, and certain members of
the council, appointed by the king, form a court of civil jurisdiction.
909. Commerce. The exports of Canada consist chiefly of furs and peltry, purchased of the Indians, with a few other articles, as wheat, flour, pot-ash, fish, oil and genseng. The imports are wine, spirits, salt, sugar, coffee, tobacco, melasses, dry goods, drugs and hardware. The amount of exports is about half a million sterling.
910. Inhabitants. The whole population of Lower Canada is about 150,000; the greatest part of the people are descendants of the French, and speak their native language. Nine tenths of them are Roman Catholics, whose religion is tolerated. Their dress is the same as in the United States, except that in winter they wear more fur, to guard against the severe cold. The fur cap for the head, and the moggason for the foot, are much used, and the French peasantry still wear the wooden shoe.
UPPER CANADA. 911. Situation and Limits. Upper Canada lies to the westward of Lower Canada. Its southern limit is the line through the center of the great Lakes, which separates it from the United States. On the north it is bounded by New Britain, and on the west the limit is undetermined. Its latitude is from 42 to 50 degrees north. Its bredth is extremely various, and its length east and west not ascertained. It is divided into nineteen counties.
912. Face of the Country. Upper Canada is in general a level country, but a chain of high lands on the north throws the waters towards the lakes on the south, and Hudson's Bay on the north. No territory of the same extent exhibits a greater variety of interesting scenery. The southern part presents those vast bodies of water, the great lakes, which resemble inland seas; connected by a current, which forms a large river. Here is the stupendous fall of Niagara, the greatest cataract, and one of the most surprising curiosities on the globe.
913. Rivers. The point where the St. Lawrence issues from the Ontario is in Upper Canada. The stream which connects the great lakes is a large river; between Erie and Ontario, it is called Niagara, and is from half a mile to a mile broad. Below Ontario it is from 6 to 10 miles wide, and embosoms numerous islands. The Utawas proceeds from lake Temiscaming, or rather from the sources of that lake, in the high lands west and north, and after a course of 500 miles, falls into the St. Lawrence a few miles from Montreal.
914. Lakes. In addition to the great lakes on the south of Upper Canada, the Temiscaming is a considerable sheet of water. The Nepissing also is a considerable lake, whose waters are discharged into lake Huron by French river. The lake is about 35 miles in length and twelve in bredth; French river is about 75 miles in length, and its banks are mostly bare rocks. The high lands between the great lakes and Hudson's Bay are full of small lakes, the sources of innumerable streams which run into the great lakes, the St. Lawrence and the Bay.
915. Towns. Newark, on the west side of Niagara river, at its entrance into Ontario, contains about 100 families, with two churches and a court house. Queenstown, seven miles above, is the place where goods are unladen from the water craft, and sent by land carriage round the great fall. York, on the west side of Ontario, 35 miles from Niagara, is the seat of government, and contains 3 or 400 families. Kingston, near the egress of the St. Lawrence from the Ontario, and the old fort Frontenac, contains about 100 families.
916. Inhabitants. The inhabitants of Upper Canada are mostly emigrants from the United States. The number is not known, but it is constantly increasing. The prevailing religion is Methodism, but the settlements are recent, and few churches are established. The government is modelled in the same manner as that of Lower Canada. The country resembles the adjacent territory of New-York, in climate and productions. Agriculture is in a state of improvement. The trade consists chiefly in
the export of peltry, and the purchase of dry goods, liquors, and other foreign commodities.
NEW BRITAIN. 917. Situation. To the north of Canada lies an extensive country, along the western border of the Atlantic and around Hudson's Bay, which is claimed by the British government, but which is inhabited only by savages, except the trading factories, which are small settlements for the purpose of collecting furs. The exclusive privilege of collecting furs is granted to a company of English merchants. The extent of the British claims is not known, and to the north and west, the country has been explored only by a few traders.
918. General View of the Country. Beyond the limits of Canada, the climate is so cold and the soil so forbidding, that little can be expected from cultivation. The face of the country exhibits barren mountains and broken rocks, interspersed with marshes and lakes. The southern parts abound with pine, larch, birch, willows, cedars, and a variety of shrubs producing berries, as currants and gooseberries. In the northern part all vegetation ceases; a few inches only of the surface of the earth are liberated from frost, even in the midst of summer; and the face of nature is one bleak dreary waste, the solitary haunt of the wild beast and the roaming savage.
919. Bays. In this territory is the vast bay called Hudson's, from its discoverer, Capt. Henry Hudson, who first entered it in 1610, where his crew mutinied, and set him and seven of his most faithful men afloat in an open boat, and he perished. A narrow part of this bay on the south, is called James' Bay, and on the north, is Repulse Bay. The entrance into Hudson's Bay is by a long strait opposit to Greenland, called Hudson's Strait.
920. Rivers. Hudson's Bay receives the waters of several large rivers, among which the principal are the Slude, Ruperts, Harricanaw, Abbitiby, Moose and Albany, all which proceed from the borders of Canada and enter James's Bay. The Saskashawin or Saskachiwin, with the Askow and Red River, fall into lake Winipic,
Design and Build Diagonal crossings
1. Plan
2. Design and Build
3. Sell
4. Evaluate
Case study
Diagonal cycle crossings can be a solution at signalised intersections where many cyclists turn or switch to the other side of a main road. They can also be used for the transition between unidirectional cycle lanes or paths on both sides of the road and a bidirectional path on one side. They allow cyclists to stop only once instead of twice, on two legs of the intersection. Diagonal crossings are most efficient when integrated with the left/right turning phases for motor vehicles, not requiring a dedicated phase of their own.
How could the sun and moon be harmful? (Psalm 121:6)
5 The Lord watches over you—
the Lord is your shade at your right hand;
6 the sun will not harm you by day,
nor the moon by night.
Asked February 19, 2019 by Jack Gutknecht
Tim Maas, Retired Quality Assurance Specialist with the U.S. Army
I would say that, although the sun is necessary for life, overexposure to it can be harmful either as a result of radiation or the effects of excessive temperature such as dehydration or heat stroke, and that this would have been recognized even in biblical times (especially in the Middle East). The shade spoken of in the psalm represents relief from those aspects. And perhaps the reference to the moon pertains to ancient beliefs in the power of moonlight to induce madness (as indicated by the term lunacy), from which it would also have been felt that people needed protection.
Answered February 19, 2019
GO Electrical
The filters $F1$ and $F2$ having characteristics as shown in Figures $(a)$ and $(b)$ are connected as shown in Figure $(c)$.
The cut-off frequencies of $F1$ and $F2$ are $f_{1}$ and $f_{2}$ respectively. If $f_{1} < f_{2}$, the resultant circuit exhibits the characteristic of a
1. Band-pass filter
2. Band-stop filter
3. All pass filter
4. High-Q filter
in Analog and Digital Electronics
King Essays: Crafting a 1000-Word Essay to Earn the Best Grades
May 28, 2020
Don’t forget that if you are in a performance yourself, you can evaluate the experience of being part of a group or production. Media and Performance Evaluations.
Evaluate how a recent romantic film portrays modern romance. Consider a classic romantic movie and what it says about the roles of men and women during that time.
Compare a recent romantic film with a classic and assess which is more effective. Examine an action-adventure film and explain why it works for the audience. Evaluate a war movie and discuss whether it helps answer current concerns about war and peace. Assess a historical film for how it teaches history through drama, setting, and costume.
Assess how a movie based on real events compares with the actual history. Look at a classic musical. Explain why it was popular or unpopular. Assess a drama and tell how it effectively or ineffectively portrays its dramatic situation.
Assess how well a film that is based on a book stays true to that book. Which is better (book or film)? Evaluate a sequel. Does the second or third movie just replay the first, or does it add something fresh and new? Evaluate a foreign film and discuss what that film says about the culture of that country.
Examine the work of a composer for movies.
How does that composer adapt to different films? Compare an animated version of a movie with a live-action version of the same story. Evaluate which medium is more effective for telling that kind of story. Evaluate a remake of a classic or foreign movie. Assess how the story changes in the second version and whether it truly improves on the original.
Assess an actor or actress in a number of films. Talk about how she/he adapts to different roles and discuss what kind of role that person does best. Study several works by the same director and the vision that director brings to a project. What is the director trying to say with their work? Assess the special effects in several recent movies. What makes them effective or ineffective? Do some use special effects just for show and not to move the plot? Is that a problem or not? Assess a children’s movie for what it teaches children. Does the movie have a positive impact? Is that important? Assess a movie that is rated G or PG for how it tries to appeal to both adults and children.
How successfully does it engage both audiences? Compare a recent concert you’ve attended to others by that same artist, or to that person’s recorded work. Did you go to a small intimate concert recently? Compare that experience to a large concert. In this paper, you can talk more about the experience of going rather than the particular artist’s work. Evaluate two versions of a play or musical work. Often you can find different versions of a play, concert, dance or other production online.
Watch a ballet or an orchestra performance, either live or online. How well was the piece executed? This is especially interesting to write about if you have performed the piece yourself. Are you in a production? You can evaluate your own group’s performance or the experience of being in a concert, a play, a band, a choir or an orchestra. Movie Evaluation Example.
How to Write a Movie Evaluation. When you evaluate a movie, T.V. series, or theatrical show, you need to first figure out: What genre is it? (Drama, comedy, romantic comedy, action adventure, documentary, historical fiction, or musical?) What are the attributes of that kind of production? What is the best example of this kind of movie, T.
Unique Pros And Cons Essay Topics
Sample essay writing is a form of writing that can be used for advertising purposes. Many companies provide sample essays. This allows them to demonstrate their areas of expertise. Customers that view a Sample essay know immediately the standard of writing that a particular company is capable of.
A: This is true of books, essays, movies. Anything can get overdone; students have to be aware of that. A video about fire poi all by itself will get tiresome to admissions officers. Students need to work in how this relates to their personal messages, their viewpoints. And really, it’s the same with the essay: Can you think of essay topics that make the admissions officers cringe? The key is to make it personal, with as much distinctive detail as possible.
Let us go back to the nurse who is preparing to take the IELTS exam. A nurse tends to think in terms of improving public health. Her point of view comes from her training in medicine and the social sciences, as well as her experience with many real patients in a hospital setting. A nurse might think, for example, about how crime in the streets increases the number of stress-related diseases in the general populace.
A: See the “laundry list” in the book: kids who just recite their extracurricular activities. They get so interchangeable; one kid’s endless list is no different from another’s. Another pet peeve of ours is muddy sound and murky images. It’s like sending your essay with a coffee stain on it or in illegible handwriting. Also, skip the testimonials; that is the job of the letter of recommendation. Admissions officers don’t want an infomercial!
If you are bored with or uninterested in your essay topic, it will shine through in your writing. Spend some time listing activities in which you are involved, hobbies you enjoy, or world issues about which you are passionate. Then reflect on poignant and/or significant instances or stories from your experiences. You will probably find that you can mold a “learning moment” into a usable essay to answer your prompt. If one story does not work, try another. Finding a unique angle for your essay is easiest if you can draw on your life experiences.
Day planner: Advanced version of sticky notes. It also serves as a good way to visually see how busy you are. College is for learning and working, but you do not want to exert yourself too much.
Pregnant Ashley’s mom tells her that her Aunt Lisa and Uncle Kenny want to adopt the baby. To have a formal phone conversation, Ashley and her mom for some reason go to a public place and put the phone on speaker, and there are no subtitles. Weird on MTV’s part. “Like, I know it’s the best thing,” Ashley tells Lisa of their plan. Later, Ashley and mom take a trip upstate to see Lisa and Kenny. “Thank you for doing this, and stuff,” Ashley tells them. Now there are tough decisions to be made about whether Lisa and Kenny will be there for the birth, etc.
Where are your strengths? Can you write from imagination? Or is your strength creativity? Are you good with facts? Or are you good with fiction? You need to know your strengths before you can make a good decision about the topic that you are going to write on.
Unlike sweepstakes, it is legal for contests to demand consideration of some kind. They may either charge an entry fee or use the product you deliver to help them promote their product. For instance, your winning recipe may be used to promote an ingredient or a winning piece of art may be used to endorse a new drawing tool.
Don’t just regurgitate what you read, analyze it and develop a unique way of discussing the issues covered in the book. In a college essay (or any essay for that matter) you are free to argue whatever point you want, as long as you can back it up with supporting evidence. Don’t write something that you think your teacher wants to hear, and don’t spit your professors’ opinions back at them. Develop your own distinctive opinion, and argue it thoroughly.
Don’t cheat your future self by racking up a bunch of debt in your college years. Follow the advice given in this article and you will graduate college with little or no debt. You will be thankful you did!
Dry Eye FAQ
What are dry eyes?
The Tear Film and Ocular Surface Society’s recent DEWS II definition of dry eye essentially says that dry eye is a multifactorial disease of the ocular surface characterized by a loss of the normal functioning of the tear film. In the process, the tear film becomes unstable and hyperosmolar (think of it as being a bit saltier than it should be). Additionally, there is ocular surface inflammation, and the nerves may be affected as well. Pretty vague, right? Well, that’s because dry eye is hard to define. It’s considered a large umbrella term.
What is ocular surface disease?
The term “dry eye” does seem to have some connotation to the general public that it is only a minor nuisance—that it isn’t a big deal. But as thousands of people can attest to, that’s simply not the case and it can be vision threatening. Ocular surface disease may be a better term that gives a better sense of gravitas to the condition.
How do I know if I have dry eyes?
It can be difficult, if the correct questions are not asked. Symptoms can range from burning, tearing, blurry vision, fluctuating vision, foreign body sensation, itching, etc. We use a questionnaire in the clinic to help track patients’ subjective symptoms.
How can I have dry eyes if my eyes tear a lot?
There are two major sets of tears. One is the baseline tear, produced by the lacrimal glands, which you produce every time you blink. If this is inadequate for whatever reason (e.g. poor quality, poor blink rate or function), then a signal is sent through a neurosensory pathway to the other set of accessory aqueous glands to produce reflex tears. Think of this as the body’s way of trying to solve the problem, similar to the tearing you would experience if you were poked in the eye. Unfortunately, it is an all-or-nothing response so the eye becomes flooded with fluid.
How can I best treat dry eyes?
Artificial tears are what most people reach for first, because they are easily accessible over the counter. However, because dry eye is a chronic, progressive, inflammatory disorder, treating it with a chronic, anti-inflammatory eye drop makes more sense, at least as a primary treatment. Failing that, conservative treatment with omega-3 supplements and hot masks would be the next step. Lastly, procedures are reserved for patients who are still symptomatic or showing signs of more permanent damage to the meibomian glands.
Are get-the-red-out drops ok?
We are not big fans of the Visine get-the-red-out drops. The drops use vasoconstrictors to temporarily shrink the blood vessels but they tend to come back with a vengeance. Additionally, the time spent red-free starts to decrease, making the drop less effective. It is always better to treat the cause, instead of the symptom, so treating the dry eye properly would be preferable. If a temporizing measure must be used, we recommend Lumify drops instead.
How do I put in eye drops?
There are a few ways of putting in eye drops. Remember that you only need a small fraction of the drop to get into the eye so a little waste is expected!
How do I put in ointment?
It is important to be careful when applying ointment to the eyes. Otherwise, you could accidentally poke your eye with the tip of the tube!
How do I do warm compresses and lid massages?
The warm compresses need to give off heat at a certain temperature for at least 10-12 minutes to be effective. For this reason, warm washcloths are no match for something that gives off dry heat. There are a number of masks that exist, but this technique is good for many of them. Make sure to carefully read the instructions for your particular brand.
Are there any medications that cause dry eyes?
Oh boy, there are lots! Antihypertensives, antidepressants, antihistamines, reflux medications, pain medications, acne treatments, glaucoma eye drops and that’s just the tip of the iceberg! Many of these meds can be found secreted in the tear film as well.
I just turned fifty last year and my eyes are driving me crazy! What is going on?
For women especially this is a common problem because of menopause. As the estrogen levels decrease, we can see profound and rapid changes in the tear film. Treatment can restore normal function, but it can be very surprising for patients as it can have a rapid onset.
I was never told I had dry eyes but after having LASIK (or cataract surgery) my eyes have been dry ever since. What happened?
More than likely, you had some mild, relatively asymptomatic dry eye that decompensated after surgery. Sometimes, it is temporary and goes away within the first 3 months postoperatively. Other times the symptoms are more permanent. We usually give patients three months to recover on their own if they are symptomatic postoperatively unless the symptoms are too great. If it is still present after three months, we typically start treatment.
Where can I go to get treated?
At the Dry Eye Lounge, of course! If you are not able to see us due to distance, then we advise you to see a reputable ophthalmologist in the area who feels comfortable treating your condition.
Can you guarantee that the treatments will work and I can go back to normal?
We wish we could. While we have treatment algorithms and multiple therapies at our disposal, and the vast majority of patients do really well, there is no guarantee a given treatment will work on a patient. We reserve procedures for patients who don’t respond fully to medical therapy and continue to work our way up the rungs of the ladder depending on the level of severity and resistance to conventional therapy. What we can promise you is we will never give up on you and we will exhaust every effort to make you well again.
The only two things you need to know about PATH command line
Igor Irianto (@iggredible) · 2 min read
"I was a bit challenged when I was younger to stay on the right path" - Dwayne Johnson
Such wisdom. Not all paths lead to happiness. The wrong PATH will lead you to unhappiness. Here we will learn the right path and stay on it!
There are many things you can learn about PATH. I think the two most important are:
1. Finding your PATH
2. Updating your PATH
Finding your PATH
On a Mac, you can find your PATH from the command line by typing echo $PATH. Mine looks something like this:
echo $PATH
PATH is colon (:) separated and is read from left to right.
For example, if I execute node, my terminal first searches for a node executable in /Users/iggy/.nvm/versions/node/v10.15.1/bin, then /usr/local/bin, and so on. If node is not found anywhere, it returns command not found: node.
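To make the search order concrete, here is a small sketch. The PATH value below is illustrative (it mirrors the example entries above, yours will differ):

```shell
# A hypothetical PATH value; entries are searched left to right
# until a matching executable is found
EXAMPLE_PATH="/Users/iggy/.nvm/versions/node/v10.15.1/bin:/usr/local/bin:/usr/bin:/bin"

# Print one entry per line to visualize the search order
echo "$EXAMPLE_PATH" | tr ':' '\n'
```

The first line printed is the first directory the shell will look in.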
To find which path node currently uses, run which node. In my case, I see:
Note the similarity between one of my PATH entries and the node path:
# node path
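As a sketch, `which` scans PATH left to right and prints the first match; the POSIX-portable equivalent is `command -v`. Shown here with `ls`, which exists on every system, so you can substitute `node` on your own machine:

```shell
# Print the first executable named 'ls' found on PATH
which ls

# POSIX-portable alternative, built into the shell
command -v ls
```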
Updating your PATH
You can either prepend or append to your PATH:
export PATH="$HOME/new/path:$PATH"
This type of change is temporary; it disappears when the terminal is closed. To make it permanent, update PATH inside .bash_profile or .profile.
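Both forms can be sketched as follows (the directory name is illustrative):

```shell
# Prepend: the new directory is searched FIRST, so its executables
# win over same-named executables elsewhere on PATH
export PATH="$HOME/new/path:$PATH"

# Append: the new directory is searched LAST, used only as a fallback
export PATH="$PATH:$HOME/new/path"

# To persist the change, the same export line would go into
# ~/.bash_profile (or ~/.profile) so every new shell picks it up.
```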
Application: let's hack a path!
Suppose you are an evil person and want to modify your coworker's node command so that when they run node, they run your script instead. All you need to do is prepend your own path, so that when they run node, the shell executes your node executable first. Here is how you can do it:
Create a for-fun directory, and inside it create a file named node. Make sure to add #!/bin/bash (a shebang) on the first line:
Save, then grant execute permission with chmod +x ./node. The shebang and the execute permission are both required so the file can be run as node directly instead of ./node.
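Putting those steps together might look like this (the ~/for-fun directory and the printed message are illustrative assumptions):

```shell
# Create the directory and the fake 'node' script inside it
mkdir -p ~/for-fun
cat > ~/for-fun/node <<'EOF'
#!/bin/bash
echo "Haha, you've been hacked!"
EOF

# The execute bit (plus the shebang) lets the file run as a command
chmod +x ~/for-fun/node
```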
Prepend path:
(replace /Users/iggy/for-fun with whatever path you used; run pwd if you're not sure where you are)
Check your newly updated PATH (echo $PATH) to make sure the prepended path is the first one displayed. Also check your node path (which node); you should see the updated path.
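Prepending and verifying can be sketched like this (the ~/for-fun directory is assumed from the step above):

```shell
# Prepend the hack directory so it is searched before everything else
export PATH="$HOME/for-fun:$PATH"

# The first colon-separated entry should now be the new directory
echo "$PATH" | cut -d: -f1

# 'which node' would now resolve to ~/for-fun/node (if the script exists)
```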
Cool! Next time someone runs node, they'll see:
That's all folks. Happy hacking!!
Posted on Aug 5 '19 by Igor Irianto
Hill's Science Diet can help you to drop pounds.
What it does is support the metabolic rate necessary to burn more calories and develop muscle. Body fat percentage is reduced through the body's own growth hormones, with no unwanted effects.
To start losing weight with this approach, you need to do the dieting as well. It requires you to consume adequate protein and calories and to make sure you get enough nutrition.
Bulking up on junk food will not help you lose pounds. It is important to cut these things out as much as possible. Instead, you ought to start eating the foods your body needs. The objective is to get your body to burn more energy by adding an aerobic workout.
If you're looking for healthy foods, opt for those that are full of nutrients and high in protein. These are the meals that will give you the energy you require for this kind of workout.
It is crucial that you stay active while you maintain your calorie intake. This will allow you to lose excess weight without being hungry all the time. You will need to pick one kind of activity to focus on and stay with it.
The best way to do this is on a weekly basis. Do not jump back and forth between two different types of physical workout in a single day.
It is a known fact that when you try to reduce your weight, you will do whatever you can to do so. This means you may want to eat whatever you like, but it is advisable to stick to a daily diet regimen.
The most important thing to keep in mind is the nutrition you take in. If you are currently lacking nutrients, now is the time to correct that.
While it might seem impossible, the body can do the workout naturally without the use of equipment. If you need to speed things up, however, your metabolism will not be at its fastest.
By raising the number of calories you burn during your workouts, you will see a decrease in body fat percentage. Keep in mind that it is impossible to raise the number of calories you burn beyond a certain point.
The only way to increase the number of calories you burn is to learn how to burn them: begin with a reasonable calorie intake and then gradually increase the amount you consume.
It is best to know the ingredients in the nutrition products you use so you can be informed about what is safe to use. Use caution when dealing with all nutrition products, and be smart. Always make sure you understand the ingredients and product labels before using them.
How did the ancestors of the Slavs greet each other?
The ritual of greeting carried significant meaning. From the form of a greeting one could tell whether the other person was respected or not, and discern the gender and social status of the person being greeted. Much that is mysterious and interesting is bound up in this custom. For the Slavs, past and present, the essentials are clear, but some things are worth telling. The basic, core element is the wish of health to one's companion. Take, for example, the most famous greeting, "Goy thou" ("goy esi"). It is a Slavic wish of health. Everyone remembers the epic line, "Goy thou, my good fellow!"
It is from the epics that this expression has come down to us. That the word "hello" is a wish of health hardly needs explaining. The same wish of health can be heard in greetings such as "Zdorovenki buly" and many others. Wishing health to a companion is a sign of good manners and respect. To welcome a house and all its inhabitants, one said "Peace to your home!" This likely goes back to the ritual of greeting the house spirit (the Domovoi) and Chur. The phrase "Peace to your home" probably meant a greeting to the Domovoi, who not only kept order in hearth and home but was also a late incarnation of the god Rod. The transformation of Rod into the ancestors and then into the Domovoi was not quick: Rod began to be forgotten in the 10th century, and in later centuries people worshipped the Rozhanitsy, but the cult of the ancestor remained. Remember the expression used on finding an ownerless thing: "Chur, it's mine!" It is an ancient call to Rod to witness the find. The Slavs greeted not only one another but also the gods; hence the hypothesis that the self-name of the Slavs derives from the word "slava" (glory). The Slavs not only praised the gods but always treated the natural world correctly and politely. The epics preserve scenes in which the heroes greet a field, a forest, a river. As mentioned above, the Slavs believed the world was alive, and one should say hello to every living soul. Have you ever wondered why in villages people still greet everyone, even strangers, even children? A Slav might not tell a stranger his true name, but he had to greet him. This goes back to the belief that if you wished a person health, he would wish you well in return. Accordingly, even people previously unacquainted became psychologically closer, and this closeness formed, as it were, a protective circle, so that nothing bad was expected from the stranger any longer.
Respected people in the community were always greeted with a low bow to the ground. Friends and acquaintances were greeted with a bow from the waist. Strangers could be greeted in different ways, but most often the hand was brought to the heart and then lowered, a simplified version of the first two forms; in the first two cases as well, the hand was brought to the heart to show heartfelt intentions. A stranger could also be acknowledged with a simple nod. Characteristically, the movement in these greetings goes not toward the sun, as some modern Rodnovers try to interpret it, but toward the ground, which is more than logical given that the Slavs revered the earth as divine. Also significant for this question is that Christian clergy called the pagan Slavs "idolaters": bowing to an idol was an expression of welcome and respect, and in the Slavic worldview the idols represented the dead ancestors. There is no written source describing a movement from the heart toward the sky as a greeting.
A greeting served to open a conversation. What does one wish in return, one's own health or another's (recall the example of "Goy thou")? Today the greeting serves strictly as a mark of identity. Take, for instance, the ritual handshake that grips not the hand but the wrist. Among Rodnovers it is not just a typical greeting but a marker of identity. It is prized for the antiquity of its use, as a way of checking whether a weapon was hidden in the sleeve. The esoteric meaning of this form of greeting is that contact at the wrist transmits the pulse, and thus the other person's biorhythm; the greeting, as it were, reads the other person's code. Today one can find many greetings such as "Glory to Rod," "Good day," and combinations of the above. Rodnovers today still wish health and prosperity as of old. All these word forms of greeting convey warmth and participation in the life of another person. It is gratifying that such a variety of greetings, though partly forgotten, has still come down to our days and even evolved a little!
St. Sofia church in Ohrid
There is no precise historical data about the construction of the church. According to some, the present church was built on an older cult site. There are opinions that an older church probably existed in the time of Tsar Samuel, but that for an unknown reason it was ruined, and the later church of St. Sofia was built during the time of Archbishop Lav (Leo) I (1035-56).
The church of St. Sofia is one of the largest medieval churches in Macedonia and was for a long time the cathedral church of the Ohrid Archbishopric.
During the Turkish rule, the church St. Sofia was transformed into a mosque. The act of transformation of the church into a mosque was likely due to the rebellion of Archbishop Dorotej (Dorotheus) against the Turkish authorities.
After the transformation of the church of Saint Sophia into a mosque, its exterior and interior were changed and adapted to Muslim worship, and the iconostasis completely lost its original appearance.
At the north side of the outer porch a minaret was built, which was destroyed in 1912.
At the north side of the church, sometime in the nineteenth century, the Turks built an open porch using stone columns from other ruined buildings.
Later additions, repairs, the test of time, and the consequences of earthquakes left the church of Saint Sofia in a very unenviable state.
After the Second World War and the liberation of Macedonia, conservation works on the architecture and paintings were undertaken immediately, making the church of Saint Sofia a representative monument that can proudly be shown to the whole world.
With the extensive conservation works carried out in 1950-56, the paintings became visible, revealing remarkable frescoes that represent great achievements of medieval painting in Macedonia and the world.
In the church of Saint Sofia, paintings from several periods are preserved: the XI, XII and XIV centuries.
Foods That Clean Teeth as You Eat
There are foods that are bad for your teeth and foods that are good for them, but did you know that some foods actually clean your teeth as you eat them? Here are a few.
Carrots
Similar to apples, carrots are full of fiber and clean teeth by scrubbing away plaque as you eat. Carrots also stimulate saliva production, which naturally cleans teeth. In addition to cleaning teeth, carrots contain multiple B vitamins, which fight gingivitis!
Leafy Greens
Leafy greens like kale and spinach are high in fiber and low in calories, which makes them awesome vegetables for teeth! Like apples and carrots, the fiber content in leafy greens helps by scrubbing away food debris and plaque while you eat them. But, did you know that kale also contains calcium and B vitamins? Calcium strengthens teeth, and B vitamins help treat and prevent gingivitis, often called gum disease. Try adding some shredded kale or spinach to your child’s sandwich instead of lettuce! Or, make kale wraps with carrots for some powerful tooth-cleaning action!
Cheese
Cheese doesn't clean as you eat, but it prevents other foods from hurting your teeth as you eat it, so we thought we should mention it here. Cheese is high in calcium, which promotes strong teeth. But the benefits of cheese don't end there. It also contains a protein called casein, which strengthens tooth enamel and helps prevent cavities. Cheese also helps prevent acid from destroying tooth enamel. Try adding a couple of slices of cheese to your child's lunch every day to give them more calcium and casein.
Visit our Office
Recognizing Depression
Everybody gets the blues once in a while. It's normal to feel sad on a rainy day, get sentimental over a lost love, or feel terribly lonely during really low moments of your life.
But once depression gets out of hand, it can wreak havoc on your mental state and drive you to such emotional lows that you might seriously consider ending your life. So if you think you're experiencing extreme emotional lows, then you'd better do something about it.
What are the signs of depression?
1. Feeling sad without any apparent reason.
2. Thinking that your life is getting nowhere.
3. Feeling that whatever you do is not enough.
4. Feeling that you're not good enough for anything.
5. Always feeling tired.
6. Feeling that you don't deserve to live in this world anymore.
These are some of the most common symptoms of depression. Recognizing these telltale signs can help lead you to take action before it becomes more serious. Knowing the root cause of these symptoms further boosts the chance of recovery.
Whatever the reason behind depression, it is always related to your state of mind, environment, and/or present circumstances. You may feel low if you are facing issues at work, in your marriage, or with your finances. Resolving these issues, however important, will inevitably produce stress and/or physical aches. Emotional pain coupled with physical ills can really affect the way you view your life.
Another cause of depression is bad experiences: the death of someone important, loss of something significant, or similar unpleasant experiences that would haunt you for a long time. This could mean a humiliating event at your workplace or school, traumatic environment at home, etc.
The best way to treat depression is to think positively. Thinking negatively about an already gloomy situation will only aggravate your mental state. It's not the end of the world, and there's a solution to every problem, yours included. Moping and sulking about it won't do any good.
Unfortunately, not all people see it that way. This is when depression starts to settle in. You think you're the unluckiest person alive. No one is there when you need help the most. It seems better to die than suffer all the injustice being delivered to you.
Going to a psychiatrist to ask for help is one step toward finding the cure for depression. Various drugs can help you cope. However, these medications treat not the actual cause of depression, but only the symptoms. Complete recovery rests solely on your ability to have a positive outlook on life. Admittedly, this is easier said than done, so going to a psychiatrist doesn't hurt.
New model predicts Painted Lady butterfly migrations based on breeding sites data
Researchers from the Institute of Evolutionary Biology (IBE) have developed a model that allows predicting the migratory movements of the Painted Lady butterfly between Europe and Africa based on data from breeding sites. The study confirms that populations need to continuously migrate to other latitudes to secure the best conditions for the immatures to survive. Based on climatic data from 36 years, and the location of 646 breeding sites in 30 countries, the model reveals for the first time where the species might overwinter after their trip to tropical Africa. This new approach could be used to study potential effects of climate change in migratory insects.
Painted lady caterpillar (credit: Gerard Talavera).
Researchers from the Institute of Evolutionary Biology (IBE), a joint research institute of the Spanish National Research Council (CSIC) and Pompeu Fabra University (UPF) in Barcelona, Spain, and from the University of Grenoble-Alpes in France, have developed a method for predicting where populations of the migratory Painted Lady butterfly (Vanessa cardui) distribute along the year and across their Europe-Africa migratory range. Their findings are published today in the journal Proceedings of the Royal Society B.
In a previously published study, the researchers demonstrated that Painted Lady butterflies migrate from Europe to tropical Africa by the end of summer, crossing the Mediterranean Sea and Sahara Desert. In a follow-up study, the researchers showed that the offspring of these migrants reverse their migration towards Europe in spring. Thus, the Painted Lady butterfly travels 15,000 km between Africa and Europe through multiple generations to seasonally exploit resources and favourable climates in both continents. “The challenge now is to understand how migratory species are able to optimize time and space as to properly find the environmental requirements that each generation need for their survival” states Gerard Talavera, the leading author, postdoctoral researcher at IBE and a National Geographic Explorer.
The key is to find the caterpillars
Migratory insects are in continuous movement, and it is difficult to track from where to where they migrate. One of the main reasons species migrate is to find the optimal environmental conditions to raise a new generation. The immatures (eggs, caterpillars and cocoons) are key stages in the butterfly life cycle which, unlike the adults, cannot escape from adverse situations. Thus, their breeding habitat is a very good indicator of the specific requirements the species needs to survive. The present study gathered information on 646 breeding occurrences of Painted Lady butterflies in 30 countries. Using 35 years of monthly climatic data, the researchers have built a model that defines the breeding requirements of the species and produced a map of the most probable areas for the species to breed each month. "We thought that we could learn about the movements of the adults by looking at where the caterpillars grow at different times of the year," says Mattia Menchetti, member of the research team. "If we can map in space and time the sites where they breed along the year, then we can understand from where to where the adults can migrate."
The species rely on their reproductive success in both continents: Africa and Europe
The model shows that the species is forced to permanently move across its overall range, since suitable breeding habitat is rarely permanent all the year. “Because the species breeds continuously for the entire year, its reproductive success relies on both continents. The results show the relevance of the sub-Saharan winter population stock in sustaining the migrations of the species into Europe”, says Talavera.
However, the situation could eventually revert if the overall permanent suitable extent grows substantially in the future, as a consequence of global warming. “We cannot discard that the impact of rapid climate change may affect the butterfly migratory phenomena in unpredictable ways, as has already been shown to happen in migratory birds”, adds Talavera.
The overwintering missing generations might be near the equator
Even if it has been proved that most populations of the Painted Lady butterfly spend the winters in the sub-Sahara, many of the precise localities are still unknown. Thanks to this new modeling approach, the researchers have identified the potential niche requirements of the species during the winter in Africa, and thus the sites where these could aggregate to breed. According to the results of the study, butterflies could locate near the equatorial latitudes between December and February. This scenario confirms that the overall migratory circuit undertaken by the annual successive generations might encompass up to 15,000 km, from the equator (e.g. Kenyan and Cameroonian highlands) to northern Scandinavia.
A global project
The findings published in Proceedings of the Royal Society B are part of a wider project aimed at studying the Painted Lady's migratory behaviour and routes worldwide. With that goal in mind, the team leads a long-term global citizen science project called The Worldwide Painted Lady Migration, which invites citizens from all over the world to communicate observations of the Painted Lady butterfly. More information on this project is available here:
This research was funded by the National Geographic Society, the British Ecological Society, and the Fundació Barcelona Zoo.
FULL CITATION: Menchetti, M.; Guéguen, M.; Talavera, G. (2018) Spatio-temporal niche modelling of multigenerational insect migrations. Proceedings of the Royal Society B. DOI:
What Is a Lease Extension?
A lease extension refers to a legal agreement that extends the term of an existing lease or rental agreement. Extensions are not a requirement in a business relationship but are often granted just before an original agreement is set to expire. They are common in relationships between landlords and tenants of commercial and residential property, or between parties who lease vehicles, machinery, plants, and equipment.
Key Takeaways
• The lease extension should name the parties involved, the dates on which the extension begins and ends, and should reference the earlier agreement being extended.
• Lease extensions are common in landlord-tenant relationships, or for the use of vehicles, equipment, machinery, and/or plants.
How Lease Extensions Work
A lease is a contract that requires the lessee, or the user, to pay the lessor, or owner, for the use of an asset for a specified period of time. Leases are common for rental properties or for the use of equipment, vehicles, or machinery and plants. When the asset being rented is tangible property, it may also be referred to as a rental agreement.
When a lease expires, both the lessor and the lessee have a few options available. The lessee can vacate or give up access to the property, or the two parties can agree to a lease renewal. This option may require some renegotiation of the terms of the new lease. The final option is to extend the lease. The terms of the original lease are normally still in force, but the time-frame for an extension tends to be shorter. So in the case of a residential rental property, the landlord may keep some of the original lease terms like the rental amount due, but extend the period of tenancy for the lessee.
The lease extension is a formal document that must include certain details. It should name all the parties involved in the agreement, as well as the dates on which the extension begins and ends. The extension document should also reference the earlier agreement being extended. Some lease extensions—especially in real estate—are granted automatically. They may specify a certain length of time for the extension or may allow for the use of the property on a month-to-month basis.
Special Considerations
Lease extensions are an important part of the lessor-lessee relationship as they reduce the risk involved for each party. For example, a landlord who agrees to a lease extension can keep the original lease terms intact including any provisions about notices to vacate. This means the tenant has to provide prior written notice before vacating the property. The landlord can rest assured there are no surprises, and won't have to risk an empty unit. Similarly, a lease extension can give tenants some stability. With a proper extension in place, tenants won't have to give up their units after the lease expires.
Although not a requirement, lease extensions reduce the risk involved for both the lessor and the lessee.
Businesses enter lease agreements and agree to lease extension agreements for a variety of reasons. The primary reason for leasing an asset rather than buying it is risk management. A business may decide to lease a parcel of land so that it is protected from the risk of fluctuations in land prices. This allows the business to focus on its core competency rather than real estate.
Another reason for leasing is to simplify disposal. A construction company, for instance, may decide to lease a piece of heavy equipment, rather than buy it, so that it does not have to deal with selling the equipment after it is no longer needed. A lessee may pay more per hour in order to use the equipment, but this can be worthwhile if it saves time and the cost of selling the equipment at a later date.
Examples of Lease Extensions
A lease extension may be executed between a landlord and a tenant. In this case, if both parties choose to continue the tenancy, the landlord may issue a lease extension when the original lease is set to expire.
Lease extensions may also be granted to lessees by car dealerships. Let's assume a consumer leases a car for four years. After that period, the lessee may decide to buy or begin another lease for a brand new car. The dealership may grant an extension of the original lease if the new replacement vehicle is not yet available.
What are lumen and watt?
Until recently the power of a light was defined by its wattage; nowadays, however, you will see that some lighting packages also show lumens. The difference is that the lumen is a unit for the amount of light (brightness) produced by a light source, whereas the watt is a unit measuring the amount of energy required to operate the light. Lumens are actually a better way of knowing how bright a bulb really is, as they indicate the light intensity and output of that specific light, while watts only indicate its energy consumption. The higher the lumen output, the brighter the light; this is why lumens are now at the forefront of bulb and light packaging when it comes to indicating brightness.
Watts cannot be converted directly to lumens; they are two completely different measuring units. It used to be assumed that a higher wattage meant a certain amount of light, but it is now better understood that even low-wattage lights can produce a high light output. For this reason, it is best to look at the lumen output of a light source rather than the wattage.
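The commonly cited figures below (an assumption for illustration, not from this article) make the point concrete: an incandescent bulb and an LED can emit roughly the same lumens at very different wattages, so watts alone say little about brightness.

```shell
# ~800 lumens from a typical 60 W incandescent vs. a 10 W LED
# (rough textbook figures, used here only for illustration)
awk 'BEGIN {
  printf "incandescent: 800 lm / 60 W = %.1f lm per W\n", 800 / 60
  printf "LED:          800 lm / 10 W = %.1f lm per W\n", 800 / 10
}'
```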
If you require assistance or have questions, please contact us.
Contact us via:
We are available during the following opening hours:
• Monday - Friday: 8am - 5pm
By Jenn Gidman, Newser Staff
Posted Mar 18, 2019 8:18 AM CDT
(Newser) – A dead whale washed up on a Philippine beach was distressing enough, but museum workers have deemed what they found inside the creature's gut "disgusting." The BBC reports that the Cuvier's beaked whale turned up on the shoreline east of Davao City on Saturday, and when workers from the D'Bone Collector Museum opened up the whale's belly, they found "the most plastic we have ever seen in a whale," the educational NGO said in a Facebook post. Although they're still working on documenting all the contents of the whale's belly, workers say they've so far extracted 88 pounds of plastic, including "16 rice sacks, four banana plantation-style bags, and multiple shopping bags." CNN Philippines reports the cause of death for the whale—which the New York Times notes was 15 feet long and 1,100 pounds—as dehydration and starvation due to plastic ingestion.
Starvation comes about because eating plastic makes the whales feel full, which leads them to eat less and not get the nutrients they need. Marine biologist Darrell Blatchley explains that the reason dehydration set in as well is because whales hydrate from the water in their food sources, not from drinking it. "I was not prepared for the amount of plastic," he tells CNN, adding to the Times, "The plastic in some areas was so compact it was almost becoming calcified, almost like a solid brick." The BBC notes that, based on reports from environmental groups, countries in Asia are to blame for much of the pollution in the world's oceans, with China, Indonesia, the Philippines, Vietnam, and Thailand listed as the prime offenders. "This cannot continue," Blatchley tells CNN. "The Philippines needs to change from the children up or nothing will be left." (Read more whales stories.)
Oregon weather watch: Hailstones, earthquakes and landslides
Landslides, earthquakes and hailstones.
All can be extremely damaging in their own right.
The 12-minute video examines the earthquake and "major gaps in current U.S. earthquake preparedness."
The Oso landslide, which killed 36 people last month in Washington state, has been animated in a video by the U.S. Geological Survey. Seven people are still missing.
Hailstones—some as large as tennis balls—
Hail is an infrequent visitor to the Pacific Northwest due to the lack of thunderstorms, but there are at least a few instances in Oregon that had a lasting impact.
According to "The Oregon Weather Book," on July 24, 1991, a thunderstorm rumbled through Deschutes County, dumping 4.5 inches of rain in an hour near La Pine.
Hail from the storm broke windshields, dented cars and damaged roofs and satellite dishes around the county. In some places, hail piled up to 8 inches deep.
In Heppner on June 14, 1903, heavy rain and flash floods killed more than 200 people. Hailstones up to 1¼ inches piled so high that crews recovering bodies found some still perfectly preserved in large drifts of hail.
-- Stuart Tomlinson
What prevents people from healing today?
There are too many self-imposed and societal barriers to health:
1. Stress
2. Disparities
3. Access
4. Cost
5. Knowledge Gap
6. Regulation
7. Wait times
8. Pollution
9. Plastics
10. Chemicals
11. The mindset that the magic pill will cure me – I can’t heal myself
The modern medical model focuses on acute symptom relief. This is limited to focusing on the illness, not the cause. This type of care is fragmented and does not optimally serve the individual’s best interest when you can only heal or address one body part at a time. The body is viewed in parts, not as a whole.
The medical approach is shifting its focus toward prevention, with care coordinated by a team of health care professionals who communicate with each other to obtain the best available result for the patient. This model involves sitting down and listening to the patient.
According to Dr. Jerry Mysiw, if you listen long enough, the patient will give you a diagnosis. Physicians are relying heavily on metrics that limit what they can do to support a patient. This new model of teamwork, communication, collaboration, and treating the person as a human being instead of a number was around hundreds of years ago. Sages, doctors, medicine women would use roots to heal the body. The body responds best to natural substances, not chemicals or sulfur medicines with so many adverse side effects. We are now going back to plant-based medicine. As the philosophers say, if you wait long enough you will see it happen again.
Coordinated, long-term care increases life expectancy and improves patients' health, and patients make fewer trips back to the hospital. This kind of care draws on:
1. Community
2. Peer support
3. Resources and policy
Such patients are more involved in their own care, which helps them feel empowered as they become proactive in improving outcomes.
Being heard and seen is one of the most important things in health care.
Working Together to Find the Answer
I had the opportunity to work with a motor vehicle accident client who had been in a minor collision. He presented with a concussion and whiplash. After working with him a couple of times, I referred him to a medical doctor who does trigger point injections. When he returned to my office after the injections, his muscles were becoming more hypertonic, and his symptoms pointed to something bigger than a concussion, yet he was being judged a malingerer and chronic complainer.
I asked him to request an MRI from this doctor, which he did. It turned out a tumor was present in his sinus cavity, causing many of his symptoms: pain on the left side of his temple, chronic headaches, and visual pain. Taking the time to listen is the secret to discovering what lies beneath the pain.
Does your health care team support you? If not, it's time to find a doctor who supports your need for prevention and longevity.
For more information on how you can hire Simone as a consultant to help build your health care team, email info@simonefortier.com or call (403) 422-0881.
Facebook: Fascia Training Institute
Instagram: fasciatraininginstitute or simonefortier
Twitter: @simonefortier
LinkedIn: Simone Fortier
MIT Technology Review
A crypto project to make internet names censorship-proof is now live
The Handshake Network, an ambitious public blockchain project whose developers want to reinvent how internet domain names are assigned, has finally launched its main network, more than a year after the project was first revealed.
Meet the DNS: When you enter a website name into your browser, you make a request of a network of computers called the domain name system, or DNS, which keeps track of all the names on the internet. The DNS converts the text name you entered into a string of numbers called an IP address. This number lets your browser locate and connect to the server for the website you are trying to visit.
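That translation step is visible from any language's standard library. The minimal Python sketch below asks the system's configured DNS resolver to map a name to an IPv4 address (it uses `localhost`, which resolves locally, so it works even without network access):

```python
import socket

def resolve(name: str) -> str:
    """Ask the system's configured DNS resolver for the IPv4 address of a name."""
    return socket.gethostbyname(name)

# "localhost" is answered from the local hosts file, with no network round-trip.
print(resolve("localhost"))  # typically 127.0.0.1
```

For a real site, the same call walks the DNS hierarchy described below, starting from the root.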
ICANN haz centralized authority: The DNS is a hierarchical global network, and at the top of the hierarchy is the so-called DNS root. A Los Angeles–based nonprofit called the Internet Corporation for Assigned Names and Numbers (ICANN) oversees the DNS root and is responsible for allocating new “top-level domains,” which include .com, .org, .net, and most two-letter country codes. Internet freedom advocates have argued that relying on a single organization to do this makes the internet more vulnerable to censorship and hacking. Governments have been known to use the DNS to block access to certain sites. The Handshake Network’s backers say control over the root can be decentralized, using a blockchain.
Bitcoin-esque: Handshake’s network will be similar to the Bitcoin network. Computers will compete to add new transactions to the blockchain and earn cryptocurrency. But the blockchain will also keep track of registered domain names, and the top 100,000 of the internet’s most popular names are already in the chain. If a name isn’t on the blockchain, the software will redirect your request to regular DNS servers, Steven McKie, a developer and investor in the project, told me in June.
I’d like to try. How do I do it? It’s possible to change your computer’s DNS settings to point to a publicly available Handshake name resolver and start using it to look up names today. You won’t need cryptocurrency or any special software to do that. You can also participate more directly in the network by installing and running the Handshake software, a “light” version of which can be embedded in your browser. To register a name, you’ll need to participate in an online auction for it using the network’s cryptocurrency, called HNS, which you can buy here.
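Under the hood, "pointing your lookups at a different resolver" just means sending ordinary DNS query packets to a different server's IP address; your computer neither knows nor cares whether the answers ultimately come from ICANN's root or from a blockchain. As a rough sketch (the resolver address below is a documentation placeholder, not a real Handshake endpoint), a minimal DNS query can be built by hand:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record, class IN)."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each dot-separated label is length-prefixed, ending in a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    )
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)
    return header + question

# To use a Handshake-aware resolver instead of your system default, you would
# send this packet over UDP port 53 to that resolver's address, e.g.:
#
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(build_dns_query("example.com"), ("203.0.113.1", 53))  # placeholder IP
#
print(len(build_dns_query("example.com")))  # 29 bytes: 12-byte header + 17-byte question
```

Changing your DNS settings, as described above, simply makes the operating system do this addressing for every lookup.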
Now that it’s built, will they come? To work, Handshake will need to build up a large community of “miners” willing to run the software in pursuit of new coins, entice developers to build applications on top of the network, and convince regular users to switch from the traditional DNS. Can the project succeed where a number of other blockchain-based internet naming systems have already failed? Now that it’s finally in the wild, we’re going to find out.
50 Summer Health Dangers You’re Probably Ignoring
Don't dismiss these very real, often overlooked summer dangers.
A sunburn on your eyeballs
Yes, you read that right. It’s called photokeratitis, and it can happen in just a few hours of exposure to strong, unblocked sun. If you don’t wear sunglasses regularly, you’re also putting yourself at an increased risk for cataracts, macular degeneration, and growths on your eye. According to the Vision Council, people are very lax about eye protection: A recent survey found that only 31 percent wear UV-protective sunglasses every time they go outside and just 44 percent wear them at the beach. This type of damage is “cumulative and irreversible.” So consistently wear sunglasses with UVA and UVB protection to avoid a problem that’s a lot bigger than crow’s feet. Check out these 39 simple habits that protect your eyes over the summer and all year long.
E. coli at your local beach
We’ve all heard about E. coli popping up in summertime food…but in the sand on the beach? Researchers at the University of Hawaii say that being exposed to fecal contamination and its associated bacteria on even the most beautiful sandy shore is a real risk. They found that fecal bacterial levels in the sand were 10 to 100 times higher than in the surrounding water. This may be because bacteria decay at a slower rate in the sand than in seawater, so the bugs accumulate in “biofilms” and in areas that the sun can’t reach. To avoid potential infection, make sure to cover any cuts and also wash your hands frequently.
Brain-eating amoebas
Think you’re safe from being eaten alive because you’re not heading into shark-infested seas? Think again. Meet the Naegleria fowleri, also delightfully known as the brain-eating amoeba. It loves warm freshwater sources, such as lakes, rivers, and hot springs. According to the CDC, from July through September, the southern-tier states are the most at risk, with about half of the cases popping up in Texas and Florida. Limit possible exposure by keeping your head above water when swimming and avoiding stirring up sediment on the ground.
Carcinogens from your BBQ
Nothing says “summer” like a good, old-fashioned barbecue. But the good, old-fashioned way of grilling over an open flame can create two carcinogens—heterocyclic amine (HCA) and polycyclic aromatic hydrocarbons (PAHs). According to a study, if you eat charred meat frequently, your risk of pancreatic cancer can jump by 60 percent; postmenopausal women are also at an increased risk of developing breast cancer. To reduce these risks, use a spicy or alcohol-based marinade; studies have shown that both decrease carcinogen creation. You should also cook your food for a longer time at a lower temperature: HCAs start to form when the grill hits 325 degrees.
Walking barefoot in the park (or anywhere else)
There’s nothing like feeling the grass on your feet…until you accidentally step on something sharp. Puncture wounds are common in summer, and stepping on a rusty nail or another sharp object will require a tetanus shot within 48 hours if you haven’t had one in the past five years. For people with diabetes and other nerve damage to the feet, things could be even worse: “If they step on something sharp that breaks the skin without feeling it, that injury could introduce an infection that threatens the viability of their toes, foot, or even lower leg,” says Pat Salber, MD, a board-certified internist and emergency physician and the founder of the website The Doctor Weighs In. “If you have a foot neuropathy, never go barefoot, and wear shoes with firm soles. It is far better than risking amputation due to an infection related to a ‘silent’ injury.” Here are 9 other things that diabetics should watch out for this summer.
Creeping eruptions
Stepping in poop with your bare feet can be even more disgusting than you think. If you come into contact with hookworm-infested animal excrement, you can develop something called a creeping eruption, which causes an itchy, threadlike rash. Children are more at risk for this than adults since they tend to go barefoot outside more frequently and venture into places adults would stay away from. A course of antiparasitic medication will clear things up, but get to the doctor fast since it spreads quickly.
The “July effect” in hospitals
This one has nothing to do with the sun, sand, or surf, but it’s an equally terrifying summer health danger. According to a study published in the Journal of General Internal Medicine, there is a 10 percent increase in deaths at teaching hospitals every July. That’s when new doctors-in-training start their residencies and, as a result, are more apt to make mistakes—namely when it comes to prescribing and administering medication. Be a strong advocate for yourself, enlist a family member or friend to help, and never be afraid to speak up if something seems off. Here are 50 other secrets hospitals don’t want to tell you.
Post-surgery infections
Even if your doctors are top-notch and your surgery goes well, you’re at an increased risk for surgical-site infections in the summer months. The journal Infection Control & Hospital Epidemiology reports that when the thermometer rises above 90 degrees Fahrenheit, a patient’s odds of being hospitalized for this post-op complication rise by 28.9 percent compared to when temperatures are under 40 degrees. While patients should always be vigilant about wound care, they should be especially aware that this could be an issue in the warmer weather and seek medical attention at the first sign of a possible infection.
Going down a slide with your child
Young toddlers at great heights are enough to give you heart palpitations, but resist the urge to climb up there and slide down with them. Why? Because your little one might end up with a broken leg. According to information gathered from ERs nationwide from 2002 to 2015, the most common injuries on slides occurred with children between 12 and 23 months, with 36 percent of them suffering lower-leg fractures. In these cases, a child’s leg likely got caught on the edge of the slide, and the adult’s added weight likely made them go faster and twist that leg dangerously. If you still want to do some tandem-sliding, keep your little one’s extremities secure and away from the slide’s sides.
Deadly heat-related illnesses
If you dismiss the early signs of heat exhaustion and heat stroke as the normal effects of a hot day, you could be putting your life on the line. The National Weather Service reports that extreme heat kills 175 people in the United States every year. Warning signs you shouldn’t ignore include dizziness, headache, nausea, fatigue, and sweating. That said, when your body temperature reaches 104 degrees in the throes of heat stroke, you will actually stop sweating. Another caution from Everyday Health: Certain medications could make you more vulnerable to the heat, including antihistamines, blood pressure and heart medications, diuretics, laxatives, antidepressants, and seizure medications. Be safe by staying cool, staying hydrated, and listening to your body. Check out these tricks to cool down in the summer.
Driving without sunscreen
Getting behind the wheel without protecting your skin can be just as hazardous as baking on the beach. According to the Skin Cancer Foundation, Americans incur more photodamage on the left sides of their faces—aka the driver’s side—than the right. Glass blocks UVB rays and windshields are specially treated to block UVA rays, but side and rear windows still let UVA through. That’s why it’s essential to wear sunscreen every day, without fail. “Both UVA and UVB rays contribute to the development of skin cancer, but because UVA penetrates deeper, it is the larger contributor to wrinkles and sagging skin,” says Anne Chapas, MD, the founder and medical director of Union Square Laser Dermatology. “Also don’t forget reflection off water and sidewalks. Patients are always encountering potential UV damage even when they don’t realize it.”
Poison ivy, oak, and sumac
These itch-inducing plants have been around forever, but in recent years, they’ve become more potent and more widespread. This may be a result of warmer temperatures and rising carbon dioxide levels due to climate change. To protect yourself, know what these problematic plants look like: Poison ivy and oak often have branches with three leaves, while poison sumac can have clusters of seven to 13 leaves, as well as black spots that look like paint splatters. And if you’ve touched them? The U.S. Forest Service suggests cleaning the affected area with rubbing alcohol within 10 minutes, as well as washing any clothes or tools that have come in contact with them since the plants’ urushiol oil can stay on surfaces. Check out these 10 poison-ivy treatments you’ll be thankful to know.
Wild parsnip
Wild parsnip
It sounds innocuous enough, but this plant with yellow flowers that resemble wildflowers or celery leaves can cause serious damage to your body. Wild parsnip contains something called psoralen, and when touched and exposed to sunlight, it can cause a rash, blisters, and burning, scalding pain. Even when the blisters heal, dark red or brownish discoloration in those spots can linger for months. You’re most likely to encounter these problematic plants between May and July.
Giant hogweed
Beware of pretty plants. This weed, which can grow up to 14 feet tall, is another poisonous plant you should have on your radar, especially if you live in New England, the Northwest, and the Mid-Atlantic. With delicate white flowers and green stems sporting red or purple spots, hogweed can cause burning, blistering, and long-term photosensitivity, similar to what happens with wild parsnip—but it could be so severe that it might necessitate a skin graft. Plus, if hogweed’s sap gets into your eyes, it could cause blindness.
Wearing flip-flops
The ubiquitous summer sandal should come with a warning label. According to various studies, flip-flops can be bad for your feet, joints, and muscles. Since they usually don’t have any arch support, they can change the way you walk, causing you to take shorter steps and putting more stress on your body. People who are overweight are at an increased risk of developing health problems, including the painful foot condition plantar fasciitis. And the risks extend beyond your own body, believe it or not: According to one U.K. survey, flip-flops may contribute to more than a million car accidents each year. How? They could make it difficult for a driver to brake quickly and efficiently, something confirmed by simulator tests. Don’t miss these other scary reasons you should never wear flip-flops.
Drowning
While drowning can occur year-round, it only makes sense that it’s more common when people frequent pools and beaches. According to the CDC, around 4,000 people in the United States drown each year, and the two biggest at-risk groups are children under five and those between 15 and 24. Alcohol is often a factor in fatalities involving adolescents and adults. So, adults: Stay sober to stay safe, and make sure that older kids know the risks of drinking while near water. As for younger children, Dr. Salber says, “Home pools should have childproof fencing to avoid accidental drownings, and children should be taught to respect the dangers of unguarded bodies of water.” Teach them to swim, make sure they wear inflatables if they can’t swim well, and always keep a close eye on them. Remember: People often don’t splash and scream when they go under, so vigilance is key.
Rip currents
Every year, rip currents claim the lives of more than 100 people. Live Science describes them as “strong river-like channels” in the ocean, and while they don’t technically pull you under, they will carry you away. At that point, people tend to panic and their risk of drowning increases. So, how can you identify and avoid a rip current? Beware of calm patches of water between intense breaking waves, especially at low tide, when the water is already moving out. If you do get caught in a rip current, don’t swim against it; that will tire you out and get you nowhere. Instead, experts recommend swimming sideways out of the current or even just treading water until you’re released from it. Check out these 9 beach-safety rules that could save your life.
Plastic playground equipment
If you’re a parent, you probably touch metal playground structures before letting your child go on them since you know they can get super hot in the sun…but do you do the same with plastic ones? You should. According to Good Housekeeping, plastic can also heat up enough to cause second-degree burns on your child’s delicate skin. Be wary of darker-color plastic, which absorbs more of the sun’s heat, and be particularly careful with children under two years of age.
Poor judgment while boating
Always wear a life jacket, says the Coast Guard: in 80 percent of fatal boating accidents the victims drowned, and of those, 83 percent weren’t wearing a life jacket. Also, make sure that your boat driver is experienced. More than 75 percent of fatalities happened on boats on which the operator wasn’t certified in boating safety.
Sunburns on cloudy days
It’s easy to forget to slather on the sunscreen when there’s not a ray of sunshine in sight, but that’s when you may need it the most. According to the Mayo Clinic, around 80 percent of the sun’s UV rays can pass through clouds and to your unprotected skin. And when it’s cloudy, hazy or cool, you’re more likely not only to forget your SPF—you’re also more likely to stay outside unprotected for longer. Also, sand and water can reflect UV rays, and in the Mayo Clinic’s words, burn your skin “as severely as direct sunlight.” Check out these 8 other surprising things that could cause a sunburn.
Your heart health
Recent studies have shown that you’re more likely to have a heart attack in the winter months, but that doesn’t mean you’re in the clear during the hot, hazy days of summer. Matthew Mintz, MD, a Bethesda-based primary-care physician and internist, says that normal summer temperatures shouldn’t increase a person’s risk of heart attack but that things can get complicated for those who take blood pressure or other heart medications. “When the temperature goes up, the body keeps itself cool by increasing blood flow to the skin to ‘let off some steam,'” he explains. “However, blood pressure medications can block some of the body’s normal responses to heat and can increase the risk of heat stroke and/or dehydration.” As a result, those people should be particularly careful about staying cool and hydrated.
Alcohol-related dehydration
That poolside cocktail might be refreshing and relaxing, but you should think twice before ordering another round. That’s because, as Dr. Mintz explains, “alcohol is a diuretic, meaning it makes you urinate more, so even though you’re drinking a beverage, you can lose more fluid than you take in.” While he says that a few drinks alone likely won’t cause dehydration, they can contribute to or accelerate it. Plus, as he mentions above, blood-pressure medications can pose an added risk: “A 60-year-old man who is otherwise healthy but on blood-pressure or cholesterol medicine might have a beer or two by the pool on a hot, sunny day and get lightheaded, dizzy, or pass out without much warning.” Here are 7 unexpected signs of dehydration you should know about.
Leaving kids home alone
School’s out for summer… which means kids have a lot more time to get themselves into dangerous situations that could land them in the hospital. While that could be anything from drowning to ingesting a poisonous substance, an advisory list from the Consumer Federation of America reveals that kids who are home alone are three times more likely to be injured or harmed in some way than when they are with an adult. So, make sure that kids know what to do in case of an emergency, and depending on their ages and temperaments, possibly reconsider their home-alone time.
Skimping on sunscreen
People who diligently apply sunscreen every morning think they’re safe, but their false sense of security can put them at risk. In a survey of 156 U.S. dermatologists, 99 percent said their patients don’t use enough sunscreen. So what is the magical amount? According to Y. Claire Chang, MD, of Union Square Laser Dermatology, people should apply 1 ounce (about the size of a shot glass) of a broad-spectrum sunscreen with SPF 30 or higher—every two hours. “Reapplication is just as important as applying it in the morning,” she says. Plus, women shouldn’t fall into the trap of thinking that their SPF-infused makeup provides enough protection all day. “Many makeups only have SPF 15,” Dr. Chang adds, “and most people do not apply a thick enough coat to allow for full protection.” So, sunscreen first, makeup second, and reapplication third if you’re going to be outside for an extended period of time. Here are 10 sunscreen myths that make dermatologists cringe.
Gestational diabetes
Most moms-to-be know that they should keep their weight and junk-food intake in check to help prevent gestational diabetes, but something as seemingly innocuous as being pregnant in the summer can increase their chance of developing it. A Toronto-based study published in the Canadian Medical Association Journal found that with every 10-degree Celsius (18-degree Fahrenheit) rise, a pregnant woman’s risk goes up by 6 to 9 percent. They theorize that the body’s subcutaneous brown fat, which heats you up when it’s cold out, helps the body regulate sugar levels and, therefore, offers protection against diabetes. It apparently does less of that when it’s hotter out. Of course, it’s also good to be active when pregnant, but make sure not to get overheated and to stay in air conditioning when possible.
Heat-related car-seat deaths
Each year, more than 36 children die from the heat after being forgotten in a car, and the majority of those incidents happen in the summer. Texas and Florida have the dubious distinction of being the states with the most deaths, but it can happen anywhere—and things can get deadly fast. According to CNN, the temperature of a car can go up by 20 degrees in just 10 minutes, and children’s body temperatures rise three to five times faster than adults’. You may think that Forgotten Baby Syndrome only happens to “bad” parents, but there’s actually a scientific reason that your brain goes on autopilot, and that type of lapse could put your child at risk. So always check the backseat before exiting your car, and think about leaving your purse or cell phone back there to help you remember.
Spoiled food
While we all know that we shouldn’t leave food in the heat for too long, many people don’t keep a close eye on the clock when they’re at a party or picnic. But they should if they don’t want to get sick. According to the FDA, food shouldn’t stay in the so-called danger zone—between 40 and 140 degrees Fahrenheit—for more than two hours. That number gets knocked down to just one hour if the mercury rises above 90. That’s when bacteria, mold, and yeast start forming on your food and release waste products and toxins. Here are 8 rules that will help you avoid food poisoning.
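The FDA's two-hour rule above reduces to a couple of lines of logic; here is a minimal sketch (the function name is illustrative, not an official API):

```python
def safe_holding_hours(ambient_temp_f: float) -> float:
    """Maximum hours perishable food should sit out, per the FDA guidance above:
    two hours in the 40-140 F danger zone, cut to one hour above 90 F."""
    return 1.0 if ambient_temp_f > 90 else 2.0

print(safe_holding_hours(75))  # mild day: 2.0 hours
print(safe_holding_hours(95))  # heat wave: 1.0 hour
```

In other words, on a 95-degree afternoon your picnic spread has half the safe window it would have on a mild day.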
Undercooked, contaminated, and otherwise dangerous food
Approximately 128,000 people in the United States are hospitalized after contracting a food-borne illness every year, and most fatalities result from salmonella, toxoplasma, listeria, and norovirus, according to the CDC. Problems can occur when foods such as hamburgers and chicken aren’t thoroughly cooked, when raw shellfish is contaminated with certain bacteria, and when vegetables and leafy greens aren’t washed properly.
Shark attacks close to the shore
Experts swear that shark attacks are rare (and the statistics confirm that), but no one wants to end up as a snack for one of the world’s most terrifying predators. According to the National Ocean Service, attacks are more prevalent near the shore, usually in front of a sandbar or between sandbars, where sharks can get trapped by low tide, and near steep drop-offs, where shark prey congregates. Be shark-smart in the water by staying in groups, by swimming during the day, and by ditching brightly colored bathing suits and shiny jewelry, which can apparently attract the wrong kind of dorsal-finned attention.
Drinking while boating
Drinking and driving don’t mix, and that goes for boats as well as cars. According to statistics from the Coast Guard, there were 4,463 boating accidents, 701 deaths, 2,903 injuries, and $49 million of property damage in 2016. Alcohol was the biggest contributing factor in accidents with fatalities.
Things that sting
More time outside means more risk of summertime stings. The problem is that many people don’t know they’re allergic to a bee or wasp sting until after they’ve been stung, and sometimes it takes a number of stings to trigger a full-blown allergic reaction. While life-threatening reactions to insect stings are rare, they’re not as rare as you might think: They’re a problem for nearly seven million people in the United States. If the pain of a sting is accompanied by hives, chest tightness, difficulty breathing, swelling of the tongue, or dizziness, see a doctor immediately. Here are other dangerous bugs to watch out for this summer.
Dry drowning
Dry drowning has been in the news a lot recently because it often affects children and because it happens when people think they’re in the clear, after a near-drowning scare. It occurs when inhaled water makes the vocal cords swell and closes off the airway. And then there’s the similar secondary drowning, during which water enters the lungs, causing fluid build-up and possibly pulmonary edema. While rare (1 to 2 percent of drownings), both dry and secondary drowning can be fatal if not caught early. The signs to look out for, according to WebMD: coughing, chest pain, labored breathing, and extreme lethargy. If your child is exhibiting any of these symptoms or just seems “off” after an incident, head to the ER immediately.
Grilling injuries
Barbecuing is a summer staple, but it’s one that requires more safety know-how than just firing up the grill and slapping on some hamburgers. According to the National Fire Protection Association, between 2009 and 2013, there was an average of 8,900 home fires that started with a grill, hibachi, or barbecue. This resulted in an annual average of 10 deaths, 160 injuries, and $118 million in property damage. Five out of six of the culprits were gas grills. To avoid a similarly devastating situation, make sure to clean equipment properly, keep it away from things that could catch fire, and never leave it unattended when it’s on.
Lawnmower accidents
Approximately 80,000 people in the United States head to the hospital every year for lawnmower accidents, according to Prevention. Fortunately, the injuries aren’t always as gruesome as you might think. The majority of them happen when a mower accidentally runs over an object like a rock or a stick and the mower blades inadvertently send it hurtling in your direction. Take particular care to clear the yard of debris before mowing, and make sure to mow in full daylight. It’s also a good idea to wear sunglasses, long pants, long sleeves, and closed-toe shoes for protection.
Hidden air-conditioner mold
You might not see it, but that doesn’t mean it’s not there and causing a potentially big problem. Dust, mold, allergens, and pollution accumulate on air-conditioning-unit filters, complicating things for people whose lung health is already compromised. Who might that be? Those with allergies, asthma, or other respiratory diseases. CNN advises people to clean or replace filters every three months, and when they do, to wear gloves and a mask for protection. Here are 11 other surprising ways your house might be making you sick.
Music-festival debauchery
The unique atmosphere of outdoor music festivals can create the perfect deadly storm. One of the biggest problems comes from mixing drugs and alcohol. As Live Science explains, that can lead to the creation of new compounds in the body, making the alcohol and drugs even more toxic. Alcohol also worsens dehydration, which is already often an issue because of warm temperatures and dancing. A combination of these factors could lead to hyperthermia and even the deadly rhabdomyolysis, which can cause kidney failure and death. In short: When your judgment is impaired, you might not even realize that you’re in danger.
Sunburn from perfumes and essential oils
Talk about hidden dangers. Both perfumes and essential oils can make your skin more sensitive to the sun and, as a result, increase your risk of sunburn. Benzophenones, retinoids, and certain fragrances are the culprits in perfumes, according to Livestrong, while essential oils “act like cooking oils” and make your skin absorb more UV rays. It’s probably a good idea to ditch the scents if you plan to be in the summer sun.
Urinary tract infections
A urinary tract infection is annoying in any season, but studies show that you have a higher likelihood of developing one in the summer, especially if you’re a woman under 44. This may be due to a higher risk of dehydration or an increase in sexual activity during the summer months. So take particular care to stay hydrated, which will dilute urine and flush out bacteria, and always use the bathroom shortly after having sex.
Zika, West Nile, and other infections carried by mosquitoes
Every year, there seems to be a new dangerous mosquito-borne disease wreaking havoc on the country. In the past decade, there have been surges of West Nile, which can cause debilitating muscle weakness and encephalitis, as well as Zika, which can cause the devastating birth defect microcephaly in the fetuses of pregnant women. The problem is, mosquitoes are so prevalent, it feels like a losing battle, so people end up getting lax or doing nothing. Dr. Salber advises avoiding exposure when possible: “I am not saying you should never venture into nature. Rather, you should take precautions, like avoiding being outside uncovered during periods when mosquitoes are most active, covering up with long-sleeved shirts and pants, and using a high-quality insect repellent.” For added protection, check out these 8 foods that can protect you against bug bites.
Post-pedicure infections
That pretty pedicure can get pretty disgusting pretty quickly—and it has nothing to do with going too long between appointments. Fungal infections occur more frequently when it’s warm out, according to Yale Medicine, and they can be difficult to treat. Medication works only 50 percent of the time, and a new nail could take more than a year to regrow. That’s why it’s so important not to leave yourself vulnerable to infection. Make sure to go to a reputable nail salon, bring your own tools, and clean them properly with warm, soapy water before sterilizing them with rubbing alcohol.
Your daily medications
Sunscreen isn’t enough to protect you from the summer sun when you’re taking certain medications that prime you for a burn from the inside out. “There are phototoxic and photoallergic reactions that occur when the drug is distributed throughout the body and then exposed to UV light,” explains Dr. Chapas. “The biggest offenders we see in dermatology are topical tretinoin and oral doxycycline, but there are tons of drugs that can cause this reaction.” To avoid this problem, talk to your doctor about the risks and potential side effects of your prescriptions. Dr. Chapas usually advises patients to stop non-essential medications, like ones for acne, if they’re taking a beach vacation or will otherwise be in the sun for prolonged periods of time.
Forgetting to put sunscreen on these spots
When applying sunscreen, the obvious spots are, well, obvious. But there are other equally vulnerable areas that are regularly exposed to the sun, and when you overlook them, you increase your odds of developing skin cancer. Dr. Chapas says that men often forget to protect their scalps and ears. Aside from applying sunscreen, she recommends wearing a large-brimmed hat for protection. And for women? “Women tend to develop skin cancers on their legs,” she says. “They need to be vigilant about applying sunscreen and not tanning their legs while the rest of their body is under the umbrella.” Other at-risk spots: lips, the neck, and the chest. Here are 37 other ways to cut your cancer risk.
Rabid animals
An animal doesn’t have to foam at the mouth like Cujo to be infected with rabies—and it doesn’t even have to bite you to transmit it. Rabies, which attacks the nervous system, can also be spread if an animal’s infected saliva comes into contact with a person’s eyes, nose, mouth, or a cut, and incidence rises in summer because of increased outdoor time. Bats can be a particular risk, especially for kids, because bats have very small teeth; children might not realize they were bitten or might not be able to explain that they were. So if your child has any direct contact with a wild animal, call a doctor right away and see if vaccinations are recommended. Incubation periods can be long, and once symptoms start to appear—including fever, headache, and paralysis—rabies has spread to the brain and is usually fatal.
Seasonal depression
People are just happier in summer, aren’t they? Not necessarily. Around one in ten people with seasonal affective disorder (SAD) experience it during the summer, with symptoms including depression, feelings of hopelessness, insomnia, and anxiety. Those with summertime SAD may also feel isolated because everyone seems happy except them. This particular form of SAD may be triggered by too much sun exposure or heat, allergies, or shifting bedtimes. Whatever the cause, the treatment is the same as for those affected in the winter: Talk to a professional about what’s happening, and know that you’re not alone. Watch for these 8 silent signs of SAD.
Weight gain in children
Idle minds lead to idle hands—which may end up reaching into a cookie jar. According to NPR, a study in the journal Obesity found that kids are at an increased risk for weight gain and childhood obesity during their two months off from school. Researchers looked at data from children in kindergarten through second grade and saw that obesity rates jumped from 8.9 to 11.5 percent; the prevalence of being overweight also went from 23.3 to 28.7 percent. Researchers point to irregular sleep schedules in summer as a possible cause, as well as an increase in screen time. So, try not to let those summer days be all that lazy: Keep kids active and on a schedule. Check out these seven things every parent should do to prevent summer weight gain in their kids.
Lightning strikes
Lightning might not strike in the same spot twice, but it strikes more often than you think. According to an analysis by the National Oceanic and Atmospheric Administration, between 2006 and 2015, 313 people died after being hit by lightning in the United States: 11 percent of the fatalities occurred when fishing, 6 percent on the beach. Live Science says that the waves might make it hard for beachgoers to hear approaching storms and that people often wait too long to seek shelter because they’re hoping the storm won’t actually hit.
Tiny ticks carrying Lyme disease
In recent years, Lyme disease has soared, especially in the Northeast, the Mid-Atlantic, and part of the Upper Midwest. Around 30,000 cases are reported annually, though the actual number may be closer to 300,000. The early signs of Lyme—which can damage joints, the heart, and the brain—are fever, headache, fatigue, muscle, and joint pain, as well as a distinctive bulls-eye rash around the site of the bite. Fortunately, “being treated early almost always leads to a complete recovery,” says Dr. Mintz, so do a full-body check on yourself and kids every day. Prevention is also important. “People should avoid wooded and brushy areas with high grass,” advises Dr. Mintz, “walk in the center of trails when hiking, and use bug repellent on exposed skin.” Check out these 6 smart tips for avoiding insect bites and stings.
Antibiotic-resistant infections
Shorts season brings innumerable scuffs and scrapes for kids, but if you’re not careful, even the tiniest of cuts could pose a serious health hazard. Research out of Johns Hopkins reveals that antibiotic-resistant staph infections (aka MRSA) affecting children spike during summer months. Plus, 74 percent of those under 20 who contract MRSA do so from a “community setting,” as opposed to a hospital. To protect kids, teach them about good hygiene, keep cuts covered with dry bandages, and don’t let them share towels or other items that contact bare skin. Also, see a doctor immediately if a wound becomes red, swollen, pus-filled, or red-streaked, as well as if it is accompanied by a fever.
Snake bites
Not to frighten you, but there’s a good chance you walked right by a snake today. Snake bites not only increase during the summer when people spend more time in the great outdoors—they’re also increasing overall by about 100 to 200 per year. Between 7,000 and 8,000 people are bitten by venomous snakes in the United States every year, according to the CDC; thankfully only a small handful of snake bites will be fatal. Experts advise staying alert when outdoors, wearing boots or closed-toe shoes, as well as keeping hands and feet out of crevices in rocks, wood piles, and high grass. And above all, if you are bitten, get professional help quickly. This is not the time to wait and see what happens or to try out an old wives’ tale like sucking out the venom (which, by the way, doesn’t work).
Driving on any summer holiday
Sure, there are holidays all year round, but the summer ones can be particularly hazardous to your health if you’re on the road. One study showed that people are four times more likely to die in a traffic accident over Memorial Day weekend than over a regular weekend. As for the single deadliest day of the year? The Fourth of July. The takeaway: Consider extending your trip and traveling on less high-risk days, and above all, be careful out there! Next, don’t miss these healthy ways to prepare your body for summer.
How Do I Choose Wood for Woodworking?
Dan Cavallari
Choosing the best wood for woodworking starts with deciding what you will be building. Wood generally falls into two categories — hardwood and softwood — and each type is useful for certain applications. Construction projects, such as framing a house, will usually require softwoods, while hardwoods are useful for furniture making, fine woodworking, and so on. The durability of each type will vary: hardwoods tend to be exceptionally strong and resistant to damage, but they will be more expensive; softwoods dent more easily and can warp, but they are less expensive. Choose the right wood for woodworking by determining whether you need soft or hard materials.
A power sander, which is used for woodworking.
Wood for woodworking is further broken down by its grade. The grade refers to the overall quality of a particular piece; high-grade pieces are likely to have few defects and imperfections, while low-grade pieces will have knots, cracks, splits, or other damage that will cause parts of the piece to be unusable for construction. High-grade wood for woodworking will, of course, be more expensive than low-grade pieces, though low-grade woods are still usable for some applications and can save the builder money if he or she is willing to go through the extra effort of cutting out the best pieces.
Hardwoods are usually used for making furniture and in fine woodworking projects.
Many high-end furniture makers as well as woodworkers creating pieces that will be shown off as a centerpiece will use only high-grade wood for woodworking. In most cases, the woodworker is likely to use hardwood only, because the grain is likely to show through the finish. The grain is often one of the most attractive aspects of a piece, so if you are building furniture or other pieces intended to be shown off, be sure to choose a material with a strong, pronounced grain. Oak, mahogany, and teak often feature prominent and attractive grains, though again, such pieces are likely to be more expensive.
Construction projects usually call for less expensive softwoods because a significant amount of lumber will be needed for the project. Pine is a common choice for such applications, as it is generally inexpensive and easy to work with. The disadvantages of pine can include cracking, splitting, and warping, and the bare pieces of pine will require chemical treatment to protect them from water damage and bug infestation. Some types of pine tend to have a significant amount of knots as well, which can affect the piece's usefulness to the builder.
Mahogany has an attractive grain.
Discussion Comments
There are some really nice courses you can take if you want to start up woodworking.
It is a really interesting craft to take up, particularly if you have access to suitable lumber.
However, woodcraft can take a lot of work. Particularly if you want to do it from scratch. Drying wood for woodworking takes a while, up to several years depending on the wood in fact, and you also need to know how to cut it properly, as well as put it together.
So, if you have some beautiful wood from a tree that came down, and fancy making something out of it, you might be better off giving it to someone else who has the skills.
If you do decide to learn how to make something of it, good luck! It's a wonderful skill to learn.
If you are going to be using the wood in certain kinds of craftwork, you probably want to use balsa wood.
It's pretty good for whittling, but what's really nice about it is that it is one of the lightest woods in the world.
So, you can use it to do things like make kites or model aircraft. They sell thin sticks and planks of it in most craft stores.
I used it to try and make a model hot air balloon once as well, because if you purchase very thin sticks, they will bend a little bit.
Ultimately, I didn't manage to get it to work!
But, at any rate, balsa wood is a really good choice for wood craft. It should be part of any kid's craft set.
Saturday, July 29, 2017
The Military as a Social Experiment
It is an overused expression that the purpose of an army is to "kill people and break things". While it is undeniably true that this is the main purpose of the military, throughout history armies have done much more for the societies they defend.
This week, there is an ongoing argument about whether transgendered people should be allowed to serve in the military. One of the arguments against their serving is that the military should not be the place for social experimentation.
While I have NO pertinent data about the transgendered in the military, I can tell you that contrary to popular opinion, armies have always been the laboratory for societal experiments and the leading edge of cultural change.
Military service has always brought together people from different locations, backgrounds, and economic conditions. Think of any war movie made during the 1940’s—there is always a scene where the recruit from the Bronx, the hillbilly from the Ozarks, and the tall lanky kid from Texas all meet in the barracks. Culture shock is the norm for the newly-enlisted.
This could be called social experimentation, but military leaders since the time of the Roman Republic have learned the value of creating legions with troops in a balanced mix of age, class, and wealth. Polybius, writing in 150 B.C., said that to ensure that each legion contained a proper mix, all recruits were gathered together in one place, then the tribunes took turns selecting men in rotation, as if they were picking softball teams in a schoolyard.
Historians have long speculated that one of the reasons the fledgling United States quickly created a sense of nationalism was the binding effect of soldiers from different colonies serving together during the Revolutionary War. Julius Caesar certainly understood this effect, since he took great pains to settle retiring soldiers in towns of captured territory.
The bond formed by men serving together during war is so strong that some historians have theorized that it delayed the American Civil War by at least a decade.
In the United States, the military has always been an important part of the melting pot that assimilates immigrants. Though rarely shown in movies, during the Civil War, a third of the Union Army were foreign born. Even today, more than 8,000 immigrants annually enlist in the US Army, where they usually do very well. Immigrants in basic training have a 10% smaller “wash out” rate than the native born. And immigrants are more likely to complete a term of service than the native born. Today, the military is actively trying to recruit immigrants, finding that cultural diversity adds value in an increasingly global mission.
The American military was also the first to break racial barriers. Long before President Truman ordered the integration of the services, military duty offered opportunities for racial minorities. The Revenue Cutter Service—one of the forerunner agencies making up today’s Coast Guard—allowed African-Americans to be hired as early as 1831. By 1887, an African American, Captain Michael Healy, commanded the cutter Bear. Healy went on to retire as the third highest-ranking officer in the cutter service.
Long before women were accepted in a number of occupations in civilian life, they had access to these jobs in the military. During both World Wars, women entered the work force due to labor shortages, and after both wars, the number of women working outside the home failed to drop to pre-war levels. It was during wartime that women were accepted as nurses, as truck drivers, and even as pilots. It wasn’t just men who refused to “go back on the farm” during peacetime.
Historically, the military has been a laboratory of social experimentation for new technology and medical procedures. During the Revolutionary War, George Washington was criticized for experimenting on his troops by having them inoculated for smallpox. This radical new procedure was considered risky, yet by the end of the war it proved to be wildly successful. Vaccinated troops had a better chance of surviving to the end of the war—even though they were serving in combat—than did non-vaccinated civilians who avoided combat.
The need to feed large numbers of men during wartime resulted in dietary experiments, too. Canned and preserved food exist because the French government offered a cash prize to anyone who could develop a way of preserving food for French warships. The experiment was successful and was soon adopted by civilians.
The first steps towards understanding the dietary requirement for vitamins came from the military. The Egyptians, after examining the bodies of Persians following the Battle of Pelusium in 525 B.C., noted that the skulls of the Persians, who habitually wore turbans, suffered more cranial fractures than those of the Egyptian soldiers, who wore no headgear. The Egyptians correctly attributed this to something beneficial in the sunlight. Today, we know that exposure to sunlight enhances production of Vitamin D. The Egyptians also noted a link between the ability to see at night and the consumption of liver, a natural source of Vitamin A.
Thousands of years later, it was the British Navy that realized that scurvy could be prevented if sailors consumed citrus fruit, a natural source of vitamin C. The term “limey” originates from the British naval custom of adding lemon juice to the sailors' daily grog. (Early in the 19th century, the word lime could be used interchangeably to describe either limes or lemons.)
It is the military that frequently first introduces new technology into society. Perhaps the best example is the electronic computer. It might be impossible to find an American home without some form of digital computer today, but in 1946, the world’s only electronic computer was the 27-ton ENIAC in Philadelphia. ENIAC's development was funded by the Army to calculate artillery firing tables.
The list of technological innovations that came about to fill military need is practically endless: From velcro to interstate highways, from radial tires to jet transport, from penicillin to the earliest days of plastic surgery, it is the social experiments of the military that have brought change to the civilian world.
The main goal of the military is not social experimentation, but maybe—just maybe—we need to rethink this: Perhaps it should be.
1. Great blog! I never considered most of these details, although I do see a close parallel to the fallout benefits of the space program, which is often criticized for its huge cost and supposed lack of benefit to the general citizenry.
2. Gonna pass this one on. You have altered my thinking somewhat and that is the greatest compliment I can give a teacher.
Definition of Osteopathy
What is osteopathy? Where does it come from?
Osteopathy was founded in the United States at the end of the 19th century by Dr. Andrew Taylor Still, who developed this field after much reflection on health and life.
His life was marked by the death of his wife and four of his children, which caused him to search for and find new ways of healing.
“Osteopathy is a science, an art and a philosophy.” A.T.Still (Autobiography)
Osteopathy is a science because it is based on an extensive and detailed knowledge of the body’s anatomy, physiology, biology and neurology and all manipulation is founded in this knowledge.
Osteopathy is an art because each treatment is unique, specifically tailored to the patient’s particular needs.
Osteopathy does not aim to directly treat the symptom itself but rather it is focused on finding and treating the origin and cause of the person’s symptom/pain/disease.
Osteopathy is a philosophy that views the human being as an interconnected whole which also encompasses the person’s environment. Osteopathy works on the physical level as well as the emotional and spiritual level.
Why choose Traditional Osteopathy?
“Only the tissues know.” Rollin Becker (Osteopath 1910-1996)
We practice osteopathy in a way that is faithful to the teachings of its founders, mainly A.T. Still, W. Sutherland and R. Becker.
Osteopathy is a traditional medicine which requires very precise knowledge of anatomy and is rooted in a holistic philosophy. The human being is viewed as a whole (body, mind and soul); the body is not viewed as a machine that needs to be fixed in a mechanical way.
Once the osteopath has done a treatment, the body is able to heal itself independently by self-regulating and self-adjusting.
Our objective is to empower the patient by helping their body guide their healing.
Our way of practicing
“Medicus Curat, Natura Sanat” Hippocrates (“the physician treats, nature cures”)
As Osteopaths, we accompany the tissues with our hands to allow the body to find its balance.
Osteopathy is based on several principles:
• The practitioners rely on their hands as their only tool to feel, analyse and treat their client.
• Osteopaths see a human being as a whole, not as a sum of parts.
• Osteopaths know that structure and function are reciprocally interrelated.
• Osteopaths are very aware of the body’s self healing capacity.
“Allow physiologic function within to manifest its own unerring potency rather than apply a blind force from without.” W.G. Sutherland – The Cranial Bowl
An unhealthy lifestyle, repetitive movements, a high level of stress, emotional shock, any kind of physical trauma, or psychological tensions can all generate pain and/or structural difficulties in the body.
Osteopathy can be effective in treating the structural imbalances in the body that arise from these kinds of dysfunctions. Osteopathy is not effective, however, in treating pain that is due to a clinical pathology.
Osteopathy enables the different systems of the body to balance themselves; this is often achieved by working with the client’s vertebrae. If a client is undergoing medical therapy they can also receive osteopathic sessions; there is no conflict between the two treatments.
We search for and treat the origin of the pain. This means that we often work on non-painful areas.
A painful area is very often an area that has adapted and compensated for a disturbance or difficulty somewhere else in the body. Our goal is to find out why and how the body is behaving in this particular way.
Once we have found this out, we then treat the difficulty that is causing the other parts of the body to compensate. For example, we may discover that a tension in the wrist has led to tension in the cervical spine, which in turn has adapted to the tension by causing intense pain in the person’s shoulder. We will treat the tension in the wrist and in this way enable the body to return to its balance. The pain in the shoulder will not only go away, it will not come back again.
Our objective as osteopaths is to find out the “why” and the “where”.
Osteopathy is a preventive medicine. It is much faster and more effective if you come before your body maladapts in hundreds of different ways and causes pain in many areas.
In the recent past, eco-tourism has become more and more attractive to tourists worldwide. This paper gives a definition of eco-tourism and seeks to answer the question of why it is not promoted as a major tourism sector of the United Kingdom (UK). The position of eco-tourism and its noticeable lack of promotion within the UK are examined, while the role and potential of ecotourism in the UK is discussed.
Definition of ecotourism
Based on Fennell (2008), ecotourism has various meanings, but he suggested that five individual objectives have to be met to create ecotourism:
Minimal impact management/small scale
Nature-based product/low impact
Contribution to community
Environmental education
Contribution to conservation
McLaren (2003: 91) defined ecotourism as
“… a participatory experience in the natural environment. At its best, ecotravel promotes environmental conservation, international understanding and co-operation, political and economic empowerment of local populations, and cultural preservation. When ecotravel fulfils its mission, it not only has a minimal impact, but the local environment and community actually benefit from the experience and even own or control it. At its worst, ecotravel is environmentally destructive, economically exploitative, culturally insensitive, ‘greenwashed’ travel.”
Due to the development of ecotourism, a variety of new destinations have emerged that were previously dismissed as isolated and inaccessible to tourists. Examples of this trend include tropical rainforests, oceans and even desert environments, the majority of which are situated in the less-developed regions of the Earth. Most of these new destinations are poor and developing.
Timothy and Boyd (2003) explain that ecotourism and heritage tourism overlap, with ecotourism encompassing natural and protected types of landscape and eco-tourists visiting heritage attractions. These could be, for instance, stately homes, castles and national parks.
The problem with the term ecotourism is that any tour operator can label and promote its product as ecotourism, because there are, disappointingly, no restrictions governing its use. The term ecotourism may be used inappropriately out of ignorance of the principles and ideals that it carries, but deliberate misuse as a marketing tool also appears to be very common (Black and Crabtree 2007).
Forms of ecotourism in the UK
The UK has four national tourist agencies: the English Tourism Council, the Northern Ireland Tourist Board, VisitScotland and the Wales Tourism Board. These promote each country to international and domestic tourists. The Green Tourism Business Scheme in the UK accredits tourist establishments that are seeking to reduce their environmental impact. Every business is assessed over a two-year period to ensure it fulfils the criteria (i.e. support of public transport, use of local produce, …) (Green Tourism 2009).
Ecotourism is already being promoted within the UK. An example is the “ECO-Guide 2010” from the Tourist Information service, which shows people who love to walk in nature how they can reduce their environmental impact. It offers different walks, such as some in the Lake District and others where you can discover the hill carvings in Oxfordshire.
Hall et al. (2007) describe the beach as a critical national asset for international and domestic tourism in the UK, and a new Marine and Coastal Access Bill from 2009, made by the UK government, was created to secure a long-distance route around the coast of England. The aim was to provide public access for coastal walking and other recreational activities, as well as to designate marine conservation zones to protect them from damaging activities.
Various eco-tourism operators promote destinations which fulfil, or partly fulfil, the components of ecotourism.
Patterson (2007) relates that the growth of the ecotourism market has stimulated the development of eco-operators. An example of this is the growth of seal-watching at spots on the UK coastline.
The Wales Tourism Board offers, through operators, wildlife adventure boat trips to see the landscape scenery and to watch sea birds, seals, whales and dolphins. These operators state on their websites that they are acutely aware of their responsibility to the unique eco-system within which they operate and follow codes of conduct to provide a low-impact, educative experience.
The difficulties of producing ecotourism in the UK
The problem ecotourism operators face when looking for a potential destination is that there are not many relatively untouched natural areas left within the UK. Consequently, the UK cannot really satisfy the orthodox ecotourism criteria of low impact and small scale. There are about 62 million people living in the UK, and the population density amounts to 659.6 people per square mile, the 51st highest rate in the world. Furthermore, the Office for National Statistics predicts that the UK population will increase by 4.3 million by 2018. If that trend continues, in 2033 there will be 71.6 million people living in the UK.
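The density figure quoted above can be sanity-checked with a one-line calculation; note that the UK land area of roughly 94,000 square miles is an assumption not stated in the text.

```python
# Rough consistency check of the population density quoted above.
# ASSUMPTION: UK land area of ~94,000 square miles (not given in the text).
population = 62_000_000        # "about 62 million people living in the UK"
land_area_sq_mi = 94_000       # assumed UK land area
density = population / land_area_sq_mi
print(round(density, 1))       # ~659.6 people per square mile, matching the text
```

Under that assumed land area, the stated density of 659.6 people per square mile is consistent with the 62 million population figure.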
Beeton (1998) identified the main ecotourist group as 20-40 year olds, followed by a second large group aged 55 and older. She indicates that people of these ages are looking for different types of holiday. In addition, she states that ecotourists tend to be more highly educated than other tourists and to have the higher incomes generally linked with that. Because they have higher incomes, they have the money to spend on more expensive and exotic ecotours abroad. In destinations abroad they can fulfil their desire to experience nature and wildlife that they cannot see in the UK.
Trends and Potential in the UK
Responsible travel has been receiving quite strong coverage in the UK travel media. Ecotourism is rising as a considerable market trend in the UK, as broad consumer trends towards lifestyle marketing and ethical consumption spread to tourism, in the context of campaigns by Voluntary Service Overseas and Tearfund. Between 1999 and 2001 the percentage of UK tourists claiming to be willing to pay more for an ethical holiday increased by 7 per cent, from 45 per cent to 52 per cent ( ). There have been many developments in the UK with regard to the adoption of sustainable practices and techniques amongst tourism providers.
Case Study: Paradise Wildlife Park, Broxbourne, Hertfordshire
Paradise Wildlife Park is a zoo located in Broxbourne, Hertfordshire, with a passion for wildlife conservation, and it is involved in various breeding programmes for endangered species. It even managed to rear two white lion cubs, of which only a few exist in the world. The park has recently opened a new Discovery Centre committed to educating visitors in its new classrooms. The park makes constant efforts to become greener and more sustainable; it has introduced recycling of rubbish throughout the park. Paradise Park became the number one visitor attraction in Hertfordshire by visitor numbers, and it not only provides jobs inside the park but also contributes to the local community by bringing tourists into the town. ( )
Ecotourism has the characteristics of sustainability, conservation and appreciation of the attraction being visited. For these reasons wholly orthodox ecotourism in the UK is unlikely, but if the more passive objectives, such as an untouched natural environment, were relaxed, there would be great potential to generate more ecotourism. Such offerings may satisfy all the criteria of the other, active components (i.e. environmental education, contribution to conservation), even if the result is a more artificial type of ecotourism. There are many ecotourism activities taking place in the UK, but they are not promoted as a major market because few ecotourism destinations exist. The trends reveal that customer demand is shifting towards more sustainable types of holiday, which offers eco-tour operators great potential to promote and sell more of their tours.
How to Print a Poster?
Increase resolution on this image so that I can make a 12 ✕ 18 print?
A 12-inch by 18-inch print is a small poster, about 30 cm ✕ 45 cm, or approximately A3 paper size, which is the standard drawing-book size that I used in elementary school.
Such a small poster would likely be viewed at a standard reading distance, which is about 30 cm or one foot away. Hence the recommended resolution is 300 dots per inch (dpi). Refer to this article on how to calculate the optimal resolution for printing.
At 300 dpi, the image would need to be at least 3600 pixels wide by 5400 pixels high, or about 19 megapixels. To put this into perspective, the iPhone 11's back cameras output 12-megapixel images, noticeably less than the optimal resolution for this print.
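The arithmetic above is easy to check for any print size. Here is a minimal Python sketch (the function names are my own, purely for illustration):

```python
def required_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions needed to print at a given physical size and resolution."""
    return round(width_in * dpi), round(height_in * dpi)

def megapixels(width_px, height_px):
    """Total pixel count expressed in megapixels."""
    return width_px * height_px / 1_000_000

# A 12 x 18 inch poster viewed at reading distance (300 dpi):
w, h = required_pixels(12, 18)
print(w, h)                        # 3600 5400
print(round(megapixels(w, h), 1))  # 19.4 -- vs. about 12 MP from an iPhone 11
```

The same two functions tell you whether any source image is large enough, or how big the gap is that an upscaler would need to fill.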
Thankfully, there are applications that can enhance an image to increase its resolution. Unlike traditional up-sampling techniques such as bicubic interpolation, these applications use machine learning to “hallucinate” the missing pixels, producing sharp details at scales where older algorithms could only produce blurry output.
One such application is BiggerPicture. It can enhance images up to 8✕ the original pixel dimensions without producing blurry or blocky output.
Follow these steps to prepare small poster prints using BiggerPicture.
1. Load the source image into BiggerPicture.
2. Choose the Printable tab in the Target Size pane (shown to the right of the image).
3. Enter 300 dpi in the Print resolution text box.
4. Drag the slider until the print dimensions meet or exceed the target real-world size.
5. Click on the Save all images button.
Enhance Poster for Print
BiggerPicture will then begin enhancing your image. This should take about five minutes, depending on the speed of your system.
Now that you’ve got the 19-megapixel version of your image, it’s time to send it out to a good professional print shop.
Can I trust you?
Eline N. Lincklaen Arriens
Views, thoughts, and opinions expressed in the text belong solely to the author, and do not represent the views of the Support Centre for Data Sharing or the European Commission.
What does it mean to trust? Trust in itself can be defined as having confidence, faith, or hope in someone or something, such as trusting that the sun will rise in the morning [1]. But how do you come to trust that someone or something? Through observation? Because someone you respect said so? Or because you never had a reason to doubt it? And before even knowing that you trust someone or something, how do you decide that you can trust them, especially with your personal information?
In regard to trusting someone, it feels like a no brainer. Sharing information about yourself can come as second nature in a conversation, depending on your familiarity and connection with the other person. The question of how much we trust our family, friends, acquaintances, and colleagues, is entirely dependent on our relationship with them and how we feel in a given moment. The closer you are, the more you trust and share. It is common sense that you are more likely to share information about yourself with your boy-/girlfriend or best friend than you are with a stranger or a colleague. This is not a strict rule, but when conversing with someone there is a feeling of control in what information you are sharing and how it can be used and distributed.
However, when it comes to sharing personal information with something, the answer is far more complex. At the start of the digital era, computers, the internet, and our phones became an integral part of our daily lives. Users were happy to trust the devices (and by extension the businesses behind them) with sensitive personal information such as their name, address, age, and gender, as well as health and financial data, for quick convenience. For example, at a café or airport, most people were, and still are, willing to share their data without a second thought for free WiFi [2], or to accept the terms and conditions for an application on their phone in the blink of an eye without reading the fine print [3].
Now, this is starting to change: people are becoming more aware of the implications of sharing their personal data with ‘trusted’ parties and are moving to protect their privacy. People are installing adblockers, using VPNs, and refusing to ‘consent’ to certain terms of agreement [4], even for free WiFi. However, this change in perception is not universal and may be a bit hypocritical. An international survey found that even though 63% of respondents found connected devices “creepy” and 75% don’t trust the way their data is being shared, approximately 70% of participants stated that they owned one or more such devices, including smart home appliances, fitness monitors, gaming consoles, Amazon’s Alexa-powered Echo speakers, and Furbo’s pet camera/treat dispenser [5].
So, even with this change in level of trust and increase in valuing privacy, people continue to buy and use devices that collect and use their personal data. This highlights the question posed earlier, how do we decide what we can trust with our personal information? Or from another angle, does it matter if we trust someone or something as long as it is more convenient for us? As stated, even with an increase in demand for additional security and privacy measures, people continue to buy and download the latest gadgets and applications to make their lives easier with (semi-)full awareness that tech companies are collecting, analysing, and are likely sharing their data for profit.
So, I pose the question: to what extent would you trade your personal information for convenience, and what would change your mind? |
10 Foreign Fighters Who Helped America Win Its Independence
Mark Oliver
The American Revolution was about more than just America. It was a worldwide event. America did not fight alone. They got help from every part of the globe.
And we don’t just mean Marquis de Lafayette and Casimir Pulaski. Countless soldiers from all over the world stood up and fought with America, and without them, the United States never would have won its independence.
10Crispus Attucks
The Slave Who Was The First Casualty Of War
Photo credit: Wikimedia
The first man to fight and die in the War of Independence was born in America, but most of his fellow Americans didn’t think of him as a countryman. His name was Crispus Attucks, and he was a runaway African slave.
Attucks was working as a sailor, even though there was a price on his head. His master wanted him back, and he was willing to pay anyone who would drag him back into slavery. Nobody tried it, and if someone had, the American Revolution might never have happened.
Attucks and his fellow seamen were in a pub when a British soldier walked in. Attucks and his friends didn’t take kindly to the British presence, and they started taunting the soldier. Staring down a hulking 6’3″ man, the soldier got nervous. Seven of his friends, other British soldiers, rushed in to help. In short order, things got out of hand, and the British opened fire.
Attucks fought back. He grabbed a soldier’s bayonet and knocked him over, but the British gunned him down before he could do any more. Four other men in that bar would die before the massacre was over.
History has debated whether Attucks was a hero or just a violent drunk, but it can’t deny his impact. He was the first to die in the Boston Massacre, a moment that would spark the American Revolution.
9Von Steuben
The Prussian Who Trained The American Army
Photo credit: Wikimedia
The Americans who fought for Independence weren’t all seasoned veterans. Before Friedrich Wilhelm von Steuben came in from Prussia, they were using bayonets to skewer meat more often than they were using them to skewer their enemies.
Von Steuben crossed the ocean to teach the Americans how to fight. He was the Inspector General of the American Army, in charge of drilling the soldiers and organizing their training, and he barely spoke a word of English. Von Steuben would bark at people in German, his secretary would translate it into French, and then another secretary would translate that into English.
It was complicated, but it worked. He taught the American army how to fight and how to use bayonets, and that made a huge difference in the war.
In 1779, General Wayne used Von Steuben’s lessons to take Stony Point. He and his men took a fort protected by 750 men without firing a single shot. They won the battle entirely with bayonets. Without filling the night with the sound of gunfire, they were able to launch a sneak attack the British didn’t expect. Thanks to Von Steuben, Stony Point was taken.
8Tadeusz Kosciuszko
The Polish War Hero Who Tried To Free The Slaves
Photo credit: Wikimedia
Tadeusz Kosciuszko was one of the chief engineers for the US Army. He planned the defensive strategy in Saratoga, a moment that turned the war in America’s favor. He built the military fort at West Point, which, today, is the site of the US Military Academy.
The real story for Kosciuszko, though, happened after he died. He became close friends with Thomas Jefferson, and when he died, he trusted the president to carry out his final wishes. Every penny he had, he said, should be used to free and educate African slaves.
Thomas Jefferson was almost 75 years old, so he passed the job on to someone else. That man didn’t want the responsibility of trying to get white people to educate black people, though, and he passed it on, too. Eventually, Col. George Bomford was put in charge of it, and he decided to blow the money on himself instead.
By the time Col. Bomford died, only $5,680 of Kosciuszko’s $43,504 was left. His will made it into the hands of the Supreme Court, and they just threw it out. Despite his wishes, not a single penny was put toward freeing slaves.
7De Galvez
The Spanish Governor Who Secretly Supplied The American Army
Photo credit: Wikimedia
Bernardo de Galvez was the governor of Louisiana, which, at the time, was a Spanish colony. He wasn’t exactly invested in the cause of democracy, but he was deeply involved in the cause of messing with England.
And so, when America went to war with England, he started sending them everything he could. He promised them all the weapons and medicine he could get them, warning them, “It must appear that I am ignorant of it all.”
Spain entered the war in earnest in 1779, and De Galvez didn’t have to hide it anymore. He could fight, and he did. Within a year, he’d chased the British out of Mobile, Alabama. The year after that, he chased them out of Florida.
6Moses Hazen
The Man Who Led A Canadian Regiment For America
Photo credit: Wikimedia
Canada was a British colony during the Revolutionary War. They were, quite directly, America’s enemies, which makes it surprising that some of them fought alongside America. The Americans sent out political tracts and messengers to try to get Canadians to switch sides, and some of them did. A ragtag group of Canadians, most of them French, joined the American army.
The American army had two Canadian Regiments. The first group of turncoats, appropriately enough, was commanded by Benedict Arnold. They tried and failed to take over Quebec and then spent the rest of the war stationed in New York.
The Second Canadian Regiment, commanded by Moses Hazen, was a bit more successful. Hazen was a Canadian himself, and he led his army through some of the most important battles in the war. That included the Siege of Yorktown, the battle that ended the war.
When the war ended, Moses Hazen and the Canadians who fought with him no longer had the option to return home. They had to give up everything they’d known to fight for American Independence and had to live, from then on, in the United States.
5Antonio Barcelo
The Spaniard Who Fought The Biggest Battle Of The War
Photo credit: Wikimedia
We usually think of the American Revolution as a war on American soil, but it was more than that. The Spanish and the French took the fight straight to the English. In fact, the biggest and longest battle of the whole war took place in Europe.
It was fought over Gibraltar, a tiny, 3-square-mile peninsula that happened to sit in an important strategic location. On June 24, 1779, a fleet of French and Spanish ships tried to take it, and they kept trying for more than three years.
Their best attack was the brainchild of Antonio Barcelo. He set up a fleet of small ships loaded with cannons called “floating batteries” and sent them against the British. It didn’t work. The British held them off, but it was the closest they got.
The siege didn’t end until the peace treaty was signed. Antonio Barcelo and his men failed, but even if it was in vain, 3,000 Spanish soldiers gave their lives fighting at Gibraltar.
4John Mauritius Goetschius
The Dutchman Who Led A Guerrilla Army
Photo credit: Donna White
In its early years, there were a lot of Dutch settlers in the United States. They had their own community, one that seemed separate from the rest of America, and when the Revolutionary War started, that let them do things the Americans couldn’t.
After the British took New Jersey, John Mauritius Goetschius formed a guerrilla militia of Dutch farmers and struck back. They would attack and raid the British under the cover of night and then, when morning came, pretend to be nothing more than farmers.
They might have been farmers, but they were capable of a lot more than they seemed. That became clear when, in 1781, Washington sent his army to take Fort Lee from the Loyalists. By the time the American troops had made it to their destination, the Loyalists were gone. Goetschius and his Dutch guerrillas had already taken the fort on their own.
3Tewahangarahken
The Native Chief Who Fought For The US
Photo credit: Allison Giles
No one could be more American than the Native Americans, but they weren’t treated that way. They played a role in the American Revolution, though, and it’s one that’s often overlooked.
Most, if they picked a side, went with the British. That only makes sense: Part of the reason the Americans wanted independence was so that they could move into native land.
The Oneida tribe, though, refused to believe that the Americans had any intention of hurting them. Their main contact with Europeans had been through a missionary named Rev. Samuel Kirkland, and he had been good to them. And so, when they knew that Kirkland’s people needed their help, they took up arms and fought alongside them.
The Oneida tribe worked as guides, harassed British sentries, and even joined some of the battles. They were good at it, too. In the Battle of Oriskany, their War Chief Tewahangarahken single-handedly took out nine British soldiers.
Despite that, they still had to struggle to convince America they were on their side. At one point, they sent them six prisoners from another tribe and a rescued American soldier. The Americans had asked for scalps instead, but they sent along a letter that apologetically explained, “We do not take scalps.” They ended it, “We hope you are now convinced of our friendship toward you and your great cause.”
2Comte de Rochambeau
The French General Who Made The British Surrender
Photo credit: Wikimedia
The decisive battle of the American Revolution came when George Washington led a troop of American soldiers into battle against the British at Yorktown. Washington, though, was not alone. He was joined by an even bigger army of French soldiers and ships, led by Comte de Rochambeau.
The Siege of Yorktown ended in the British surrender. Lord Cornwallis was the leader of the English soldiers there, but he refused to stand in front of his enemy and surrender—instead, he sent his deputy, Brigadier General Charles O’Hara.
O’Hara offered the sword of surrender to Rochambeau, but Rochambeau refused it. This, he believed, was America’s war. He insisted that the English surrender to George Washington instead.
Washington, too, refused the sword. He made O’Hara surrender to his second-in-command, Benjamin Lincoln. Lincoln had been overwhelmed by the British in Charleston and had been denied the honors of a proper surrender. Washington wanted to see that he got to experience one firsthand.
1Hyder Ali
The Indian Sultan Who Fought The British
Photo credit: Wikimedia
The last battle of the American Revolution wasn’t on American soil. It was in India. In the 18th century, communication was far from instant, and so the men fighting on the other side of the world had no idea it was over.
India had been a battleground for the American Revolution for the last five years of the war. When France declared war on England, the British East India Company started attacking their colonies there. Hyder Ali, the Sultan of Mysore in India, took the side of the French and led the fighting there.
When Hyder Ali died in late 1782, the British started making serious advances on French India. They moved their forces to Cuddalore, a city on the Bay of Bengal, and very nearly took it. The French, however, managed to send a fleet in time to fight them off.
That French fleet kept the battle going. An army of French and Mysorean soldiers fought across India, struggling to hold back the British. Then, on June 29, 1783, word finally came in that the war had been over for eight months. The last fighters of the American Revolution put down their arms and went home, a whole world away from the country they had liberated.
Zinc Oxide Burning Off as a Blue Flame
Pictured is the blue flame produced when zinc vapour reacts with oxygen on heating to around 1,000 °C (here in combination with other elements such as copper, silver, gold, hydrogen and nitrogen).
Zinc oxide (ZnO) decomposes into zinc vapour and oxygen at around 1,975 °C at standard oxygen pressure. In a carbothermic reaction, heating the oxide with carbon converts it into zinc vapour at a much lower temperature (around 950 °C). (Source: Wikipedia)
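For reference, the reactions described above can be written out as standard textbook equations (my own rendering, not taken from the original source): the direct thermal decomposition of the oxide, the carbothermic route, and the re-oxidation of the zinc vapour, which is what produces the visible blue flame:

```latex
% direct thermal decomposition (around 1975 degrees C at standard oxygen pressure)
2\,\mathrm{ZnO} \longrightarrow 2\,\mathrm{Zn}_{(g)} + \mathrm{O_2}

% carbothermic reduction (around 950 degrees C)
\mathrm{ZnO} + \mathrm{C} \longrightarrow \mathrm{Zn}_{(g)} + \mathrm{CO}

% re-oxidation of the vapour: the visible blue flame
2\,\mathrm{Zn}_{(g)} + \mathrm{O_2} \longrightarrow 2\,\mathrm{ZnO}
```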
Top Tips for Re-Using Precious Scrap Metal
Here are some tips for when you are reusing your clean scrap or customer’s metal. Please comment if you have any tips you wish to share with us.
1. Be Clean and Tidy: If you know what’s in your scrap it will make it much simpler to troubleshoot any issues that might arise later.
2. The Periodic Table: You don’t have to have a degree in chemistry, but do try and learn about how metals behave. Remember that if they are close to each other on the table, they may behave similarly.
3. Precious Metals vs. Non-Precious Metals: Alloys usually contain non-precious metals which will affect their behaviour. High copper content such as reds and pinks can affect the crystal structure making the metals prone to cracking. As can nickel, or even silicon from casting scrap.
4. Quenching: Quenching in hot water (or metho, i.e. methylated spirits) can help when dealing with alloys containing non-precious metals. This helps by making the cooling rate of the different metals more consistent.
5. Fluxing/Gases: If your metal is questionable, an additional fluxing step is recommended to remove impurities. Likewise, if you see bubbles or flaring when melting, it is worth cooling, then reheating the metal while stirring to try to remove the gas. If the metal is still not acceptable after these steps, we recommend that you refine and start again with fresh metal.
6. Oxidising: When melting or pouring, use of a cover flame will help avoid oxidisation. If this is not done, it can result in burning off of metal (silver especially), but also in hardening of the outside layer, which can later cause issues, especially if it is worked into the metal.
7. Molds: This is purely a safety tip, but please make sure you heat your molds well before use.
OMF Metals Report 11_03_15
By Kevin Morgan
*Gold Hits Three Month Low *Silver Slumps, Eyes Major Support *Chinese Copper Imports Lowest Since 2011 *US Now on Daylight Savings Time
Beat Around the Bush
Butt as it hath be sayde full long agoo,
Some bete the bussh and some the byrdes take,
Raining Cats and Dogs
Earlier this week marked the first day of Winter. For many of us this means snow, and lots of it. For those of us in the Pacific Northwest, it seems to mean rain. In fact, I overheard someone describe the conditions outside with the phrase “It is raining cats and dogs out there.” The most amusing scene popped into my head as I really thought about this phrase I had heard so many times before.
The phrase “It’s raining cats and dogs” is another example of an idiom, a group of words that have a meaning unrelated to the actual written words. In this case the words, when said together, mean that it is raining really hard outside.
The origins of this phrase are mostly unknown. The first recorded use of the phrase in written word dates all the way back to 1651, when the British poet Henry Vaughan described a house with a roof strong enough to endure “dogs and cats rained in shower.”
One possible theory for the origins of this phrase comes from Norse mythology. Odin, the god of storms, kept with him a variety of dogs and wolves as his attendants, and sailors associated them with rain. Witches, who were known to take the shape of their cats, rode on the wind. Perhaps, over time, cats and dogs became associated with heavy wind and rain for these reasons.
Another potential explanation comes from the Greek phrase “cata doxa”, which means contrary to belief or experience. If it is raining harder than a person could believe, it is not that far of a stretch to see how using the phrase cata doxa to describe it could become cats and dogs over time.
One final theory, and perhaps the most unbelievable, comes from Great Britain in the 1500’s. During this time, roofs were made of thatch, which was essentially piles of hay with no wood underneath. Apparently, at times, small animals (cats and dogs?) would climb onto the roof and bury themselves in the hay to keep warm and stay safe. If it rained hard enough, these roofs would become very slippery and wash the animals right out of their cozy little hiding spots. Imagine walking by a house and having a pack of small pets land on your head. You might try finding a phrase to describe the phenomenon. “It’s raining cats and dogs” would seem the only natural fit.
Although we do not know the exact origins of the phrase “raining cats and dogs”, we do know that it has been used to describe heavy rain for quite some time. We also know that there are many theories out there. Which one is your favorite? Comment below!
Straight From the Horse’s Mouth
The English language can be confusing. Certainly, many of our idioms have fallen behind the times. Hold the phones – What phones? Don’t have a cow! – Surely, I’ll pass. Where did these phrases come from and why have they stuck around despite their falling out with modern technology and culture?
On my morning walk I heard someone mention that they had “heard it straight from the horse’s mouth”. Well, unless his horse is Mr. Ed, it seems quite unlikely that the man and his horse had a chat over breakfast and coffee. Who’s to say the horse’s information would even be slightly reliable; or more reliable than say, hearing something straight from a person’s mouth?
The phrase “straight from the horse’s mouth” has become an idiom. An idiom is a group of words established by usage as having a meaning not deducible from those of the individual words. What meaning do we deduce from this word cluster?
We take the phrase “straight from the horse’s mouth” to mean getting information from an authoritative or credible source. It might seemingly indicate that horses know best, however the phrase has an unexpected origin.
One possible origin comes from the buying and selling of horses before the invention of the automobile. As a means of labor and transportation, horses were very valuable in American civilization. When buying a horse it was very important to obtain information about the animal’s health history and extenuating attributes, similar to the way someone today would want to know the details about a car before purchase. Apparently an excellent way to gauge a horse’s age is by looking at its teeth, literally getting the facts straight from the horse’s mouth despite seller chicanery. In one of my personal favorites, “Fiddler on the Roof”, Tevye discloses to a man that the allegedly six-year-old horse he was sold actually turned out to be twelve. “It was twelve! It was tweeeelve!” he shouts during the iconic opening scenes as the town sings of “traditions, traditions”!
Another possible origin relates to horse racing. When betting on a horse, gamblers would put their money behind someone “in the know” who possessed that golden tidbit of information. How’d they bet on the winning horse? Why, they heard it straight from the horse’s mouth! The people who you’d imagine have the best conjectures about which horse might win are the trainers and stable boys. So, to say that it was heard straight from the horse’s mouth humorously suggests that someone is a step ahead of even the inner circle of stable boys, hearing it from the horse himself.
Though you’d never guess that’s where a phrase we use all the time and understand the meaning of comes from, it seems to have stuck through the decades. Though most of us don’t go around checking horses’ mouths to see if they’re a good purchase or spend our Sunday afternoons at the race track, you can be sure to trust it if you heard it from the horse’s mouth!
“Bite the Bullet”
18th Century Amputation Kit
Know Your Onions
To “know your onions” means knowing a lot about a subject. It’s a phrase that isn’t so common anymore. It’s a child of 1920’s slang, a slang that dreamed up such gems as “the bee’s knees”.
This is perhaps one of the stranger idioms you will find. What do onions have to do with being smart, anyway?
It all starts with a man with the unfortunate last name of ‘Onions’. English language expert Charles Talbut Onions edited the Oxford English Dictionary from 1895 through the mid-20th century. C. T. Onions knew his stuff where the English language was concerned, which creates the possibility that his name alone was enough to get the phrase going.
But there was more than one Onions. Mr. S. G. Onions of the numismatic industry produced coins for English schools starting in 1843. These coins were not used as real currency, but instead as learning tools for students learning to count. They had inscriptions that explained how currency added up, such as “12 Pence make 1 Shilling” and so forth.
However, the first print appearance of “know your onions” didn’t occur until the 1920’s – in the U.S., far from either Onions’ lineage. The fact that the phrase seemed to first pop up in America suggests that neither of the Onions had a hand in its evolution.
Similar phrases, like “know your apples,” were created in the 1920’s, but only onions stuck around.
The idiom also makes for a great song.
It Costs an Arm and a Leg
According to this common idiom, anything that costs “an arm and a leg” is very expensive.
Many claim to know where the phrase “an arm and a leg” came from. But what is the actual source of this strange idiom?
One incorrect source, part of a popular email titled “Little History Lesson” that spread like wildfire in 2000, claimed that something costing “an arm and a leg” comes from the days of George Washington. Some paintings, the email said, show Washington with an arm behind his back, and other paintings show all his limbs. The painters purportedly charged by the number of limbs in the painting.
But this story is false. While painters might charge for extra details or larger paintings, there is no evidence to suggest a per-limb fee.
The phrase only really shows up after WWII, well after Washington’s time. The earliest known source is from The Long Beach Independent in 1949: “Food Editor Beulah Karney has more than 10 ideas for the homemaker who wants to say ‘Merry Christmas’ and not have it cost her an arm and a leg.”
As part of the cost of WWII, many soldiers had lost limbs during the war. Perhaps these amputations created a dark influence over the English language.
Most likely, however, is the combination of two previous phrases from the 19th century: “I would give my right arm” and “If it takes a leg”.
Why Do You “Lose Your Marbles”?
To “lose your marbles” means to go crazy. Once you’ve lost your marbles, your sanity is far gone. But that seems like an odd association: what do marbles have to do with sanity?
There are a number of different possible origins of the phrase. But what was the first use? The meaning likely comes from the connection with a child losing his toys, such as his marbles, and not being happy about it. In 1886, the St. Louis Globe-Democrat published this sentence in an excellent summation of the connection of ideas: “He has roamed the block all morning like a boy who has lost his marbles.”
In the late 1800’s, to “lose your marbles” meant getting angry. For a while, “marbles” danced the line between meaning “anger” and “sanity”. One interesting note is that in the 1920’s, a person who had lost control had “let his marbles go with the monkey”, a phrase that came from a story about a boy whose marbles were taken by a monkey.
The meaning shifted toward “sanity” around the turn of the 20th century. In 1898 The Portsmouth Times published this line: “Prof. J. M. Davis, of Rio Grande college, was selected to present J. W. Jones as Gallia’s candidate, but got his marbles mixed and did as much for the institution of which he is the noted head as he did for his candidate.”
And in 1927, American Speech sealed the deal by defining losing your marbles as “Marbles, doesn’t have all his (verb phrase), mentally deficient.”
Keep the Ball Rolling
You’ve probably used the idiom before; you want to “keep the ball rolling”. The phrase, if you aren’t familiar, means to keep up a situation or activity, to keep it going.
The source of this phrase is early. It starts with an eccentric man named Jeremy Bentham, who wrote to George Wilson in 1781 to try and keep a conversation going. He wrote, “I put a word in now and then to keep the ball up.” (“Keep the ball up” was an older, British version.)
But the phrase was really established by American presidential campaigning. In William Henry Harrison’s 1840 campaign, supporters created giant balls covered with campaign slogans and rolled them from town to town, chanting “keep the ball rolling” for the candidate of “Tippecanoe and Tyler Too” – often cited as the first true political campaign slogan. One campaign song ran:
Don’t you hear from every quarter, quarter, quarter / Good news and true, / That swift the ball is rolling on / For Tippecanoe and Tyler Too.
Benjamin Harrison’s supporters revived the stunt in his 1888 campaign against Grover Cleveland, rolling a ball about 5,000 miles, across many states, to Indiana, Harrison’s home state. The race was a close one, but Harrison won in the end.
That Just Takes the Cake!
You’ve heard the phrase before – often as an expression of incredulity. “That just takes the cake!”
But what does cake have to do with winning the prize, so to speak?
You may think it comes down to the game that revolves all around cakes, the cake walk – but the first “take the cake” reference occurred around 424 B.C. Aristophanes’ play The Knights, a tale of Athens during the Peloponnesian War, contained a line that literally translates to, “If you surpass him in impudence the cake is ours.” Of course, this doesn’t refer to a literal cake (though that would be pretty cool too). It uses “cake” as a metaphor for victory.
“The true cake walk at the new circus.”
While this is a logical origin of the phrase, the use came and went in just the one line – disappearing until the 19th century. This is when William Trotter Porter’s A Quarter Race in Kentucky used this line: “They got up a horse and fifty dollars in money a side…each one to start and ride his own horse…the winning horse take [sic] the cakes.” Once again, cake refers to victory.
This is where the cake walk comes in. In black southern communities of the U.S., couples dressed in their best and paraded along a course, showing off their finest walk. The best-dressed, most charismatic couple won the contest, and the prize was often a cake – which is how the cake walk got its name.
See this 1874 reference to a cake walk: “The cake-walk, in which ten couples participated, came off on Friday night, and the judges awarded the cake, which was a very beautiful and costly one, to Mrs. Sarah and John Jackson.”
It’s still a mystery as to why Aristophanes’ first real “take the cake” disappeared for centuries, and why it only reappeared in the 19th century.
Historical Origins
More Bang for Your Buck
The phrase means “more value for your money,” and it has a more political origin than you might expect.
It all starts in the 1950’s with President Dwight D. Eisenhower’s Secretary of Defense, Charles Erwin Wilson. He used the word “bang” quite literally as a reference to nuclear weapons, because the new “New Look” security policy called for greater reliance on them. In this context, the phrase meant, almost literally, “more bombs for your money”. The U.S. military wanted more destructive power per dollar, and “bigger bang for your buck” could hardly be a better summation.
Charles E. Wilson and Chairman of the Joint Chiefs, Arthur W. Radford observing a controlled explosion.
Thanks to the phrase’s catchy alliteration, it stuck around. But it did lose its political connotation as time passed on, moving instead toward the meaning that we know and love today.
The first printed account of “bang for the buck” appeared in New Language of Politics in 1968, where the author, William Safire, recounts Wilson’s invention of the phrase.
The earlier equivalent of “more bang for your buck” was Pepsi’s 1950 slogan “more bounce to the ounce”.
Possible Effects of Hydraulic Fracturing and Shale Gas Development in Durham County
by Zheng Lu
Hydraulic fracturing, a process that extracts oil and natural gas from underground rock formations, is a process utilized to increase oil and gas yields. In nature, gas and oil are not typically found in underground caverns. Instead, these energy sources are found in the pore spaces of underground rocks. In order to reach these sources to produce oil or gas, a well needs to be drilled into the rock formation so that the oil or gas can be retrieved. The hope is that the gas or oil from the rocks flows into the well from the surrounding rock and then up the well to the surface (Hall).
Fluids flow easily through rocks which have a high permeability. However, if interconnections between pores in the rocks are too narrow or there are too few pores, permeability is low. Extracting oil or gas from these rock formations is not economical using conventional methods. Fracturing, a method in which cracks or fissures are created in the underground rock formations, gives oil or gas additional paths to flow through (Hall).
The first instance of fracturing occurred in the 1860s and was known as explosive fracturing. In explosive fracturing, an explosive charge, a “torpedo,” is lowered into the well. The resulting explosion fractures the surrounding rock and significantly increases the rate of oil or gas production compared with the production prior to fracturing (Hall).
In the 1940s, the process of hydraulic fracturing was developed. Unlike explosive fracturing, hydraulic fracturing does not use explosive charges. Rather, water at high pressures is pumped into the rock formations. The high pressure causes fractures in the rock formation and the water flowing through increases the size of the fractures. In order to prevent the fractures from “closing” when the pressurized water is removed, proppants – small particles such as sand, ceramic, or sintered bauxite – are used to prop open the fractures. These proppants are mixed into the fracturing water before being pumped into the rock formation. Water carries the proppants into the fractures and leaves them behind. Due to the high permeability of the proppants, oil and gas extraction are not impeded. Fracturing fluid consists of around 99.5 percent water and proppants (Hall).
The hydraulic fracturing process has been used in more than a million wells since the process was developed in the 1940s. It has been used in low permeability rock formations and in coal-beds in order to produce coal-bed methane. In recent years, hydraulic fracturing has received more notice due to its use in the production of oil and gas from shale, a process which has only become economical with the advent of horizontal drilling (Hall). In 2000, shale gas provided around 1 percent of the natural gas supply in the United States. By 2011, shale gas accounted for nearly 25 percent of the natural gas supply (Hagström and Adams 95).
In 2010, shale gas production in the United States amounted to around 5 trillion cubic feet. Projections by the U.S. Energy Information Administration show that production will triple by 2035 (Boersma and Johnson 571). “Early-adopter” states of hydraulic fracturing, such as Texas, Oklahoma, and Pennsylvania, have emphasized the economic development, job creation, and state income associated with drilling. Other states, such as New York, Delaware, and Vermont, have emphasized environmental concerns: polluted drinking water, anthropogenic seismicity, and the large carbon footprint created (Boersma and Johnson 572).
Shale gas is a relatively clean fuel when compared to coal or oil, releasing a lower amount of greenhouse gases when used as a source of energy. Hydraulic fracturing combined with horizontal drilling has made the production of natural gas commercially viable in many sites across the country. One advantage of allowing hydraulic fracturing is the creation of jobs. The development of the Marcellus Shale in Pennsylvania added more than 100,000 jobs in 2011 and generated over $10 billion for the state’s economy (Simmons). Residents in areas where fracking is utilized may also be entitled to a royalty payment of around 12.5%-21% per unit of gas extracted (Muehlenbachs, Spiller and Timmins 3).
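The royalty arithmetic is straightforward. A minimal sketch, with entirely hypothetical lease numbers (the volume, gas price, and chosen rate are illustrative only; actual leases vary):

```python
def royalty_payment(units_extracted, price_per_unit, royalty_rate):
    """Royalty owed to a resident for gas extracted under a lease.

    units_extracted: volume of gas (e.g. in Mcf)
    price_per_unit: sale price per unit of gas
    royalty_rate: lease royalty fraction, e.g. 0.125 for 12.5%
    """
    return units_extracted * price_per_unit * royalty_rate


# Hypothetical lease: 1,000 Mcf sold at $3.00/Mcf with a 12.5% royalty
print(royalty_payment(1000, 3.00, 0.125))  # 375.0
```

At the 21% rate cited above, the same production would instead yield $630, which illustrates how much the negotiated rate matters to residents.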
Another advantage lies in the fact that electricity generation utilizing natural gas instead of coal produces about half the carbon dioxide and less than a third of the nitrogen oxides. Sulfur oxides released by natural gas total less than 1 percent of those produced by coal. Unlike “clean” sources of electricity such as solar and wind, natural gas can be used to produce energy on demand based on consumer needs (Olson). Due to the reduced greenhouse gas emissions of natural gas, it can be effectively used as a “bridge fuel” until alternative clean sources of energy become more efficient and widespread (Durham Environmental Affairs Board 16).
However, there are some drawbacks to the process as well. One key disadvantage to hydraulic fracturing is the potential for a negative impact on the water supply of an area. Water is heavily used throughout the entire process of hydraulic fracturing. According to the EPA, approximately 90% of the injected fracking fluid is composed of water. Estimates of water usage range up to 13 million gallons required for shale gas production. Acquiring the amount of water required for fracking might limit the amount of water available for other uses. Even if enough water remained after withdrawing the necessary requisites for the process, the water quality may be negatively affected (US Environmental Protection Agency 14).
The injection of the well with fracking fluid also could have potentially harmful results to the water supply, as there could be an accidental release of the fluid due to a well malfunction. The fracturing fluid could also migrate into water aquifers underground as a result of the induced fractures intersecting with existing natural faults (US Environmental Protection Agency 17). During the flowback portion of hydraulic fracturing, the pressure in the wells is reduced. As a result, the fluid returns to the surface as wastewater. In addition to the components of the fracking fluid, wastewater can also contain hydrocarbons and other natural products of oil. Wastewater is typically stored onsite in pits of tanks. Potential leakages could also result as a consequence of improperly built or maintained sites. Finally, wastewater disposal could be another source of water supply contamination. (US Environmental Protection Agency 19).
Air pollution is always a large concern when looking at the oil and natural gas industry. The opening of a new well requires a range of equipment and construction. As a result, wells are typically a large source of volatile organic compound emissions. These compounds contribute to the formation of ground-level ozone, or smog. Wells are also significant producers of methane, a greenhouse gas about 20 times more potent than carbon dioxide. Benzene, ethylbenzene, and n-hexane, which are known as air toxics, are also produced by wells (Basic Information: Emissions from the Oil & Natural Gas Industry).
North Carolina is rich in many different natural resources. However, the state has little experience in dealing with petroleum or gas extraction. As a result, there has not been much regulation concerning shale gas. The Mesozoic basin, formed 225 million years ago, runs through the state of North Carolina. Durham County lies in the Piedmont physiographic province. Technological advances have made it economically feasible to exploit the gas reserves in the Piedmont (Durham Environmental Affairs Board 4-5).
The shale formations that lie underneath most of the lower half of Durham County are shallower when compared to the formations in other states such as Pennsylvania, West Virginia, or Texas. The smaller distance might be beneficial in reducing technical difficulties of the drilling process. However, the distance would also reduce the distance between the drilling sites and groundwater resources (Durham Environmental Affairs Board 6).
Many residents of Durham County depend on well water as their primary water supply. The Durham County Health Department estimates that there are around 6,000 private wells in the county. As most of these wells are outside the city limits, switching to public water is not an option. Residents that depend on well water are more likely to be negatively affected by the process of fracking (Durham Environmental Affairs Board 11).
Durham does not currently have the water capacity to support fracking operations. Additionally, a long lead time is required for the planning, funding, purchasing, and constructing of additional reservoirs. Disposal is also an issue in Durham, as the exact composition of the wastewater that is generated is typically a trade secret. Because of this fact, Durham waste water treatment plants cannot process the waste water (Durham Environmental Affairs Board 9-10).
Presently, hydraulic fracturing is not approved in North Carolina. However, on February 27, 2013, the North Carolina Senate approved a bill which would allow the North Carolina Mining and Energy Commission to start issuing permits for fracking by March 2015. The bill is currently being debated in the North Carolina House (Drye). In order to study the effects of hydraulic fracturing, a comparison can be made to other areas that have recently permitted fracking. Washington County, Pennsylvania provides a good source for comparison. Shale gas wells have recently been drilled in that area.
Data is gathered using Zillow in conjunction with a Google maps database on oil and gas wells in Pennsylvania. Zillow gives a brief overview of house attributes while gas well positions are highlighted on the map. The distance to gas wells is then estimated by noting the straight line distance between house location and the nearest gas well.
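The straight-line distance step above can be approximated with the haversine great-circle formula. A minimal sketch, with hypothetical house and well coordinates (the lat/lon values are invented for illustration, not taken from the actual dataset):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_well_distance(house, wells):
    """Distance in miles from a house to the closest well in a list of (lat, lon) pairs."""
    return min(haversine_miles(house[0], house[1], w[0], w[1]) for w in wells)

# Hypothetical coordinates roughly in Washington County, PA
house = (40.17, -80.25)
wells = [(40.19, -80.22), (40.10, -80.30)]
print(nearest_well_distance(house, wells))  # a couple of miles
```

For distances of a few miles the straight-line approximation is adequate; the measurement error from eyeballing positions on a map likely dominates any curvature effect.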
The hedonic model is a widely used approach that values characteristics of products which are not explicitly priced in markets of their own. Under ideal conditions, the hedonic model can show the marginal values of changing attributes of a product. The hedonic model has been widely used in the housing market to study topics ranging from air quality to crime to school quality (Pope 499).
Utilizing a hedonic model, it is possible to assess how proximity to a shale gas well affects local residents by looking at the changes in property values over time. A hedonic model can estimate the average “willingness-to-pay” for a particular attribute. The simplest model compares the prices of properties based on their proximity to a well, controlling for the properties’ attributes. This results in a regression of the form:
ln(Price) = β0 + β1(Well Distance) + β2 ln(Bedrooms) + β3 ln(Bathrooms) + β4 ln(Living Area) + β5 ln(Lot Size) + β6 ln(Age) + ε
As Zillow only gives the basic attributes of a particular property, the attributes for houses were limited to the number of bedrooms, the number of bathrooms, the square footage of the living area, the lot size, and the age of the property.
Twenty individual properties located in Washington County were selected in order to perform the analysis. A robust regression yields the following coefficients (and standard errors) for each term:
Covariate β Robust Standard Error P Value
Well Distance (miles) 0.1136 0.05123 0.045
Bedrooms 0.0502 0.1617 0.761
Bathrooms 0.1251 0.0773 0.130
Living Area (sq ft) 0.3177 0.2238 0.179
Lot Size 0.1504 0.0785 0.078
Age -0.2435 0.0978 0.027
Looking at the p-values, it can be seen that well distance, lot size, and age are significant at the 10% level. A 1% increase in lot size would lead to about a 0.15% increase in property value, while a 1% increase in age would lead to a decrease in value of around 0.24%. House values rise about 11.4% for each mile of distance from an active well.
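This kind of hedonic fit can be sketched on synthetic data. The sketch below invents 200 hypothetical properties, generates log prices from coefficients chosen near the table’s estimates, and recovers the well-distance effect; plain least squares stands in for the robust regression used in the actual analysis, and none of the numbers come from the Zillow sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical covariates -- stand-ins for the Zillow attributes
dist = rng.uniform(0.1, 5.0, n)               # miles to nearest well
beds = rng.integers(2, 6, n).astype(float)    # bedrooms
baths = rng.integers(1, 4, n).astype(float)   # bathrooms
sqft = rng.uniform(900.0, 3500.0, n)          # living area
lot = rng.uniform(0.1, 3.0, n)                # lot size, acres
age = rng.uniform(1.0, 80.0, n)               # years

# "True" log-price model: about +11% per mile from a well,
# elasticities on the logged attributes, plus noise
log_price = (11.0 + 0.11 * dist
             + 0.05 * np.log(beds) + 0.12 * np.log(baths)
             + 0.30 * np.log(sqft) + 0.15 * np.log(lot)
             - 0.24 * np.log(age) + rng.normal(0.0, 0.05, n))

# Ordinary least squares on the same specification
X = np.column_stack([np.ones(n), dist, np.log(beds), np.log(baths),
                     np.log(sqft), np.log(lot), np.log(age)])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta[1])  # estimated well-distance effect, close to 0.11
```

With 200 simulated observations the distance coefficient is recovered tightly; with only 20 real observations, as in the text, the standard errors are necessarily much wider.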
The problem with the above analysis lies in the fact that there might be a correlation between well distance and the error term. Perhaps gas wells are purposely placed next to run-down houses. Perhaps lease payments for rights would incentivize people to live close to wells. The key factor for isolating the effects of the shale gas wells is to control for correlated unobservable attributes which may influence and bias the resulting estimators. Using property fixed effects is an easy way to separate the unobserved factors of each property from each other. Looking at the variation in housing prices over time with respect to the change in proximity of a shale gas well allows for the implicit value of that well to be estimated.
Including a dummy variable for each house controls for factors that do not change over time. Due to the difficulty of connecting specific well construction times with the data provided by Zillow, it is assumed that no gas well was present before 2004. Changing the variable from well distance to inverse well distance allows the analysis to proceed, resulting in a regression of the change in log price on the change in inverse well distance.
Utilizing the 11 remaining data points, the following results are found for the covariates:
Covariate β Robust Standard Error P Value
Change in Inverse Well Distance 0.4367 0.2229 0.079
0.5872 0.3474 0.122
Inverse well distance is significant at the 10% level. This regression gives a different result than the last one, showing that housing prices actually rise as the distance to a gas well decreases. This could be because these houses have municipal water rather than well water; with data from Zillow alone, however, it is impossible to know.
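The differencing logic behind this fixed-effects regression can be illustrated on synthetic data: each house gets a fixed, unobserved quality term, and differencing the two periods cancels it exactly, leaving only the inverse-distance effect. All values here are hypothetical, with the “true” coefficient set near the 0.4367 estimate in the table:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Each house carries a fixed, unobserved quality term that never changes
quality = rng.normal(0.0, 0.5, n)

inv_before = np.zeros(n)                      # no well assumed before 2004
inv_after = 1.0 / rng.uniform(0.5, 5.0, n)    # inverse miles to nearest well

# "True" log prices: fixed quality plus the inverse-distance effect
p_before = 11.0 + quality + rng.normal(0.0, 0.05, n)
p_after = 11.0 + quality + 0.44 * inv_after + rng.normal(0.0, 0.05, n)

# First-differencing cancels the fixed effect, isolating the well term
dy = p_after - p_before
dx = inv_after - inv_before
X = np.column_stack([np.ones(n), dx])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(beta[1])  # close to the 0.44 "true" coefficient
```

The point of the sketch is that `quality` never appears in the final regression: any time-invariant confounder, observed or not, drops out of `dy`, which is exactly why the fixed-effects specification is more credible than the simple cross-sectional one.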
A problem with the regression analysis performed is the lack of data points. Cross-referencing Zillow with a map of shale gas well locations is not the most efficient way to collect data. Increasing the sample size by a factor of 10 would increase the accuracy of the results. Ideally, a data set with thousands of samples would be used in this analysis. Zillow also does not provide a detailed report of each house. At times, even the most basic information (lot size) is omitted. Further analysis using more detailed data would provide more conclusive results.
Most of southern Durham County’s inhabitants use municipal water. Fracking and its impact on groundwater might affect these particular residents less than those that depend on well water as their primary water source. Most of the residents of eastern Durham County rely on well water. Zillow currently shows that the price of housing in eastern Durham County is generally much higher than the price of housing in the south. Fracking could impact the property values there greatly.
In the end, the most important aspect to hydraulic fracturing regulation is that relevant monitoring systems are established. If hydraulic fracturing begins in Durham County, the water resources of Durham’s residents need to be protected. A more thorough analysis of the effects of hydraulic fracturing would point to better and safer regulation.
Works Cited
Boersma, Tim and Corey Johnson. “The Shale Gas Revolution: U.S. and EU Policy and Research Agendas.” Review of Policy Research 29.4 (2012): 570-576.
Drye, Kelley. Hydraulic Fracturing: State Regulatory Roundup Vol. 15. 8 March 2013. Web. <http://www.lexology.com/library/detail.aspx?g=f0badb74-6409-430f-b8bc-d22839d68b58>.
Durham Environmental Affairs Board. Report to the Joint City/County Planning Council on Some Potential Environmental Impacts of Hydraulic Fracturing in Durham County, and Recommendations to Consider for Future Implementation. Durham, 2012.
Hagström, Earl and Julia Adams. “Hydraulic Fracturing: Identifying and Managing the Risks.” Environmental Claims Journal 24.2 (2012): 93-115.
Hall, Keith B. “Hydraulic Fracturing – a Primer.” The Enterprise 41.11 (2011).
Muehlenbachs, Lucija, Elisheba Spiller and Christopher Timmins. “Shale Gas Development and Property Values: Differences Across Drinking Water Sources.” NBER Working Paper Series (2012): 1-37.
Olson, Jon. Natural Gas Is an Energy Solution That Works Today. 29 November 2011. 17 March 2013. <http://www.usnews.com/debate-club/is-fracking-a-good-idea/natural-gas-is-an-energy-solution-that-works-today>.
Pope, Jaren C. “Buyer information and the hedonic: The impact of a seller disclosure on the implicit price for airport noise.” Journal of Urban Economics 63.2 (2008): 498-516. Web.
Simmons, Daniel. No Evidence of Groundwater Contamination From Fracking. 29 November 2011. 17 March 2013. <http://www.usnews.com/debate-club/is-fracking-a-good-idea/no-evidence-of-groundwater-contamination-from-fracking>.
US Environmental Protection Agency. Basic Information: Emissions from the Oil & Natural Gas Industry. 10 October 2012. 16 March 2013. <http://www.epa.gov/airquality/oilandgas/basic.html>.
—. Study of the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources: Progress Report. Washington, DC, 2012. Web.
SpaceVR aims toward a VR camera in space
SpaceVR is a virtual reality platform set to share live 3D, 360 degree content from the International Space Station (ISS) so that anyone with virtual reality gear can feel like an astronaut. The company was founded in January this year by Ryan Holmes, CEO, and Isaac DeSouza, CTO.
The San Francisco-based company is crowdfunding to send its 3D camera into space. "Through the use of 3D, 360-degree cameras, SpaceVR technology feeds livestream footage from the International Space Station's (ISS) Cupola observatory module back to Earth so consumers can experience space travel in immersive 3D virtual reality," they said.
The Kickstarter page for their effort said, "Only 536 people have ever been to space; at SpaceVR we ask, what about the other 7 billion?"
They will send a 360-degree camera to the International Space Station to collect footage that anyone can experience using virtual reality headsets. That's their dream. They would like as many people as possible to experience a VR view of space.
Their camera is called the Overview One. According to the plan, existing camera components are to be combined with parts that will be 3D-printed in space by their partner, Made in Space, they said, and will be assembled on the ISS. "The full camera will be assembled by an astronaut (following directions put together by the SpaceVR team)."
Once that is assembled, the team will collect the first virtual reality footage from space with the help of the ISS Cupola observatory module, with its large windows. (Its windows are used to conduct experiments, dockings and observations of Earth.) There will be footage for backers to view who can use a VR device, including even Google Cardboard.
The SpaceVR team have turned to Kickstarter to fund their first year of operations and the launch of Overview One into space.
In addition, they said they have partnered and are on the flight manifest with NanoRacks, a company that specializes in getting payloads to space.
They have a $500,000 goal on Kickstarter. Pledge amounts and package details vary; you can visit the page and wade through the options if interested. Rewards include posters, T-shirts, three months of access to the first footage, a SpaceVR Cardboard virtual reality kit, and other options, at pledge levels running from $1 to $10,000.
Somehow, words alone do little justice to the very energized message in their video about why this matters to them and to our futures. "Being in space and looking down at the Earth, astronauts are hit with an astounding reality: our planet is a tiny, fragile ball of life, 'hanging in the void', shielded and nourished by a paper-thin atmosphere. Astronauts refer to this as the Overview Effect. The idea of national boundaries vanishes, the conflicts that divide people become irrelevant, and the need to come together as a civilization to protect this "pale blue dot" becomes both obvious and imperative." They want to bring that experience to everyone.
They said that if they pass their funding goals, this is just the first step. They have interesting plans for the future. These include sending a VR camera to the moon in 2017, landing a VR camera on an asteroid in 2022, and launching a remote-controllable cube-sat VR system into orbit, where you can control where the satellite goes and see what it sees from your headset. They also listed going to Mars as soon as 2026.
© 2015 Tech Xplore
Citation: SpaceVR aims toward a VR camera in space (2015, August 11) retrieved 14 July 2020 from https://techxplore.com/news/2015-08-spacevr-aims-vr-camera-space.html
After SARS (severe acute respiratory syndrome) in 2003 and MERS (Middle East respiratory syndrome) in 2012, a novel coronavirus, COVID-19, has shaken the stability of the health system yet again.
Due to its longer incubation period and higher contagiousness, the outbreak earned the title of global pandemic, with world cases now counting in the millions. Despite the serious public health threat, we are seeing a surprising antagonism: the choice between public health and local economies. The same mercantilist and conservative approach holds true for the search for a cure, with pharmaceutical companies pouring billions into the development of vaccines while completely ignoring readily available immunity-boosting solutions like ozone therapy or vitamin D.
In this article I would like to point out the benefits of ozone therapy, describe ways of getting this treatment, and illustrate case studies demonstrating its valuable effects on human health.
Recently, the Journal of Infectious Diseases and Epidemiology published a very striking article. In its conclusion, the authors, Robert Jay Rowen, MD, and Howard Robins, DPM, state:
Many viruses require reduced sulfhydryl groups for cell fusion and entry. Corona viruses, including SARS-CoV-2 are rich in cysteine, which residues must be intact for viral activity. Sulfhydryl groups are vulnerable to oxidation. Ozone therapy, a very inexpensive and safe modality may safely exploit this critical vulnerability in viruses like SARS-CoV-2.(…) Ozone therapy could be easily deployed worldwide, even in very poor countries. With few conventional treatments for viral pneumonia, this epidemic could provide impetus to study ozone therapy under the auspices of an institution’s review board in treating, with ozone therapy, seriously ill patients, who might otherwise expire. Milder cases could also be treated to study the ability of ozone therapy to slow or halt clinical deterioration. Such study could bring ozone therapy to the forefront of all-around infectious disease management. [i]
But what is ozone, and how do you get it? A single atom of oxygen is deficient in electrons, which makes it very unstable; it cannot exist in nature by itself. Two atoms of oxygen, however, can join together to share electrons, creating a very stable molecule named O2. This is the oxygen that we breathe. When molecules of O2 are subjected to electricity or UV light, they can split apart again, and in 2-3% of cases three atoms of oxygen unite to become O3, known as ozone. Ozone is an unstable molecule, which, according to its proponents, strongly stimulates mitochondria in the body and regenerates cells ten times better than O2. Ozone is the most powerful oxidant found in nature. Ozone therapy (OT) consists of the use of 1-5% ozone in 95-99% oxygen as a gas. This mixture is called “medical ozone”.[ii]
Dr Shallenberger highlighted some benefits of the OT: (1) increases the delivery of oxygen to the cells; (2) increases oxygen utilization; (3) stimulates cytokines function; (4) increases nitric oxide production; (5) increases hemoxygenase-1 (HO-1); (6) stimulates detoxification; (7) stimulates antioxidant enzymes; (8) ozone is directly toxic to all microbes, bacteria, fungi, yeast, viruses, protozoa.[iii]
Due to the abovementioned benefits zone therapy is being used to treat cancer, autoimmune disease, acute and chronic infections, Lyme disease, parasitic infections, macular degeneration (age-related), MRSA, acute and chronic ulcers, candidiasis, HIV, slipped disc, chronic fatigue syndrome, dentistry, fibromyalgia. The treatment and the prevention may be given directly to the tissue, intravenously or intramuscularly.
Apart from its beneficial effect on numerous health conditions, ozone has yet another enormously important application due to its antibacterial qualities. Let us, firstly, consider these data: (1) 2.5 billion people do not have access to improved sanitation, i.e. more than 35% of the world’s population; (2) unsafe drinking water, inaccessibility of water for hygiene, and lack of access to sanitation together contribute to about 88% of deaths from diarrheal diseases, with 801,000 children younger than 5 years of age perishing from diarrhoea each year, the vast majority in developing countries; (3) diarrheal diseases account for 11% of the 7.6 million deaths of children under the age of five, meaning around 2,200 children are dying every day; (4) in countries without a good sanitation infrastructure, about 250 million people are infected every year by waterborne pathogens, with about 10 million deaths. And so on, and so forth… this tragic list is interminable. According to Emeritus Professor Velio Bocci:
Ozone is possibly an even more potent drinking water disinfectant able to inactivate several human pathogens, e.g. as many as 63 different bacteria (Salmonella, Shigella, Vibrio, Campylobacter jejuni, Yersinia enterocolitica, Legionella, etc.), some 15 viruses (polio-, echo-, Coxsackie viruses, etc.), some 25 fungi and mould spores (Aspergillus, Penicillium, Trichoderma, etc.), several yeast varieties, and up to 13 fungal pathogens (Alternaria, Monilinia, Rhizopus, etc.).[iv]
Dr Velio Bocci spent his entire career researching the benefits of ozone therapy and published about 480 papers and monographs, the last two in English: Oxygen-Ozone Therapy (2002) and Ozone: A New Medical Drug (2005).
According to Dr Bocci, “in addition to the disinfection of drinking water, the use of ozone can also improve its taste.” His most striking claim, however, is that ozone therapy can fight infections which have become resistant to every known antibiotic. As he concludes, “this is a complex story, partly due to the extensive use of antibiotics in animal food and the improper use in patients”. Even without a vaccine, we are not powerless. Why can we not use ozone therapy?
Dr Bocci, Dr Rowen, Dr Robins and Dr Shallenberger are not the only physicians who know the benefits of ozone therapy and defend its wider application. A quick web search on OT turns up nearly 113,000 scientific articles on Google Scholar, 3,588 scientific articles in another index, and 47 books. Are there articles related directly to COVID-19? We find 14 on Google Scholar alone.
There are at least 18 investigational drugs – including the most famous, chloroquine, hydroxychloroquine and ivermectin – being tested against SARS-CoV-2. Researchers’ general preoccupation is the disastrous side effects of these remedies. In order to safely overcome the current health crisis we need a safe, inexpensive and efficient drug; if any one of these requirements is not met, the problem will remain unsolved. The colossal advantage of ozone therapy is that it is safe and very accessible, even in remote places. As explained by Dr Rowen and Dr Robins:
The treatment requires an ozone generator, medical-grade compressed oxygen, a syringe (and a butterfly needle for DIY method). The generator can be run off a car battery in remote areas. Ozone therapy is exceptionally safe, with a reported complication rate of only 0.7 per 100,000 treatments. Most side effects were found to be due to improper administration.[v]
Having read all the advantages of using ozone, it is still unclear why the therapy is so little known. There are several possible explanations, none of which, however, justifies the complete inaction and disinterest of medical professionals in trying this inexpensive and efficient solution. According to Dr Bocci, despite his numerous attempts to convince the medical establishment to try ozone therapy, the ruling dogma in the USA is that “ozone is a toxic gas and should not be used in medicine”. Beyond these barriers exposed by Professor Bocci, there are others. According to Dr Rowen and Dr Robins:
Ozone’s challenge is that it does not bring profit to justify private research to advance it towards regulatory agency “approval”, a process requiring tens of millions of USD. Hence, few in the medical field are aware of it, and fewer will consider “unapproved” therapy even to save lives. It suffers from the “tomato effect” because many of its achievements are regarded as impossible to believe. Virtually all use is in private offices, where most practitioners have no access to an institutional review board, now a requirement to gain acceptance of research for publication. Hence, the advancement of ozone therapy into mainstream medicine languishes, and most patients, with no alternatives to conventional therapies, suffer.
If the problems listed above did not exist, it is possible that thousands of people could have been saved from various diseases, potentially even COVID-19.
Italy and Spain reported high death rates from the coronavirus, and in both countries medical authorities have shown themselves to be more adventurous in trying unconventional treatments. In Italy, the Italian Scientific Society of Ozone Therapy (SIOOT) reported that 15 hospitals have started using Major Autohemotherapy (MAH) ozone treatments on patients with coronavirus infections. In Spain, a polyclinic in Ibiza has authorized the use of ozone therapy on COVID-19 patients, and so far “the results have been spectacular”, according to Dr Alberto Hernández, Assistant Physician for Anaesthesia and Resuscitation. Nevertheless, most states have been painfully slow to recognize the benefits of ozone treatment.
Ozone therapy is just one of dozens of highly efficient but overlooked treatments that are easily accessible to the general public. The poor monetization of such treatments leads to a situation in which we wait years and spend thousands on half-baked solutions, when we could have used a reliable treatment had reliable information been available at the right time.
The problem of ignoring “alternative” treatments can jeopardize the future proposed by transhumanists. If techno-scientific developments are always limited by an economic rationale or by a lack of openness to new ideas, our quest to build a more progressive and advanced society will remain unachievable. The transhumanist movement needs to put this debate in the spotlight. If we cannot disseminate a new safe, inexpensive and efficient auxiliary treatment to help us fight this pandemic, how can we be expected to believe that someday all people will have access to anti-aging treatment?
We cannot always make our prosperity, development, and wellbeing conditional on economic profit. Ozone therapy is not a solution to all our worries, but we do not have to solve them all at once. As they say, Rome wasn't built in a day, but it was built eventually. Just as we will solve the current crisis: one step at a time.
[i] ROWEN; ROBINS, pp. 2-3, 2020.
[ii] SHALLENBERGER, p. 13, 2017.
[iii] SHALLENBERGER, pp. 26-27, 2017.
[iv] BOCCI, 2005.
[v] ROWEN; ROBINS, pp. 2-3, 2020.
|
Epigenetics: You Determine Your Destiny
“The caterpillar has the exact same genes as the butterfly, but the genes are expressed differently depending on its given life stage.”
The university that I attended did not have one or two libraries, but six large ones. The first time I set foot in its endless hallways of information, I was overwhelmed, to say the least. When I required a specific reference for a project, I amazingly managed to find it. I then proceeded to find the particular page I needed.
Our genes work much like a library. They contain a plethora of information, which provides the body with instructions on how to operate. Your body can actually choose which genes to express, much like my library search. This well-recognized phenomenon of ‘reading’ specific genes is called epigenetics.
In an article published in Time magazine in 2010 entitled, “Why Your DNA Isn’t Your Destiny,” the work of Dr. Lars Olov Bygren is explored. Dr. Bygren, a preventative-specialist and researcher, studied the effects of feast and famine on children growing up in the 1900s in Norrbotten, Sweden. He discovered that dietary and lifestyle conditions not only affected the genetic expression of each individual, but also that of their children and grandchildren. He concludes “it is through epigenetic[s]…that environmental factors like diet, stress and prenatal nutrition can make an imprint on genes that is passed from one generation to the next.”
There are many great examples of epigenetics.
In a beehive, thousands of worker bees serve the queen bee. The only difference between the queen bee and the worker bees is their food source. The queen bee is the only bee allowed to consume the protein-rich royal jelly. Although the queen bee is genetically identical to all of the other bees, she lives up to 28 times longer, grows three times larger and lays about 2000 eggs in her most fertile state.
Another example of epigenetics is demonstrated with the metamorphosis of a caterpillar into a butterfly. The caterpillar closes itself in a cocoon, at which time it completely dissolves, and then reforms into a butterfly. The caterpillar has the exact same genes as the butterfly, but these genes are expressed differently depending on its given life stage.
Epigenetics tells us that we are in control of our genetic destiny. By choosing a healthy diet and lifestyle, we may influence optimal genetic expression for our own selves and generations to come.
Josh Gitalis consults with clients worldwide, teaches clinical nutrition, and is a noted expert for various media outlets.
Visit: joshgitalis.com
|
Mountain Goat Research
Baranof Island Research Project
Figure 1. GPS radio-collared adult male mountain goat in the Clear river drainage on Baranof Island.
Baranof Island harbors a unique mountain goat population. In 1924, mountain goats were transplanted to the island from Tracy Arm (south of Juneau) in the belief that mountain goats were not present in the area. However, recent genetics research has revealed evidence suggesting that mountain goats may have always been present on the island, having survived the last glacial maximum in a small “cryptic” refugium on its southwestern end (a contention supported by historic Russian fur-trade-era documents). Thus, the current population is composed of two distinct genetic lineages (the Tracy Arm lineage and the “endemic” lineage).
The mountain goat population on Baranof Island is generally productive, but in recent years high levels of harvest of female mountain goats have resulted in population declines in local areas, particularly in popular hunting areas directly accessible from Sitka. Current research has focused on gathering detailed field data on population size, reproduction, survival and habitat selection/movement patterns in order to inform management strategies for this recovering population. Since 2010, 31 mountain goats have been captured and marked with GPS radio-collars; monitoring these animals is a central aspect of the research on Baranof Island.
In addition, expansion of hydroelectric projects on the island is predicted to inundate mountain goat winter range in the Blue Lake area. Research objectives therefore also include collecting the data needed to assess precisely the extent to which mountain goat winter range will be affected by development activities. Future plans to expand the hydroelectric network across the island to the Takatz Lake area will likewise require detailed information about mountain goat spatial and population biology to assist with planning and mitigation efforts.
Project Collaborators
Alaska Department of Fish and Game, City of Sitka, U. S. Forest Service, University of Alberta
Shafer, A. B. A., K. S. White, S. D. Cote, D. W. Coltman. 2011. Deciphering translocations from relicts in Baranof Island mountain goats: Is an endemic genetic lineage at risk? Conservation Genetics 12:1261-1268.
White, K. S., P. Mooney and K. Bovee. 2012. Mountain goat movement patterns and population monitoring on Baranof Island. Research progress report. Alaska Department of Fish and Game, Juneau, AK.
From Alaska Fish and Wildlife News
More Information
For more information please contact: Kevin White ( or Phil Mooney (
Additional Figures
Figure 2. Seasonal movement patterns of a GPS radio-collared male mountain goat illustrating utilization of winter range habitat near the shore of Blue Lake; near area proposed to be inundated via hydroelectric development.
Figure 3. Sitka Area wildlife biologist, Phil Mooney, handling an immobilized male mountain goat on Baranof Island.
Figure 4. Remains of a 14-year-old male mountain goat that died during the winter of 2012. This was the oldest male mountain goat yet documented in southeast Alaska research studies (one year shy of the North American record). |
Sunday, 28 July 2019
Saving our pride!
Tigers are the pride of our nation. They are incredibly beautiful, intelligent animals. If you have been following our tiger series, by now you know how important tigers are to our economy and our ecology. It is beyond all doubt that tigers are worth more alive than dead. What, then, still threatens tiger conservation?
Tigers are currently listed as endangered on the IUCN Red List. The greatest threats to tiger conservation are trophy hunting, poaching for the illegal trade, habitat loss, loss of primary food sources, and human-wildlife conflict. Over the years, a number of strategies have been implemented (by the NTCA) for tiger conservation, and yet more needs to be done.
Among these initiatives, the ones that can most positively impact tiger populations are controlled tourism with proper monitoring and the development of local communities through job creation. Equally needed are: protection and separation of tiger habitat from human reach; creation of wildlife corridors so that tigers can migrate to other national parks; implementation and strict enforcement of anti-poaching laws; education of local communities about tiger behaviour; and relocation of villages around wildlife corridors and tiger reserves.
As individuals, too, we can contribute significantly to tiger conservation. Strictly boycott any form of tiger products, tiger temples, zoos, artificial safari parks or any other tourism outside of their natural habitat. Visit as many tiger reserves as you can. When you are inside a reserve, make sure you follow the rules, be respectful to the animals and don't disturb them. Sustainable tourism will help the locals generate income and give them an incentive to coexist with the animals. Drive carefully around the park and report any illegal snares or activities to the forest department immediately. Using your photos and videos, encourage others around you to do the same. Together we can save our pride.
|
EU air bridge: hundreds of dedicated flights brought European citizens back home
After the outbreak of the coronavirus pandemic, the EU funded the repatriation of tens of thousands of European citizens through the EU Civil Protection Mechanism. Yet not all EU governments decided to take advantage of this form of support from Brussels.
The EU's Civil Protection Mechanism
In addition to the EU member states, six other countries participate in the EU Civil Protection Mechanism (CPM): Iceland, Norway, Serbia, North Macedonia, Montenegro, and Turkey. The program was set up in 2001, and since then more than 330 coordinated actions have taken place. In essence, the CPM is a mechanism through which a country may request assistance when an emergency exceeds its ability to respond. It is a form of cooperation best adapted to situations where a country knows what help it needs but cannot obtain it; in such cases assistance can be coordinated quickly and efficiently at EU headquarters.
Following a request through the Mechanism, the Emergency Response Coordination Center (ERCC) can mobilize assistance or expertise. The ERCC monitors such situations 24 hours a day and provides emergency support in cooperation with national civil protection authorities. Satellite maps produced by the Copernicus Emergency Management Service provide additional support for the operations. Copernicus provides geographical information (GIS) that is useful for mapping affected areas and planning disaster relief operations.
Any country in the world, as well as the United Nations and its agencies or an NGO, can request assistance via the CPM. The mechanism was used in the Ebola outbreaks in West Africa (2014) and the Democratic Republic of the Congo (2018), in the aftermath of tropical cyclone Idai in Mozambique (2019) and an earthquake in Albania (2019), and during forest fires in Sweden (2018), Bolivia (2019), and Greece (2019).
The coronavirus has redefined the CPM
The coronavirus pandemic is a major challenge for the Civil Protection Mechanism. In this case there is no single state or region that needs coordinated assistance; rather, each member state is desperately trying to improve its own situation, looking for stocks on an otherwise empty European market or trying to develop its own production. But whether it is masks, tests, ventilators, medications, or even medical and nursing staff, demand far exceeds supply.
In principle, EU coordination is important in order to get stocks to where they are needed, in line with the flattening or rising of the epidemic curve in each member state. But states tend to be reluctant to give up hard-won assets, preferring to acquire new stocks from China, making cooperation more difficult.
In this context, the benefits were mainly PR ones when Romanian and Norwegian medical teams went to Italy under the CPM, and when Austria sent 3,000 liters of disinfectant there. Ursula von der Leyen, President of the European Commission, has repeatedly called for aid to be directed where it is most needed, and has urged member states “not to stockpile medicines, to restrict online shopping, and not to order an export ban”.
Hungary decided to the contrary, twice. First, an export ban was imposed on chloroquine and its derivatives (which were also being tested as COVID-19 treatments), and later another on antibiotics, painkillers and sleeping pills.
Solidarity flights
In particular, the Emergency Response Coordination Centre (ERCC) has undertaken the largest operation in its history: repatriating Europeans from other countries. The system is relatively simple. One state decides to charter a plane home from a non-EU country from which returning would otherwise be difficult or impossible. Typically half of the passengers are citizens of the chartering country, and the rest of the seats are filled with as many other EU citizens as possible, as a form of solidarity.
It takes a day or two to organize such a flight, so a great deal of coordination work was involved. And there were not just one or two planes: by May 8, 269 such flights had departed, carrying more than 66,000 people.
We collected relevant data on all flights from the information published by the European Commission, so as to provide accurate figures on which countries chartered how many flights and when, the number of passengers and their nationality, and whether non-EU citizens were on board. It became clear which countries made most use of this opportunity, and which did not organize a flight of their own, many preferring to request a lift on a departing plane.
The very first plane was chartered by France on January 31 with 180 passengers – not surprisingly, from the starting point of the epidemic: Wuhan, China. Two days later, another French flight arrived from Wuhan, with Hungarians among the passengers. Seven left the quarantined city.
A total of 18 EU member states (Ireland, Portugal, Spain, Luxembourg, Belgium, France, Germany, Lithuania, the Czech Republic, Sweden, Latvia, Italy, Finland, Austria, Denmark, Hungary, the Netherlands, and Slovakia) have chartered aircraft. Most flights (156 so far) have been organized by Germany, followed by France (25), Spain (15), Czechia (a surprisingly high 13), Belgium (13), and Austria (10). More than 32,000 German citizens have returned home in this way, followed by the French with nearly 6,000.
Logical correlations were observed in the organization of the trips. For example, the Dutch chartered only five flights of their own as part of this cooperation, instead using Belgian or German planes. Hungarians and Slovaks usually arrived in Prague on Czech flights and got home more easily from there. Most people wanted to return as soon as possible from the Far East, India, Indonesia and Malaysia, and many planes also arrived from southern Africa (Namibia) and South America, including Peru.
There were also genuine air bridges: Germany chartered five flights from Denpasar, Indonesia, and seven from Windhoek, Namibia, at the end of March. Between 3 and 11 April nine aircraft came from Auckland and Christchurch, in New Zealand; and between 22 March and 6 April another nine German planes were chartered from Costa Rica.
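The per-country tallies described above can be sketched in a few lines of code. The records below are hypothetical stand-ins, not the real flight data published by the European Commission:

```python
from collections import Counter

# Hypothetical (country, passenger-count) records, standing in for the
# real repatriation-flight data described in the article.
flights = [
    ("Germany", 180), ("Germany", 250), ("France", 180),
    ("France", 210), ("Spain", 160), ("Czechia", 120),
]

def summarize(records):
    """Tally flights chartered and passengers carried per country."""
    n_flights = Counter(country for country, _ in records)
    passengers = Counter()
    for country, pax in records:
        passengers[country] += pax
    return n_flights, passengers

n_flights, passengers = summarize(flights)
```

On the toy data above, `passengers.most_common()` yields the same kind of ranking the article reports for the real dataset (the country with the most chartered flights first).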
The Hungarian way
EU Commissioner for Crisis Management Janez Lenarčič sent a letter to all member states on 31 March, reminding them that the CPM remained available and asking governments to make use of it. During this period the Hungarian government was involved in a spat with the opposition. The government claimed that the EU was offering no help in fighting the epidemic and its effects; among other things, the opposition pointed to the CPM as a tangible opportunity that should be seized. The opposition also claimed that Hungary was receiving EU money for the fight against the coronavirus, which according to the government is a lie. We have looked into the issue and found that the reality, as usual, is more complicated than the slogans.
At an online press conference on 3 April, Foreign Minister Péter Szijjártó recalled Lenarčič’s letter as follows: “Earlier this week, we received a letter that Brussels is putting in place some kind of support mechanism, but the vast majority of European countries have been bringing their citizens home from around the world for almost a month now, and so are we, so this coordination attempt is somewhat late.”
Three days later, on April 6, Péter Szijjártó commented in a Facebook post with slightly different content but similar criticism when he wrote: “On 31 March, a letter was received from the European Commissioner for Crisis, informing us that, under very serious conditions, even reimbursement could be made on the return flights. So it took the European Commission a month to switch: many hundreds of EU citizens are stuck abroad and waiting for help to get home.”
According to the foreign minister, this was a new initiative. However, the EU had been offering the opportunity to take part in the program all along (as we have seen, the planes had been operating since the end of January), and the CPM itself is almost twenty years old. The “very serious conditions” simply meant that citizens of other countries were to be brought home as well, just as others brought Hungarians home.
The Hungarian authorities brought thousands of Hungarian citizens home, mainly on Wizzair planes or special flights, without recourse to the EU reimbursement option. That, of course, would have required citizens of other states to travel on the planes as well. Among those repatriated was Balázs Dzsudzsák, the captain of the Hungarian national football team, who traveled on a special plane with a diplomatic passport.
Then on April 17, the government changed its approach. The Hungarian government chartered a flight from Phnom Penh, the capital of Cambodia, and submitted its invoice to Brussels. This flight was interesting in itself: usually, such flights are large passenger planes with hundreds of people on board, but there were only ten passengers on board the Hungarian plane – five Hungarians, four Slovenians, and one Slovak.
On May 7, a Wizzair plane also landed, bringing Hungarian and other Central European passengers back from New York, Miami, Toronto and Reykjavik. All this appeared on Péter Szijjártó's Facebook page, where the foreign minister explained that, “in the framework of Central European cooperation, we also brought home twenty-one Slovak, four Czech, four Slovenian and two Austrian citizens.” This flight also appeared in the EU figures, meaning that it was not just Central European cooperation but an EU cooperation, in which three-quarters of the costs are paid by the EU. (There was also an American citizen on the plane.)
Notwithstanding the previous examples, Hungary has preferred bilateral agreements to the EU air bridge. One of the many examples of this was the return of most of the Hungarian military contingent serving in Mali. At dawn on 15 April, a Belgian air force plane landed at a military airport near Brussels with 15 Hungarian soldiers and three civilians aboard. There were 87 passengers from 14 countries on board, including Belgians, Americans, Germans, Spaniards, and Slovenes. Ambassador Tamás Iván Kovács welcomed the Hungarians and expressed his thanks for the help of the Belgian government.
Monday 01 June 2020
Laszlo Arato
Translation by:
A. Szoczka | VoxEurop
|
June 25, 2017
In early times, baptisms were held in public places where family and friends could gather. This public witness marked the believer as a follower of Christ. Today, baptisms often take place in church buildings for the sake of convenience, but a public statement is still part of the meaning. The person who is baptized identifies with Jesus Christ as Saviour and Lord. |
Sunday, May 29, 2016
Special and differentiating investigations in Anemia
Here, we will try to summarize the investigations useful for differentiating the various types of anemia:
1. Microcytic hypochromic anemias
- Serum ferritin, total iron-binding capacity (TIBC) and transferrin saturation help distinguish iron deficiency anemia (IDA), anemia of chronic disease (AOCD) and beta-thalassemia trait.
HbA2 levels between 3.5 and 8% are diagnostic of beta-thalassemia trait.
2. Macrocytic anemias
Serum vitamin B12 and serum folic acid assays to differentiate megaloblastic from non-megaloblastic macrocytic anemias.
PBS findings suggestive of megaloblastic anemia: macro-ovalocytes, hypersegmented neutrophils, +/- pancytopenia.
3. Testing for warm antibodies and for cold antibodies (against the I antigen) to detect AIHA, and also to differentiate AIHA from hereditary spherocytosis (HS).
4. G6PD assays in suspected G6PD-deficiency anemias.
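As a toy illustration of how iron studies and HbA2 separate the microcytic anemias, here is a sketch. The ferritin and TIBC cutoffs below are illustrative assumptions, not clinical reference ranges; only the 3.5-8% HbA2 band comes from the note above:

```python
def classify_microcytic(ferritin_ng_ml, tibc_ug_dl, hba2_percent):
    """Rough triage of a microcytic hypochromic anemia.

    Cutoffs for ferritin/TIBC are illustrative assumptions only;
    the HbA2 band follows the 3.5-8% figure quoted in the post.
    """
    if 3.5 <= hba2_percent <= 8.0:
        return "beta-thalassemia trait"
    if ferritin_ng_ml < 15 and tibc_ug_dl > 400:
        # Low iron stores with high binding capacity: classic IDA pattern
        return "iron deficiency anemia (IDA)"
    if ferritin_ng_ml >= 15 and tibc_ug_dl < 250:
        # Preserved/raised ferritin with low TIBC: chronic disease pattern
        return "anemia of chronic disease (AOCD)"
    return "indeterminate: further workup"
```

A real workup would of course weigh the full panel (transferrin saturation, peripheral smear) rather than two thresholds.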
Friday, May 27, 2016
Redistribution of drug
I knew what Distribution of a drug is, but I think somewhere along my medical school years I must have missed reading about REdistribution of a drug. Here's what it is:
Biochemistry – How to study?
We see many requests coming to our study groups asking for a few tips on studying Biochemistry. Today I thought we should talk about it for the sake of our 1st year Medicowesomites!! Yay!
Ok, Biochemistry, as the name itself implies, is about chemistry in biological systems. So what do we most often encounter in Biochemistry, and how do we tackle it?
Thursday, May 26, 2016
Ulcerative Colitis, Crohn's disease and rectal involvement
Greetings everyone!
Here's a short post on how to remember that rectum is involved in Ulcerative Colitis (And spared in Crohn's disease.)
Wednesday, May 25, 2016
ICE syndrome mnemonic
A short post of mnemonics on one of the coolest syndromes of the eye...
Thalassemia mnemonic
I was reading about thalassemia today and thought of sharing a few facts and this trick for learning the beta-chain variants of hemoglobin (Hb) in thalassemia.
Facts about thalassemia:
Zollinger Ellison syndrome mnemonic
Hello! Here's a short concept for the day!
Normally, secretin decreases gastrin and gastric acid production.
In Zollinger-Ellison syndrome, however, secretin increases gastrin production.
Tuesday, May 24, 2016
Non caseating granulomas mnemonic
The mnemonic for non caseating granulomas is RBCS
Bernard Soulier syndrome mnemonic
This mnemonic won't help you remember all aspects of the syndrome, but two quite important points will be on the tip of your tongue for sure.
Remember the dog - St. Bernard's
Age of completion of ossification mnemonic
For those who forget the age at which ossification centres close, this post is for you!
Monday, May 23, 2016
Bartters, Gitelmans and Liddles syndrome mnemonic
Bartters, Gitelmans and Liddles syndrome present with chloride resistant (high urinary chloride) hypokalemic metabolic alkalosis.
What differentiates them:
Bartters: Hypercalciuric (Furosemide like! Loops lose calcium, remember?)
Gitelman: Hypocalciuric (Thiazides don't!) and Hypomagnesemia. Presents with cramping and spasms.
Liddles: Presents with hypertension, metabolic alkalosis and hypokalemia (Aldosterone excess like!)
Here's a mnemonic for it!
"FaceBook GoT ALL HYPER about a Little syndrome"
FB - Bartter's is like Furosemide
GoT - Gitelman Thiazide
Alhyper little - Liddles is like HyperALdosteronemia
These syndromes are rare, so it’s important to rule out more common causes (Like diuretics)
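The differentiation above can be restated as a tiny decision function. The inputs are simplified categorical labels ('high'/'low'/'normal') rather than real lab values, purely for illustration:

```python
def classify_hypokalemic_alkalosis(urinary_calcium, serum_magnesium, hypertensive):
    """Differentiate the three syndromes described in the post.

    All three share chloride-resistant hypokalemic metabolic alkalosis;
    inputs are simplified categories ('high'/'low'/'normal'), not lab units.
    """
    if hypertensive:
        # Liddle behaves like aldosterone excess: hypertension + alkalosis
        return "Liddle syndrome (hyperaldosteronism-like)"
    if urinary_calcium == "high":
        # Furosemide-like: loops lose calcium
        return "Bartter syndrome (furosemide-like)"
    if urinary_calcium == "low" and serum_magnesium == "low":
        # Thiazide-like: calcium retained, magnesium wasted
        return "Gitelman syndrome (thiazide-like)"
    return "consider commoner causes, e.g. diuretic use"
```

The final branch mirrors the reminder above: these syndromes are rare, so commoner causes come first.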
That's all!
Here's an aphorism by Sir William Osler: “Care more for the individual patient than for the special features of his disease.” :)
Bile acid sequestrants mnemonic
Hello! The bile acid binding resins are:
I'll talk about Cholestyramine in this post!
Iron deficiency anemia
- The commonest nutritional anemia in India
- More common in women due to menstrual blood loss and the increased requirement in pregnancy and lactation
Causes:
- Decreased intake
- Lack of absorption (e.g. celiac disease)
- Increased loss (blood loss through any system)
Clinical features:
- Increased fatigability
- May present as a triad with dysphagia and esophageal web in Plummer-Vinson syndrome
Lab findings:
- Low Hb
- Low RBC count
- Low serum ferritin
- Raised TIBC
- Reduced transferrin saturation
- Microcytic hypochromic picture on peripheral blood smear; pencil cells may be seen
Friday, May 20, 2016
Heyde's syndrome mnemonic
Greetings! Short post for the day about Heyde's syndrome!
The mnemonic is: Heydes' hidden bleeding heart.
Pathophysiology of achalasia mnemonic
This post is about the pathophysiology of achalasia!
In achalasia, there is loss of NO and VIP releasing inhibitory neurons. Thus, the loss of the inhibitory innervation in achalasia results in the manometric consequence of failure of LES relaxation as well as loss of esophageal peristalsis.
Classification of enzymes mnemonic
This mnemonic on classification of enzymes was submitted by Mohd. Ayub Ali.
The mnemonic is, "On The Himalayas, Lyf (life) Is Lightened."
Thursday, May 19, 2016
Intermediates in Gluconeogenesis mnemonic
Hi guys!
So today I wanted to talk to you about Gluconeogenesis.
The first thing is that gluconeogenesis takes place in the mitochondria.
Now when anyone says, "mitochondria", I (and probably all of us) immediately jump to, "mitochondria is the powerhouse of the cell".
Wednesday, May 18, 2016
Why does Digoxin toxicity result in increased automaticity?
Hey everyone!
Digitalis and other cardiac glycosides are known to cause an AV nodal delay.
Then why does too much Digoxin result in some arrhythmias that are due to increased automaticity? Brady arrhythmias are explainable. But why tachy arrhythmias?
Atrial fibrillation in WPW syndrome
Random fact that I learnt today!
If a patient with WPW syndrome develops symptomatic atrial fibrillation, what is the drug of choice?
Answer is procainamide.
Stable patients suspected of having WPW with atrial fibrillation should not receive agents that predominantly block atrioventricular conduction, but they may be treated with procainamide or ibutilide.
Because if you block the AV node using beta blockers, calcium channel blockers or digoxin, you will favour conduction through the accessory pathway. This will worsen the arrhythmia.
That's why, in stable patients, chemical cardioversion is preferred.
If instability is present, electrical cardioversion is required.
That's all!
Related post: Supraventricular tachycardia mnemonic
Tuesday, May 17, 2016
Organisms covered by Ampicillin mnemonic
So here it is...
Ampicillin HELPS to clear Enterococci!
Haemophilus influenzae
E. coli
Listeria monocytogenes
Proteus mirabilis
Salmonella
PS: Gram-negative organisms have 'porin' channels in their outer lipid membrane through which beta-lactam antibiotics enter the cell, as well as a lipopolysaccharide layer that contains endotoxin. (Gram-positive organisms have neither in their cell wall.)
The exception is Listeria monocytogenes, which, despite being a Gram-positive bacterium, contains small amounts of such endotoxin!
That's all!
-JasKunwar Singh
Interesting facts about testing 9th, 10th and 11th Cranial nerves
Hey guys!
So here's my first blog! Hope you like it!
Did you know that when 11th cranial nerve is involved on one side, you check for turning of head to opposite side and shoulder shrugging on the same side?
But when involved bilaterally, the patient can't turn their head.
So to test bilateral sternocleidomastoids, you ask the patient to sit up from sleeping position. He'll have head lag!
Here's another interesting fact:
The gag reflex is affected in 9th or 10th cranial nerve palsy. This specifically localises the lesion to the medulla, because both nerves originate there.
That's all!
Thanks ☺
Viral hepatitis - A histologic clue to the causative virus
Viral hepatitis is predominantly caused by hepatotropic viruses, although others, like EBV and CMV, are also implicated. Though serological markers are the gold standard for diagnosis, the following histologic clues can help a pathologist suspect the causative virus.
HAV - The portal tracts show a large amount of plasma cell infiltrates.
HBV - Presence of Ground Glass cytoplasm
HCV - Lymphoid aggregates in the portal tracts, with macrovesicular steatosis of hepatocytes (most marked with genotype 3)
Steatosis in zone 1 is mainly due to HCV while steatosis in zone 3 is mainly due to metabolic causes or alcohol.
EBV - Beads on a string pattern of sinusoidal infiltrates of Atypical lymphocytes.
CMV- Formation of microabscesses with intracytoplasmic and intranuclear inclusions.
Herpes virus- Nonzonal punched out necrosis with nuclear ground glass (Cowdry A) inclusions.
Thus, a good pathological suspicion would add to the confirmatory serological reports.
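The clues above amount to a small lookup table, which can be sketched as follows (the wording of each clue is condensed from the list above; this is an aide-memoire, not a diagnostic tool, and real diagnosis rests on serology as the post notes):

```python
# Condensed histologic clues from the post; "HSV" stands for herpes virus.
HISTOLOGIC_CLUES = {
    "HAV": "abundant plasma cell infiltrates in portal tracts",
    "HBV": "ground-glass cytoplasm",
    "HCV": "portal lymphoid aggregates with macrovesicular steatosis",
    "EBV": "beads-on-a-string sinusoidal infiltrates of atypical lymphocytes",
    "CMV": "microabscesses with intracytoplasmic and intranuclear inclusions",
    "HSV": "nonzonal punched-out necrosis with Cowdry A inclusions",
}

def suspect_virus(finding_keyword):
    """Return viruses whose clue mentions the keyword (case-insensitive)."""
    kw = finding_keyword.lower()
    return sorted(v for v, clue in HISTOLOGIC_CLUES.items() if kw in clue)
```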
Monday, May 16, 2016
How to write for Medicowesome (And instructions for new authors)
You can write for Medicowesome and share your awesomeness with everyone around the world! It goes on the "Submissions" page. If you want me to share your notes / knowledge / mnemonics on Medicowesome, email them to me ( and I'll post it for you!
If you wanna be an independent author at Medicowesome, here's what you need to do:
Email me your id at asking that you want to write for Medicowesome. I'll say yaay! Of course, yes! :D (You could also send a few sample blog posts and a fancy CV. jk.)
Make a blogger account ( using your Gmail account.
Send me your email address. I will send you an author invitation, you must accept it within 24 hours.
Caffeine in Migraine!
Does caffeine play a role in the therapy of migraine? Or does it cause migraine?
I asked a doctor, and he said yes, caffeine relieves the pain during a migraine attack. OK, fine. But it can cause an attack too!! Here's something amazing that I found!
Migraine is a disorder characterised by acute pulsating headache, usually restricted to one side of head. Pulsatile dilatation of cranial blood vessels is the immediate cause of pain.
But aren't headaches usually attributed to vasoconstriction of cranial vessels, not vasodilation?
Actually, excess vasoconstriction or vasodilation both cause less blood to reach the brain parenchyma. This makes the brain tissue cry out for the nutrients it needs from the blood!
In migraine there is excessive vasodilation of the vessels, which is the cause of the acute pain. Caffeine constricts cranial blood vessels (while all other systemic vessels are dilated). It is also a CNS stimulant. (That's why we have more coffee at night while studying.) :p
Now here comes the point: 1-2 cups of coffee (100-200 mg of caffeine) relieve the pain by vasoconstriction. More than this will tend to decrease blood flow too much, and so less reaches the brain tissue.
That's why some people who are in the habit of taking excess coffee or soft drinks are more prone to headaches!
》 Caffeine is one of the constituents of medicines used specifically for treating migraine.
MIGRIL: Ergotamine 2mg, Caffeine 100mg, cyclizine 50mg tab.
VASOGRAIN: Ergotamine 1mg, Caffeine 100mg, Paracetamol 250mg, Prochlorperazine 2.5mg tab.
CAFERGOT: Ergotamine 1mg tab. + Caffeine 100mg.
Other medicaments-
Crocin Pain Relief: Paracetamol 650mg + Caffeine 50mg tab.
Micropyrin: Aspirin 350mg tab. + Caffeine 20mg
PS: Remember, the moment you feel migraine symptoms coming on, have a cup of coffee. It can relieve the pain effectively, usually without significant side effects.
That's all
Thanks :)
Sunday, May 15, 2016
Bromocriptine in Type-2 Diabetes Mellitus
Type-2 Diabetes Mellitus is a chronic metabolic disorder characterised by hyperglycaemia, an insulin-resistant state, increased lipolysis, and a high risk of cardiovascular disease! We all know that. And there is much more to it..
But how can Bromocriptine be used to control blood glucose levels in diabetics??
Increased sympathetic activity in diabetics leads to breakdown of fats and high levels of free fatty acids in the blood, which contributes to obesity! Insulin resistance in turn activates endogenous glucose production, which results in glucose intolerance and a high risk of cardiovascular disease, hepatic failure, kidney problems, and other systemic abnormalities!
• An ergot derivative
• Acts as a potent agonist of dopamine D2 receptors.
• A quick-release formulation of bromocriptine mesylate (0.8 mg) is given orally, with the first meal in the morning, within 2 hours of awakening, in the insulin-resistant state.
• It acts on the suprachiasmatic and ventromedial nuclei of the hypothalamus, regulates the circadian rhythm of insulin-sensitive/insulin-resistant cycles, and controls dopaminergic-serotonergic neurotransmitter activity.
• Simply put, bromocriptine resets the circadian rhythm from the insulin-resistant state back to the insulin-sensitive state, thus bringing blood glucose levels back down.
• It reduces blood glucose levels but does not bring them back to normal on its own. That's why it is prescribed as an add-on drug with insulin or sulfonylureas, producing an additive effect in anti-diabetic therapy!
That's all!
Thanks :)
Hypertrophy- is it just all about size?
Hypertrophy is a form of cellular adaptation seen mainly in the nondividing tissues of the body. It simply means an increase in individual cell size. But is that all there is to hypertrophy?
Carrier types mnemonic
Someone asked me to post a mnemonic for carrier types.. So I made one.
Just remember 2-3 examples from each category.
Mnemonic: PSM
Mnemonic: CD
Hepatitis B
Mnemonic: IM PHD, also notice most of them are from the immunization schedule.
That's all!
- IkaN
Gastric pathologies and blood group association mnemonic
Okay, this is a very simple mnemonic and I'm sure many people are already using it. Here it goes anyway. The blood groups associated with CA stomach and ulcer can get confusing so,
1. An ulcer is rOund, so it is more common in people with group 'O'.
2. CA contains an 'A', so carcinoma of the stomach is more common in people with group 'A'.
That's it :-p
Saturday, May 14, 2016
Secretomotor pathway to submandibular gland mnemonic
This was asked on the study group - Any mnemonic for secretomotor pathway to submandibular gland?
Superior salivatory nucleus (pons) - nervus intermedius - facial nerve - geniculate ganglion - chorda tympani branch - joins the lingual nerve - submandibular ganglion - submandibular gland
So I made a mnemonic (:
SS - Superior Salivatory nucleus
NI - Nervus Intermedius
F - Facial nerve
G - Geniculate ganglion
CT - Chorda Tympani
Lin - Lingual nerve
G - submandibular Ganglion
G - submandibular Gland
That's all!
Thursday, May 12, 2016
Step 2 CS: Domestic violence
Hey everyone!
In the video, I talk about the approach to a patient facing domestic violence. I also stress how to counsel.
These are some points from the PowerPoint Slide.
Tuesday, May 10, 2016
Diagnosis of Infective Endocarditis ( Duke's Criteria )
Hello Everyone!
Today, I read about a case of infective endocarditis and came across something interesting! A definite way to diagnose a typical case of IE is by the modified Duke criteria -
Major criteria
1. Positive blood cultures
○ Typical organisms in two separate blood cultures
○ Persistently positive blood cultures, taken >12 hours apart
○ Three or more positive cultures taken over more than 1 hour
2. Endocardial involvement
○ Positive echocardiographic findings of vegetations
○ New valvular regurgitation
Minor criteria - ( Priya Found Emban In the BalCony. )
1. Predisposing factors (any cardiac abnormality, hypertension, valvular defect, congenital heart disease, i.v. drug abuse)
2. Fever >38°C
3. Embolic phenomena (emboli to the lungs, brain, spleen)
4. Immunological phenomena (vasculitis, glomerulonephritis)
5. Positive blood cultures - organisms grown but not fulfilling the major criteria.
For definite IE, the simple rule is
☆ 2 major and No minor criteria (2-0)
☆ 1 major and 3 minor criteria (1-3)
☆ No major and 5 minor criteria (0-5)
That's all!
Thanks :)
Wednesday, May 4, 2016
Mechanism of action of Everolimus in breast cancer
Did you guys know that everolimus, an immunosuppressant, is also used for cancers like renal cell carcinoma and pancreatic neuroendocrine tumors?
The mechanism of action is really cool, especially in ER (Estrogen receptor) +ve, HER2 -ve breast cancers.
Sometimes ER +ve tumors develop resistance to endocrine treatment such as aromatase inhibitors.
The mechanism of endocrine resistance is mainly driven by aberrant signaling along the phosphoinositide 3-kinase (PI3K) - Akt - mammalian target of rapamycin (mTOR) signaling pathway. mTOR is a Ser/Thr protein kinase that constitutes a central downstream part of this intracellular signaling pathway. Its activation enhances cell growth, proliferation and metabolism, and promotes angiogenesis. The inhibition of the mTOR pathway by targeted therapies, such as everolimus or temsirolimus, can therefore block tumor growth and induce apoptosis.
Isn't that awesome?
ACE Inhibitors in Diabetic Nephropathy
What's the role of ACE inhibitors in diabetic nephropathy?
I was asked this question in viva..
》ACE Inhibitors retard the progression of Diabetic Nephropathy.
Here is the mechanism-
The renin-angiotensin system (RAS) gets activated in diabetes (both type 1 and type 2), so there is increased production of angiotensin and its products, which leads to various vascular and metabolic changes.
Angiotensin II induces several fibrogenic chemokines, viz. monocyte chemoattractant protein-1 (MCP-1) and transforming growth factor-beta (TGF-β).
Angiotensin II activates the transcription factor nuclear factor-κB (NF-κB) and thus the synthesis of MCP-1 in renal cells. MCP-1 has a role in monocyte migration: monocytes transmigrate through the vascular endothelium and differentiate into macrophages. This leads to increased extracellular matrix production and tubulointerstitial fibrosis.
Long-acting drugs like lisinopril, enalapril, and ramipril are employed for 12 months of therapy in diabetic nephropathy. Assessment of proteinuria, creatinine clearance, and urinary MCP-1 (uMCP-1) is done before and after this period.
A decrease in urinary protein content, an increase in creatinine clearance, and a massive decrease in urinary MCP-1 levels are seen.
Angiotensin Receptor Blockers also retard the renal damage in type 1 and type 2 diabetes.
That's all!
Thanks :)
- JasKunwar Singh
ACE Inhibitor (Captopril) Adverse effects mnemonic
Let's memorize from the name itself-
C Cough
A Angioedema/ Agranulocytosis
P Proteinuria/ Potassium excess
T Taste changes ( Dysgeusia )
O Orthostatic Hypotension
P Pregnancy/ Pancreatitis/ Pressure drop
R Renal failure and Renal Artery stenosis (contraindicated) / Rash
I Indomethacin inhibition
L Leukopenia/ Liver toxicity.
Hope now you won't forget it. ;)
- JasKunwar Singh
Tuesday, May 3, 2016
Of reservation, proving yourself and deserving things
"Hello IkaN,
First of all I'd like to tell you how amazing your blog is and how glad I am to have found it.
I am in first year at a government medical college. But I don't deserve to be there. I have reservation (Yes, THAT hated word). I am also well aware that I probably took this seat from someone who scored more than me. The feeling of being less first hit me in the first month of college, when I saw a lot of people speaking out against reservation.
I wanted to be a doctor. But not like this. The self loathing got so much so that I considered dropping out but I couldn't ask my parents to pay the bond just because I got exactly what I wanted just not in the way I wanted. Nobody in college is as such discriminating towards me but I know they feel a bit differently if they knew how I got here.
It hinders my studies. I don't feel the same amount of interest in becoming a doctor as I did before. My parents want me to study well, even get a post graduate degree but I can't bear the thought of living all my life in the shadow of reservation.
I don't want to sound ungrateful for the opportunity I have been given. Being a doctor is a prestige few people get and I know I'm lucky to have got it. I just wish it would have been differently. Which is why I wanted to apply for USMLE.. At least there, things would be fair. If I got something I would know it was because I deserved it. But then I wonder if I couldn't even get an undergraduate seat by myself how would I manage a post graduate one especially one in America ?
I'm not poor but neither am I rich. I don't know how much the exam fees and the books required to study for the exam cost but I'm pretty sure it is not cheap. And if after giving the exam I fail, what would I do?
I really hope you reply but I would understand if you can't because of time constraints or because you don't want to. Thank you for your time either ways."
- Sent through email.
Aromatase inhibitors and ER positive breast cancer
Happy Monday everyone!
Why are aromatase inhibitors effective in postmenopausal women who have breast cancer, but not in premenopausal women?
I was asked this question during rounds.
Sturge- Weber syndrome
Talking about this rare syndrome, I read about in ophthalmology lecture (class of secondary glaucomas) today... So let's start with it-
- It is also known as encephalotrigeminal angiomatosis (ETA).
• A rare congenital neurological and skin disorder (one of the phakomatoses).
• Caused by a somatic activating mutation in the GNAQ gene.
Port-wine stains (nevus flammeus) -
○ Usually seen on the forehead and upper eyelid of one side of the face; present since birth.
○ Light pink to deep purple in colour.
○ Caused by an overabundance of capillaries around the ophthalmic branch of the trigeminal nerve.
• Associated with
○ Secondary glaucoma (in 50% of patients)
○ Buphthalmos (enlarged eyeball due to increased intraocular tension)
○ Leukocoria (white pupillary reflex)
• Neurologic manifestations - seizures and convulsions (on the side of the body opposite the birthmark), mental retardation, calcification of tissue, and loss of nerve cells in the cerebral cortex.
• Ipsilateral leptomeningeal angioma (on the same side as the birthmark, with calcification of the underlying brain and atrophy of the affected region) - malformed blood vessels in the pia mater overlying the brain on the same side of the head as the birthmark.
Radiologically, CT shows the classic tram-track appearance of gyriform cortical calcification.
Treatment strategies include laser surgery and hemispherectomy.
- Latanoprost, a prostaglandin analogue, is a suitable drug for decreasing intraocular pressure (one drop daily in the evening).
That's all
- Jaskunwar Singh
Monday, May 2, 2016
An Eye to Cyanide - Part 2
Hello awesome people :)
So, talking about differentials for the cyanide case.. (see previous post Here)
What do you think could be the diagnosis in such a patient, if not cyanide? Anyone? Think before you read further..
Here it is...
It could be carbon monoxide poisoning. Reason? You remember the patient had shown a cherry-red colour of the skin when she was brought to the emergency department. Carbon monoxide also causes such discoloration of the skin as well as the blood. This is due to the formation of carboxyhemoglobin, which decreases the oxygen-carrying capacity of the blood, so less oxygen and fewer essential nutrients reach the brain and the other vital organs, leading to neurological, cardiac, and other systemic problems!!
An Eye to Cyanide
Hello awesomites!
Hope your weekend is going well :D
I just read about a case of cyanide poisoning and learnt some important points related to it. So here it is -
"A 27-year-old comatose woman was brought to the emergency department by paramedics; a strong odor of bitter almonds was present. Her past medical history is significant for malignant hypertension, for which, according to her accompanying relatives, she had been given injections of sodium nitroprusside over the past few months. She is a housewife and has been under stress due to family problems. She had also consumed a few almonds a day ago."
Sunday, May 1, 2016
Do it for the better, to achieve your Best!!!
Thinking of starting something new? That's great! Don't wait, just start it. Open the first page. C'mon you can do it. Haha.. don't worry. I know it's difficult, but trust me once you start it, you will enjoy it. Yes you will.
Remember one thing always.. Consume your time in doing something productive. Keep yourself busy in some task, create new ideas, think about something and do it. Dream a lot. Yes you really should.
Do it for the betterment of society. Do it for yourself, to achieve the best of yourself. Trust me, the day you do that, you will feel really happy. Because you did what you wanted to. What you loved. You lived!!!
Thanks! :)
-Jaskunwar Singh.
Step 2 CS: Challenging questions
Here are some of my sample closures for challenging questions.
All my closures are generic with little word play. Whatever the SP says, I would say it back to them saying I understand it. So your sympathy - empathy is done.
See how all three closures are almost the same -
"Will I need surgery?"
I understand that you are concerned about the possibility of having a surgery. Yes, you might require a surgery. But I assure you that we will be there to support you, throughout the treatment, regardless of the diagnosis. Does that sound okay to you?
The U.S. must enforce its Existing Trade Agreement with Peru before Signing the Largest Trade Agreement in History
The United States government is gearing up to finalize a major international trade agreement called the Trans-Pacific Partnership (TPP). If signed, the TPP will be the largest trade agreement in history, standardizing trade rules among 12 countries on the Pacific Ocean--Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, Vietnam and the U.S.--with the potential for others, such as Korea and China, to join in the future. The trade among these countries accounts for 40% of the world's GDP and 26% of the world's trade; no doubt, then, that it could be a real boon for business. Yet there are reasons to have grave concerns about this agreement's potential for serious environmental impacts.
The U.S. government says the TPP will promote strong environmental protection, with "robust environment standards and commitments from member countries, and addressing some of the region's most pressing environmental challenges." They are holding up the U.S. Trade Promotion Agreement (TPA) with Peru as the standard for the environmental conditions we can expect to see in the TPP. NRDC has set out its views on what should be in this agreement and what should be left out. Indeed, the U.S.-Peru TPA includes a stringent environment chapter, which itself contains a robust Annex on Forest Sector Governance to help tackle "trade associated with illegal logging and illegal trade in wildlife."
However, in order for those standards to be meaningful, they must be enforced. Developments in Peru signal that, in fact, the TPA's environment chapter is not being enforced; instead, Peru is flagrantly violating its obligations.
At the heart of the issue in Peru is the 2014 Law 30230 (or PL 30230), "Tax Law Establishing Measures, Simplified Procedures and Permits for the Promotion and Dynamization of Investment in the Country," which reduces the oversight and enforcement capabilities of environmental authorities when dealing with private companies (including U.S. companies) whose projects could negatively impact the environment. The provisions of this new law explicitly contradict Article 18.3.2 of the U.S.-Peru TPA:
"The Parties recognize that it is inappropriate to encourage trade or investment by weakening or reducing the protections afforded in their respective environmental laws. Accordingly, a Party shall not waive or otherwise derogate from, or offer to waive or otherwise derogate from, such laws in a manner that weakens or reduces the protections afforded in those laws in a manner affecting trade or investment between the Parties."
So far, we have yet to see a real response from the U.S. government to this breach of a bilateral agreement. Without prompt, serious enforcement of the conditions in the environment chapter (or any chapter, for that matter), the entire document lacks teeth.
So while USTR is touting the potential benefits of the TPP, it is important not to lose sight of the fact that strong agreements with weak enforcement do not protect people and the planet. It is therefore critical to ensure that existing trade agreements are being respected and enforced.
NRDC is proud to have recently sent a letter along with eight other organizations to the United States Trade Representative, urging Ambassador Michael Froman to take action to ensure the TPA with Peru is enforced. The letter is available here and it details how PL 30230 weakens Peru's environmental authorities.
Without real enforcement of the provisions in the U.S. TPA with Peru - or with any country - how can we be sure that the environmental conditions in the TPP will be respected? The answer is simple: we can't.
The U.S. government must use all the enforcement tools provided under the trade agreement to ensure that Peru does not weaken its environmental standards and violate its responsibilities under the TPA. Unless countries stand by their pledges to uphold the environmental standards written into trade agreements, these become meaningless. It must be clear that strong requirements in trade agreements are backed up with strong enforcement, and that language must fundamentally translate into action on the ground.
About the Authors
Amanda Maxwell
Director, Latin America Project
What is Glaucoma
Glaucoma is a group of related eye diseases that cause damage to the optic nerve, resulting in a loss of vision. Without yearly eye exams, glaucoma can become a "silent killer of sight".
Types of Glaucoma
• Primary open-angle glaucoma (POAG) - This common type of glaucoma causes peripheral vision loss without any symptoms.
• Narrow-angle glaucoma - Produces sudden symptoms such as eye pain, headaches, halos around lights, dilated pupils, vision loss, red eyes, nausea, and vomiting. Narrow-angle glaucoma attacks come in waves that can last a few hours or can be continuous.
• Normal-tension glaucoma - Pressures in the eye will be normal with this type of glaucoma, but patients will progressively lose sight. The cause of normal-tension glaucoma is not really known, but it may be related to poor blood flow to the optic nerve.
• Pigmentary glaucoma - This is a rare form of glaucoma, caused by a piece of pigment in the iris breaking off and clogging the drainage canal.
• Secondary glaucoma - This form of glaucoma is caused by pressure building up from an eye injury, eye infection, dense cataract, inflammation, or tumor.
• Congenital glaucoma - This form is present at birth: children are born with narrow angles, causing poor drainage in the eye. Since babies cannot tell someone what is wrong, parents should look out for cloudy, white, hazy, enlarged, or protruding eyes.
Diagnosis, Screenings, and Tests for Glaucoma.
• IOP (Intraocular Pressure). During a routine eye exam, the doctor will measure your intraocular pressure, or IOP, with a device called a tonometer. There are two types of tonometers: one touches the surface of your eye, and the other sends a puff of air at the eye; both give an accurate measurement of the IOP. Normal pressure in the eye should be below 21 mmHg (millimeters of mercury). Higher pressures over time will lead to damage to the optic nerve and vision loss.
• OCT (Optical Coherence Tomography). An OCT provides baseline imaging of your optic nerve. This test is used as a monitoring tool to make sure that the cupping of your optic nerve is not enlarging. Doctors will measure the cup-to-disc ratio of your optic nerve and compare it over time.
• Visual Field Testing. This involves staring straight ahead into a machine and clicking a button when you notice a blinking light in your peripheral vision. The visual field test may be repeated at regular intervals to make sure you are not developing blind spots from damage to the optic nerve, or to determine the extent or progression of vision loss from glaucoma.
Treatments for Glaucoma.
Treatments for glaucoma range from medication to surgery. Doctors will always start by prescribing medicated eye drops to lower the pressure in the eye. Surgery is a last resort, as it may lead to further vision loss. Patients who are on drops must be compliant with the dose and number of drops throughout the day. Most patients who go blind while on drops are usually not compliant with them.
Make an appointment and ask our Doctors about glaucoma screenings.
Call 914-277-5550.
Glaucoma the "silent killer of sight".
Somers Eye Center has the technology to track your eye health to help prevent Glaucoma from taking your sight.
(914) 277-5550
380 US-202, Somers, NY 10589, USA
Schedule an Exam Today!
SQL Server read-ahead mechanism; concept and performance gains
December 21, 2017, by Ahmad Yaseen
The user's read requests in SQL Server are managed and controlled by the SQL Server Relational Engine, which is responsible for determining the most optimized access method, such as an index scan or table scan, to retrieve the requested data. These read requests are also optimized internally by the SQL Server Storage Engine (specifically, the buffer manager components), which is responsible for determining the general read pattern to be performed.
When you submit a query to request data in SQL Server, the SQL Server Database Engine requests the data pages required for your query from the buffer cache, performing a logical read. If these pages are not found in the buffer cache, a physical read is performed to copy the pages from disk into the buffer cache.
Although the SQL Server query optimizer tries to do its best in providing the most optimal execution plan to retrieve the data requested by the user, you may still face CPU or I/O performance issues while executing the query. SQL Server provides us with many features that help optimize data retrieval performance in order to respond to the user's requests as fast as possible. One of these useful features is the read-ahead mechanism. As the name indicates, using the read-ahead mechanism, the SQL Server Storage Engine brings data and index pages into the buffer cache, up to 64 contiguous pages per file, before they are actually requested by the SQL Server Relational Engine to respond to the user's query. This increases the chance of finding a data page in the buffer cache when it is requested and optimizes I/O performance by performing more logical reads, which are faster than physical reads. It also allows computation to overlap with I/O, which helps reduce the time required to execute queries.
SQL Server provides us with two types of read-ahead: the sequential read-ahead and the random prefetching read-ahead mechanisms. In the sequential read-ahead mechanism, the pages are read in allocation order or index order, depending on what is being processed. For tables that are not sorted in any order because they have no clustered index (heap tables), the data is read in allocation order. In such cases, the SQL Server Storage Engine builds its own sorted list of addresses to read from disk by reading the Index Allocation Map (IAM) pages, which contain a list of the extents used by each table or index. The sorted address list allows the Storage Engine to perform optimal sequential reads of the data on disk, based on the extent addresses stored in the IAM. Index pages, on the other hand, are read sequentially in key order. In this case, the Storage Engine scans the intermediate nodes of the index's B-tree structure to prepare a list of all the keys to read from the leaf-level nodes, recalling that the keys are stored in the leaf level of the index.
The random prefetching read-ahead mechanism is used to speed up the fetching of data rows referenced by non-clustered indexes, where the leaf-level nodes contain only pointers to the data rows in the table or clustered index. In this case, the SQL Server Storage Engine asynchronously reads the data rows whose pointers it has already retrieved from the non-clustered index. In this way, the underlying table's data rows are fetched before the non-clustered index scan completes. The number of pages to read ahead is not configurable and depends on the edition of SQL Server, with the Enterprise edition allowing the highest number of pages.
To understand how the read-ahead will affect the performance in practical terms, let’s go through the following example. We will create a simple testing table, using the CREATE TABLE T-SQL statement below:
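The article's original CREATE TABLE statement did not survive extraction; a minimal sketch, with hypothetical table and column names, might look like this:

```sql
-- Hypothetical test table (the original column list was not preserved)
CREATE TABLE ReadAheadDemo
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    EmployeeName VARCHAR(100),
    Salary DECIMAL(10, 2),
    HireDate DATETIME
);
```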
Once created, we will fill that table with 100K records, using ApexSQL Generate, a SQL test data generator tool:
The table is now ready for our testing scenario. To track read-ahead reads, we will enable IO statistics in the session in which we execute the query, using the SET STATISTICS IO ON command, or by ticking the SET STATISTICS IO checkbox in the Advanced tab of Query Options, as shown below:
We will also use the DBCC DROPCLEANBUFFERS command to flush all data pages from the buffer cache before running our SELECT query, so that the buffer cache is empty and read-ahead reads can take place. This is for testing purposes only and is not recommended in a production environment. After enabling IO statistics and TIME statistics, enabling the actual execution plan, and cleaning the buffer cache, we will run the SELECT statement below to retrieve data from the previously created table:
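The original query was likewise lost in extraction; a sketch of the cold-cache test, reusing the hypothetical ReadAheadDemo table and a made-up predicate, could be:

```sql
-- Flush the buffer cache (testing only; never in production),
-- enable statistics, and run the query against the cold cache.
DBCC DROPCLEANBUFFERS;
GO
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
SELECT *
FROM ReadAheadDemo
WHERE Salary > 5000;  -- hypothetical predicate
```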
We are not interested here in the retrieved data, as we will check the IO and TIME statistics for performance comparison purposes only. From the Messages tab of the query result, we can see that 708 read-ahead reads were performed while retrieving the data for this query. This means that 708 pages were brought into the buffer pool while executing it. The query took 1444 ms to execute and consumed 141 ms of CPU time, as shown in the statistics below:
In the execution plan generated for the previous query, right-click the SELECT node and open its properties; you will see that the PAGEIOLATCH_SH wait type occurred 4 times and lasted 3 ms in total, as shown in the snapshot below:
Now let's execute the previous SELECT query again, with IO and TIME statistics and the actual execution plan enabled, but this time without clearing the buffer cache content, as shown in the T-SQL query below:
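A sketch of the warm-cache re-run (the same hypothetical query as before, but without flushing the buffer cache first):

```sql
-- No DBCC DROPCLEANBUFFERS this time: the pages fetched by the earlier
-- read-ahead reads are still sitting in the buffer cache.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
SELECT *
FROM ReadAheadDemo
WHERE Salary > 5000;  -- same hypothetical predicate as before
```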
You will see from the TIME and IO statistics in the Messages tab that there was no need to perform read-ahead reads this time, as the requested pages are already in the buffer cache. The TIME statistics show that the query took only 889 ms to execute completely, which is 60% of the time consumed by the previous query, and consumed 62 ms of CPU time, which is 44% of the previous query's CPU consumption. All of this is for the same reason: the data already exists in the buffer cache as a result of the previous read-ahead reads. The IO and TIME statistics in our case are as follows:
Checking the SELECT node properties in the execution plan generated by executing the previous query, you will see that, when read-ahead reads are not performed, the PAGEIOLATCH_SH wait type occurred 397 times and lasted 97 ms in total, as shown in the snapshot below:
If we run the same query again, providing a different value in the WHERE clause, and enabling both TIME and IO statistics for performance comparison, as in the SELECT statement below:
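For example (again with hypothetical names and values):

```sql
-- A different predicate value touches pages that are not yet cached,
-- so read-ahead reads take place again.
SELECT *
FROM ReadAheadDemo
WHERE Salary > 9000;  -- hypothetical, different value
```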
you will see that the query asks for new pages that are not available in the buffer cache. Because of this, read-ahead reads are performed to retrieve extra pages in addition to the requested ones and copy them to the buffer cache, using algorithms that predict the pages the user may request in subsequent queries, as shown in the Messages tab snapshot below:
The read-ahead mechanism is enabled by default, which means that whenever a read-ahead read is required, it will take place. Trace Flag 652 can be used to disable the default read-ahead mechanism. Recall the first SELECT query in our demo, in which we cleared the buffer cache before executing the query, forcing read-ahead to take place. This time, we turn on Trace Flag 652 before executing the same query, as shown in the T-SQL script below:
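A sketch of that script, under the same assumptions as the earlier examples:

```sql
-- Disable the read-ahead mechanism with Trace Flag 652,
-- then repeat the cold-cache test.
DBCC TRACEON (652);
GO
DBCC DROPCLEANBUFFERS;
GO
SELECT *
FROM ReadAheadDemo
WHERE Salary > 5000;  -- hypothetical predicate
```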
Checking the IO statistics of this query, you will find that no read-ahead reads are performed for the query that previously required 708 of them, because turning on TF 652 disabled the read-ahead mechanism, as shown in the statistics below:
I hope this all made sense. If anything was unclear, please feel free to comment below!
For as many as one million American homeowners, the pellet-burning stove represents the cutting edge in home heating technology. Environmentally friendly thanks to their recycled fuel-source materials, but still possessing most of the same modern controls and conveniences as traditional fireplaces, the pellet stove (or pellet wood stove) continues to draw new fans as home heating costs continue to rise.
The modern pellet stove uses specially-constructed “pellets” composed of recycled sawdust and other biomass material. Yet the pellet stove is actually not a recent innovation, and traces its roots back to one of the United States’ most pressing economic emergencies. At the same time, the modern pellet stove offers attractions and inducements that energy- and cost-conscious homeowners will find almost impossible to ignore.
The History of the Pellet Stove
Burning sawdust and scrap lumber in barrel stoves, braziers, and other simple stoves has been common practice for hundreds of years. However, in 1930, near the beginning of the Great Depression, the first wood pellet, the Presto-Log, was invented at a sawmill in Lewiston, Idaho. For much of the Depression, as scarcity and high prices continued to make heating oil expensive, the burning of wood and wood by-products continued to grow. Over time, the pellet stove continued to evolve, eventually gaining widespread public notice in Washington state during the 1980s.
Biomass stoves and ovens became the focus of widespread research and innovation during the 1973 OAPEC oil embargo of several Western nations. In more recent years, as demands for environmentally-conscious fuel sources and continuing high oil prices drive interest in alternative fuels, pellet stoves are entering something of a Renaissance. The nonprofit Pellet Fuels Institute estimates that more than 824,000 pellet stoves were made in the United States between 1998 and 2010. PFI believes approximately one million pellet stoves are operating throughout America each winter.
Pellet Fuel Construction
Pellets are made from a variety of densified biomass material, including wood, cord wood, waste paper, wood chips, and dozens of agricultural byproducts including corn and cornstalks, and many forms of forestry and forest treatment byproducts. In short, biomass is organic material left behind after any of dozens of treatments and procedures.
Pellet manufacturers take these waste materials and refine them into pellets resembling pencil segments or corks; they average about the size of a small finger. Manufacturers remove most of the material’s moisture, in order to increase BTU capability and make the pellets easier to use in freezing conditions.
How Pellet Stoves Operate
Modern pellet stoves are a far cry from the straightforward “burn bins” of yesteryear. In fact, many of them share the same electronic components, remote controls, and other sophisticated features as contemporary gas stoves and fireplace inserts. The stoves receive pellet fuel from electronically controlled bins, allowing them to maintain a steady warmth output as long as their fuel bins contain pellets.
The Costs of Pellet Stove Fuel and Maintenance
PFI estimates that the average family will consume about three tons of pellet fuel per heating season, at an average cost of approximately $825. Of course, this number varies according to the size of the wood pellet stove and especially the frequency of use. In comparison, each ton of wood pellets has the fuel efficiency of about 2.8 barrels of #2 fuel oil.
One disadvantage pellet stoves face compared to natural gas and propane fireplaces and inserts rests in the refueling process. Pellet stoves require periodic replenishing of their hoppers, and some models' exhaust may generate more soot and debris than others'.
Pellet stoves also share one of the chief drawbacks of conventional wood-burning fireplaces. Because they tend to warm only their immediate areas, they can “fool” household thermostats into letting other parts of the house grow cold. As such, home heating experts recommend installing the stove away from home thermostats and other heat-measuring instruments. Remember to choose a spot that leaves ample and convenient room for any necessary stovepipe and venting.
Installing and Refueling A Pellet Stove
The installation and venting of a pellet stove, like a fireplace insert or direct vent fireplace, is a task best left to qualified, certified experts. In particular, consumer advocates recommend selecting an installation specialist certified by the National Fireplace Institute. They will be able to safely estimate the pellet stove’s best place in the home and to ascertain all venting and exhaust needs.
Wood pellet fuels are available from a variety of online and real-world retailers. The Hearth, Patio, and Barbecue Association provides a free locator service on their website.
Search results for the word or phrase: To take sides (0.00964 seconds)
Found 1 item similar to To take sides.
English → English (gcide) Definition: To take sides
Side \Side\ (s[imac]d), n. [AS. s[=i]de; akin to D. zijde, G. seite, OHG. s[=i]ta, Icel. s[=i]?a, Dan. side, Sw. sida; cf. AS. s[=i]d large, spacious, Icel. s[=i]?r long, hanging.]
1. The margin, edge, verge, or border of a surface; especially (when the thing spoken of is somewhat oblong in shape), one of the longer edges as distinguished from the shorter edges, called ends; a bounding line of a geometrical figure; as, the side of a field, of a square or triangle, of a river, of a road, etc. [1913 Webster]
3. Any outer portion of a thing considered apart from, and yet in relation to, the rest; as, the upper side of a sphere; also, any part or position viewed as opposite to or contrasted with another; as, this or that side. [1913 Webster]
Looking round on every side beheld A pathless desert. --Milton. [1913 Webster]
4. (a) One of the halves of the body, of an animal or man, on either side of the mesial plane; or that which pertains to such a half; as, a side of beef; a side of sole leather. (b) The right or left part of the wall or trunk of the body; as, a pain in the side. [1913 Webster]
One of the soldiers with a spear pierced his side. --John xix. 34. [1913 Webster]
5. A slope or declivity, as of a hill, considered as opposed to another slope over the ridge. [1913 Webster]
Along the side of yon small hill. --Milton. [1913 Webster]
6. The position of a person or party regarded as opposed to another person or party, whether as a rival or a foe; a body of advocates or partisans; a party; hence, the interest or cause which one maintains against another; a doctrine or view opposed to another. [1913 Webster]
God on our side, doubt not of victory. --Shak. [1913 Webster]
We have not always been of the . . . same side in politics. --Landor. [1913 Webster]
Sets the passions on the side of truth. --Pope. [1913 Webster]
7. A line of descent traced through one parent as distinguished from that traced through another. [1913 Webster]
To sit upon thy father David's throne, By mother's side thy father. --Milton. [1913 Webster]
8. Fig.: Aspect or part regarded as contrasted with some other; as, the bright side of poverty. [1913 Webster]
By the side of, close at hand; near to.
Exterior side. (Fort.) See Exterior, and Illust. of Ravelin.
Interior side (Fort.), the line drawn from the center of one bastion to that of the next, or the line curtain produced to the two oblique radii in front. --H. L. Scott.
Side by side, close together and abreast; in company or along with.
To choose sides, to select those who shall compete, as in a game, on either side.
To take sides, to attach one's self to, or give assistance to, one of two opposing sides or parties. [1913 Webster]
Originally a Greek word, "Kleitoris", meaning "divine, famous, Goddess like."
There are many allegories representing the clitoris, from Priapus to Kleite to Artemis. It is closely associated with menstrual blood too, as in the story of the city of Clitor, which was sacred to Artemis or to Demeter. The city stood at the genital shrine of the earth, the headwaters of the Styx, for it was believed that the Styx represented Mother Earth's menstrual blood.
Naturally with the dominance of the new patriarchal society, the clitoris was entirely ignored. The church taught that women should not enjoy sexual pleasure, but instead, only endure it for the sake of procreation purposes.
The church was so staunch in this mandate, that young girls and boys were kept ignorant of female sexuality. Even allopaths (male conventional Doctors) of the time, were convinced by the church, that no virtuous woman would even have a clitoris!
Virtuous women rarely showed themselves naked in front of any man, including their husbands; instead, a wife was reduced to wearing a bulky nightgown with a hole cut out at the genital area to enable her husband to impregnate her without actually seeing or touching her skin.
At a Witch trial in 1593, the investigating gaoler (a married man) discovered the accused woman's clitoris, and identified it as a devil's teat. This was taken as sure proof that the woman was a Witch. The gaoler exposed the naked woman's clitoris to observers; they were so shocked at the sight of this on a woman that she was indeed convicted as a Witch.
The church and medical authorities (the church) in the 19th century (not so long ago, is it?) were very anxious to stop women from discovering their own sexuality. Girls who discovered orgasm through masturbation were regarded as having a "medical problem". Often they were "treated" or "corrected" by amputation or cautery of the clitoris, or were made to wear miniature chastity belts, or had the vaginal lips sewn together so that the clitoris became unreachable. It doesn't stop there...even castration by surgical removal of the ovaries! What lengths the church went to, to suppress women! Naturally there is no record or documentation of a boy ever having his testicles removed, or amputation of his penis, to stop masturbation.
If you think this was only during the medieval times, how about this? A clitoridectomy was performed on a 5 year old girl to stop her exploring her genital region in 1948....In the United States!
The Catholic Church's definition of female masturbation as "a grave moral disorder" in 1976 (yes, only about 30 years ago) had a profound and devastating effect on the woman's psyche.
And less than a century ago, during the Victorian era, priests and doctors agreed unanimously that it was essential to totally repress women's sexuality, if indeed they were to succeed in ensuring her thorough subjugation. Leading authorities performed many clitoridectomies to cure women's nervousness, hysteria, catalepsy, insanity, female dementia, and other catchwords to describe forms of female sexual frustration!
Provided by: manpages-pt-dev_20040726-4_all
setlocale - set the current locale.
#include <locale.h>
The setlocale() function is used to set or query the program's current locale.
If locale is not NULL, the program's current locale is modified according to the
arguments. The argument category determines which parts of the program's current locale
should be modified.
LC_ALL for all of the locale.
LC_COLLATE for regular expression matching (it determines the meaning of range expressions and
equivalence classes) and string collation.
LC_CTYPE for regular expression matching, character classification, conversion, case-
sensitive comparison, and wide character functions.
LC_MESSAGES for localizable natural-language messages.
LC_MONETARY for monetary formatting.
LC_NUMERIC for number formatting (such as the decimal point and the thousands separator).
LC_TIME for time and date formatting.
The argument locale is a pointer to a character string containing the required setting of
category. Such a string is either a well-known constant like "C" or "da_DK" (see below),
or an opaque string that was returned by another call of setlocale.
If locale is "", each part of the locale that should be modified is set according to the
environment variables. The details are implementation dependent. For glibc, first
(regardless of category), the environment variable LC_ALL is inspected, next the
environment variable with the same name as the category (LC_COLLATE, LC_CTYPE,
LC_MESSAGES, LC_MONETARY, LC_NUMERIC, LC_TIME) and finally the environment variable LANG.
The first existing environment variable is used. If its value is not a valid locale
specification, the locale is unchanged, and setlocale returns NULL.
The locale "C" or "POSIX" is a portable locale; its LC_CTYPE part corresponds to the 7-bit
ASCII character set.
A locale name is typically of the form language[_territory][.codeset][@modifier], where
language is an ISO 639 language code, territory is an ISO 3166 country code, and codeset
is a character set or encoding identifier like ISO-8859-1 or UTF-8.
If locale is NULL, the current locale is only queried, not modified.
On startup of the main program, the portable "C" locale is selected as default. A program
may be made portable to all locales by calling setlocale(LC_ALL, "") after program
initialization, by using the values returned from a localeconv() call for
locale-dependent information, by using the multi-byte and wide character functions for
text processing if MB_CUR_MAX > 1, and by using strcoll(), wcscoll() or strxfrm(),
wcsxfrm() to compare strings.
A successful call to setlocale() returns a string that corresponds to the locale set.
This string may be allocated in static storage. The string returned is such that a
subsequent call with that string and its associated category will restore that part of the
process's locale. The return value is NULL if the request cannot be honored.
ANSI C, POSIX.1
Linux (that is, GNU libc) supports the portable locales "C" and "POSIX". In the good old
days there used to be support for the European Latin-1 "ISO-8859-1" locale (e.g. in
libc-4.5.21 and libc-4.6.27), and the Russian "KOI-8" (more precisely, "koi-8r") locale
(e.g. in libc-4.6.27), so that having an environment variable LC_CTYPE=ISO-8859-1 sufficed
to make isprint() return the right answer. These days non-English speaking Europeans have
to work a bit harder, and must install actual locale files.
locale(1), localedef(1), strcoll(3), isalpha(3), localeconv(3), strftime(3), charsets(4)
Provided by: wine1.4_1.4-0ubuntu4_amd64
winedump - A Wine DLL tool
winedump [-h | sym <sym> | spec <dll> | dump <file> ] [mode_options]
winedump is a Wine tool which aims to help:
A: Reimplementing a Win32 DLL for use within Wine, or
B: Compiling a Win32 application with Winelib that uses x86 DLLs
For both tasks, in order to be able to link to the Win functions, some
glue code is needed. This 'glue' comes in the form of a .spec file.
The .spec file, along with some dummy code, is used to create a
Wine .so corresponding to the Windows DLL. The winebuild program
can then resolve calls made to DLL functions.
Creating a .spec file is a labour intensive task during which it is
easy to make a mistake. The idea of winedump is to automate this task
and create the majority of the support code needed for your DLL. In
addition you can have winedump create code to help you re-implement a
DLL, by providing tracing of calls to the DLL, and (in some cases)
automatically determining the parameters, calling conventions, and
return values of the DLL's functions.
Another use for this tool is to display (dump) information about a 32bit
DLL or PE format image file. When used in this way winedump functions
similarly to tools such as pedump provided by many Win32 compiler suites.
Finally winedump can also be used to demangle C++ symbols.
winedump can be used in several different modes. The first argument to the program
determines the mode winedump will run in.
-h Help mode. Basic usage help is printed.
dump To dump the contents of a file.
spec For generating .spec files and stub DLLs.
sym Symbol mode. Used to demangle C++ symbols.
Mode options depend on the mode given as the first argument.
Help mode:
No options are used.
The program prints the help info and then exits.
Dump mode:
<file> Dumps the content of the file named <file>. Various file
formats are supported (PE, NE, LE, Minidumps, .lnk).
-C Turns on symbol demangling.
-f Dumps file header information.
This option dumps only the standard PE header structures,
along with the COFF sections available in the file.
-j dir_name
Dumps only the content of directory dir_name, for files
whose header points to directories.
For PE files, currently the import, export, debug, resource,
tls and clr directories are implemented.
For NE files, currently the export and resource directories are implemented.
-x Dumps everything.
This command prints all available information (including all
available directories - see -j option) about the file. You may
wish to pipe the output through more/less or into a file, since
a lot of output will be produced.
-G Dumps contents of debug section if any (for now, only stabs
information is supported).
Spec mode:
<dll> Use dll for input file and generate implementation code.
-I dir Look for prototypes in 'dir' (implies -c). In the case of
Windows DLLs, this could be either the standard include
directory from your compiler, or a SDK include directory.
If you have a text document with prototypes (such as
documentation), that can also be used; however, you may need
to delete some non-code lines to ensure that prototypes are
parsed correctly.
The 'dir' argument can also be a file specification (e.g.
"include/*"). If it contains wildcards you must quote it to
prevent the shell from expanding it.
If you have no prototypes, specify /dev/null for 'dir'.
Winedump may still be able to generate some working stub
code for you.
-c Generate skeleton code (requires -I).
This option tells winedump to create function stubs for each
function in the DLL. As winedump reads each exported symbol
from the source DLL, it first tries to demangle the name. If
the name is a C++ symbol, the arguments, class and return
value are all encoded into the symbol name. Winedump
converts this information into a C function prototype. If
this fails, the file(s) specified in the -I argument are
scanned for a function prototype. If one is found it is used
for the next step of the process, code generation.
-t TRACE arguments (implies -c).
This option produces the same code as -c, except that
arguments are printed out when the function is called.
Structs that are passed by value are printed as "struct",
and functions that take variable argument lists print "...".
-f dll Forward calls to 'dll' (implies -t).
This is the most complicated level of code generation. The
same code is generated as -t, however support is added for
forwarding calls to another DLL. The DLL to forward to is
given as 'dll'.
-D Generate documentation.
By default, winedump generates a standard comment at the
header of each function it generates. Passing this option
makes winedump output a full header template for standard
Wine documentation, listing the parameters and return value
of the function.
-o name
Set the output dll name (default: dll).
By default, if winedump is run on DLL 'foo', it creates
files 'foo.spec', 'foo_main.c' etc, and prefixes any
functions generated with 'FOO_'. If '-o bar' is given,
these will become 'bar.spec', 'bar_main.c' and 'BAR_'.
This option is mostly useful when generating a forwarding DLL.
-C Assume __cdecl calls (default: __stdcall).
If winebuild cannot determine the calling convention,
__stdcall is used by default, unless this option has
been given.
Unless -q is given, a warning will be printed for every
function that winedump determines the calling convention
for and which does not match the assumed calling convention.
-s num Start prototype search after symbol 'num'.
-e num End prototype search after symbol 'num'.
By passing the -s or -e options you can have winedump try to
generate code for only some functions in your DLL. This may
be used to generate a single function, for example, if you
wanted to add functionality to an existing DLL.
-S symfile
Search only prototype names found in 'symfile'.
If you want to only generate code for a subset of exported
functions from your source DLL, you can use this option to
provide a text file containing the names of the symbols to
extract, one per line. Only the symbols present in this file
will be used in your output DLL.
-q Don't show progress (quiet).
No output is printed unless a fatal error is encountered.
-v Show lots of detail while working (verbose).
There are 3 levels of output while winedump is running. The
default level, when neither -q nor -v is given, prints the
number of exported functions found in the dll, followed by
the name of each function as it is processed, and a status
indication of whether it was processed OK. With -v given, a
lot of information is dumped while winedump works: this is
intended to help debug any problems.
Sym mode:
<sym> Demangles C++ symbol '<sym>' and then exits.
Perl script used to retrieve a function prototype.
Files output in spec mode for foo.dll:
This is the .spec file.
These are the source code files containing the minimum set
of code to build a stub DLL. The C file contains one
function, FOO_Init, which does nothing (but must be present).
This is a template for 'configure' to produce a makefile. It
is designed for a DLL that will be inserted into the Wine
source tree.
C++ name demangling is not fully in sync with the implementation in msvcrt. It might be
useful to submit your C++ name to the testsuite for msvcrt.
Jon P. Griffiths <jon_p_griffiths at yahoo dot com>
Michael Stefaniuc <mstefani at redhat dot com>
winedump's README file
The Winelib User Guide
The Wine Developers Guide
Tuesday, 29 January 2019
What Made the 1920s Different?
Most history professors and journalists prefer to avoid long discussions of the era. If you do go and research the period....it was probably the best nine years of American society (up until 1929 and the fall of Wall Street) of the past century.
What made the era different? I put the change that occurred down to seven key things.
1. WW I, and the survivors of the war who came back and lived remarkable lives. Whether it was boot-camp, the war itself, or the comradery.....guys changed.
2. Prohibition. Saloons were the norm until 1919. After that point, speak-easy operations and hidden clubs became the norm. It should be noted that with the prohibition era came an increase in women drinking.
3. Cars and roads made society mobile.
4. Movies started to get made, and people were introduced to a new form of entertainment.
5. Radios started to become reality.
6. Baseball became a national sport, and was often discussed by the working-class.
7. A housing boom started to take place in this era.
Life changed in this era, and in some ways.....we laid the path to 1929 and the depression that followed.
What the Howard 'Starbucks' Schultz Candidacy Does
Howard Schultz came out yesterday and announced he's running an independent campaign in 2020 to be President, and the announcement has several themes and scenarios attached to it. If you haven't noticed, he's leveled criticism across the board toward both parties. Even I would admit that there is much frustration over both the Democrats and the GOP.
So what is the real strategy here? Let's go back to 1992 and examine Ross Perot's end-result....roughly 19-percent of the vote, and 19.7-million votes. I think Schultz could easily attain 24-million votes, with a decent social media campaign, and aim simply at five topics that most people feel frustration over.
But here's the interesting thing.....if he just concentrated the bulk of his time and effort on Michigan, Pennsylvania, and Wisconsin.....he'd likely win those states against Trump and the Democratic contender. By taking those 46 electoral votes.....there would be NO ONE holding 270 or more votes. So the Electoral College would conclude with no winner.
Then in 2021, the House would be given the job of the vote, and it'd go state by state. Right now.....the GOP barely holds an edge, with 26 states being 'GOP-control' and 22 states being Democratic-control. The rest are 'even'. This differs from 2016's election, where the GOP held control over 32 states. So there would have to be at least two states where things changed drastically, to bring the GOP-control down to 24.
This could be a situation where the Democrat....possibly with less than 55-million votes....would end up as the winner.
So is this an Electoral College gimmick? Yes, I think so. But a lot depends on the House elections in 2020, and whether the GOP can manage to hold some control. If Schultz does all of this, and the GOP ends up controlling the House state votes? Then all of this was for nothing.
'Rube' Comes Back Around?
'Rube' as a word got used in the past month by a Washington Post journalist....to describe the people who voted for President Trump in 2016. It was a chatter episode that was supposed to explain to naive or innocent folks.....how so many people got stupid and voted that way.
For those who aren't familiar with 'Rube'....a little history lesson.
To be honest, it hasn't been used to any significant degree since the 1930s (that was probably the peak). 'Rube' Waddell is where most folks remember the word. Waddell was one of the premier pitchers of the American League. Pitching from 1897 to 1910....he was probably one of the five best pitchers in the league, and was washed-up by age 33. Roughly four years later, he'd pass away. In simple terms, Waddell was a nutcase. Every time a fire truck passed a stadium in the midst of a game that he was pitching.....it was a 50-50 possibility that he'd take off after the fire truck and leave the game.
The term 'Rube' was supposed to mean a country guy with no real recognition of the big world, and bound up with 'innocence'.
Generally, it's said that Rube came from the early 1800s, and was supposed to be slang to identify a guy without class, or one with no sophistication....in simple terms.....a non-intellectual.
But here's the funny thing about the term: it's not really been used for almost eighty years. In terms of slang-value, it's been dead since the 1950s. You could bring up the term with folks who are over sixty, and they'd have a basic idea of it. Someone who is 20 to 30 years old? There might be one person out of a dozen who has heard of the word or has some idea of its meaning.
So where did the journalist dig the term up? Unknown.
Does this mean we are reverting back to some 1920s slangs? Let's talk about some of these:
1. 'Giggle Water'. That's something with an alcoholic punch to it.
2. 'Hay Burner'. That's a car that gets exceptionally poor mileage.....like less than ten miles to the gallon.
3. 'Ms Grundy'. A 'dame' who has an exceptional number of personal rules, and questions anything related to fun.
4. 'Sheba'. A 'dame' who is exceptionally fit, in loose clothing, and dangles a cigarette from her lips....suggesting to 'blow this joint' (to leave the bar), and do unimaginable things in a hay-barn down the road.
5. 'Bearcat'. A 'dame' who is fairly dangerous when 'juiced-up' (drinking), who might show fits of rage or assault, if a guy wasn't careful.
You have to wonder if this journalist is trying to lead folks back around to three-hundred-odd phrases which died in the 1930s.
It has long been argued that some renewable energy sources, in their current states, are not practical enough to meet the energy demands of the world. With the added pressure of decarbonisation, many have decided to commit to nuclear energy as a practical method of significantly reducing carbon emissions. The road to a carbon zero future is a global challenge, but is nuclear really the best way to set us on the right course? Well, that’s what we’re trying to figure out!
The many problems caused by nuclear energy generally put a lot of people off and have a tendency to scare people. We hope this blog post can help narrow down some issues and debunk some myths for you.
Nuclear Energy
I am sure you are well aware of how nuclear power generates electricity. However, just for those who may not know and are curious, we’ll briefly cover it here.
Nuclear energy generates electricity by a continued chain reaction that involves the splitting of atoms. This process produces an intense amount of heat. The heat is then used to create steam, which then turns a turbine linked to a generator that creates electricity. The process of splitting the atom is known as nuclear fission.
Another misconception with nuclear energy is that, just because it is essentially green, it must be renewable. Unfortunately, much like coal or gas powered stations, the fuel runs out and has to be replenished. The reason we say 'essentially green' is that nuclear does produce some carbon emissions, though in comparison to coal and gas the amounts are negligible.
But, it seems to many that Nuclear is the way forward to a carbon zero future, as renewable technology just isn’t practical yet!
Is it safe? Yes… and no
Possibly the most significant reason that nuclear is shunned or feared is that it is considered dangerous. This is undoubtedly the biggest hurdle with public approval, though most would argue that you would be more than right to feel this way. We have, unfortunately, seen our fair share of catastrophe and tragedy with nuclear power. The most significant being Chernobyl. There is also the Fukushima Daiichi nuclear disaster, as well as the Three Mile Island accident.
To play devil's advocate, those for nuclear energy would suggest that these incidents, to a point, have led to better safety procedures, and nuclear energy today has never been safer. Learning from our mistakes is also a wholly human trait. It is only right that we look at how we failed to see how we should be succeeding.
It is entirely up for debate, though you will of course have your own opinion. Do the benefits outweigh the negatives? Are we at risk with nuclear? Or is it worth the risk when it is a viable solution to reaching a carbon zero future?
The health problem
Health is a massive concern rising alongside the climate crisis. As coal burning is traced to numerous health issues, green or renewable energy becomes ever more attractive. Though there are some health side effects to nuclear power too: the workers are at risk, and those living within the radius of the power station are also at risk.
It should be said, however, that there are several studies and reports on the actual effect nuclear power has. One in particular, by the US National Cancer Institute, found that doses 'typically less than 3 millirem per year, to those exposed to the station, were too small to result in detectable harm'. They also state that 'such levels are, in fact, much smaller than the population exposures from natural background radiation, which amounts to 100 millirem a year'.
Both sides make compelling cases, but again, it is down to you to decide whether you think nuclear power is worth it.
The waste problem
Unlike renewables, Nuclear Power will produce radioactive waste. Let’s talk about that! When the uranium fuel is depleted it will need to be replaced with new fuel. The resultant waste still gives off immense heat until it has decayed. It needs to be cooled first and then placed in long-term storage until it has decayed completely and no longer poses a radiation hazard.
The other side of this issue is the rate at which radioactive elements decay, measured as a half-life. The half-lives of the radioactive elements in spent uranium fuel vary enormously: around 30 years for some, 24,000 years for others, and in the most extreme case, 16 million years.
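The half-life figures above translate into how much material is left over time via the standard relation N/N0 = (1/2)^(t/T). A small illustrative sketch (the 30-year value is just an example half-life, not a claim about any specific isotope):

```python
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample still undecayed after t_years."""
    return 0.5 ** (t_years / half_life_years)

# After one half-life, exactly half of the material remains.
print(fraction_remaining(30, 30))    # 0.5

# After ten half-lives (300 years for a 30-year half-life),
# less than 0.1% of the original material is left.
print(fraction_remaining(300, 30))
```

For the 16-million-year case, the same formula shows why such waste effectively never becomes safe on human timescales.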
Are you interested in green energy?
The Negation of Humanity
Of the many forms of discrimination that are typical of the contemporary world, racism is distinguished by its basis in the differences between human groups, whose features – such as skin color or hair shape – have been defined as constituting a race and are then converted into signs of inferiority or superiority. A clear example of this can be seen in the history of the United States, where slavery was based on the idea of black inferiority, an idea that persisted after its abolition in 1865 with the passage of laws that prevented African-Americans from using public spaces or enrolling in college and, up until a few decades ago, required them to cede their seats on buses to whites. The repeal of these laws did not end racism, whose institutional expressions became more subtle but which nevertheless remains intense in its social manifestations.
The prevailing racism in the United States can serve as a reference for identifying some of its more evident traits: spatial segregation; inequality before the law (prisons are disproportionately full of African-Americans); police brutality; the violation of the most elemental human rights; the low quality of the schools in certain neighborhoods; a lack of job opportunities; greater obstacles to personal development; and the way in which African-Americans are seen and the characteristics attributed to them – violent, lazy, conflictive and stupid – as can be seen in the movies, that is, when they're not simply assigned secondary roles. Two distinct fates for people of the same nationality, sharing the same territory.
Strange Fruit
José Clemente Orozco. Lynching (Linchamiento), 1930, Lithograph. INBA/MACG Archives.
One fundamental trait of racism is inequality before the law. Lynching, a common practice in the southern United States, is illustrative due to the regularity with which it was practiced. Whenever a crime was committed, the enraged white community found it easy to blame an African-American and then immediately whip him, hang him, or burn him. This was a public spectacle at which even children were present. Images of African-Americans hanging from a tree – strange fruit, as memorably depicted in the Billie Holiday song – even circulated on postcards.
The Mexicans who lived in the United States, likewise discriminated against then just as they are today, were also lynching victims, as the photographic record shows. African-Americans were able to stop this practice thanks to their organization in social movements, legal battles and countless protests. The struggle for civil rights, access to universities and equality before the law reached new heights in the 1960s under the leadership of Martin Luther King, Jr., who was assassinated in 1968, and it has not let up to this day. The contemporary Black Lives Matter movement shows that demands for equality and the need to fight racism in American society are just as relevant today.
1.6 Mauricio Gómez Morín. Segregation Sign in the United States, Undated, Acrylic and Tar on Wood. La Penca Producciones Transdiciplinarias A.C. Collection.
Ernest Whiters. I Am a Man, Sanitation Workers’ Strike, 1968, Digital Print. Ernest Whiters Archive.
1.5 Stephen Caton, Black Lives Matter, 2015, Digital Print.
1.7 Mauricio Gómez Morín. Segregation Sign in the United States, Undated, Acrylic and Tar on Wood, La Penca Producciones Transdiciplinarias A.C. Collection.
A Guide to Learning Photography With Any Camera You Have Right Now
This tutorial is for beginners who may want to start learning photography during the lockdown periods around the world.
If you have been meaning to take up photography as a new hobby but never had the time, the downtime caused by the coronavirus may be the perfect opportunity. You may think it impossible because you cannot leave the house or be in contact with an actual teacher, but anyone with a camera of any sort can start to learn photography on their own, simply because of the abundance of time to practice.
In this tutorial, we will go through a simple approach to teaching yourself basic photography and understanding the foundations of the craft using whatever camera you may have available.
The Basic Approach
Learning photography, contrary to how many photographers may make it seem, is easiest with a simple camera. Much of the beauty of a photograph, in general, relies on two fundamental factors: exposure, which is the balance of lights and shadows in frame, and composition, the creative placement of everything in the image. Having a simple camera with automatic functions can help you fully understand the two factors before you move on to a more advanced and more complicated camera.
Learning Composition With a Basic Smartphone Camera
A basic smartphone camera is one of the best tools for learning photography. Basic smartphone cameras use the smartphone’s computer to determine the necessary exposure setting for you to achieve a balanced shot. Having this allows you to focus on composition for the time being and lets you explore the possible perspectives of shooting your desired subject.
If you’re reading this at the time of the worldwide lockdowns, I strongly suggest that you practice on still life for the time being. You can use different items like toys, food, plants, or any other small object. After a bit of practice, you can move on to shooting portraits of people you live with or your pets if you can make them hold still.
At this point, your focus should be on composing your image. Composition, in this simple sense, is placing your subject creatively within the four corners of the frame and making sure that the other objects in the frame are complementing it or giving it some sort of emphasis. Below are a few of the simplest ways to compose your images.
Rule of Thirds
The rule of thirds is by far the most basic and most popular composition technique. Though the name may be misleading, as it is not an actual rule, following the rule of thirds can be one of the easiest ways to compose your photos. This is done by simply dividing your frame into thirds horizontally and vertically, leaving you with nine equal rectangles (or squares if your aspect ratio is 1:1). Once you have divided your frame into thirds, simply place your subject on one of the four intersection points of the lines. In situations where there is more than one object in the frame, your supporting elements or objects can be placed at the other intersection points. This gives a perception of balance in your frame. Your use of empty space can also give additional context, so be sure to explore zooming in or cropping to give due emphasis to your subject. Empty spaces that are too big can often overpower your subject.
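The grid described above can be computed directly from the image dimensions. A toy sketch (camera apps draw this overlay for you; the function name here is my own):

```python
def rule_of_thirds_points(width, height):
    """Return the four intersections of the rule-of-thirds grid lines."""
    xs = (width / 3, 2 * width / 3)    # the two vertical grid lines
    ys = (height / 3, 2 * height / 3)  # the two horizontal grid lines
    return [(x, y) for x in xs for y in ys]

# For a 3000 x 1500 image, a subject would sit at one of these points:
print(rule_of_thirds_points(3000, 1500))
# [(1000.0, 500.0), (1000.0, 1000.0), (2000.0, 500.0), (2000.0, 1000.0)]
```

Most camera and phone apps can overlay this grid for you, so in practice you only need to nudge your subject onto one of the four intersections.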
Look for things inside your house or maybe scenes outside your window.
Symmetry
Another easy and satisfying technique is the use of symmetry. Symmetry makes use of the satisfaction of seeing absolute balance in your frame. It is, however, not as readily applicable to most subjects. In the simplest sense, achieving symmetry requires placing a single object exactly in the middle of the frame so that the two halves of the frame are mirror images of each other.
Rule of Edges
To achieve an aesthetically pleasing image, your subject should be given enough emphasis, and any other object in the frame should complement or, at the very least, not clash with your subject. One simple way to do this is to make sure that no elements along the edges of your frame are either significantly large or significantly bright. Anything that fits that description may take attention away from your subject and confuse the viewer. Simply exploring different angles and cropping your image can help you attain the necessary compositional harmony.
Learning Manual Exposure With an Intermediate Camera
For the sake of this tutorial, we will classify both smartphone cameras with manual modes and point-and-shoot cameras with manual modes as intermediate cameras. For the first step of your learning, it would be helpful to first learn composition by setting your camera in automatic mode and using it as a basic camera, as mentioned above. Once you have practiced your composition enough, you can move on to manual exposure settings and explore the different effects that they have on your shot.
The door analogy helps in understanding the three factors of exposure.
Shutter Speed
Shutter speed, also known as exposure time, is the amount of time that your camera is recording light. A camera works by recording light that travels from your subject, through the lens, and onto the sensor. Imagine that your shutter is a door, and the longer you keep the door open, the more light comes in. In the same way, the longer your shutter is open (and your sensor is exposed to light), the brighter your image becomes. However, for most cameras with standard zoom lenses or wide-angle lenses, a shutter speed longer than 1/50th of a second might create blurry images because of camera shake. To explore this, it would be good to use a tripod or at least secure your camera on a flat surface to make sure that it does not move while taking a photograph.
ISO
ISO pertains to the sensitivity of your camera to light. The higher the ISO number, the more sensitive to light your camera becomes, and consequently, the brighter your image. To stick with our door analogy, think of ISO as how attractive the door is to people passing by. However, using a higher ISO makes your image more prone to digital noise, which decreases its quality; an overabundance of noise in your shot can lead to a loss of detail.
Aperture
If the shutter speed is the time that your door is open, the aperture is the size of the door. If your door is bigger, more people can enter the room within the same period. Aperture is a setting expressed in f-stops on your camera. For now, all you need to understand is that as your f-number goes up, the opening gets smaller and your image gets darker (unless you compensate with the other exposure settings). Consequently, as your aperture gets smaller, your depth of field becomes wider, meaning a larger portion of your scene is in focus.
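Because the light admitted scales with the area of the opening, it is proportional to 1/N², where N is the f-number. A small sketch of that relationship (the function name is mine, not a camera API):

```python
def relative_light(n1, n2):
    """How many times more light f/n1 admits than f/n2.
    Light admitted is proportional to 1 / N**2 (the opening's area)."""
    return (n2 / n1) ** 2

# f/2.8 admits four times the light of f/5.6 (two full stops).
print(relative_light(2.8, 5.6))   # 4.0
# Stopping down from f/4 to f/8 cuts the light to a quarter.
print(relative_light(8, 4))       # 0.25
```

This is why the standard full-stop markings (f/1.4, 2, 2.8, 4, 5.6, 8, ...) each multiply the f-number by roughly √2: every step halves the light.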
The Exposure Triangle
The exposure triangle is based on the fact that for a shot to have a balanced exposure, the adjustment of one factor must be compensated for by the others.
The relationship between the three exposure settings is often referred to as the exposure triangle, because achieving a balanced exposure means finding the right balance among the three inter-related factors. The definition of a balanced exposure is ultimately based on the photographer's preference; still, for the sake of clarity, your shot should show enough detail without losing depth and contrast.
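One way to see the triangle numerically is the textbook exposure-value formula, EV = log2(N²/t), shifted by the ISO relative to ISO 100: settings with the same EV record the same overall brightness. A sketch under those standard definitions (not tied to any particular camera):

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings with the same EV record the same overall brightness."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Halving the exposure time while doubling the ISO is a wash:
a = exposure_value(4.0, 1/125, iso=100)
b = exposure_value(4.0, 1/250, iso=200)
print(round(a, 2), round(b, 2))   # both ≈ 10.97
```

In other words, trading one factor against another along the triangle leaves the overall exposure unchanged; what changes are the side effects (motion blur, depth of field, noise).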
What to Do Next
Once you’ve mastered the basic controls of your camera, you can eventually move on to more advanced pieces of gear. Having the foundations of exposure and composition secure will allow you to deal with the other factors of photography that can make it too complicated for someone who might not understand them yet. Once you have understood how all these factors affect your images, you can now explore more advanced cameras and the wide array of lenses available for them. You can also begin to explore shooting in raw and editing on dedicated software. The important thing to do at this point is to seek inspiration on social media and forums like what we have on Fstoppers and practice by trying to achieve the same effect.
What Are the Top 5 Green Cities Doing?
As green becomes more popular in our businesses and homes, it also becomes more popular in our cities as a whole. Local municipalities are striving to greenify their neighborhoods, downtowns and perimeters in an ongoing effort to make our world and daily living habits more healthy, pleasant and eco-friendly. From encouraging walkable neighborhood development to extensive recycling programs, cities around the globe are striving to become greener with sustainable business practices and consumer action.
Using data from the U.S. Census Bureau, the National Geographic Society’s Green Guide, and various online articles and city websites, we have compiled a list of five cities around the United States that have made notable steps in their efforts to go green, setting an example for other cities around the nation. Fortunately for us all, this list is not all-inclusive – it is simply a sampling of cities that have made notable achievements in sustainability and environmental awareness.
San Francisco, California San Francisco often tops various lists for sustainable, green cities and has for some time. One of the biggest reasons that San Francisco is so green is because of its walkability; this is a great city for using foot power to get from place to place, and then occasionally jumping on any variety of public transportation for journeys too long to walk comfortably. More than half of the city’s residents use public transportation or alternative transportation (bicycles, roller blades, etc.) for their daily commute.
Golden Gate Bridge in GBB BLOG
Portland, Oregon It might be relatively easy to think of beautiful and crisp Portland, Oregon as a green metropolis, but let’s look at some specifics anyway. Always a very active city, Portland has created various bicycle and carpool lanes, and an impressive 13% or more of its residents now commute by carpool or other environmentally friendly methods. It also has extensive recycling programs covering glass, plastic and yard waste. Finally, in an effort to improve air quality around the globe, Portland is doing its part by getting a full 44% of its city power from sustainable sources such as wind and water power.
Boston, Massachusetts Among Boston's notable projects are various efforts to use methane gas for power. Some of this methane will be pumped from landfills, while other supplies are being intentionally created from anaerobic composting of yard waste. The city itself has embarked on the development of a huge power plant that will use yard waste (leaves, grass clippings, etc.) from around the city to generate as much as 1.5 megawatts of power. This plant will have a triple impact: removing yard waste without filling up a landfill, creating cleaner power, and reducing the city’s reliance on imported fossil fuel.
Chicago, Illinois When you think of a city going green, it can be relatively easy to envision a waterfront, hippie-filled town like San Francisco topping the list, while a more traditional, business-oriented city like Chicago may not be the first to leap to mind. However, Chicago is earning a reputation for going green, with a variety of sustainable initiatives being utilized throughout the city. Chicago has excellent public transportation, which makes it easier to use the city system than to drive an individual car. It has also recently made a commitment to renewable and sustainable energy systems. For example, all nine of Chicago’s city museums have installed solar power arrays that provide at least part of their buildings’ power. In fact, the city is striving to get 20% of its electricity from renewable resources within the next year. It has also rolled out generous tax incentives to encourage homeowners to use sustainable energy supplies.
Austin, Texas Another big surprise on our list of green cities is Austin, Texas. Sure, when you think of Texas you think of oil and there is little green about burning fossil fuels. However, the city is really stepping up and doing its part. The city started simply by creating green areas, now including over 200 parks throughout the city and various nature preserves, watersheds and other green areas. They are also striving to get the city to a point of being powered with at least 20% sustainable energy. While they have encouraged recycling for decades, a relatively new “pay-as-you-throw” program charges people for how much trash they throw away, encouraging reuse and recycling.
Posted on Dec 20, 2019 in Diabetes mellitus
In a nutshell
This study is looking for patients with diabetes taking insulin to test a mobile app that helps patients understand how their blood glucose levels change. The main outcome measured will be the average blood glucose level at the beginning versus the end of the study. This study is recruiting in Boston, MA, in the US.
The details
Patients with diabetes have high blood glucose levels. This can cause other health complications. Blood glucose levels naturally rise after a meal and fall during periods when a person is not eating, such as sleep. Insulin is a treatment that helps to lower blood glucose levels. However, it can be difficult to control the amount of insulin after a meal or around bedtime. If blood glucose levels drop too low (hypoglycemia), other health complications can arise such as fatigue. Patients’ awareness and understanding of the problem can help them balance insulin levels.
This trial aims to use a mobile app that compares patients’ estimates of their blood glucose levels with actual readings to help patients control insulin doses more effectively. The main outcome that will be measured is the average blood glucose level at the beginning versus the end of the study.
Who are they looking for?
This trial is recruiting 70 patients with either type 1 or type 2 diabetes. Patients must be taking multiple daily insulin injections or have an insulin pump. Patients must be aged 18 or older. Patients’ HbA1c values (a measure of blood glucose for the previous 3 months) must be between 7 and 10.5% or they must have experienced hypoglycemia 3 or more times a week. Patients must own an Android or iOS smartphone with regular internet access.
Patients receiving insulin in a closed-loop pump, such as the Medtronic 670G or OpenAPS, cannot participate.
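For context on the 7–10.5% HbA1c window, HbA1c is commonly converted to an estimated average glucose using the linear ADAG formula, eAG (mg/dL) = 28.7 × HbA1c − 46.7. The snippet below only illustrates that published relationship; it is not part of the trial protocol:

```python
def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose (mg/dL) from HbA1c (%), using the
    ADAG study's linear formula: eAG = 28.7 * HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

# The trial's 7% - 10.5% HbA1c window corresponds roughly to:
print(round(estimated_average_glucose(7.0)))    # 154 mg/dL
print(round(estimated_average_glucose(10.5)))   # 255 mg/dL
```

So eligible patients are those whose average glucose over the past three months has been meaningfully elevated, or who experience frequent hypoglycemia despite treatment.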
How will it work?
All patients will interact with the Control:Diabetes mobile app. Patients will be asked each morning to predict their blood glucose levels. Patients will then enter their actual blood glucose level and, if there is a significant difference between estimated and actual, will be asked to explain why they think there was a difference.
The average change in blood glucose and the number of hypoglycemia events between the start and the end of the study will be measured.
Clinical trial locations
Study ID: NCT04158921
A Round-Up of Scientific Developments Related to the Novel Coronavirus
by Amanda Vaught, amandavaught@propel-fa.com
At Propel we keep an eye on the latest scientific developments regarding coronavirus research to better understand its impact, supply chain disruptions, and how societies and economies react. Our advisor Amanda Vaught utilizes her scientific training from the Johns Hopkins University and graduate school at Columbia University to better understand the science behind the headlines. She is by no means an expert, but will regularly report on the latest understanding of coronavirus and its impact.
Coronavirus is novel. So while scientists and society are generally familiar with viruses, we really do not know details of the virus’s behavior, how it is transmitted, its long-term effects or how to treat it or vaccinate for it. Scientists do not have the answers. What they do have is a lot of questions and an unprecedented amount of cooperation across borders to find the answers.
We will continue to monitor studies on COVID transmission. If you would like to discuss this information, we encourage you to email Amanda. As always, we hope you and your loved ones remain safe.
Here are some recent scientific updates that we find encouraging and why.
1. Initial studies are showing that most people can produce antibodies in response to a Covid-19 infection.
Read the study here: Convergent Antibody Responses to SARS-CoV-2 Infection in Convalescent Individuals
What this means is that a good vaccine MIGHT give better protection than a natural infection.
What we still don’t know is whether or how long the antibodies remain protective. Would a vaccine protect for an extended period of time? Or would people need an annual vaccine like for influenza?
2. Yale scientists develop saliva test for Covid-19.
Read the story here: Saliva samples preferable to deep nasal swabs for testing COVID-19
**Please note this study has not been peer-reviewed as of this writing.**
Widespread testing for Covid-19 infections in the United States remains woefully inadequate. (The US should be conducting 900,000 tests per day according to the American Public Health Association.)
One problem? The supply chain for test swabs.
“Despite Early Warnings, U.S. Took Months To Expand Swab Production For COVID-19 Test”.
With a test just using saliva, these specialized test swabs are not required.
In addition, many patients find the current swab test incredibly uncomfortable: “It’s a deep burning, and it often elicits tears and sometimes coughing,” says Nurse Practitioner Molly Erickson in this Chicago Sun-Times article: “COVID-19 tests: What it feels like to have one”. The test and concurrent coughing expose healthcare workers to infection and require generous amounts of personal protective equipment (a supply chain issue unto itself).
The new Yale study demonstrated that saliva is a more accurate test; it is easier on patients and healthcare workers and eliminates the supply chain problem of the swabs. And, as they say in the article: “More sensitive and consistent detection is expected to be critical in helping to assess when individuals are able to safely return to work and when local economies can reopen during the current pandemic.”
3. New York survey shows most new coronavirus cases coming from people staying at home.
Recently Governor Cuomo announced that 66% of new coronavirus hospitalizations in New York were of people who had been staying at home.
“Cuomo says it’s ‘shocking’ most new coronavirus hospitalizations are people who had been staying home”.
How is this possible if staying at home is supposed to protect people from becoming infected?
Liqian Ren, Ph.D., Director of Modern Alpha at WisdomTree, wants this data set dug into more deeply than it has been so far. The survey seemingly contradicts all the other evidence showing that this virus transmits person to person indoors. (For example, see the CDC studies finding that “talking, laughing, singing in close quarters, in unventilated interiors, for many hours, is the perfect storm for a COVID super-spreader event.”)
A better understanding of how the virus transmits will help all societies and economies evaluate and mitigate risk going forward.
Hear more of the discussion of the New York survey here: Behind the Markets Podcast Episode 174
At Propel we will be looking for more studies on transmission, and hopefully will have more clarity around it soon. |
Marriage Tactics
John Michael Wright
Image via Wikipedia
I suppose it must be theoretically possible to create an ethic without God or a god, but historically in the west it’s been a problem.
When Machiavelli developed the first utilitarian handbook on politics, that is to say, a book on politics that approached them without religion (except considered as a tool), he laid the foundations for Thomas Hobbes to develop his Social Contract.
Hobbes argued, following Machiavelli, that we are driven, not by reason, but by our appetites. That being the case, and to both it seems self-evident, though in Hobbes perhaps more explicitly so, society is not arranged around or by a moral law, but by people’s desires and passions.
The only way to organize such a society is through a continuous negotiation among its members. The fruit of this negotiation was the social contract. To maintain order, Hobbes argued, we need Leviathan.
Thus political tyranny and the whole western stream of politics-without-God walk hand in hand.
In the social contract we discern the basis of modern political theory, one that permeates economics as well, as it was applied by Adam Smith.
Without this notion of the social contract, we would have no Locke, no Rousseau, no American or French Revolution, no Marxism, and no special-interest industry negotiating their share of the social market with the representatives of the various parties appointed to oversee this great negotiation in Congress.
The reason the idea had such staying power in Machiavelli and Hobbes was twofold: one, much of the intellectual leadership of Europe was trying to escape the dominance of the Roman Catholic church and its appeal to a law of nature, and two, in a dynamic day to day sort of way, it is true that we are continuously negotiating the terms of our contract.
Under Machiavelli, Hobbes, and most other modern philosophers, the basis of that negotiation is personal advantage. We laugh at honor. We snicker at the idealist who would abandon his advantage for right and wrong.
Do not believe for a moment that I am referring primarily to financial transactions. On the contrary, I am talking about friendship, marriage, parent-child relationships, teachers and students, and so on.
Our underlying premise in every relationship is that we are engaged in a negotiation.
Think, for example, of the transition from the marriage covenant to the marriage contract. Think of the way people time their weddings to optimize tax benefits. Think of how parents are afraid to exercise their natural authority over their children for fear the children will reject the terms and hurt the parents.
I’m not sure, in such a context, good and evil are relevant terms. We have got “beyond good and evil,” to quote Nietzsche and Skinner.
Tom Wolfe expresses well the post-humanity of our condition in his 1998 novel A Man in Full:
Should he pour his heart?… Something told him that would be a tactical mistake. A tactical mistake. What a sad thing it was to have to think tactically about your own wife.
Sad indeed, and yet that is precisely how we are conditioned (and I use that word carefully) to approach these most foundational of human relationships.
Family, marriage, is a form. Form creates by limiting. We despise limits. Form is truth. Living in the form of the truth is virtue. Virtue is freedom.
We are no longer free to be married or to raise our children. Unless, of course, we seek first the Kingdom of God and His righteousness.
Then all is restored, no matter what is lost.
The Wizard of Oz and the Removal of Chests
Dorothy meets the Cowardly Lion, from The Wond...
Image via Wikipedia
The Wizard of Oz seems to be a fine movie from all I can tell, but the book strikes me as exactly the sort of thing that CS Lewis was talking about when he spoke of making “men without chests.”
Chapter XXI is called “The Lion Becomes the King of the Beasts.” After seeing the wizard and being given courage, the lion arrives, with the Woodman, the Scarecrow, Dorothy, and Toto at a forest that the Scarecrow finds gloomy but the lion finds “perfectly delightful.”
“I should like to live here all my life,” he says. “See how soft the dried leaves are under your feet and how rich and green the moss is that clings to these old trees. Surely no wild beast could wish a pleasanter home.”
Leaving aside the question of whether a lion who has just received a chest (courage) would even notice a home with soft dried leaves underfoot and the nostalgic moss clinging to old trees rather than an opportunity to show off its newly gained courage, I proceed to tell you that, in spite of the fact that “no wild beast could wish a pleasanter home,” they don’t see any.
The next day, however, they resume their journey and soon hear a “low rumble, as of the growling of many wild animals.” (Baum seems to do this a lot: raise a problem that ends up not mattering, that demands nothing of the characters but the passing of time, that has nothing more than an accidental significance if any at all.)
And indeed the animals have gathered: in a clearing, the travelers come across hundreds of beasts in council. The Lion quickly determines that they are in great trouble. But when he appears, the assembly falls silent and a tiger approaches him.
“Welcome, O King of Beasts, you have come in good time to fight our enemy and bring peace to all the animals of the forest once more.”
When he asks what their trouble is, the tiger tells him that they are threatened by a fierce spider-like monster, as big as an elephant, with eight legs as big as tree trunks. It has eaten every other lion in the forest, but none of them had been “nearly so large and brave as you.”
Then the newly brave lion asks, “If I put an end to your enemy, will you bow down to me and obey me as King of the Forest?” When they gladly agree, he heads off to “fight” the great monster.
“He bade his friends good-bye and marched proudly away to do battle with the enemy.”
In all the foregoing, I admire some of Baum’s story-telling tactics, though he is no Grimm. I have problems, but most of them can probably be responded to. But in the last paragraph of the chapter, he describes this battle, and I will tell you right now, I think it is badly done, and I think Baum betrays a harmful frivolousness that reminds me of Lewis’s opening words in Abolition: “We are not attentive enough to the importance of elementary text books.”
The great spider was lying asleep when the Lion found him, and it looked so ugly that its foe turned up his nose in disgust. Its legs were quite as long as the tiger had said, and its body covered with coarse black hair. It had a great mouth, but its head was joined to the pudgy body by a neck as slender as a wasp’s waist. This gave the Lion a hint of the best way to attack the creature, and as he knew it was easier to fight it asleep than awake, he gave a great spring and landed directly upon the monster’s back. Then, with one blow of his heavy paw, all armed with sharp claws, he knocked the spider’s head from its body. Jumping down, he watched it until the long legs stopped wiggling, when he knew it was quite dead.
The Lion went back to the opening where the beasts of the forest were waiting for him and said proudly, “You need fear your enemy no longer.”
Then the beasts bowed to the Lion as their King, and he promised to come back and rule over them as soon as Dorothy was safely on her way to Kansas.
Compare this “battle” with any other encounter in any other fairy tale or folk tale or fable and see if you can justify it.
The Lion is practical, he achieves his end. But he is not courageous, he is not noble, he is not worthy of a story for the simple reason that nothing worth learning about him or about virtue was displayed. It is not fitting to the world of fairy tales or children’s literature to read about such a conquest. We have had one more piece of our chests removed by reading and not resisting this story.
Give me Reepicheep, whom I can welcome into my soul with joy.
On Proving the Existence of God
The great argument of the “new atheism,” as of most atheisms of the old stripe, seems to be that “you can’t prove the existence of God.”
In other words, using the tools of science, you can’t prove the existence of something that transcends science.
To think more clearly on the matter, it might be helpful to look at the word religion. It comes from the Latin – ligare: to tie, and re-: a broad prepositional prefix with too many possible meanings to be able to properly translate.
The idea is generally taken to be that of tying together.
A religion is not a conclusion to an argument. It is a teaching that ties everything else together, that harmonizes everything.
The most powerful religions are those that are able to tie the most together.
I am a Christian because, while I have great respect for other religions, they all seem to leave us with one or two irresolvable dichotomies that are reconciled in Christ.
The mother of all dichotomies might be that between the material and the spiritual realms. Naturalism, the religion of today, resolves it by denying the spiritual or giving naturalistic explanations for all things spiritual.
Gnosticism, the perpetual enemy of Christianity and, according to Richard Weaver at least, the painfully ironic foundational dogma of progressive education (Dewey, James, etc.) treats the spiritual as legitimate and important and the material as valueless.
Christianity tells of one who is big enough to weave all things together into a harmony that damages nothing and blesses everything: Christ, the incarnate logos: Spirit made flesh, God made man, the weaving together in one of all things.
Now, if a religion is true, it cannot simply dismiss what it doesn’t like. That is a sign of theological weakness. A true religion ties everything together.
But when a philosophy is based on a necessarily inadequate premise, as is naturalism, then it is hard for this Christian to see why he ought to abandon his foundations because the other guys have developed a sophisticated argument.
A premise is necessarily inadequate when it excludes what it doesn’t like at the beginning of the discussion.
God is not the conclusion of an argument based on naturalistic premises. He is the beginning of thought and the harmony of all truth. He is necessary to every other premise, but I don’t see how that can “prove” his existence. He is simply Necessary: to thought, to ethics, to beauty, to society, to physics, to marriage, to education.
Bad Theory and the Practice of College Composition
RV Young on changes in Freshman composition over the past 40 years.
HT Martin at Vital Remnants
Two Kinds of Freedom
Human history and the human psyche reveal two conditions that we describe using the word freedom. They are, however, very different conditions.
The first is what I will call, borrowing the word from Kierkegaard, “aesthetic freedom.” This is the freedom of the adolescent and is characterized by the right to avoid making choices.
For example, the unmarried man is free to let his eyes and mind wander among the unattached females of the species, the uncommitted quasi-philosopher is free to wander among schools of thought, pretending to “not want to narrow himself to one position,” the undecided music critic is free to say, “I like all kinds of music.”
In each case, what the person is saying is that he is guided by his emotions or immediate needs, which, in turn are guided by his appetites. He is functioning slightly above the powers of an animal, but, in a way, not very far. Neither his will nor his reason have been decisively engaged.
To summarize, aesthetic freedom is the freedom of the adolescent and is characterized by the absence of willful decisions.
The second kind of freedom, and here again I borrow the word from Kierkegaard, is ethical freedom, and it is characterized by the act of choosing.
Any time I make a choice, I am choosing more than just one of many options. For example, if I choose to go to a football game instead of a drinking party, I haven’t only chosen football over the party. I’ve also chosen a self that would go to a football game instead of the party.
In this sense, because we are created persons with a will, we are continually choosing ourselves in every decision we make.
These choices can lead to ethical slavery, in which our decisions bind us to the appetite we indulge, or ethical freedom, in which our decisions create of us a free person who governs himself and walks the path of wisdom.
Perhaps most significantly, each choice we make can be a choice for the finite or the infinite. The aesthete tries to maintain an infinite variety of choices and in so doing limits his choices to only the finite options.
The ethical person chooses limits and commitments, and in so doing he chooses the infinite, for concrete love is the infinite act of an eternal being. Love gives life to the faculty by which we can love, and that faculty is not earthly, worldly, selfish, cynical.
Indulgence destroys that faculty, thus destroying the soul of the self-indulgent.
Ethical freedom is the act of choosing oneself. Aesthetic freedom is the act of indulging oneself. The former leads to finite but real life. In the act of an infinite choice to love another one is connected to the infinite. The latter is the negation of the self by virtue of the disempowerment of the will and reason.
On the Soul – or Whatever
Do you think a school should teach psychology? I believe it should not, just as I believe that it should not base its teaching techniques on psychology.
That might sound as mad as everything else I write, so I’d better explain. It’s simple, though. Psychology, as approached today, is false, wrong, in error, harmful, etc.
The foundational idea of modern psychology is positivism, happily combined with materialism. Psychologists spend all of their time determining what can be known about humans “scientifically.”
In order for anything to be known scientifically about human beings, humans would have to be subject to the laws of science. To an extent, and in some areas, they are. For example, their bodies need energy to move, are subject to gravity, etc.
However, humans have a will and reason. Neither of these is subject to the laws of science, and the attempt to study humans as though they were is to alter the object studied.
If humans are nothing but appetites, then they can be studied scientifically. Our actions can be controlled through behavioral mechanisms.
But if humans have a will and reason, then to study them scientifically is akin to studying the sun with a sponge and a thermometer, or to study Saturn by climbing on a step-ladder.
Just as the Russian cosmonaut is said to have said something along the lines of “We went out into space and looked around and your god wasn’t there,” so the modern psychologist goes into the human mind with the wrong tools and says, “See, there’s no will there.”
No, if you close your eyes, you won’t be able to see. There’s no getting around that.
So why are private schools, so-called Christian schools, so anxious to ensure they follow the latest discoveries in a field run by Oedipus?
This isn’t a complex issue. The Bible, experience, our conscience, philosophy, ethics, language, literature, music, and the fine arts all tell us about, all show us, a creature made by God that is amazingly different from every other created being and that is morally responsible for all its actions. To teach modern psychology and to implement its so-called discoveries is to cease, while you do so, to believe in your statement of faith.
Let me quote the New International Dictionary of New Testament Theology, Vol. 3, p. 691:
The Old Testament speaks of man: not clinically, with his human attributes all neatly classified, but concretely, i.e. the writers take a man as they find him and assess what he does, his behavior towards his fellow-men and the attitude he displays toward the law of God.
Or perhaps this from a magazine I stumbled across in a bookstore and failed to record the date. The magazine was The Public Interest:
We produce no assessable outcomes. The shaping of a soul is a simply immeasurable event; moreover, it is sometimes not evident until much time has passed.
Why we think and how we can do it better
We think to determine three things: whether something is true, whether something should be done, and whether something commands our appreciation. In other words, we think to know truth, goodness, and beauty.
In each case, a judgment is made. A judgment is embodied in a decision and expressed in a proposition.
When we know the truth, we don’t need to think about it so much as to enjoy it. When we know what is good, we need to act, which will arouse a thousand more questions, few of which will reach the conscious mind. When we know what is beautiful, we need to adore.
Thinking begins when we feel a contradiction. This is because thinking, as we generally experience it, is the quest for harmony, that is, a mind without contradictions. Thus Socrates: “Great is the power of contradiction.” It makes us think.
How then does The Lost Tools of Writing teach thinking? Mainly by pushing the responsibility for making decisions back to the students. Every essay involves making a decision – whether so and so should have done such and such, whether X should do Y, etc.
But if you want to undercut thinking in a hurry, give someone a responsibility without the tools to fulfill it. In my view, this is the cause of over 95% of students’ laziness. Therefore, LTW does not drop the task on the student, telling him to bear a burden that his teachers won’t bother carrying, and then walk away. It provides the tools to make decisions.
First, it provides the topics of invention. These are the categories of thought, without which one cannot possibly think about any issue adequately. It provides practice using these categories (topics) in real world issues, but not issues that concern them directly. They have not yet learned how to think based on principles, so I don’t want them getting emotionally involved in issues they cannot understand yet.
Because thinking takes practice.
It also takes order, and that’s what the canon of arrangement teaches. I’m not sure people generally appreciate how important order is to sound thinking. After all, the object of thought is a harmonious solution to a question, and the only way we can know if our solutions are harmonious (i.e. lacking contradictions) is if we see the parts in relation to each other.
Thought also requires judgment or assessment. The thinker needs to know if the form of his thought is sound, if the proportions and emphases match the reality about which he is thinking, if the more important parts are given their due emphasis.
This tends not to come under the Progressive reduction of thought to “critical thinking” but it is an essential element of clear and honest thinking.
In the canon of Elocution, LTW teaches yet another mode of thinking: the quest for the fitting expression, which requires a subtlety of judgment that cannot be gainsaid.
Here’s the thing: we can only appreciate what we can perceive. What we perceive depends on two things: the thing we are perceiving and the eyes with which we perceive it.
Now by “the eyes with which we perceive it” I do not mean only the eyes of the body, but also what Shakespeare called “the mind’s eye.” The mind’s eye perceives what it perceives as it perceives it because of the concepts it possesses while it perceives it.
When I listen to music, I cannot hear what my good friend John Hodges can hear. He is a composer with a tremendous and informed gift for music. But notice that he has an informed gift. He knows music. As a result, his experience of music is very different than mine.
In fact, he once converted me about a piece of music. When first I saw Les Miserables, I thought of it mostly in political terms and judged it to be sentimental claptrap. But when John explained the musical qualities, how characters had their own tunes, how the story put melodies out in one place, then withdrew them, then reinserted them in other places to tell the story through the music, I came to understand why it is regarded by those who can perceive these things as a masterpiece.
I was informed. My mind’s eye could see better. My appreciation grew.
Even so, modern readers (and that means most of us) struggle to read great poetry, while we can watch movies with incredible complexity. Why? Because since we were very little we have gone to the theatres and learned how to watch movies. We understand the art form without even having to think about it very much.
Poetry is not what it used to be, at least not in the classroom. The conventions are regarded as evil, the forms as tyrannical. Consequently, nobody reads Longfellow anymore.
But LTW is a classical curriculum. If that means anything it means that we respect the conventions. 2500 years of artistry gave us quite a remarkable treasure trove of riches. In elocution, we teach students schemes and tropes so they are capable of appreciating Shakespeare, Chaucer, Milton, and Spenser, and by appreciating their artistry, they can enter into the astounding insights that lie between their paradoxes and dilemmas.
Through LTW students begin or continue to grow toward a perceptive, insightful, and refined mind. Standardized testing and critical thinking become fleas they snap off their shoulders because they are on to important things, like making decisions and acting on them, adoring the beautiful, and knowing truth.
Marks of The Post-Human World
I might need to add one of those “signs of the apocalypse” features to this blog. It would focus on developments and events that demonstrate the rejection of nature and the impact of that rejection on normal people – who become rapidly abnormal living in the vacuum so abhorred by nature.
This would be the first entry: Dating simulation game.
This is only funny in a limited sense.
The Lost Tools of Birthing
Between Geoffrey Chaucer, the author of The Canterbury Tales who died in 1400, and Edmund Spenser, who published The Shepheardes Calender in 1579, you will scan your anthologies of English verse in vain for a renowned poet.
Why did English literature blossom in the 14th century only to enter an aesthetic dark age until Spenser? And why did the late 16th century, the Elizabethan age, experience a flowering that many students of English literature still consider a golden age? How did nearly 200 obscure years disappear in the radiance of Spenser, Sidney, Shakespeare, Marlowe, Donne, and so many great poets, writers, explorers, and scientists?
Grammar and rhetoric.
In 1540, King Henry VIII issued an Executive Order that every school throughout the realm should teach a uniform grammar. In the 1544 version, the following “letter to the reader” explains why he issued his history-altering decree:
“His majesty considering the great encumbrance and confusion of the young and tender wits, by reason of the diversity of grammar rules and teachings (for heretofore every master had his grammar, and every school diverse teachings, and changing of masters and schools did many times utterly dull and undo good wits) hath appointed certain learned men meet for such a purpose, to compile one brief, plain, and uniform grammar, which only (all others set apart) for the more speediness, and less trouble of young wits, his highness hath commanded all schoolmasters and teachers of grammar within this his realm, and other his dominions, to teach their scholars.”
Every English schoolchild in Elizabethan England memorized this famous “Lily’s Grammar.” Even earlier, Dean Colet had re-founded St. Paul’s school in London, where he implemented a curriculum and textbooks written with the assistance of his friend Erasmus. By the time Shakespeare reached the Stratford Grammar School in 1571, the curriculum and methods of St. Paul’s had spread throughout England. Sister Miriam Joseph describes the manner of teaching:
“The method prescribed unremitting exercise in grammar, rhetoric, and logic. Grammar dominated the lower forms, logic and rhetoric the upper. In all forms the order was first to learn precepts, then to employ them as a tool of analysis in reading, and finally to use them as a guide in composition…. The boy must first be grounded in the topics of logic through Cicero’s Topica before he could properly understand the one hundred and thirty-two figures of speech defined and illustrated in Susenbrotus’ Epitome Troporum ac schematum et grammaticorum et rhetoricorum”
The assumption behind this Renaissance curriculum is the same assumption that an athlete or a painter or a dancer makes when he seeks excellence: virtue requires “unremitting exercise,” which is to say, disciplined mastery of the craft.
The Lost Tools of Writing is a shadow of the curriculum Erasmus and Lily established in 16th century England. It is hoped that this shadow, learned by eager students and taught by humble teachers, can plant the seeds of a thousand individual Renaissances.
The Lost Tools of Writing rests on the conviction that our world is populated by geniuses and intelligent people who fail to realize their genius or fulfill their intelligence for lack of disciplined training in the craft of writing. When the insights and epiphanies come, the unprepared mind has no vessel to preserve them.
The more intelligent the student, the more frustrating the experience.
Perhaps it strains the point to insist that writing is a craft with tools that empower the craftsman through practice, that writing produces artifacts that can be objectively assessed for their consistency with the principles of the art, and that the goal of instruction is for the student to attain self-mastery, which is synonymous with freedom.
If American education is going to be reborn, if the United States are going to experience a much-needed rebirth of freedom, it will only occur through a wide-spread commitment to the verbal arts of grammar, logic, and rhetoric.
Could William Faulkner Write?
I don’t like to travel without an interesting compelling time-filling book, and I’m driving up to PA tomorrow in what is still called a car because that is what the people over at Hertz call it – a bright cool air-conditioned chamber with the windows all closed because as a man I realize that hot air prevents coolness from spreading and the open window will let more heat than cool in – so I was glancing over my office qua study bookcase covered with anthologies of great books and poems and individual novels from which life-changing insights broke in random gusts, breaking the backs of cultures on the rack of history and I made the mistake of picking up Faulkner’s Absalom, Absalom. I read the first page and a half and thought, “This demands a response.”
So, even though I have no time for it, and even though I can’t possibly say anything intelligent, I am going to take a few moments and respond to this page and a half.
My first thought, by the time it formed itself into a proposition, sounded something like this: “How does such a book find a publisher?”
It’s not that it doesn’t deserve to be published; it’s just that it breaks every rule in the publisher’s library of rule books. How did the first editor get past the second page? This book, were it handed in to a college professor, would almost certainly have been dismissed as ridiculous.
But the error would have been the professor’s, I guess, because it’s now among the great books in the American canon.
My trouble, and the trouble is mine and it is a vice, is that when I pick up a book to read on my own, I want to know it will be worth my time. I am a distressingly pragmatic reader. I want to take something out of the reading and I want to do it quickly.
So when I read, “From a little after two o’clock until almost sundown of the long still hot weary dead September afternoon they sat in what Miss Coldfield still called the office because her father had called it that — a dim hot airless room with the blinds all closed and fastened for forty-three summers because when she was a girl someone had believed that light and moving air carried heat and that dark was always cooler, and which (as the sun shone fuller and fuller on that side of the house) became latticed with ….” I wonder:
How do I know Faulkner isn’t playing a joke on me?
The thing is, it may be that Faulkner is writing this exactly as it needed to be written given the reality he is embodying in this description. It may be that unless we see all these things interpenetrating each other verbally we can never perceive how they interpenetrated each other in reality. In other words, maybe high school essay prose won’t express the idea Faulkner is trying to express.
So I flip randomly and end up on my head. Then I flip the pages of my book randomly and end up on page 87, where I read this:
“She must have seen Judith now and Judith probably urged her to come out to Sutpen’s Hundred to live, but I believe that this is the reason she did not go, even though she did not know where Bon and Henry were and Judith apparently never thought to tell her.”
And just as I’m about to plunge into despair, he follows that with this:
“Because Judith knew. She may have known for some time; even Ellen may have known. Or perhaps Judith never told her mother either.”
He can write short sentences – but he won’t write in a perfectly linear way, that’s evident. Every phrase seems to be a qualification of the preceding one.
Now, being a child of the age, I prefer to read fast and to get on to the next book, but it’s pretty obvious that if I’m going to read Absalom, Absalom I’m going to have to slow down and think about what I’m reading. I’ll probably even, horror of horrors, have to read it more than once.
Who’s got time for that? There are 54 great books in the great books set and this isn’t even one of them! Plus I have to read Hicks, Plato’s Phaedrus, and The Tempest for the apprenticeship, study Latin, study poetics for LTW development, and read things for next year’s conference – etc. etc.
Who’s got time for a leisurely read?
It reminds me of Emo Philips doing the triathlon. He swims for about five minutes and then thinks, “This is stupid, the bike is getting rusty.”
So who knows, maybe I’ll read Faulkner or maybe I won’t. I know that until I do I can’t be considered educated, but that’s the way the cookie bounces. I blew my chance to get educated when I went to school as a child. Now I just do what I can.
But it does seem to me that the effort would be worth it. For one thing, I would have to read in a manner I’m not accustomed to reading and that’s always a good thing to do. Reading is an almost miraculous activity in that it opens the mind, not only to new ideas, but to new forms of thinking, to new patterns of perception.
I like the standard clear strong manly English sentence with a subject, predicate, direct object. I like the periodic sentence too, where the verb (imitating Latin and German), till the end of the sentence, is withheld. It seems to hold the attention while the reader, anxious to see whether the sentence will heal or wound itself with its ending, poised on a balance beam, waits; and the writer, heels over head, dismounting the same beam, nothing promises.
But Faulkner: what is he doing?
Here’s how it appears to me. He is not writing, or so it seems to me from the two pages I’ve read, about actions or about the world outside. He seems instead to be writing about perceptions, relationships, and recollections all flowing together – not a flow of thought subjectivism, but a dynamic interaction between the world around and the organ of perception.
His form, therefore, while it is not easy, would seem to be essential, as much a part of the story as the words themselves. It will be demanding, as much poetry as prose. But if I ever have the time and if I ever feel like it, I might well read this book. For now, I’m happy with my Spider-Man comic. |
Epistemic Community (Overgaard-2017)
From Encyclopedia of Scientonomy
A definition of Epistemic Community that states "A community that has a collective intentionality to know the world."
This definition of Epistemic Community was formulated by Nicholas Overgaard in 2017.[1]
Acceptance Record
This theory has never been accepted.
Suggestions To Accept
Here are all the modifications where the acceptance of this theory has been suggested:
Modification: Sciento-2017-0014
Community: Scientonomy
Date suggested: 19 May 2017
Summary: Provided that the definition of community is accepted, accept new definitions of epistemic community and non-epistemic community as sub-types of community.
Verdict: Open
Question Answered
Epistemic Community (Overgaard-2017) is an attempt to answer the following question: What is epistemic community? How should it be defined? I.e. how is it different from non-epistemic community?
See Epistemic Community for more details.
This definition attempts to capture what is arguably the key feature of epistemic communities - their collective intentionality to study/know the world. This feature, according to the definition, distinguishes epistemic communities from non-epistemic communities, such as political, economic, or familial communities. To use Overgaard's own example, "it is clear that an orchestra is a community: the various musicians can be said to have a collective intentionality to play a piece of music" and yet its collective intentionality is different from that of knowing the world.[1] (p. 59)
1. Overgaard, Nicholas. (2017) A Taxonomy for the Social Agents of Scientific Change. Scientonomy 1, 55-62. Retrieved from https://www.scientojournal.com/index.php/scientonomy/article/view/28234.
The most common type of harmonica is the diatonic harmonica or Richter harmonica, named after Joseph Richter from Bohemia, a folk musician who developed this tuning system around 1825 (although this is disputed!). These instruments are often generically referred to as Blues Harps. The diatonic harmonica is a single voice instrument and usually has 10 channels, each with one blow and one draw note.
Because these harmonicas are tuned to a single key (ignoring the possibility of playing in second/third/etc positions), most players will require more than one, in order to be able to play a variety of songs with other musicians.
Luckily diatonic harmonicas are relatively inexpensive, compared to their chromatic counterparts (and, indeed, other instruments, such as guitars), so the purchasing of multiple keys does not need to be financially onerous. Many models are available in packs of three or more, offering a significant saving over buying single harmonicas.
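For the curious, the relationship between holes, breath direction, and keys can be sketched in a few lines of Python. The layout below is the widely published Richter tuning for a C harp, and the "second position" (cross harp) rule of thumb, playing a fifth above the harp's labelled key, is one reason players end up buying several keys; the function name here is ours, not any standard API.

```python
# Standard Richter layout for a 10-hole diatonic in C (hole 1 on the left).
RICHTER_C = {
    "blow": ["C4", "E4", "G4", "C5", "E5", "G5", "C6", "E6", "G6", "C7"],
    "draw": ["D4", "G4", "B4", "D5", "F5", "A5", "B5", "D6", "F6", "A6"],
}

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def second_position(harp_key):
    """'Cross harp' key: a perfect fifth (7 semitones) above the harp's key."""
    return NOTES[(NOTES.index(harp_key) + 7) % 12]

# A C harp is the usual choice for a blues jam in G:
assert second_position("C") == "G"
# ...and an A harp covers a song in E:
assert second_position("A") == "E"
```

So a small set of harps in common keys (C, A, D, G) covers most jam-session situations in second position.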
Posted on May 23, 2008 (5768) by Rabbi Label Lam
These are the statutes and the judgments and the teachings (Toros- plural of Torah) that HASHEM gave between Himself and the Children of Israel at Sinai through the hand of Moshe. (Vayikra 26:46)
Toros: One (Torah) Written and one (Torah) Oral. This informs that both were given to Moshe at Sinai. (Rashi)
This is a critical and oft underappreciated nugget of information. Not one Mitzvah in the entire Torah is capable of being carried into action given only the parameters provided in the text. There are almost 30,000 details that comprise phylacteries and 5,000 in the ubiquitous mezuzah with little information to guide to their uniform completion. What’s called “killing”? When does life begin? When does it end? What one person calls “family planning” another may legitimately define as “murder!”
The Torah cries out for explanation. There must, by definition, have been a concomitant corpus of information that accompanied the giving of the laws, and that is what we call the “Oral Torah”. Rabbi Samson Raphael Hirsch uses the analogy that the Written Torah is like the notes to a scientific lecture. Every jot and squiggle has significance. If properly understood, it can awaken the actual lecture, but the notes remain useless to someone who has not heard the lecture from the Master. The Oral Torah, then, holds the substance of the lecture, while the Written Torah is merely its shorthand record. Without an Oral Torah, that book the whole world holds in such high esteem, the Bible, is rendered un-actionable. It becomes a frozen document that cannot be lived. Unfortunately, so many over the ages have become lost due to a failure to appreciate this single point and its significance for our very survival as a people.
When my wife and I were engaged, there was at the party a cousin of hers who has written voluminously about the Holocaust. He himself somehow survived seven concentration camps. One of the Rabbis encouraged him to speak. He claimed to be unprepared and not a good English speaker. He spoke amazingly well.
First he looked out at a room filled with newly observant Jews and wondered aloud, “Where do you people come from?” He then quoted the Talmudic principle, “Torah returns to those who have hosted it.” He explained, “If you are sitting here today then it’s probably because you have some great ancestors who were willing to and did give blood to keep this Torah alive.” He went on to talk about my wife’s and his illustrious family tree.
Then he said that had he known he was going to speak he would have brought with him a document he held in his hands that morning that answered a question that had been nagging him for almost four decades. “We all know Hitler’s “final solution” for European Jewry. What was his global scheme? Where was his plan to eliminate the rest of world Jewry?” He then paraphrased what he had learned from that document. Here is a printed transcript with a partial English translation:
“This document transmits a memorandum dispatched by I.A. Eckhardt from the chief of the German Occupation Power. It is an order dated October 25, 1940 from das Reichssicherheitshauptamt, the central office of the German Security Forces, to the Nazi district governors in occupied Poland, instructing them not to grant exit visas to Ostjuden, Jews from Eastern Europe. The reason behind this order is clearly spelled out: the fear that because of their “orthodoxen Einstellung”, their orthodoxy, these Ostjuden would provide “die Rabbiner und Talmudlehrer”, the Rabbis and the teachers of the Talmud, who would create “die geistige Erneuerung”, the spiritual regeneration, of the Jews in America and throughout the world.”
The Oral Torah is essential for our existence as a people. It is our most vital organ and instrument for survival. Without it we are immediately lost. It makes sense that those who plan our demise understand it very well! DvarTorah, Copyright © 2007 by Rabbi Label Lam
Home / Articles / Adding Images and Text
Adding Images and Text
Chapter Description
In this sample chapter from Adobe XD CC Classroom in a Book (2019 Release), author Brian Wood explains how to bring raster images into, and add text to, your app design.
Masking content
You can easily hide portions of images or shapes (paths) using two different methods of masking in Adobe XD: mask with shape or image fill. Masks are nondestructive, which means that nothing that is hidden by the mask is deleted. In either case, you can adjust the mask, if required, to highlight another portion of the masked content.
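The nondestructive idea is easy to see outside XD as well. The toy sketch below is plain Python with pixels as a 2D grid; none of these names come from XD's API. The source artwork is never modified: the mask only decides which pixels are shown, so "editing the mask" is just supplying a different mask.

```python
# A minimal sketch of nondestructive masking: the source grid is never
# modified, the mask only controls which pixels are *shown*.

def apply_mask(source, mask, hidden=None):
    """Return a view of `source` with pixels hidden wherever mask is False."""
    return [
        [px if shown else hidden for px, shown in zip(src_row, mask_row)]
        for src_row, mask_row in zip(source, mask)
    ]

def rect_mask(w, h, left, top, right, bottom):
    """A rectangular mask, like dragging the Rectangle tool over artwork."""
    return [[left <= x <= right and top <= y <= bottom for x in range(w)]
            for y in range(h)]

art = [["map"] * 6 for _ in range(4)]   # stand-in for the map artwork
mask = rect_mask(6, 4, 1, 1, 4, 2)      # mask reveals only a 4x2 region

visible = apply_mask(art, mask)
assert visible[0][0] is None            # outside the mask: hidden, not deleted
assert visible[1][1] == "map"           # inside the mask: shown
assert art[0][0] == "map"               # the source artwork is untouched
```

Re-cropping later just means calling `apply_mask` with a new mask, which is the point of "nondestructive": nothing hidden is ever thrown away.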
Masking with a shape or path
The first method for masking you will learn is masking with a shape. This method of masking (hiding) portions of artwork or images is similar to masking in a program like Illustrator. The mask is either a closed path (shape) or an open path (like a path in the shape of an “s,” for instance). To mask content, the masking object is on top of the object to be masked. Next, you’ll mask a portion of artwork.
1. Click in the gray pasteboard area to deselect all.
2. In the Layers panel on the left, double-click the artboard icon (artboard_icon.jpg) to the left of the Recording artboard to select it and zoom in to it.
3. Click to select the map illustration artwork so you can see all of it.
4. Select the Rectangle tool (rectangle_tool.jpg) in the toolbar. Starting at the top edge of the artwork on the left edge of the artboard, drag down and to the right corner of the artboard.
5. Press the V key to select the Select tool.
6. With the shape still selected, in the Layers panel, Shift-click the Path and Group objects to select the map artwork behind the shape.
7. Choose Object > Mask With Shape (macOS) or right-click and choose Mask With Shape from the menu that appears (Windows).
With the Layers panel open and the image still selected on the artboard, you’ll see Mask Group 1 in the Layers panel list. The mask shape and the object that is masked are now part of a group.
Editing a mask
When you mask content, you may later want to crop it in a different way, revealing more or less of that content. When you mask with a shape, as you did in the previous section, you can easily edit both the mask and the object masked. Next, you’ll change how the content from the previous section is masked.
1. With the Select tool (sp_selectiontool_lg_n.jpg) selected and the image still selected, double-click the map artwork to enter mask editing mode. The mask (rectangle) will be selected.
Double-clicking a masked object will temporarily show the mask and the masked object (the map artwork, in this instance) in the window. That way, you can edit either the mask or the object that is masked.
2. Click the Different Radius For Each Corner button (different_radius.jpg) in the Property Inspector on the right. Change the first two values to 15, pressing Return or Enter after typing in the second value. Leave the last two values at 0.
The top two corners of the mask are now slightly rounded. If you wanted to edit the mask shape further, you could double-click the edge of the mask and enter Path Editing mode to edit the anchor points.
2. In the Layers panel, click the Mask Group 1 icon (mask-group.jpg) to reveal the content of the mask group, if you don’t already see it. Click the “path” object and then Shift-click the “group” object to select both. To keep them together, you will now group them. Right-click one of the selected objects in the Layers panel list and choose Group to group them.
1. Option+Shift-drag (macOS) or Alt+Shift-drag (Windows) the lower-right handle of the map artwork down a little to make it larger.
You could transform the masked content in different ways, or you could select the shape that is the mask (the rectangle, in this case) and reposition or resize it. You can also copy and paste other content into the mask.
2. Drag the selected artwork into the center of the artboard. Make sure that it fills the mask shape and covers the lower corners of the artboard.
3. Press the Esc key to exit the mask editing mode. The map artwork is once again masked.
1. Press Command+0 (macOS) or Ctrl+0 (Windows) to see everything.
2. Click in a blank area away from the artboards to deselect the masked content.
3. Choose File > Save (macOS) or click the menu icon (menu_icon.jpg) in the upper-left corner of the application window and choose Save (Windows).
Masking with an image fill
Another method for masking is to drag and drop an image into an existing shape or path. The image becomes the fill of the shape. This method of masking is great when adding design content to a low-fidelity wireframe, for instance. Next, you’ll import a new image for a profile picture and mask it with a shape.
1. Double-click the artboard icon (artboard_icon.jpg) to the left of the artboard name “Journal” in the Layers panel to fit the artboard in the document window.
2. Select the Ellipse tool (ellipse_tool.jpg) in the toolbar. Shift-drag on the Journal artboard to create a circle. Release the mouse button and then the Shift key when you see a width and height of approximately 144 in the Property Inspector. As you drag, you’ll notice that the Width and Height values change by 8 because the circle is snapping to the square grid.
3. Go to the Finder (macOS) or File Explorer (Windows), open the Lessons > Lesson04 > images folder, and leave the folder open. Go back to XD. With XD and the folder both showing, find the image named meng.png in the folder, and drag it on top of the circle you drew on the Journal artboard. When the circle is highlighted in blue, release the mouse button to drop the image into the shape.
By dragging an image onto a shape, the image becomes the fill of the shape.
Editing an image fill mask
Dropping an image into a shape so that it becomes the fill of the shape means the image is always centered in the shape. Next, you’ll explore the editing capabilities of this type of mask.
1. With the Select tool (sp_selectiontool_lg_n.jpg) selected, double-click the image to enter Path Edit mode. The image will be selected.
2. Drag a corner of the image to make it larger. Then, drag the image so that more of her face is in the circle.
3. Press the Esc key to stop editing the image within the circle.
4. Deselect the Border option in the Property Inspector to turn it off.
5. With the masked image still selected, Shift-drag a corner of the bounding box to make the image smaller. When Width and Height are 80 in the Property Inspector, release the mouse button and then the key.
The image will remain centered in the shape and resizes proportionally to fill the shape. Unlike images you place, the Lock Aspect option (locked_aspect.jpg) is not selected for masked content by default. That’s why you held the Shift key down when resizing it.
6. Drag the image into position, as you see in the figure.
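The behavior described above — the image stays centered in the shape and scales proportionally so that it always fills it — is what is often called a “cover” fit. As a sketch of the underlying arithmetic (an assumption about the math, not XD’s documented algorithm):

```python
def cover_fit(img_w, img_h, frame_w, frame_h):
    """Scale an image proportionally so it completely fills a frame,
    then center it. Returns (scaled_w, scaled_h, offset_x, offset_y),
    where the offsets locate the image's top-left corner relative to
    the frame; negative offsets mean the image overflows (and is
    cropped by) the frame on that side."""
    # Use the larger of the two ratios so BOTH dimensions cover the frame.
    scale = max(frame_w / img_w, frame_h / img_h)
    scaled_w, scaled_h = img_w * scale, img_h * scale
    offset_x = (frame_w - scaled_w) / 2  # center horizontally
    offset_y = (frame_h - scaled_h) / 2  # center vertically
    return scaled_w, scaled_h, offset_x, offset_y

# A hypothetical 200x100 photo dropped into the 80x80 circle's bounding box:
print(cover_fit(200, 100, 80, 80))  # -> (160.0, 80.0, -40.0, 0.0)
```

Here the wide photo is scaled to 160x80 so its height fills the frame, and the 40-point overflow on each side is hidden by the mask — which is why dragging the image inside the shape, as in the steps above, changes which part shows.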
References to holding another person closely in the arms (whether or not in bed) where there are romantic or sexual implications from context. Note that depictions or descriptions of embracing may indicate a wide variety of interpersonal relationships and cannot be assumed to be romantic/sexual.
The general scope of the work is language used to describe or refer to sexual and excretory acts, either as the primary meaning of the words, as a standard euphemism, or as ad hoc metaphorical or poetic reference. From the context of usage, especially the nature and formality of the text, one can identify hierarchies of offensiveness.
Gonda examines the rather peculiar mid-18th century text The Travels and Adventures of Mademoiselle de Richelieu within the context of cross-dressing narratives and as a lesbian-like narrative (she doesn’t use that specific term), as well as comparing it with its highly abridged knock-off The Entertaining Travels and Surprizing Adventures of Mademoiselle de Leurich.
This chapter looks at evidence regarding lesbian activity that can be found in specific court cases, as well as perceptions of the role of lesbian relations in criminal activities and contexts. The point here is not that lesbians were inherently criminal in early modern Spain (though some official opinions were that one type of deviant behavior was expected to lead to other types), but that the nature of legal records can provide a wealth of detail that is not available for other contexts.
This chapter focuses on the image of “turning” away from right behaviors and objects and toward wrong actions and objects. In both text and image, there is a concept of wrong behavior being “turning in circles” and therefore being unable to follow/enter the desired path or gate. Vocabulary related to this includes: deviation, conversion, translation, orientation.
In Paris, ca. 1200, there was an increased focus on anti-sodomy literature. One writer considered it equivalent to murder because both “interfere with the multiplication of men.” Sodomy also relates to gender categories because non-procreative sex blurs distinctions and suggests androgyny. Androgynous people, according to this position, must pick a binary identity based on the nature of who they find arousing within an imposed heterosexual framework. The focus in this anti-sodomy literature is not generally on gender ambiguity, but specifically on preserving “active” male sexuality.
In this chapter, Faderman explores the types of sexual activity between women that were portrayed in literature written by men. Authors such as Brantôme describe tribadism, with one woman atop another rubbing the genitals together, or the use of a dildo to perform penetrative stimulation.
This article looks at the 1744 novel The Travels and Adventures of Mademoiselle de Richelieu, concerning a cross-dressing lesbian heroine who goes about Europe having adventures. Woodward examines this text in the context of other 18th-century novels with similar themes that veer away from a lesbian resolution. She also considers the problem of the work’s authorship. It purports to be a translation into English by a man of a French original, written by a woman, but there are reasons to doubt several aspects of that framing.
Images of women-loving-women were established enough in 16th century England to appear as a character type that was not so much defined as simply assumed, and therefore was available for reference both explicitly and obliquely. Within this general type, there were clear distinctions made between the motifs of desire between women and sexual acts between women. This chapter explores evidence for this character type in non-dramatic sources that were available to early modern English playwrights and their audiences.
This chapter begins with a look at allegorical images of what appear on the surface to be female same-sex erotic embraces. Images such as "Peace and Justice embracing" on the frontispiece of Saxton's 1579 atlas (in the cartouche above Elizabeth's head), or various paired embracing nudes in paintings representing Justice and Prudence or Faith and Hope raise questions of the public use of female homoeroticism for symbolic purpose.
Interpreting the meaning and context of Greek pottery art is far from straightforward. The modern framing as valuable “fine art” is to a large extent a by-product of the antiquities trade and it must be remembered that these vessels were originally created as a cheap imitation of fine metal utensils and, as such, might reasonably be viewed as “pop culture” works rather than the products of an artistic elite. These views make quite a difference in interpreting the depictions of women and their interrelationships with each other.
The Lesson Summary
(Beacham's Guide to Literature for Young Adults)
Narrated in the voice of Sylvia, a preadolescent black girl, "The Lesson" is her version of a summer day trip organized by Miss Moore, a socially conscious spinster who is determined to teach eight children a lesson about the nature of money and how it is distributed in American society. In order to expose the children to the notion of class differences, Miss Moore, the self-appointed teacher, takes them from the "slums" in New York City to the upscale retail area on Fifth Avenue where they visit F.A.O. Schwarz, a world-famous toy store. Although the lesson Miss Moore attempts to teach is quite serious, the story is infused with sassy humor provided by the various children's honest and irreverent voices.
(Comprehensive Guide to Short Stories, Critical Edition)
Sylvia, who narrates the story, is a young girl living in a poor area of New York City. She and her friends are developing their strategies to cope with life as they know it. She has adopted the pose of a know-it-all who can figure out things for herself, and she tells herself that she resents and has no use for Miss Moore, the college-educated African American woman who frequently serves as a guide and unofficial teacher for the local children.
Miss Moore arranges a trip for Sylvia, Sugar, and six other children to go to the F. A. O. Schwarz toy store at Fifth Avenue and Fifty-seventh Street. Miss Moore knows that this will be a new experience for the children, who have been isolated in their neighborhood, and that they will encounter items they have never seen, items that are far beyond their economic means. She wants the youngsters to learn that there is much more to the world than the slum area they know, and particularly for them to realize that wealth is unfairly and unequally distributed.
The emphasis on the relative value of money begins for Sylvia when Miss Moore gives her a five-dollar bill to pay the taxi fare to the store. Sylvia is told to include a 10 percent tip for the driver and return the change to Miss Moore. Sylvia gives the cab driver the fare of eighty-five cents but decides that she needs money more than he does and keeps not only the tip but the remainder of the money.
At the toy store, the children feel uneasy and out of place. Looking through the window, they are stunned by the products offered and by their high prices. Ronald sees what he recognizes as a microscope, for three hundred dollars, but neither he nor the others know what a microscope is used for or how it might fit their academic education or their future jobs. Rosie spots a chunk of glass with a price tag of $480. None of them knows what it is, even when Miss Moore says it is a paperweight. Only one of the children has a study area at home where she might have papers to scatter, so they do not understand the concept, much less why someone might want, or be able and willing to pay $480 for, a fancy glass paperweight. Another boy interrupts Miss Moore’s explanations when he sees a toy sailboat priced at $1,195. The children cannot imagine who could spend so much money on the boat, especially because they think it would probably break or be stolen when they played with it. Even Sylvia is stunned at the price. She hesitates to go inside the store, feeling ashamed somehow, as though she does not belong here, despite her bravado that she can do anything she wants.
Inside, Sylvia becomes angry at the high prices. She wants to know who are these people who could spend a thousand dollars on toy sailboats and why she and her friends cannot. As Miss Moore takes the youngsters home, she asks them to think of what kind of society it is in which some people can spend more on a toy than others have to spend on food and housing. Sugar responds that it must not be much of a democracy because some people obviously do not have an equal opportunity to earn money. Sylvia feels Sugar has betrayed her by giving Miss Moore the satisfaction of an answer, and she walks away.
Sugar catches up with Sylvia, glad that they kept the rest of the money Miss Moore gave them...
The Italian Island of Sardinia
Non Potho Reposare - a popular Sardinian song performed by one of the most beautiful voices in Italian music, Andrea Parodi of Tazenda
Sardinia (Sardegna in Italian) is the second largest island of Italy. Sicily is the largest.
Sardinia has a very interesting flag. It consists of the Cross of Saint George - la Croce di San Giorgio - on a white background and the heads of four Moors.
The flag is named - I Quattro Mori (The Four Moors).
Sardegna lies in the middle of the Mediterranean Sea, just south of the French island Corsica.
You can see the Corsican coastline from some parts of northern Sardinia.
Below is a satellite picture of Sardinia. The southern part of Corsica can also be seen in the picture.
The narrow channel of sea that separates Sardinia and Corsica is called le Bocche di Bonifacio and it is famous for being extremely rough with very rocky and dangerous parts. Many boats have sunk making the short journey between the two islands.
The French writer Maupassant wrote a story in the 19th century called Une Vendetta, and it opens with a vivid description of the rough waters and coastlines of Bonifacio.
The capital city of Sardinia is Cagliari, in the south. The official language of Sardinia is Italian but most of the island also speaks the Sardinian language called sardo. Il sardo varies according to the area of Sardinia.
In the seaside town of Alghero, in the north-west of Sardinia, the people speak a dialect that belongs just to that town. It is called algherese and it is very similar to the Catalan language of Spain, having originated with a Catalan colony that settled in the area of Alghero hundreds of years ago.
That is why the colours of the Alghero flag are the same as the Spanish flag - red and yellow.
Below is the coat of arms for Alghero where you can see the red and yellow stripes.
The town of Alghero takes its name from the large quantity of seaweed (or algae) that is washed up continually on the coastline.
The people of Alghero are very proud of their town. It is a walled, historical town with a port. Originally, it was just a small fishermen's harbour but now it is one of the largest leisure ports in Italy! (Below)
In the old walled town of Alghero, the streets are all cobbled. In Italian, the old historic town centre is known as il centro storico. There is a large cathedral called Santa Maria and a very beautiful old church called San Francesco. There is a cloister attached to the church of San Francesco and, in the summer, it is used for chamber music.
Below is a photo of a typical, cobbled street in Alghero.
Outside Alghero, in the countryside, there is an area called Valverde. This is a sacred place for the people of Sardinia. In Valverde, there is a little chapel and it is visited by thousands of Sardinians and thousands of visitors from abroad every year.
Inside the chapel, there is a little, terracotta statuette of the Virgin Mary. She stands just 30.5 cm. high. This statuette is called la Madonna di Valverde. Sardinians believe that she performs miracles. Inside the chapel, the walls are decorated with paintings produced by the local people. Each painting depicts the story of a personal miracle.
Below is a photo of the statuette wearing a crown and draped with real jewellery and cloth robes, as she is traditionally seen.
Il Sughero
Sardegna produces lots of cork. This is called il sughero in Italian. Cork trees can be easily recognised because the bark of the trunk is stripped off, making the trunk look as if it has been 'skinned.'
It takes about ten years for the bark of the tree to become ready. It starts to detach itself naturally from the trunk. Workers strip it off and it is collected in tons!
All kinds of items are made from Sardinian cork - picture frames, book covers, goblets, trays, ornaments and bottle tops. The bark regrows and in another ten years or so the trunk can be 'skinned' again!
Il Mare
The sea around the island of Sardegna is very clean and attracts thousands of holiday-makers every summer. It is well-known that boat-owners sail from all over Europe in order to spend the summer on the Sardinian coast.
The north-east coast known as la Costa Smeralda (the Emerald Coast) is the most famous for attracting boats during the summer. It is very expensive to stay there (either in a hotel or afloat on your boat) and many film stars, royal families and other famous people spend their summer holidays in this area. During the months of July and August, you will always see enormous and beautiful private yachts afloat in the waters of la Costa Smeralda.
If you want to look seriously stylish and important in the summer, then keep your boat moored at the Sardinian port of Porto Cervo on la Costa Smeralda. It will cost you thousands of euros every day.
The exact price will depend on the length of your boat.
Il Corallo
Sardinia produces very beautiful jewellery and statuettes made from red coral - il corallo rosso - taken from the local waters. The Sardinian people believe that red coral brings good luck, so it is a popular choice for gifts at christenings and weddings, etc. There is a paler colour of Sardinian coral too, but this costs less and is considered inferior. It is always the red variety that is the first choice for i sardi - the Sardinians.
The north-west coast is the biggest producer of coral in Sardinia and it is for this reason that the area is called la Riviera del Corallo - the Coral Coast. The north-west coastal town of Alghero sells very beautiful coral jewellery in shops throughout its historic town centre - il centro storico.
In December 2018, the town's Christmas tree was even designed to look like red coral!
Il Mirto
The flowering plant known as myrtle grows abundantly throughout Sardinia. Its leaves and berries are used to make a liqueur called mirto. There are two types of the liqueur: red (mirto rosso) and white (mirto bianco).
It is drunk on special occasions or at the end of a large meal.
In the Sardinian mountains, there is a village called Fonni. It is the highest town in Sardinia and it is famous for its amazing three-dimensional paintings on the outside of the buildings. These paintings have been created by the local people and they look very real. It seems that real people are standing in doorways, looking out of windows or going about their daily chores.
These wall paintings are considered arte di strada - street art. They are known as I Murales di Fonni.
Do not be fooled! All the people in the photos below are painted on the exterior walls of buildings.
This style of painting can be seen in other villages in the area of Fonni, too.
Il Pane
Sardinia produces very good bread. One of the most famous types is called il pane carasau. It is really only found in Sardinia! It is wafer-thin and crunchy. It can be sprinkled with salt and olive oil and it keeps very well. It is also very healthy because it is so light. Usually, you buy it in a round pack and, because there are so many wafer-thin layers, it seems to last for ages!
Originally, it was prepared for shepherds who had to stay away from home for long periods of time, travelling with their flocks. They required bread that was light to carry and that would last for a long time.
Below is a photo of il pane carasau.
Il Formaggio
Il Pecorino Sardo is a traditional Sardinian cheese made from sheep's milk. This is the same cheese that is used to make another type of famous Sardinian cheese called casu marzu.
Casu marzu is the Sardinian way to say formaggio marcio or 'rotten cheese.' It is considered to be the most dangerous cheese in the world! Continue reading if you want to know more about it ......
Casu marzu is traditionally made by shepherds (pastori) using sheep's milk. It is prohibited from being sold in the European Union because of hygiene regulations and health concerns. Nevertheless, it is a traditional food which is still produced by Sardinian shepherds for their own consumption and for family and friends. Why is this cheese so rotten and dangerous? Read on.....
It is made by first producing normal Pecorino (in photo above). The crust of the Pecorino is then cut open and left for a few weeks to attract cheese flies which lay thousands of eggs inside the exposed cheese. Thousands of tiny maggots hatch from the eggs and then live inside the cheese.
The cheese is eaten whilst the maggots are wriggling inside. They also jump around! The cheese becomes very soft and is eaten by scooping it up with a spoon onto bread. Buon appetito!
Un proverbio italiano: Tutti i gusti sono gusti.
This literally means - 'All tastes are tastes.'
(One man's meat is another man's poison.)
( Each to his own.) (There's no accounting for taste.)
Il Bottone Sardo
A traditional piece of Sardinian jewellery is un bottone sardo. Its name means 'a Sardinian button.' It is a gold or silver sphere with a round gem in the middle. There is also a 'half button' (un mezzo bottone) which is a round shape instead of a full sphere.
Below: un bottone sardo and un mezzo bottone sardo
Il bottone sardo can be worn as a brooch, necklace, earrings, a ring or a bracelet. Some Sardinians even choose it as a tattoo! The gem in the centre of the button represents 'an eye' which wards off something called il malocchio, meaning 'the evil eye.'
Sardinians (and all Italians) are very superstitious, and they believe that envious people can cause bad luck through their jealous and nasty thoughts. So, il bottone sardo is worn as an amulet to protect against 'the evil eye.'
Site updated: 9 July 2020
Euroclub Schools Website 2007 - 2020
What The World Owes To The Moody Movement
Sermon preached by A.C. Dixon, D.D. at The Moody Church in 1909.
In the year 1858 there began in Chicago a religious movement which became the greatest revival movement of the century, the abundant fruits of which can be seen today in every part of the world. It centered in D.L. Moody, who resolved in early life that he would let God show to the world what He could do with one man fully surrendered to His will.
The beginning was small. First, a group of ragged children in an abandoned freight car, which grew into a large Sunday school. Then a church with a large membership. Then the Moody Bible Institute, which has trained and sent out into all parts of the Earth fifty-two hundred Christian workers, four hundred and sixty of whom are on the foreign field.
Through the labors of Moody and Sankey and afterward of Torrey and Alexander, two great evangelistic movements, beginning in Chicago, have become world-wide, resulting in the conversion of millions. Growing out of Mr. Moody’s evangelistic work came the Northfield Bible Conference and many other Bible Conferences now blessing the land, the Northfield schools, a Y.M.C.A. building in almost every great city of Christendom and a vast amount of religious books and periodicals.
The secret of the success of this great movement can be found in the Scriptures, and John 3:7 expressed much of it: “Ye must be born again.” D.L. Moody was not a reformer or an educator, though he was in sympathy with reformatory work and Christian education. He believed that regeneration is really at the basis of all true reformation and education. To him, however, the Gospel of Christ was the panacea for all the ills of the Earth. To save a man was better than to reform or educate him. Salvation, he believed, promoted temperance, made pure politics and gave a foundation for education fitting men for Earth and heaven.
Mr. Moody believed that the new birth is a sudden, instantaneous experience, the beginning of a lifetime of growth in Christ. He was fond of saying that Zaccheus was converted somewhere between the limb on the sycamore tree and the ground. Beginning to save a man a hundred years before he was born had no place in his theology, though he was willing enough to admit the influence of heredity. It was his constant purpose, therefore, to bring people to an immediate decision for Christ. In the large meeting which he held before the great Chicago fire he told the people to go home, get down on their knees and give themselves to Christ. Most of them never reached their homes, which that night went up in flame and smoke. And he resolved that he would never again urge people to go home and decide for Christ, but would seek to bring them to a decision then and there.
The Scripture, however, which gives the very heart of the Moody Movement is Mark 1:17, in connection with 1 Corinthians 9:22: “Come ye after Me and I will make you to become fishers of men.” “I am made all things to all men that I might by all means save some.”
D.L. Moody had a passion for souls. His heart was on fire with love for lost sinners and his enthusiasm kindled the fire in the hearts of others. He studied the Bible that he might win souls to Christ. He held Bible Conferences with the single purpose of preparing men and women to be better soul-winners. He invited a Keswick speaker to Northfield because he had learned that in his field of labor there had been several conversions the year following the blessing he had received at the Keswick convention.
D.L. Moody was pre-eminently an evangelist, and the consuming purpose of his life was “by all means to save some.” He could not be happy unless souls were saved. Christian joy he considered spurious unless it was associated with soul-winning. Holiness was a sham, if it did not result in winning souls. A man once told him that he had not sinned in several years, and his reply was: “How many souls have you led to Christ in that time?” The man was silent and Mr. Moody assured him that such holiness was not to his taste, because it was not of the Bible kind.
A third Scripture which explains another feature of the Moody Movement is Ephesians 5:18-19: “Be filled with the Spirit; speaking to yourselves in psalms and hymns and spiritual songs.” The hymnology of the church all down the ages is made up, for the most part, of praise and prayer to God. The chant in the Hebrew temple and the Synagogue was mostly praise and prayer in Scripture language. Such are some of our most popular hymns like “Jesus Lover of My Soul,” “Rock of Ages,” and “Come, Thou Almighty King.” They are full of Gospel truth in the form of praise and prayer to God and they will never wear out. Many of them will be appropriate in heaven. But it remained for the Moody Movement to respond to the spirit of the text in singing directly to the people. It gave to the world the phrase “Gospel Song,” which means a song written for the purpose of carrying the Gospel into the hearts of the hearers.
The fourth Scripture which still further defines the Moody Movement is Matthew 23:10: “One is your Master, even Christ,” in connection with 1 Corinthians 12:5: “There are differences of administration, but the same Lord.”
God used D.L. Moody to unify evangelical Christianity more than any other man of the nineteenth century. Before he went to England the Church of England and the nonconformists were like the Jews and Samaritans, having little if any dealings with each other. Before he left England hundreds of them were in beautiful Christian harmony working together for the salvation of the lost. Mr. Moody used his genius for organization, not in the founding of a new denomination, which he might have done, but in bringing together all denominations for the evangelization of the people. His creed was, like that of the Apostle Paul, “Christ and Him crucified.” And to every one who stood with him under the blood, trusting, loving and worshiping his Saviour and Lord, he gave the hand of fellowship.
As experts continue to monitor weather conditions, SpaceX continues preparing for Saturday's planned historic launch - the first time a private company has attempted to send NASA astronauts into space, and the first launch from American soil since 2011.
After Wednesday's attempt was scrubbed due to lightning, the Associated Press reports that forecasters set the odds of good conditions today at 50/50.
At a target time of 2:22 p.m. CDT, NASA and SpaceX will launch the first commercially-built and operated American rocket and spacecraft carrying astronauts Robert Behnken and Douglas Hurley to the International Space Station during the SpaceX Demo-2 test flight.
Hurley said on Twitter that dealing with delays and cancellations is something astronauts are used to.
"On my first flight STS-127 on Shuttle Endeavour, we scrubbed 5 times over the course of a month for technical and weather challenges. All launch commit criteria is developed way ahead of any attempt. This makes the correct scrub/launch decision easier in the heat of the moment."
The Houston Chronicle's Andrea Leinfelder has more on the mission and why watching the weather is so important for a safe and successful launch.
The world’s oldest known payslip depicts how workers were possibly ‘paid’ in beer
Credit: Trustees of the British Museum.
Mesopotamia, the cradle of civilization, boasts a lasting legacy that relates to many of humanity’s ‘firsts’. One of them pertains to a 5,000-year-old artifact originally salvaged from the city of Uruk (in modern-day Iraq). Inscribed with the pictorial script of cuneiform, this tablet, dating from around 3300 BC, depicts a human head eating from a bowl and drinking from a conical vessel. The bowl represents ‘ration’, while the conical glass alludes to the consumption of beer. Beyond the human visage, the tablet is also marked with scratches that record the quantity of beer assigned to each worker. Simply put, this ancient Mesopotamian artifact is the world’s oldest known payslip, and it hints that a hierarchical system of workers and employers existed even five millennia ago, with the two possibly connected by an exchange of beer.
Interestingly, during this phase of human history, economic reliance among a significant (and connected) populace was just starting to emerge, and it was based on a dual system of so-called ‘gift economy’ and debt. In fact, from a historical perspective, the concept of ‘real’ money possibly made its debut after 3000 BC, more than 300 years after this artifact was etched. So, during the epoch of the inscribed tablet, the Mesopotamian people probably dealt in a system known as commodity money, which entails the exchange of objects that have value in themselves, like salt, gold, silver, tea and even alcohol, as opposed to dedicated coins or metal tokens.
In other words, given the absence of a full-fledged currency system, employers opted for ingenious methods of ‘paying’ their workers. One of them was assigning the much-loved beverage of beer. As Gus O’Donnell, a former Cabinet Secretary and Head of the British Civil Service, told the BBC in 2010 –
What’s amazing for me is that this is a society [in Mesopotamia, circa 3300 BC] where the economy is in its first stages, there is no currency, no money. So how do they get around that? Well, the symbols tell us that they have used beer – beer glorious beer, I think that is absolutely tremendous; there is no liquidity crisis here, they are coming up with a different way of getting around the problem of the absence of a currency and at the same time sorting out how to have a functioning state. As this society develops you can see that this will become more and more important and the ability to keep track, to write things down, which is a crucial element of the modern state – that we know how much money we are spending and we know what we are getting for it – that is starting to emerge.
Now, from the archaeological scope, this 5,000-year-old tablet is just one among a whopping 130,000 written specimens from ancient Mesopotamia stored inside the British Museum. And even beyond Mesopotamia, the concept (and system) of paying workers in beer was also prevalent in ancient Egypt, circa 25th century BC. For example, around 4-5 liters of beer were assigned daily to the laborers working on the Great Pyramid. And intriguingly enough, beyond just the allure of alcohol consumption, some ancient beer variants even had health benefits. One pertinent example is the 2,000-year-old Nubian beer that was laced with tetracycline, an antibiotic. Lastly, it should also be noted that most ancient beer variants were more akin to a starchy gruel (or porridge) that served as an alternative meal for workers, as opposed to just an inebriating drink.
Funerary stele from Amarna, circa 1350 BC.
Source: BBC / Via: ArsTechnica / ScienceAlert
By Andrea Romano
November 06, 2019
It’s no secret that seat belts save lives.
Most people wouldn’t think twice about buckling up in a car. Statistics have shown that seat belts are instrumental in keeping riders safe. And with all of the driving that we do, why wouldn’t we want to make our commutes safer?
For some reason, when it comes to airplanes, the same logic doesn’t seem to apply. While there are plenty of people out there who always buckle up for their entire plane ride, there are a lot of travelers who instantly release the buckle as soon as the seat belt sign is off – regardless of whether they need to get up or not.
Of course, if you do need to walk around the airplane for a quick stretch or to go to the lavatory, unbuckling is naturally required, but so many of us end up returning to our seats without buckling again. And this could be a big problem if anything were to shake or damage the plane.
Honestly, some people don’t need a lot of convincing when it comes to wearing a seat belt. Sometimes it just feels natural while traveling. Others, however, could probably learn a thing or two about how buckling your seat belt is the most important thing you can do on a plane — way more important than packing hand sanitizer or ordering the perfect cocktail.
Airplane Seat Belt Design
You’ve probably already noticed that your airplane seat belt isn’t quite as comprehensive as the one in your car. Moreover, you might have heard that pilots and crew also get shoulder straps in addition to the lap belt. Did you know there is a real reason for the different airplane belt designs?
According to Atlas Obscura, these “lift lever” belts have been around since before airplanes existed, but they became common in airplanes by the 1930s and 1940s. The reason they stuck with the “lift lever” design is not only because they’re cost-effective (the materials are very light and cheap), but they're also made to help you during minor disturbances and events onboard. Sadly, a seat belt is unlikely to save you if the plane actually crashes. “You can survive a car crash in which the car is totaled; your chances of survival in an equivalent plane crash are significantly less rosy,” said Atlas Obscura.
But the simple belts are helpful in situations such as turbulence (mild or even severe), small collisions (on the runway, for instance), or rocking. According to Business Insider in 2013, the Deputy Assistant Administrator for Public Affairs at the Federal Aviation Administration found that 58 U.S. passengers are injured annually due to not wearing seat belts while on airplanes.
The Myths About Seat Belts
Perhaps the biggest reason why people don’t use their seat belts on planes is because they’re “ineffective” in the event of a crash. While this may be true in extreme circumstances, small accidents such as planes colliding with each other while taxiing on the runway can also lead to injury for non-seatbelt wearers.
According to the Telegraph, there are actually quite a few myths people still believe about airplane seat belts, including the notion that they’re only used to identify passengers after a fatal accident.
“That is the stupidest thing I've ever heard,” said Heather Poole, author of Cruising Attitude: Tales of Crashpads, Crew Drama and Crazy Passengers, to the Telegraph. “Passengers switch seats all the time and we're not chasing them down trying to match up names to seat numbers.”
Poole also noted that some airlines, like Southwest Airlines, do not have seat assignments, making this idea completely moot.
Other people have questioned wearing a seat belt in flight due to the belief that it hinders evacuation. After all, if there's a fire in the cabin, you'd want to get out as quickly as possible, right? Fiddling with a seat belt can make things worse, according to people who believe this myth.
In reality, industry experts have discredited this idea that seat belts would be the main problem for passengers trying to make a timely evacuation, according to the Telegraph.
Buckle Up for Turbulence
Turbulence is the main reason passengers should stay buckled up in flight. Turbulence — that rocking, shaking feeling caused by a shift in airflow — is very common on flights. Chances are, you experienced turbulence of some degree on your last flight, and you'll likely feel it again on your next. This is why a seat belt is definitely necessary.
“The reason you must wear a seat belt, flight crew included,” Poole told the Telegraph, “is because you don't want the plane coming down on you.” She explained that while we, as passengers, may feel like we’re lifted up during turbulence, the sensation is actually produced from the airplane dropping.
“It comes down hard and it comes down fast, and that's how passengers get injured - by getting hit on the head by an airplane,” Poole told the Telegraph.
A bout of bad turbulence can lead to injuries, especially if you hit your head on the bulkhead or slam your arm against an armrest. In more extreme circumstances, turbulence has been known to "throw" people full-force into the ceiling of the plane, which can cause concussions, broken bones, or possibly even more serious injuries.
How Pilots Know When to Turn on the Seat Belt Sign
Of course, there are ways to predict when a plane might encounter turbulence, but they're not always foolproof. Pilots can use meteorology maps to avoid thunderstorms, dangerous winds, or even turbulence, according to ATTN.
However, you can’t always know what’s going to happen on a flight. While pilots do their best to turn on the seat belt sign when they see a pocket of turbulence coming, there’s always a chance that it can still come without warning.
Whenever the seat belt sign is on, you should stay seated, buckle up, and not call for the flight attendant (they need to think about their safety, too). However, if you’re staying in your seat and the seat belt sign is off, you should still keep it buckled.
Poole told the Telegraph, “You never know when it's going to happen, and it happens, even when the sign is off. That is what is called clear air turbulence. Turbulence is no joke. People get hurt.”
It's always better to be safe and prepared, so think twice before unbuckling just for the fun of it on your next flight. |
Diabetic Kidney Disease (Nephropathy)
Nephropathy means your kidneys are not working well. The final stage of nephropathy is kidney failure or end-stage renal disease (ESRD). Kidneys filter blood, produce urine, control blood pressure, regulate blood chemicals, stimulate blood cell production, and perform other crucial physiologic functions.
Diabetes, both type 1 and type 2, is the most common cause of kidney disease.
There are five stages of diabetic nephropathy. The final stage is ESRD. Progress from one stage to the next can take many years.
Reducing risk of kidney disease when living with diabetes
Treatment and prevention
Nephrologists (kidney specialists) help people living with diabetes to take measures to protect their kidney function over time and to treat the various complications of kidney disease. Severe kidney disease may result in dialysis or a transplant.
At the UMass Diabetes Center of Excellence, Dr. Matthew Niemi works closely with endocrinologists, nutritionists, and other specialists, to provide comprehensive care for people with diabetic kidney disease. |
Economy: Its Definition & Types, and System
By Tara Shwan
An economy is a system of organizations and institutions that facilitate or play a role in the production and distribution of goods and services in a society.
The Economy of Turkey
December 09, 2016
The economy of Turkey is defined as an emerging market economy by the IMF. Turkey is among the world's developed countries according to the CIA World Factbook. Turkey is also defined by economists and political scientists as one of the world's newly industrialized countries. The country is among the world's leading producers of agricultural products; textiles; motor vehicles, ships and other transportation equipment; construction materials; and consumer electronics and home appliances.
Population: The population of Turkey as of 2016 is 79,897,551, based on the latest UN estimates. This is equivalent to 1.07% of the world's population, and it ranks 19th in the list of countries by population.
Currency: Turkish lira (₺) (TRY)
Trade organizations: G-20 major economies, OECD, EU Customs Union, WTO, ECO, BSEC
GDP: $1.665 trillion (PPP, 2016), $751 billion (Nominal, 2016)
GDP rank: 18th (nominal) / 17th (PPP)
GDP per capita: $21,198 (PPP, 2016), $9,562 (Nominal, 2016)
GDP by sector: Agriculture: 8.1%; industry: 27.7%; services: 64.2% (2015)
Population below poverty line: 16.9% (2010)
Labour force: 29.4 million (2015)
Labour force by occupation:
Agriculture: 25.5%, industry: 26.2%, services: 48.4% (2010)
Unemployment: 9.3% (April 2016)
Ease-of-doing-business rank: 55th (2015)
Exports: $153.6 billion (decreasing; 28th) (2015)
Export goods: apparel, foodstuffs, textiles, metal manufactures, transport equipment.
Main export partners:
Germany 9.3%
United Kingdom 7.3%
Iraq 5.9%
Italy 4.8%
United States 4.5%
France 4.1% (2015)
Imports: $204.3 billion (decreasing; 22nd) (2015)
Import goods: machinery, chemicals, semi-finished goods, fuels, transport equipment.
Main import partners:
China 12%
Germany 10.3%
Russia 9.9%
United States 5.4%
Italy 5.1% (2015)
Revenues: $225 billion (2015), Expenses: $234 billion (2015)
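The per-capita figures above can be sanity-checked against the listed population and GDP totals. The short Python sketch below uses only the numbers from this fact sheet; the computed values land a few percent below the listed $9,562 and $21,198, a gap that is expected because the GDP and population estimates come from different sources and reference dates.

```python
# Sanity-check the per-capita GDP figures using the values listed above.
population = 79_897_551      # UN estimate, 2016
gdp_nominal = 751e9          # USD, nominal, 2016
gdp_ppp = 1.665e12           # USD, PPP, 2016

per_capita_nominal = gdp_nominal / population   # roughly $9,400
per_capita_ppp = gdp_ppp / population           # roughly $20,800

print(f"Nominal GDP per capita: ${per_capita_nominal:,.0f}")
print(f"PPP GDP per capita:     ${per_capita_ppp:,.0f}")
```

Per-capita statistics are usually derived from mid-year population estimates, so small discrepancies like these are normal in country fact sheets.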
Main data source: CIA World Factbook. |
Monday, November 9, 2015
Weekly Tidbit #7 - Reliability, continued
by Paul Uhlig
Collaborative care creates a stable, rich tacit knowledge environment that otherwise doesn't usually exist in health care. This shared environment of trust and learning grows over time if conditions are in place and right for that, and helps make reliable, safe care possible.
This week's Tidbit is the last in a short series on reliability. The main lesson of this Tidbit is the importance of team-level tacit knowledge for achieving reliability. Explicit knowledge is knowing "what." Tacit knowledge is knowing "how." Team-level tacit knowledge means, "knowing how, at the level of the team itself."
In traditional health care, a care "team" may not really be a team at all. Various health professionals on any given day may never have worked together before, and may not work together again. The patient's nurse may never see the doctor; the pharmacist may never see the nurse, the respiratory therapist may never see the social worker, and so forth. Health professionals may know the patient mostly from the perspective of their area of expertise and task. It is not uncommon that people in traditional health care interact only through notes and explicit instructions left in the patient's record (orders).
Viewed from the perspective of each individual, there is a lot of individual tacit knowledge in health care. People have learned how to do their individual jobs well, in highly developed routines. Yet, if viewed from a perspective of the team itself, there is much less tacit knowledge. Team-level knowledge requires consistency and learning to develop, and practice to maintain. Achieving reliability requires connections and integration that bring together disparate experiences, understandings, and goals, so that a composite picture of events emerges that the team as a whole is aware of and can account for together. Traditional health care assumes that team-level awareness and coordination depends on explicit knowledge, and tries to accomplish this by carefully specifying everything in written notes and orders. Collaborative care makes a different assumption: that the team-level coordination needed for reliability depends mostly on tacit knowledge, and that this team-level tacit knowledge arises over time if conditions are in place and right for that.
This distinction, between relying on explicit knowledge for coordination, or building environments where people are able to coordinate their actions almost effortlessly by relying primarily on rich tacit knowledge that has developed within the care team itself, is one of the most important differences between traditional care and collaborative care. Of course, explicit knowledge is important. But, when there is a foundation of rich, team-level tacit knowledge, the explicit knowledge that matters is easy to identify and use by the care team. Without a rich tacit foundation, people may lose sight of what truly matters.
Think of it like this: here, on this hand, are the things we always want to do for every patient. Here, on the other hand, are the things we want to do uniquely, just for this particular patient. Reliability requires doing both of these things well. Having a rich context of team-level tacit knowledge makes routine things truly routine and effortless, so that the unique things - things that actually do require explicit knowledge - can be more easily seen and accomplished.
Envision your care environment. In your mind's eye, consider how your team works together. How does your team-level coordination feel? Think about how well information and coordination flow through your team (or not!), and whether this feels like "riding a bicycle" (effortless and intuitive--tacit knowledge at work!), or like the struggle of learning how to ride a bicycle (explicit knowledge doesn't work very well for activities that depend on tacit knowledge).
Ask yourself, "What if reliability requires rich team-level tacit knowledge - rather than explicit knowledge (notes and orders)? What would it take for team-level tacit knowledge to grow and develop for our team? Is our care environment intentionally designed to make that happen really well?"
Teaching a child how to ride a bike:
1. Remove the pedals (you will reattach them later), then find a grassy field with a gentle downhill of 30 yards or so that then flattens out or goes slightly uphill. Ideally the grass is short enough that it doesn't create too much drag on the wheels, but can still provide a soft landing in case of a fall. A hard-surface learning area can also be used, but it should have only a very slight slope - almost flat.
2. Go about 15 yards up the hill. If necessary, hold the bike while the student gets on. Have him or her put both feet on the ground, then you should be able to let go of the bike and nothing should happen. Praise the learner.
3. Have the child lift his or her feet about an inch off the ground and coast down the hill or scoot along. The objective here is to get a feel for balancing on the bike. Try to resist holding the bike to steady the learner. Because the bike will coast slowly, the cyclist can put his or her feet down if he or she gets scared.
4. Repeat until your student feels comfortable coasting and doesn't put his or her feet down to stop. Throughout the progression there is no need to rush moving on to the next step.
Add pedaling:
1. Reattach the pedals. Now have your student put his or her feet on the pedals and coast down. First just one pedal, then both pedals. After several runs, have him or her begin pedaling as he or she is rolling.
2. Repeat coasting/pedaling until the bicyclist feels comfortable, then move up the hill.
Riding in a straight line:
1. Go to a flat part of the field and practice starting from a standstill, riding in a straight line, stopping, and turning.
1. Starting from a standstill - Start with one pedal pointed at the handlebars (2 o'clock -- the power position). This gives the rider a solid pedal stroke to power the bike and keep it steady until the other foot finds the pedal. Children tend to want to rush and take shortcuts on this, and get off to very wobbly starts. Work to have them develop habits so that they consistently get smooth, steady starts.
2. Riding straight - Look straight ahead. Keep the elbows and knees loose and pedal smooth circles. When a novice rider turns his or her head, their arms and shoulders follow, causing the bike to swerve.
3. Stopping - Apply both brakes at the same time (if the bike has both front and rear brakes). Using just the front brake can launch the rider over the handlebars. Using just the rear brake limits the rider to just 20 or 30 percent of braking power, and the bike is more likely to skid.
Add turning:
1. Turning - Initially, slow down before entering a corner. Turning is a combination of a little leaning and a very little steering. Keep the inside pedal up and look through the turn. As confidence grows let the speed gradually increase.