Fractions – A Piece of Cake? Or is Your Child Struggling With Maths?

Recently I have come across the work of a maths professor at Stanford University named Jo Boaler. A quote from her: "Many people grow up being told they are 'not a math person' or perhaps 'not smart'; they come to believe their potential is limited." Through her research and collaboration with others, she has reached the conclusion that anyone can learn math as long as they apply a growth mindset to the process. This thinking has revolutionized mathematics in my homeschool, and we are making progress that I had not been sure was possible. Through changing my thinking and removing the limiting beliefs around math, we have been able to approach the subject from a different perspective. If you would like to read more, the link to Jo's book Limitless Mind is below.

Yes, I use affiliate links in my posts to products that I believe will help you and your kids. See my affiliate disclaimer for more info. Thanks for supporting my work – you will not pay more for products or services when using my affiliate links.

Fractions in Everyday Life

So getting back to the cake! Is your child struggling to grasp the concept of fractions? There are many real-world experiences that you can use to teach your child about fractions in their everyday life.

First of all, teach your child that mistakes are ok and that mistakes grow our brains. Making a mistake causes the synapses on our neurons to make new connections with each other as we think through the problem and look for different ways to find the solution. However, if we get frustrated, annoyed and angry at ourselves for not getting the problem right in the first place, our brains are not free to look for solutions; they shut down and we just reinforce the limiting belief that we are "not a math person".

Learning fractions can be a really abstract idea, so the more we can show children real-life examples of fractions, the easier it is for them to understand what fractions mean and how they work in real life. For example, the mandarin on the right in the picture above has nine segments. Ask your child how many pieces they can see. Explain that when we are talking about fractions we are looking at parts of a whole. Then show them that 1 segment can be written as the fraction 1/9, because we have one segment out of the nine.

Definition – A fraction is part of a whole.

At home it is actually really easy to find some great examples of fractions. From the fridge to the walls, there are fractions all around us. How often do you use words in conversation that describe fractions? You might say things like "at half-past six we will eat tea", or "we need 2 thirds of a cup of sugar for the cake". The problem is that we rarely extend our vocabulary of fractions beyond this in everyday language unless we consciously try to include it in our conversations. Maths is actually a language, and many children trip up on the words used in maths because they don't know or understand the vocabulary.

When you are first helping your child to see fractions at home, start with halves and quarters. Look everywhere for halves and quarters. Cut cakes into halves and quarters. Find things that are different shapes and explore with your child how they can see half and a quarter of the object. Even everyday things like folding up laundry can be an exploration of fractions. Fold the tea towels in halves and then quarters.
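For parents who enjoy a little programming, Python's built-in fractions module mirrors this "part of a whole" idea and can double-check your kitchen-table examples. A minimal sketch, using only the examples from above:

```python
from fractions import Fraction

# One mandarin segment out of nine is the fraction 1/9.
segment = Fraction(1, 9)
print(segment)                          # 1/9

# Two quarters of a cake make a half: 1/4 + 1/4 = 1/2.
print(Fraction(1, 4) + Fraction(1, 4))  # 1/2

# Fraction() simplifies automatically: a tea towel folded to 2/4 is 1/2.
print(Fraction(2, 4))                   # 1/2
```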
Understanding these fractions will help your child when they are learning to tell the time, as this is a common area where we use the vocabulary of half and quarter. Next to your clock you could put post-it notes with ¼ past, ½ past, and ¼ to, so that your child can see the relationship between the divisions of time and the divisions of a circle. Try asking your child questions to discover the information rather than just telling them the answers. For example, you might say, "How many quarters do we need to make a half, or a whole?"

So once the kids have halves and quarters down, you can introduce more fractions. If the kids ask questions about fractions they see while you are still working on ½ and ¼, then discuss it with them, because their mind is in a place of inquiry and wanting to explore the information. You may have pizza for dinner and your child notices that the pizza is cut into eight pieces. You can then have a conversation at dinner about how the pizza is cut into eighths and how you shared the pieces between each person. Or what about that zucchini slice you made for dinner: as you are cooking, ask your child, "How many pieces will we need for everyone to have an equal part?"

A great way to test whether your child is getting the concept of fractions is to make a lovely pie or slice and ask your child to cut up the food so it divides evenly between family and friends, with the rule that they get the smallest piece. It's amazing how equal the portions are! Yes, fifths are difficult, but they'll be incentivised (my husband learnt this early).

Baking in the kitchen leads to loads of opportunities for learning fractions. There are 1/4, 1/3, 1/2, 2/3 and 3/4 measures in spoons and cups. An easy recipe that I have used with my kids is a pancake recipe:

1 cup of flour
1 cup of milk or alternative
1 pinch of salt

Combine the ingredients and then cook on a well-oiled flat-based pan. Flip when bubbles appear on the surface.

I often like to challenge my kids to make the cup of flour with different cup measures. For example, use the 1/3 cup 3 times to make 1 cup of flour. You can also double the recipe, give your child the 2/3 cup measure, and see if they can work out how many they need to make 2 cups of flour. What favourite recipe do you have that you could adapt to help your kids visualise fractions?

Ok, so enough about food, it's making me hungry! Where else are fractions used in real life? Go on a maths walk and see what fractions you can see in your streetscape and environment. Think about objects you can see that repeat themselves, like power poles, fence posts, driveways and houses. You could look at the total number of cars on your street and work out what fraction of the cars are white. Teach your child that first we need to count the total number of cars and write this as the bottom number (the technical name is "denominator") of the fraction, and then we need to count the white cars and write that as the top number (this is called the "numerator"). Can this fraction then be made simpler? Is there a number that both the top and bottom numbers can be divided by? For example, in 3/6 both the top and bottom can be divided by three to make 1/2. If you would like some more ideas for finding fractions all around you, then check out this YouCubed link here.

Where else can we discover fractions? Well, I know that roughly only 1/10 of an iceberg shows above water, but how do I show my child this when I don't live anywhere near ice?
How about watching a lovely nature documentary together and making a game of seeing how long a list of fractions you can record between you? Or what about doing an experiment with some ice cubes to see if they behave the same way as an iceberg? There are loads of kids' science books available now, so check out your local library to find some science activities that you can also use to see fractions.

Hmm, any other parents out there now want to teach their children fractions just as an excuse to make that raspberry ripple cheesecake?! Feel free to leave your comments below this blog post and check out the links below for some products which may help with teaching your child fractions. Check out my Visual Tasks for Exploring Percentage on Teachers Pay Teachers.

Naomi
March 23, 2017 - Juan Montas

In Part 2, Chapter 4, "Ghetto," of City of Rhetoric, Fleming discusses the reasons why Chicago had become a ghetto city by the mid-1990s, owing to high levels of poverty, unemployment, violence, and crime. This ghetto culture led communities in Chicago to isolate themselves, which reduced social gathering. The Cabrini Green families were primarily led by single mothers who did not have jobs. These women had one duty: to raise their children in the best way possible by giving them all of their time and buying necessities such as food and clothing. This left the women barely participating in the outside world. The communities were also isolated within the built environment, which meant that there were many black families with whom there was no contact. "Ironically, within a large, diverse, and highly mobile post-industrial society such as the United States, blacks living in the heart of the ghetto are among the most isolated people on Earth" (Fleming, 88). This segregation prevented the Cabrini Green families from creating a functioning public sphere, given the great social inequality between classes.

In Chicago, the "terrorization" and violence in public spaces made it difficult for people to go out into the streets and talk with neighbors or other civilians. Public spaces such as elevators, lobbies, stairwells, and laundry rooms were the greatest targets of crime. "To be in public in places like this, in other words, is to be at risk for one's life" (Fleming, 89). This made it useless to encourage public discourse, since there were no safe places for civilians. Not only were there no public spaces, but the new "ghetto" that imposed fear and violence created a more isolated community.

At the end of the chapter, Fleming discusses how the isolating atmosphere of the ghetto made the community a quiet place. Due to high crime rates, the citizens of Chicago rarely went out into the streets, instead staying home to protect themselves and their families. A code of silence emerged on its own out of the isolation within the community. Fear was everywhere in the city of Chicago as the new ghettos grew.

Fleming, David. City of Rhetoric: Revitalizing the Public Sphere in Metropolitan America. SUNY Press, 2008.
While archeology tells us that the harp was likely "invented" when early human hunters played the string(s) of their hunting bows, that simple weapon-turned-instrument took on many metamorphoses along the Silk Road. Eventually the harp became popular in Irish society in the 12th century. Due to its technical difficulty, and the young age at which harpers needed to begin their training to reach professional status by adulthood, the harp and the education of harpers were coveted by the Gaelic aristocracy of the era.

By the 19th century, the harp was a metaphor for the poor and downtrodden of Irish society. In songs like "The Minstrel Boy" and "The Harp That Once Through Tara's Halls," the harp is a symbol for a nation that has fallen into financial and political strife. Around the same time, the RIC (the Royal Irish Constabulary) embellished their caps and uniforms with the Celtic harp and used it as a symbol of solidarity and national pride. Presently, the Celtic harp appears on Ireland's national coinage and on the government of Ireland's national seal.

While many Americans will be celebrating St. Patrick's Day this March 17 (if not sooner), I thought I would share just a little bit of the history of my favorite instrument.
Mobile phones must be able to connect with other devices and become a 'remote control for life' to satisfy demanding under-18s, who represent the next generation of consumers, according to a new research report.

International consumer research specialist Intersperience asked 1,000 young people in the UK between the ages of eight and 18 for their views on mobiles, how they use them and how they would like to use them in future. The findings, released today, Feb. 20, are part of a comprehensive 'Digital Futures' study of how under-18s engage with technology, particularly with internet-enabled devices.

The research found that 42% of under-18s want to use a mobile to control other devices in future, compared to just 24% of adults. Children are also keener than adults on embracing new developments such as using a smartphone to buy things, with 33% expecting to use a mobile for payment and purchase and 25% expecting to use their phone as a 'mobile wallet' – significantly higher than for adults. Under-18s already use mobiles for more functions than adults, particularly for games, photos, music and making videos. Children and teens are also four times more comfortable than their parents using their phones to store personal information.

In future, young people want interconnectivity between mobiles and other digital devices, and even household services such as cable or satellite TV or utility supplies. Eight- to 11-year-olds have the highest expectations of the mobiles of the future, with a strong emphasis on interconnectivity with entertainment functions such as gaming and movies.

Intersperience chief executive Paul Hudson said: "For under-18s the future is uncompromisingly mobile. They have a vision of a powerful multi-functional mobile which can connect with and control an array of other devices and services from a Sky+ box to home heating or lighting systems and functions we haven't even thought of yet."

The findings imply that software developers, phone companies and service providers in general need to accelerate efforts in this area if they are to satisfy a generation of consumers for whom greater interconnectivity, versatility and functionality are important factors.

Mobiles are reducing PC usage by children – twice as many under-18s said they would choose a phone over a PC for music downloads, emails or research, while an even higher ratio would rather make purchases via mobile than via a PC. In terms of affordability, the study found that smartphones are now within the reach of the current generation of teenagers, with parents generally willing to pay around GBP 20 a month to cover their children's mobile bills.

It also revealed significant 'pester power' among children, who put pressure on parents to buy them a mobile from as young as eight. Parents frequently buy children an iPod Touch as a compromise before caving in to pressure to buy a mobile, and the majority regard 11 as an appropriate age for a child to receive their first mobile.

Intersperience conducted a survey among 1,000 young people in the UK between the ages of eight and 18 on how they use the internet and internet-enabled devices. Participants mirrored the general UK population in terms of social class; of the total group, 35% were aged between eight and 11, 37% were aged 12 to 14, and the remainder were aged 15 to 17.
In addition, the team carried out qualitative research among 15 families with children aged from two to 18, which included participation in family tasks such as video diaries, communication logs and mood diaries. Researchers also carried out 23 in-depth family interviews, including 11 face-to-face interviews with under-18s. Field research was carried out between July and August 2011.
April is Alcohol Awareness Month. It is a great time to start the conversation, and in many cases it is never too early. Here are ten questions frequently asked by parents of kids, tweens and teens:

1) At what age would you suggest parents start talking to kids about alcohol? Should parents bring it up independently, or wait for their children to ask before broaching the topic?

As with any sensitive and serious subject, parents should start discussions as soon as they believe their child is mature enough to understand the topic. It can start by asking children their thoughts on alcohol; listen to them carefully and remember never to criticize. Start the discussion at their level and start learning from each other. Education is the key to prevention and can help your child to better understand the risks and dangers of alcohol from an early age. Waiting for a crisis to happen, such as living with an alcoholic or having an issue with a family member who has a drinking problem, is not the time to start talking to the child. In that type of situation, the subject should be approached as early as the child can possibly understand alcohol and substance use.

2) If you've had bad experiences with alcohol in the past (i.e. you or a friend/family member has battled alcoholism or similar issues), should you be open about them with your kid? If so, when is the right age for kids to hear this information? How open should you be?

This is a very tricky question. On one hand we value honesty; however, when a teenager throws your past back at you after deciding to experiment, and it goes too far, you realize you may want to pick and choose which stories from your past to share. If you have a family member who has battled addiction, alcoholism or similar issues, there is nothing like firsthand experience (especially from people related to them) to help kids understand how harmful, and in some cases deadly, this disease can be. I think it is very important that your teenager knows these stories and how they relate to them, especially as they go into middle school and high school and start feeling peer pressure from others to experiment with different substances.

3) Are there any websites or books that you'd recommend having parents read or showing kids (at any age)? Are certain types of information better for each age group (i.e. maybe children respond better to broad themes and videos, tweens respond well to anecdotes and stories, and teens respond better to hard facts about drinking and health)?

Ask Listen Learn: a fantastic interactive and educational website created by The Century Council to combat underage drinking. This site is full of facts, resources, videos, downloads and games, as well as links that offer further information. It is targeted at all ages, from younger kids to teens.

The Cool Spot: another great website for tweens and teens. It offers information on alcohol and on helping teens and young teens resist peer pressure.

Smashed: Story of a Drunken Girlhood by Koren Zailckas: an excellent true story, and a New York Times best seller. Eye-opening and utterly gripping, Koren Zailckas's story is that of thousands of girls like her who are not alcoholics—yet—but who routinely use booze as a shortcut to courage and a stand-in for good judgment. This book is more for teenagers and parents.
4) Do you think that schools and/or the media do a good job of warning kids about the dangers of alcohol consumption, or do they receive mixed messages about drinking? How might you incorporate your thoughts about this into a conversation with your child?

Schools and teachers do what they are paid to do, and in most cases, especially the dedicated ones, will go above their duty and do more. However, it is the parent's responsibility to keep talking to their child about the risks and dangers of alcohol, as well as the peer pressure they may face in school and in their community. Though many parents are busy today – some working two jobs, many of them single parents – there are few excuses not to take the time to talk to your kids about these subjects. Whether it is internet safety, substance abuse, safe sex, or simply homework, parenting is your priority. I am not saying this is easy; I know for a fact it isn't. I was a single parent with two teenagers, and it was very hard. I think today is even more challenging, since there are more obstacles to contend with than there were even a decade ago. The good news is that the most recent study by The Century Council says that 83% of youth cite parents as the leading influence in their decisions not to drink alcohol. In other words, our kids are listening, and parents are doing their job!

5) How often should you talk to kids about alcohol, and does it vary by age? (i.e. less frequently for younger children, more frequently for tweens, and most frequently for teenagers?)

As frequently as you have an opportunity. If there is a reason for it – if there is a conversation about it – expand on it; don't run from it. This goes for both tweens and teens. As far as little children are concerned, again, it depends on their maturity and on your family dynamics.

6) If you drink yourself, is it ever a good idea to allow kids to drink with you (i.e. a glass of wine at dinner) to de-stigmatize alcohol and help them be responsible? Or is it instead better to forbid them from consuming alcohol altogether until they are 21?

Alcohol is illegal for underage drinkers. However, there are some who believe that a sip of alcohol isn't a big deal. I believe this is a personal decision, but if alcoholism runs in your family, it is something I would caution you about. The other side is that some people believe it would keep kids from trying alcohol at a friend's house, where they could get into trouble such as drinking and driving. I think this comes back to being a personal choice for your family. It goes back to talking to your teen – communication. Keep the lines open!

7) If you suspect your child's friends are drinking or pressuring him/her to drink, should you stop allowing your child to hang out with them?

Communication. Talk to your child about these friends. Find out what is going on, and help your child see that maybe the choices he/she is making are not in their best interest. It is better if your teen comes to the conclusion not to hang out with these friends rather than being told not to by a parent.

8) Should the discussion be different for a daughter versus a son? How might you talk to the different sexes differently about alcohol (i.e. maybe you'd warn girls more about not having people slip something in their drinks at parties, while you'd warn boys more about alcohol and hazing/pranks)?

I don't want parents to get confused about gender and alcoholism. It doesn't discriminate.
A girl or a boy can be slipped a drug in their drink at a party, just as a girl or a boy can be coerced into participating in a mean prank or hazing. So whether you have a son or a daughter, you need to speak with them about the risks of leaving any drink unattended and coming back to it. Keep in mind that it doesn't have to be an alcoholic beverage for someone to slip a powdery substance into it (in other words, even a soda can be spiked). The important thing is that they understand these things can happen, and can happen to them.

9) What should you do if you suspect your teenager is drinking against your advice?

Communication. I know it is easier said than done (and I sound like a broken record), however it is the best tool we have and the most effective. Talking with a teenager is difficult, but we have to continue to break down those walls until they talk to us and tell us why they are turning to alcohol. If you aren't able to get through, please don't be ashamed or embarrassed; you are not alone. Again, the teen years are the most trying times. Reach out to an adolescent therapist or counselor. Hopefully your teen will agree to go. If not, maybe you have a family member or good friend your teen will confide in. It is so important to get your teen to talk about why he/she is drinking. Don't give up – whether it is a guidance counselor, a sports coach, or someone else he/she is willing to open up to. Parents can't allow this to escalate while telling themselves it is only a phase. Maybe it is – but maybe it isn't. Be proactive. Don't wait for it to reach the level of addiction. Don't be a parent in denial. There is help, and you don't have to be ashamed to ask for it. Many typical teens end up as addicts – don't let your teenager be one of them.

10) Could you offer one specific tip for each age group (elementary school, tween/middle school, and high school) that I may have missed or that people might not think of?

For all ages, parents need to realize how important it is to be a role model. As I mentioned earlier, 83% of children are listening to and influenced by their parents. That is a large number. So continue keeping those lines of communication open – starting early and going into their college years!

Join me on Facebook and follow me on Twitter for more information and educational articles on parenting today's teenagers.
I would do anything that I can to avoid shame and embarrassment, and I will gladly duck and dive a situation in which I suspect that I may be judged by someone else and put into the "not good enough" box, because being looked down on is painful. It's as if someone is taking a bunch of negative labels and insults and spearing them onto a dart before throwing them at me and repeatedly puncturing and pinning me down with their harsh opinions.

That is stigma. Assumptions made about someone based on limited knowledge about a characteristic that they may have. See Mike? Mike is HIV-positive. Mike must have been promiscuous and irresponsible, right? See Jerome? He has bipolar disorder, so he must be a nightmare to be in a relationship with, because he must be crazy. See Steve? He's obese, so he must be lazy; Terrence is a ginger, so he must be temperamental; and Dumi is Xhosa, so he must be quick to anger. The list is endless. But just how true are these assumptions and beliefs? Mostly, they are not true at all.

Stigma is when you jump to conclusions about somebody based on only one thing that you know about them, and it's very dangerous, because most people are like me: most people will do whatever they can to avoid this kind of judgment and discrimination. Mike may not even know that he is HIV-positive, because he is afraid to find out in case you also find out and judge him. Or Mike may know that he is HIV-positive but not take his ARVs regularly, because he is scared that you might see him collecting them at the clinic. Mike doesn't want you to think less of him. Mike could unwittingly infect someone else because he's afraid of getting tested, and he may also get sick and eventually die because he is that scared of your judgment. He wouldn't be the first.

Like Mike, Jerome could also benefit from taking medication. With bipolar disorder, taking mood stabilisers and an antidepressant would subject him to far fewer suicidal thoughts and debilitating bouts of depression, but Jerome also doesn't want you to put him in a box and judge him. Jerome is scared that if word gets out about his illness, nobody will want to work with him, love him or spend time with him, so Jerome stays at home and hopes that he's not sick. His suicide months later will come as a shock to us all.

There is an American pastor, currently planning to visit South Africa in September, who is causing an uproar. He preaches that homosexuals are all paedophiles and has been seen celebrating the Orlando massacre on social media, as he believes that all homosexuals should be put to death. This man is discriminating and perpetuating a terrible stigma that most of us know is not the truth. Instinctively, we know how dangerous this stigma could be if he is allowed to spread it, and this is why hundreds of LGBTQIA+ people are picketing and doing what they can to ensure that this man does not gain access to the country. We know that if enough people believe the filth and vitriol that this man wants to spread, it could be catastrophic for everyone in the LGBTQIA+ community.

So what are we doing to stop the stigma that we ourselves spout? If Mike and Jerome do die because they were afraid to seek treatment, they will die because of stigma. When we gossip about someone because of a characteristic that they have, we are perpetuating stigma.
If we ostracise, patronise or discriminate against someone because of a trait that they have, without doing further investigation, we are bringing more stigma into the world, and the more stigma there is in the world, the more likely it is that we will be the next ones to suffer from its effects.

Educate yourself about HIV, TB, STIs and mental illnesses, and a lot of ground will already be won in the battle against stigma. Find out more before you judge. Mike, Jerome and others suffering from various illnesses, as well as people who just happen to have certain traits, are still valuable and powerful members of society, worthy of love and respect and opportunities. Don't speak badly about people who are living with these conditions, and don't be quiet if others speak badly either. Let your words and actions empower and encourage rather than limit and alienate others, and others will afford you the same grace too. Your empathy, compassion and understanding may very well save someone's life. In fact, it may even save your own life a little further down the line.

Bruce J. Little is a contributing writer for Anova Health Institute. These are his views, which may or may not reflect those of Anova Health Institute and affiliates.
A paper about Mussolini.

Benito Mussolini is a leader remembered through time as an evil mind. He was a very open ally of Nazi Germany, and he established an ineffective dictatorial government while continuing to criticize the existing democratic governments. Mussolini believed that he alone knew what the people of Italy wanted and what was best for them. However, he would soon receive a rude awakening to the fact that he did not know everything. Mussolini established himself as the leader of a dictatorial government. Dictatorships are one-party political systems that are ruled…
The Udaloy class, Russian designations Project 1155 Fregat and Project 11551 Fregat-M (Russian: Фрегат, 'Fregat' meaning frigate), is a series of anti-submarine guided missile destroyers built for the Soviet Navy, seven of which are currently in service with the Russian Navy. Twelve ships were built between 1980 and 1991, while a thirteenth ship, built to a modified design known as the Udaloy II class, followed in 1999. They complement the Sovremennyy-class destroyers in anti-aircraft and anti-surface warfare operations.

Class overview (pictured: Admiral Vinogradov underway):
- Preceded by: Sovremenny class
- Succeeded by: Lider class
- Completed: 13 (12 Udaloy I, 1 Udaloy II)
- Type: Guided missile destroyer
- Length: 163 m (535 ft)
- Beam: 19.3 m (63 ft)
- Draught: 6.2 m (20 ft)
- Propulsion: 2-shaft COGAG; 2 × D090 (6.7 MW) and 2 × DT59 (16.7 MW) gas turbines; 120,000 hp (89.456 MW)
- Speed: 35 kn (65 km/h; 40 mph)
- Range: 10,500 nmi (19,400 km; 12,100 mi) at 14 kn (26 km/h; 16 mph)
- Aircraft carried: 2 × Ka-27 series helicopters
- Aviation facilities: Helipad and hangar

The Project 1155 design dates to the 1970s, when it was concluded that it was too costly to build large-displacement, multi-role combatants, and the concept of a specialized surface ship was developed by Soviet designers instead. Two different types of warship were laid down, both designed by the Severnoye Design Bureau: the Project 956 destroyer and the Project 1155 large anti-submarine ship. The Udaloy class is generally considered the Soviet equivalent of the American Spruance-class destroyers. There are variations in SAM and air search radar among units of the class. Derived from the Krivak class, and with the design's emphasis on anti-submarine warfare (ASW), these ships were left with limited anti-surface and anti-air capabilities.

In 2015, the Russian Navy announced that five of the eight remaining Project 1155 ships would be refurbished and upgraded by 2022 as part of the Navy modernization program. In addition to overhauling their radio-electronic warfare and life support systems, they will receive modern missile complexes to fire P-800 Oniks and Kalibr cruise missiles. The ships are to have their service lives extended by 30 years until sufficient numbers of Admiral Gorshkov-class frigates are commissioned. Upgrades will include replacing the Rastrub-B Silex missiles with 3S-24 angling launchers fitted with four 3S-34 containers using the 3M-24/SS-N-25 Switchblade anti-ship missile, and fitting two 3S-14-1155 universal VLS with 16 cells for Kalibr land attack, anti-ship, and anti-submarine cruise missiles in place of one of the AK-100 guns.

Following Udaloy's commissioning, designers began developing an upgrade package in 1982 to provide more balanced capabilities with a greater emphasis on anti-shipping. The Project 1155.1 Fregat II class large ASW ship (NATO codename Udaloy II) is roughly the counterpart of the improved Spruance class; only one was originally completed, but in 2006 Admiral Kharlamov was reported to have been upgraded to a similar standard. In April 2010 the Severnaya Verf shipyard announced that the destroyer Vice-Admiral Kulakov, which had been retired in 1990, was being upgraded to Udaloy II standard; the ship resumed patrolling in 2013.
Externally similar to the original Udaloy design, the Udaloy II introduced a new configuration, replacing the SS-N-14 with SS-N-22 "Sunburn" (Moskit) anti-ship missiles, a twin 130 mm gun, UDAV-1 anti-torpedo rockets, and gun/SAM CIWS systems. A standoff ASW capability is retained by firing SS-N-15 missiles from the torpedo tubes. Powered by a modern gas turbine engine, the Udaloy II is equipped with more capable sonars, an integrated air defense fire control system, and a number of digital electronic systems based on state-of-the-art circuitry.

The original MGK-355 Polinom integrated sonar system (with NATO reporting names Horse Jaw and Horse Tail for the hull-mounted and towed portions, respectively) of the Udaloy I ships is replaced by its successor, the newly designed Zvezda M-2 sonar system, which has a range in excess of 100 kilometres (62 mi) in the second convergence zone. The Zvezda sonar system is considered by its designers to be the equivalent in overall performance of the AN/SQS-53 on US destroyers, though much bulkier and heavier than its American counterpart: the length of the hull-mounted portion is nearly 30 meters. The torpedo-approach warning function of the Polinom sonar system is retained and further improved in its successor.

Udaloy I class (Russian type BPK, Large ASW Ship):
- Udaloy (namesake: Bold) – laid down 23 July 1977, launched 5 February 1980, commissioned 31 December 1980. Decommissioned in 1997; scrapped at Murmansk in 2002.
- Vice-Admiral Kulakov (Nikolai Mikhailovich Kulakov) – laid down 4 November 1977, launched 16 May 1980, commissioned 29 December 1981. Modernization completed in 2010; in service with the Northern Fleet.
- Marshal Vasilyevsky (Aleksandr Vasilevsky) – laid down 22 April 1979, launched 29 December 1981, commissioned 8 December 1983. Decommissioned.
- Admiral Zakharov (Mikhail Nikolayevich Zakharov) – laid down 16 October 1981, launched 4 November 1982, commissioned 30 December 1983. Caught fire in 1992 and scrapped.
- Admiral Spiridonov (Emil Nikolayevich Spiridonov) – laid down 11 April 1982, launched 28 April 1984, commissioned 30 December 1984. Decommissioned in 2001; sold for scrap in 2002.
- Admiral Tributs (Vladimir Filippovich Tributs) – laid down 19 April 1980, launched 26 March 1983, commissioned 30 December 1985. Caught fire in 1991 but returned to service; in service with the Pacific Fleet.
- Marshal Shaposhnikov (Boris Mikhailovich Shaposhnikov) – laid down 25 May 1983, launched 27 December 1984, commissioned 30 December 1985. In service with the Pacific Fleet.
- Severomorsk (named for Severomorsk) – laid down 12 June 1984, launched 24 December 1985, commissioned 30 December 1987. In service with the Northern Fleet.
- Admiral Levchenko (Gordey Ivanovich Levchenko) – laid down 27 January 1982, launched 21 February 1985, commissioned 30 September 1988. In service with the Northern Fleet.
- Admiral Vinogradov (Nikolai Ignatevich Vinogradov) – laid down 5 February 1986, launched 4 June 1987, commissioned 30 December 1988. In service with the Pacific Fleet.
- Admiral Kharlamov (Nikolay Mikhaylovich Kharlamov) – laid down 8 July 1986, launched 29 June 1988, commissioned 30 December 1989. In service with the Northern Fleet.
- Admiral Panteleyev (Yuriy Aleksandrovich Panteleyev) – laid down 28 January 1988, launched 7 February 1990, commissioned 19 December 1991. In service with the Pacific Fleet.

Udaloy II class:
- Admiral Chabanenko (Andrey Trofimovich Chabanenko) – laid down 28 February 1989, launched 16 June 1994, commissioned 28 January 1999. Laid up for repairs; planned to return to service by 2019.
- Admiral Basisty (Nikolai Efremovich Basistiy), 1991 – scrapped in 1994.
- Admiral Kucherov (Stepan Grigorievich Kucherov) – scrapped in 1993.
Penn and Nicole Mattison's daughter, Millie, has infantile spasms with hypsarrhythmia, a form of epilepsy. By the time she was four months old she was having upwards of 700 seizures a day. The Mattisons tried numerous medications and diet plans, but Millie didn't improve. After her doctors said they'd tried everything they could, the Mattisons looked to Colorado for an alternative treatment.

Some parents are turning to cannabidiol (CBD) oil, a cannabis extract with little or none of the psychoactive compound THC, to treat their children who have cancer and epilepsy. The oil is currently legal in more than a dozen U.S. states, but the supply is limited. The science also lags the law—dosing standards haven't been set, and the effects of long-term use are unclear. Many doctors believe that more research is needed. In "Cannabis for Kids," a few parents share their experiences navigating the uncertainties of medical marijuana in America as they try to help their children.

"Our hands were tied." Millie's Story (Cannabis for Kids, Part 1) | National Geographic

About National Geographic: National Geographic is the world's premium destination for science, exploration, and adventure. Through their world-class scientists, photographers, journalists, and filmmakers, Nat Geo gets you closer to the stories that matter and past the edge of what's possible. Click here to read more on what scientists are discovering about marijuana online in National Geographic magazine: http://ngm.nationalgeographic.com/201…
Compiled and edited by Charles J. Kappler. Washington : Government Printing Office, 1941. Treaty made and concluded on the 7th day of August one thousand eight hundred and fifty five between Garland Hurt Indian Agent for the Territory of Utah for and in behalf of the President and Senate of the United States of the one part and the Chiefs, head men, and warriors of the Sho-sho-nee Nation of Indians (commonly called Snake Diggers) occupying the northern, and middle portion of the Valley of the Humboldt River of the other part We the Chiefs and head men of the Sho-sho-nee Nation do hereby declare that all former disputes and feelings of hostility between our people and the people of the United States are this day amicably adjusted and settled. We guarantee to the people of the United States perfect safety to life and property at all times when peacefully sojourning in, or traveling through our country. We give the right of way through our country to the people of the United States, that said people may pass and repass without harm to themselves or property. We will treat all persons claiming to be citizens of the United States who may settle in our country as brothers and friends, and not as enemies. We acknowledge the supremacy of the laws of the United States and that all persons who may hereafter commit crimes within the limits of our country shall be accounted answerable to said laws. We will use all diligence when called to aid the officers and people of the United States in arresting and bringing to justice, all persons who may have committed crimes within the limits of our country irrespective of the tribes or nations to which the offenders may belong. And the said Garland Hurt for, and in behalf of the President and Senate of the United States, pledges hereby the friendship and good will of the people of the said States to the Chiefs and people of the said Sho-sho-nee Nation. For, and in consideration of the faithful observance of all the obligations above stipulated on the part of the Chiefs and people of the said Sho-sho-nee Nation of Indians, the President of the United States will give to the Chief and people of said nation, through his proper agent, the sum of three thousand dollars in presents (such as provisions, clothing and farming implements &c) to be delivered to them at some convenient point within the limits of their country, on or before the 30th day of September 1857: Provided however that if any part of the above treaty shall be violated by any of the Chiefs or people of the said Sho-sho-nee Nation the above obligations on the part of the President of the United States shall be void, or held at his discretion until such time as ample atonement shall have been made for such violation: Provided further, that if the President and Senate of the United States shall refuse to ratify this treaty, the same shall be void. In witness whereof the said Garland Hurt and the aforesaid Chiefs and head men have hereunto subscribed their names and affixed the seals. GARLAND HURT [SEAL] NIM-OH-TEE-CAH (his x mark) (Man Eater) [SEAL] SHO-COP-IT-SEE (his x mark) (Old Man) [SEAL] PAN-TOW-GUAN (his x mark) (Diving Mink) [SEAL] TOW-(JUAN-DAVAT-SEE (his x mark) (Young Ground Hog) [SEAL] SHO-COP-IT-SEE JUNIOR (his x mark)[SEAL] POW-WAN-TAH-WAH (his x mark) (Strong Smoker) [SEAL] JAN-OUP-PAH (his x mark) (Chinning Man) [SEAL] INK-AH-BIT (his x mark) (Head Man) [SEAL] KO-TOO-BOT-SEE (his x mark) [SEAL] WOT-SOW-WIT-SEE-MOT-TOW (his x mark) (The four Shians) [SEAL] Signed in presence of A. P. 
Hawes, Interpreter. C. L. CRAIG, VAN EPPS HUGNUIN,
There is no generally agreed definition of what a tax haven is. The term itself is troublesome, because these places offer facilities that go far beyond tax. Loosely speaking, a tax haven provides facilities that enable people or entities to escape (and frequently undermine) the laws, rules and regulations of other jurisdictions elsewhere, using secrecy as a prime tool. Those rules include tax – but also criminal laws, disclosure rules (transparency), financial regulation, inheritance rules, and more.

See also: Treasure Islands, Nicholas Shaxson's definitive book on tax havens, and the Financial Secrecy Index, the world's only bona fide tax haven ranking based on objective criteria.

We don't offer a formal definition of tax haven either, but we think that two words – 'escape' and 'elsewhere' – are the keys to understanding the phenomenon. Language is important: the word 'escape' points to the word 'haven' in 'tax haven', and the word 'elsewhere' points to the word 'offshore', another term that we sometimes use when we want to emphasise the 'elsewhere' nature of the phenomenon.

What is a secrecy jurisdiction?

We also sometimes use the term 'secrecy jurisdiction' instead of 'tax haven'. We take this term to mean a similar thing; we use it when we want to emphasise the secrecy aspect. (We do not offer our own strict definition of a secrecy jurisdiction either, though there are useful definitions out there, such as this one.) See a more detailed discussion of this question here.

Different jurisdictions make different offshore offerings. The British Virgin Islands, for example, specialises in incorporating offshore companies. Ireland is a corporate tax haven and a haven for laxity in financial regulation, but not really a secrecy jurisdiction; Switzerland and Luxembourg offer secret banking, corporate tax avoidance and a wide range of other offshore services. The United Kingdom does not itself offer secret banking, but it sells an even wider range of offshore services, including lax financial regulation. And so on.

It is impossible to get accurate estimates of the size of financial assets held in tax havens, because of secrecy, and because nobody agrees on what a tax haven is. Here are two of the best-known estimates.

– The Price of Offshore, Revisited (http://www.taxjustice.net/cms/upload/pdf/Price_of_Offshore_Revisited_120722.pdf). In a 2012 report for the Tax Justice Network, James Henry used three separate methods to estimate between $21 and $32 trillion worth of financial assets in tax havens. In 2016 Henry produced a preliminary update (https://www.foreignaffairs.com/articles/panama/2016-04-12/taxing-tax-havens) raising the estimate to $24-36 trillion.

– Gabriel Zucman. Using a very different, narrower method, Gabriel Zucman estimated in 2015 (http://gabriel-zucman.eu/hidden-wealth/) that about 8 percent of the world's wealth, or $7.6 trillion, is held in tax havens.

Where are the tax havens?

Just as there's no agreed definition of tax havens, there's no definitive list either. We prefer to talk of a continuum, where there are degrees of tax haven-ness across the different dimensions that they offer: secrecy, tax escape and so on. Several international bodies have their own lists of tax havens, which are frequently skewed by political expediency. These lists tend to exclude or downplay large, powerful nations like Tax Haven USA and highlight small, weaker ones. Our own list, the product of years of exhaustive research into financial secrecy, makes no such concessions: it is the Financial Secrecy Index.
(The index includes, for each country, a history of how it became a secrecy jurisdiction, along with an in-depth database report.) Although Switzerland tops our index (as of 2016), we argue that Britain is the single most important player in the offshore system of tax havens, because of its control and support of a wide network of part-British territories (such as the Cayman Islands or Jersey) which are major players in the system. Read more about Britain's role here.
Electronics Recycling vs. Disposal

Of the 2.25 million tons of TVs, cell phones and computer products ready for end-of-life (EOL) management, 18% (414,000 tons) was collected for recycling and 82% (1.84 million tons) was disposed of, primarily in landfills. From 1999 through 2005, the recycling rate was relatively constant at about 15%. During these years, the amount of electronics recycled increased, but the percentage did not, because the amount of electronics sent for end-of-life management increased each year as well. For 2006-2007, the recycling rate increased to 18%, possibly because several states had started mandatory collection and recycling programs for electronics.

According to the EPA: In 2007 the United States recycled 72 million metric tons of metals, or about 52 percent of the metal supply. The United States exported 23.1 Mt of scrap and imported 5.4 Mt of scrap metal.

The United States recovered 51.8 million tons of paper in 2008. Of this total, 57.4% was consumed in the U.S. 20.1 million tons of recovered paper were exported in 2008, including 2.8 million tons of printed news, 6.3 million tons of corrugated cardboard, 7.2 million tons of mixed paper, 600,000 tons of high-grade paper, 1.3 million tons of pulp substitutes, and 1.7 million tons of other forms of recovered paper.

Plastic Bottle Pounds Collected for Recycling in the United States
- The total pounds of plastic bottles recycled reached a record high of 2,410 million pounds.
- The total plastic bottle recycling rate was 27.0%, up from 24.4% in 2007.
- The total pounds of plastic bottles collected increased by 75 million pounds for 2008 over 2007.
- The annual increase in pounds of plastic bottles recycled was 3.2%.
- The 19-year compounded annual growth rate for plastic bottle recycling is 9%.
- PET bottles collected increased by 55 million pounds.
- HDPE bottles collected rose by 16.1 million pounds to 936.7 million pounds, reflecting vigorous collection in the first three quarters of the year.
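These headline rates are simple ratios, so they are easy to sanity-check. A quick back-of-the-envelope sketch in Python; the implied bottle-supply total is derived from the quoted figures, not a number reported by the source:

```python
# Electronics: 414,000 tons collected out of 2.25 million tons ready for EOL.
collected_tons = 414_000
ready_for_eol_tons = 2_250_000
print(f"Electronics recycling rate: {collected_tons / ready_for_eol_tons:.1%}")  # ~18.4%

# Plastic bottles: 2,410 million lb collected at a 27.0% recycling rate
# implies the total bottle supply (in million lb) was roughly:
collected_mlb = 2410
print(f"Implied total bottle supply: {collected_mlb / 0.27:,.0f} million lb")    # ~8,926
```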
Introduction to Light Emitting Diodes

The past few decades have brought a continuing and rapidly evolving sequence of technological revolutions, particularly in the digital arena, which has dramatically changed many aspects of our daily lives. The developing race among manufacturers of light emitting diodes (LEDs) promises to produce, literally, the most visible and far-reaching transition to date. Recent advances in the design and manufacture of these miniature semiconductor devices may result in the obsolescence of the common light bulb, perhaps the most ubiquitous device utilized by modern society.

The incandescent lamp is the best known of Thomas Edison's major inventions, and the only one to have persisted in use (and in nearly its original form) to the present day, now more than a century after its introduction. The phonograph, tickertape, and mimeograph machines have been replaced by digital technologies in the last few decades, and recently, full-spectrum light emitting diode devices have become widespread and could force incandescent and fluorescent lamps into extinction. While some applications of LED technology may be as straightforward as replacing one light bulb with another, far more visionary changes may involve dramatic new mechanisms for utilizing light. As a result of the predicted evolution, walls, ceilings, or even entire buildings could become the targets for specialized lighting scenarios, and interior design changes might be accomplished through illumination effects rather than by repainting or refurnishing. At the very least, a widespread change from incandescent to LED illumination would result in enormous energy savings.

Although light emitting diodes are in operation all around us, in videocassette recorders, clock radios, and microwave ovens, for example, their use has been limited mainly to display functions on electronic appliances. The tiny red and green indicator lights on computers and other devices are so familiar that the fact that the first LEDs were limited to a dim red output is probably not widely recognized. In fact, even the availability of green-emitting diodes represented a significant developmental step in the technology. In the past 15 years or so, LEDs have become much more powerful and available in a wide spectrum of colors. A breakthrough that enabled fabrication of the first blue LED in the early 1990s, emitting light at the opposite end of the visible light spectrum from red, opened up the possibility of creating virtually any color of light. More important, the discovery made it technically feasible to produce white light from the tiny semiconductor devices. An inexpensive, mass-market version of the white LED is the most sought-after goal of researchers and manufacturers, and is the device most likely to end a hundred-year reliance on inefficient incandescent lamps.

The widespread utilization of diode devices for general lighting is still some years away, but LEDs are beginning to replace incandescent lamps in many applications. There are a number of reasons for replacing conventional incandescent light sources with modern semiconductor alternatives. Light emitting diodes are far more efficient than incandescent bulbs at converting electricity into visible light, they are rugged and compact, and they can often last 100,000 hours in use, or about 100 times longer than incandescent bulbs.
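The "100 times longer" figure lends itself to a quick illustrative calculation. In the sketch below, the lifetimes come from the paragraph above, while the wattages are hypothetical round numbers chosen only to show the shape of the comparison, not values from this article:

```python
# Replacements avoided over one LED lifetime.
led_life_h = 100_000
incandescent_life_h = 1_000           # implied by the ~100x lifetime claim
print(f"Bulb replacements avoided: {led_life_h // incandescent_life_h}")  # 100

# Energy saved for one light point, using assumed (not sourced) wattages.
incandescent_w = 60                   # hypothetical incandescent lamp
led_w = 8                             # hypothetical LED replacement
kwh_saved = (incandescent_w - led_w) * led_life_h / 1000
print(f"Energy saved over 100,000 h: {kwh_saved:,.0f} kWh")               # 5,200 kWh
```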
LEDs are fundamentally monochromatic emitters, and applications requiring high-brightness, single-color lamps are where the current generation of improved devices is finding the greatest number of uses. The use of LEDs is increasing for automotive taillights, turn signals, and side marker lights. As one of the first automotive applications, the high-mount brake light on cars and trucks is a particularly appealing location for incorporating LEDs. Long LED lifespans allow manufacturers more freedom to integrate the brake light into the vehicle design without the necessity of providing for frequent (and easy) replacement, as is required when incandescent bulbs are used.

Approximately 10 percent of the red traffic lights in the United States have now been replaced with LED-based lamps. The higher initial cost of the LEDs can be recovered in as little as one year, due to their higher efficiency in producing red light, which is accomplished without the need for filtering. The LEDs in a red traffic light consume about 10 to 25 watts, compared with 50 to 150 for a red-filtered incandescent light of similar brightness. The longevity of the LEDs is an obvious advantage in reducing expensive maintenance of the signals. Single-color LEDs are also being utilized as runway lights at airports and as warning lights on radio and television transmission towers.

As improvements have been made in manufacturing efficiency and in the ability to produce light emitting diodes with virtually any output color, the primary focus of researchers and industry has become the white light diode. Two primary mechanisms are being employed to produce white light from devices that are fundamentally monochromatic, and both techniques will most likely continue to be utilized for different applications. One method involves mixing different colors of light from multiple LEDs, or from different materials in a single LED, in proportions that result in light that appears white. The second technique relies on using LED emission (commonly non-visible ultraviolet) to provide energy for excitation of another substance, such as a phosphor, which in turn produces white light. Each method has both advantages and disadvantages that are likely to be in constant flux as further developments occur in LED technology.

Fundamentals of Semiconductor Diodes

Details of the fundamental processes underlying the function of light emitting diodes, and the materials utilized in their construction, are presented in the ensuing discussion. The basic mechanism by which LEDs produce light can be summarized, however, by a simple conceptual description. The familiar light bulb relies upon temperature to emit visible light (and significantly more invisible radiation in the form of heat) through a process known as incandescence. In contrast, the light emitting diode employs a form of electroluminescence, which results from the electronic excitation of a semiconductor material. The basic LED consists of a junction between two different semiconductor materials (illustrated in Figure 2), in which an applied voltage produces a current flow, accompanied by the emission of light when charge carriers injected across the junction are recombined. The fundamental element of the LED is a semiconductor chip (similar to an integrated circuit), which is mounted in a reflector cup supported by a lead frame connected to two electrical wires, and then embedded in a solid epoxy lens (see Figure 1).
One of the two semiconductor regions that comprise the junction in the chip is dominated by negative charges (the n-type region; Figure 2), and the other is dominated by positive charges (the p-type region). When a sufficient voltage is applied to the electrical leads, current flows and electrons move across the junction from the n region into the p region, where the negatively charged electrons combine with positive charges. Each combination of charges is associated with an energy level reduction that may release a quantum of electromagnetic energy in the form of a light photon. The frequency, and perceived color, of the emitted photons is characteristic of the semiconductor material, and consequently, different colors are achieved by making changes in the semiconductor composition of the chip.

The functional details of the light emitting diode are based on properties common to semiconductor materials, such as silicon, which have variable conduction characteristics. In order for a solid to conduct electricity, its resistance must be low enough for electrons to move more or less freely throughout the bulk of the material. Semiconductors exhibit electrical resistance values intermediate between those of conductors and insulators, and their behavior can be modeled in terms of the band theory for solids. In a crystalline solid, electrons of the constituent atoms occupy a large number of energy levels that may differ very little either in energy or in quantum number. This wide spectrum of energy levels tends to group together into nearly continuous energy bands, the width and spacing of which differ considerably for different materials and conditions. At progressively higher energy levels, proceeding outward from the nucleus, two distinct energy bands can be defined, which are termed the valence band and the conduction band (Figure 3). The valence band consists of electrons at a higher energy level than the inner electrons, and these have some freedom to interact in pairs to form a type of localized bond among atoms of the solid. At still-higher energy levels, electrons of the conduction band behave similarly to electrons in individual atoms or molecules that have been excited above ground state, with a high degree of freedom to move about within the solid. The difference in energy between the valence and conduction bands is defined as the band gap for a particular material.

In conductors, the valence and conduction bands partially overlap in energy (see Figure 3), so that a portion of the valence electrons always resides in the conduction band. The band gap is essentially zero for these materials, and with part of the valence electrons moving freely into the conduction band, vacancies or holes occur in the valence band. Electrons move, with very little energy input, into holes in the bands of adjacent atoms, and the holes migrate freely in the opposite direction. In contrast to these materials, insulators have fully occupied valence bands and larger band gaps, and the only mechanism by which electrons can move from atom to atom is for a valence electron to be displaced into the conduction band, requiring a large energy expenditure. Semiconductors have band gaps that are small but finite, and at normal temperatures, thermal agitation is sufficient to move some electrons into the conduction band, where they can contribute to electrical conduction.
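The strong dependence of conduction on the size of the band gap can be put in rough numerical terms. To a crude approximation (ignoring density-of-states factors and other material details), the relative likelihood of thermally exciting an electron across the gap scales with the Boltzmann factor exp(-E(bg)/2kT); the band gap values below are typical published figures:

    import math

    K_B = 8.617e-5  # Boltzmann constant, in electron-volts per kelvin

    def thermal_excitation_factor(band_gap_ev, temperature_k=300.0):
        """Rough relative likelihood of exciting an electron across the
        band gap at a given temperature: exp(-Eg / 2kT). Density-of-states
        prefactors are ignored, so only the trend is meaningful."""
        return math.exp(-band_gap_ev / (2.0 * K_B * temperature_k))

    materials = [("germanium", 0.67), ("silicon", 1.12),
                 ("gallium nitride", 3.4), ("diamond (an insulator)", 5.5)]

    for name, gap_ev in materials:
        factor = thermal_excitation_factor(gap_ev)
        print(f"{name:24s} Eg = {gap_ev:.2f} eV  factor ~ {factor:.1e}")

At room temperature the factor falls by over twenty orders of magnitude between germanium and gallium nitride, and is vanishingly small for an insulator such as diamond, which is why semiconductors conduct modestly at normal temperatures while insulators effectively do not.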
Resistance can be reduced by increasing the temperature, but many semiconductor devices are designed in such a manner that the application of a voltage produces the required changes in electron distribution between the valence and conduction bands to enable current flow. Although the band arrangement is similar for all semiconductors, there are large differences in the band gap (and in the distribution of electrons among the bands) at specific temperature conditions.

The element silicon is the simplest intrinsic semiconductor, and is often used as a model for describing the behavior of these materials. In its pure form, silicon does not have sufficient charge carriers, or an appropriate band gap structure, to be useful in light emitting diode construction, but it is widely used to fabricate other semiconductor devices. The conduction characteristics of silicon (and other semiconductors) can be improved through the introduction of small quantities of impurities into the crystal, which serve to provide either additional electrons or vacancies (holes) in the structure. Through this process, referred to as doping, producers of integrated circuits have developed considerable ability to tailor the properties of semiconductors to suit specific applications.

The process of doping to modify the electronic properties of semiconductors is most easily understood by considering the relatively simple silicon crystal structure. Silicon is a Group IV member of the periodic table, having four electrons that may participate in bonding with neighboring atoms in a solid. In pure form, each silicon atom shares electrons with four neighbors, with no deficit or excess of electrons beyond those required in the crystal structure. If a small amount of a Group III element (one having three electrons in its outermost energy level) is added to the silicon structure, an insufficient number of electrons exists to satisfy the bonding requirements. The electron deficiency creates a vacancy, or hole, in the structure, and the resulting positive electrical character classifies the material as p-type. Boron is one of the elements commonly utilized to dope pure silicon to achieve p-type characteristics.

Doping to produce the opposite type of material, having a negative overall charge character (n-type), is accomplished through the addition of Group V elements, such as phosphorus, which have an "extra" electron in their outermost energy level. The resulting semiconductor structure has an excess of available electrons over the number required for covalent silicon bonding, which bestows the ability to act as an electron donor (the characteristic of n-type material).

Although silicon and germanium are commonly employed in semiconductor fabrication, neither material is suitable for light emitting diode construction, because junctions employing these elements produce a significant amount of heat but only a small quantity of infrared or visible light emission. Photon-emitting diode p-n junctions are instead typically based on mixtures of Group III and Group V elements, such as gallium arsenide, gallium arsenide phosphide, and gallium phosphide. Careful control of the relative proportions of these compounds, and of others incorporating aluminum and indium, as well as the addition of dopants such as tellurium and magnesium, enables manufacturers and researchers to produce diodes that emit red, orange, yellow, or green light.
Recently, the use of silicon carbide and gallium nitride has permitted blue-emitting diodes to be introduced, and combining several colors in various proportions provides a mechanism for producing white light. The nature of the materials comprising the p-type and n-type sides of the device junction, and the resulting energy band structure, determine the energy levels that are available during charge recombination in the junction region, and therefore the magnitude of the energy quanta released as photons. As a consequence, the color of light emitted by a particular diode depends upon the structure and composition of the p-n junction.

The fundamental key to manipulating the properties of solid-state electronic devices is the nature of the p-n junction. When dissimilarly doped materials are placed in contact with each other, the flow of current in the region of the junction is different than it is in either of the two materials alone. Current will readily flow in one direction across the junction, but not in the other, constituting the basic diode configuration. This behavior can be understood in terms of the movement of electrons and holes in the two material types and across the junction. The extra free electrons in the n-type material tend to move from the negatively charged area toward a positively charged area, that is, toward the p-type material. In the p-type region, which has vacant electron sites (holes), lattice electrons can jump from hole to hole and tend to move away from the negatively charged area. The result of this migration is that the holes appear to move in the opposite direction, away from the positively charged region and toward the negatively charged area (Figure 4).

Electrons from the n-type region and holes from the p-type region recombine in the vicinity of the junction to form a depletion zone (or layer), in which no charge carriers remain. In the depletion zone, a static charge is established that inhibits any additional electron transfer, and no appreciable charge can flow across the junction unless assisted by an external bias voltage.

In a diode configuration, electrodes on opposite ends of the device enable a voltage to be applied in a manner that can overcome the effect of the depletion region. Connecting the n-type region of the diode to the negative side of an electrical circuit, and the p-type region to the positive side, will cause electrons to move from the n-type material toward the p-type, and holes to move in the opposite direction. With the application of a sufficiently high voltage, the electrons in the depletion region are elevated in energy, dissociate from the holes, and begin moving freely again. Operated with this circuit polarity, referred to as forward biasing of the p-n junction, the depletion zone disappears and charge can move across the diode. Holes are driven to the junction from the p-type material and electrons are driven to the junction from the n-type material. The combination of holes and electrons at the junction allows a continuous current to be maintained across the diode.

If the circuit polarity is reversed with respect to the p-type and n-type regions, electrons and holes will be pulled in opposite directions, with an accompanying widening of the depletion region at the junction. No continuous current flows in a reverse-biased p-n junction, although initially a transient current will flow as the electrons and holes are pulled away from the junction.
Current flow will cease as soon as the growing depletion zone creates a potential that is equal to the applied voltage.

Light Emitting Diode Construction

Manipulation of the interaction between electrons and holes at the p-n junction is fundamental in the design of all semiconductor devices, and for light emitting diodes the primary design goal is the efficient generation of light. Injection of carriers across the p-n junction is accompanied by a drop in electron energy levels from the conduction band to lower orbitals. This process takes place in any diode, but only produces visible light photons in those having specific material compositions. In a standard silicon diode, the energy level difference is relatively small, and only low frequency emission occurs, predominantly in the infrared region of the spectrum. Infrared diodes are useful in many devices, including remote controls, but the design of visible-light emitting diodes requires fabrication with materials exhibiting a wider gap between the conduction band and the orbitals of the valence band.

All semiconductor diodes release some form of light, but most of the energy is absorbed into the diode material itself unless the device is specifically designed to release the photons externally. In addition, to be useful as a light source, a diode must concentrate light emission in a specific direction. Both the composition and construction of the semiconductor chip, and the design of the LED housing, contribute to the nature and efficiency of energy emission from the device.

The basic structure of a light emitting diode consists of the semiconductor material (commonly referred to as a die), a lead frame on which the die is placed, and the encapsulating epoxy that surrounds the assembly (see Figure 1). The LED semiconductor chip is supported in a reflector cup coined into the end of one electrode (the cathode), and, in the typical configuration, the top face of the chip is connected with a gold bonding wire to a second electrode (the anode). Several junction structure designs require two bonding wires, one to each electrode. In addition to the obvious variation in the radiation wavelength of different LEDs, there are variations in shape, size, and radiation pattern. The typical LED semiconductor chip measures approximately 0.25 millimeters square, and the epoxy body ranges from 2 to about 10 millimeters in diameter. Most commonly the body of the LED is round, but rectangular, square, and triangular packages are also produced.

Although the color of light emitted from a semiconductor die is determined by the combination of chip materials and the manner in which they are assembled, certain optical characteristics of the LED can be controlled by other variables in the chip packaging. The beam angle can be narrow or wide (see Figure 5), and is determined by the shape of the reflector cup, the size of the LED chip, the distance from the chip to the top of the epoxy housing or lens, and the geometry of the epoxy lens. The tint of the epoxy lens does not determine the emission color of the LED, but is often used as a convenient indicator of the lamp's color when it is inactive. LEDs intended for applications that require high intensity and no color in the off-state have clear lenses with no tint or diffusion. This type produces the greatest light output, and may be designed to have the narrowest beam, or viewing angle. Non-diffused lenses typically exhibit viewing angles of plus or minus 10 to 12 degrees (Figure 5).
Their intensity allows them to be utilized for backlighting applications, such as the illumination of display panels on electronic devices. For creation of diffused LED lenses, minute glass particles are embedded in the encapsulating epoxy. The diffusion created by the inclusion of the glass spreads the light emitted by the diode, producing a viewing angle of approximately 35 degrees on either side of the central axis. This lens style is commonly employed in applications in which the LED is viewed directly, such as for indicator lamps on equipment panels.

The choice of material systems and fabrication techniques in LED construction is guided by two primary goals: maximization of light generation in the chip material, and efficient extraction of the generated light. In the forward-biased p-n junction, holes are injected across the junction from the p region into the n region, and electrons are injected from the n region into the p region. The equilibrium charge carrier distribution in the material is altered by this injection process, which is referred to as minority-carrier injection. Recombination of minority carriers with majority carriers takes place to reestablish thermal equilibrium, and continued current flow maintains the minority-carrier injection. When the recombination rate equals the injection rate, a steady-state carrier distribution is established.

Minority-carrier recombination can take place in a radiative fashion, with the emission of a photon, but for this to occur the proper conditions must be established for energy and momentum conservation. Meeting these conditions is not an instantaneous process, and a time delay results before radiative recombination of the injected minority carrier can take place. This delay, the minority carrier lifetime, is one of the primary variables that must be considered in LED material design.

Although the radiative recombination process is desirable in LED design, it is not the only recombination mechanism possible in semiconductors. Semiconductor materials cannot be produced without some impurities, structural dislocations, and other crystalline defects, and these can all trap injected minority carriers. Recombinations of this type may or may not produce light photons. Recombinations that do not produce radiation are limited by the rate at which carriers diffuse to suitable defect sites, and are characterized by a nonradiative process lifetime, which can be compared to the radiative process lifetime.

An obvious goal in LED design, given the factors just described, is to maximize the radiative recombination of charge carriers relative to the nonradiative. The relative rates of the two processes determine the fraction of injected charge carriers that recombine radiatively out of the total number injected, which is stated as the internal quantum efficiency of the material system. The choice of materials for LED fabrication relies upon an understanding of semiconductor band structure and the means by which the energy levels can be chosen or manipulated to produce favorable quantum efficiency values. Interestingly, certain groups of III-V compounds have internal quantum efficiencies of nearly 100 percent, while other compounds utilized in semiconductors may have internal quantum efficiencies as low as 1 percent. The radiative lifetime for a particular semiconductor largely determines whether radiative recombinations occur before nonradiative ones.
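The competition between the two recombination channels can be expressed compactly. If the radiative and nonradiative lifetimes are denoted tau(r) and tau(nr), the internal quantum efficiency is the radiative rate divided by the total rate. The lifetimes used below are illustrative values consistent with the ranges discussed in this section, not measurements for any specific material:

    def internal_quantum_efficiency(tau_radiative_s, tau_nonradiative_s):
        """Fraction of injected carriers that recombine radiatively:
        eta = (1/tau_r) / (1/tau_r + 1/tau_nr) = tau_nr / (tau_r + tau_nr)."""
        return tau_nonradiative_s / (tau_radiative_s + tau_nonradiative_s)

    # A direct-gap material with a ~10-nanosecond radiative lifetime and a
    # comparable defect-limited nonradiative lifetime (illustrative values):
    print(internal_quantum_efficiency(10e-9, 10e-9))   # 0.5, or 50 percent

    # An indirect-gap material whose radiative lifetime is on the order of
    # seconds, while defects capture carriers within ~1 microsecond:
    print(internal_quantum_efficiency(1.0, 1e-6))      # ~1e-6, negligible

Shortening the radiative lifetime, or suppressing defect-mediated recombination, drives the efficiency toward unity, which is the design target described above.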
Most semiconductors have a similar, simple valence band structure, with an energy peak situated around a particular crystallographic direction, but with much more variation in the structure of the conduction band. Energy valleys exist in the conduction band, and electrons occupying the lowest-energy valleys are positioned to more easily participate in recombination with minority carriers in the valence band. Semiconductors can be classified as direct or indirect depending upon the relative positioning of the conduction band energy valleys and the energy apex of the valence band in energy/momentum space.

Direct semiconductors have holes and electrons positioned directly adjacent at the same momentum coordinates, so that electrons and holes can recombine relatively easily while maintaining momentum conservation. In an indirect semiconductor, the match between conduction band energy valleys and holes that would allow momentum conservation is not favorable, most of the transitions are forbidden, and the resulting radiative lifetime is long. Silicon and germanium are examples of indirect semiconductors, in which radiative recombination of injected carriers is extremely unlikely. The radiative lifetime in such materials is on the order of seconds, and nearly all injected carriers recombine nonradiatively through defects in the crystal. Direct semiconductors, such as gallium nitride or gallium arsenide, have short radiative lifetimes (approximately 1 to 100 nanoseconds), and materials can be produced with sufficiently low defect density that radiative processes are as likely as nonradiative. For a recombination event to occur in indirect gap materials, an electron must change its momentum before combining with a hole, resulting in a significantly lower probability for a band-to-band transition. The quantum efficiencies exhibited by LEDs constructed of the two types of semiconductor material clearly reflect this fact: gallium nitride LEDs have quantum efficiencies as high as 12 percent, compared to the 0.02 percent typical of silicon carbide LEDs. Figure 6 presents an energy band diagram for direct band gap GaN and indirect band gap SiC that illustrates the nature of the band-to-band energy transition for the two types of material.

The wavelength (and color) of light emitted in a radiative recombination of carriers injected across a p-n junction is determined by the difference in energy between the recombining electron-hole pair of the valence and conduction bands. The approximate energies of the carriers correspond to the upper energy level of the valence band and the lowest energy of the conduction band, due to the tendency of the electrons and holes to equilibrate at these levels. Consequently, the wavelength (λ) of an emitted photon is approximated by the following expression:

λ = hc/E(bg)

where h represents Planck's constant, c is the velocity of light, and E(bg) is the band gap energy. In order to change the wavelength of emitted radiation, the band gap of the semiconducting material utilized to fabricate the LED must be changed. Gallium arsenide is a common diode material, and may be used as an example illustrating the manner in which a semiconductor's band structure can be altered to vary the emission wavelength of the device. Gallium arsenide has a band gap of approximately 1.4 electron-volts, and emits in the infrared at a wavelength of 900 nanometers.
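A convenient numerical form of this relationship uses the product hc, which is approximately 1240 electron-volt-nanometers, so the emission wavelength in nanometers is roughly 1240 divided by the band gap in electron-volts:

    HC_EV_NM = 1240.0  # Planck's constant times the speed of light, in eV*nm

    def emission_wavelength_nm(band_gap_ev):
        """Approximate emission wavelength from lambda = h*c / E(bg)."""
        return HC_EV_NM / band_gap_ev

    print(emission_wavelength_nm(1.4))   # ~886 nm: gallium arsenide, infrared
    print(emission_wavelength_nm(1.9))   # ~653 nm: visible red
    print(emission_wavelength_nm(2.3))   # ~539 nm: green region

The small difference between the calculated value near 886 nanometers and the quoted 900 nanometers for gallium arsenide reflects the approximate nature of both the band gap figure and the formula itself.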
In order to increase the frequency of emission into the visible red region (about 650 nanometers), the band gap must be increased to approximately 1.9 electron-volts. This can be achieved by mixing gallium arsenide with a compatible material having a larger band gap. Gallium phosphide, having a band gap of 2.3 electron-volts, is the most likely candidate for this mixture. LEDs produced with the compound GaAsP (gallium arsenide phosphide) can be tailored to any band gap between 1.4 and 2.3 electron-volts, through adjustment of the ratio of arsenic to phosphorus.

As previously discussed, maximization of light generation in the diode semiconductor material is a primary design goal in LED fabrication. Another requirement is the efficient extraction of the light from the chip. Because of total internal reflection, only a fraction of the light that is generated isotropically within the semiconductor chip can escape to the outside. According to Snell's law, light can travel from a medium of higher refractive index into a medium of lower refractive index only if it intersects the interface between the two media at an angle less than the critical angle for the two media. In a typical cube-shaped light-emitting semiconductor, only about 1 to 2 percent of the generated light escapes through the top surface of the LED (depending upon the specific chip and p-n junction geometry), the remainder being absorbed within the semiconductor materials.

Figure 7 illustrates the escape of light from a layered semiconductor chip of refractive index n(s) into epoxy of lower index n(e). The angle subtended by the escape cone is defined by the critical angle, θ(c), for the two materials. Light rays emerging from the LED at angles less than θ(c) escape into the epoxy with minimal reflection loss (dashed ray lines), while those rays propagating at angles greater than θ(c) undergo total internal reflection at the boundary, and do not escape the chip directly. Because of the curvature of the epoxy dome, most light rays leaving the semiconductor material meet the epoxy/air interface at nearly right angles, and emerge from the housing with little reflection loss.

The proportion of light emitted from an LED chip into the surroundings is dependent upon the number of surfaces through which light can be emitted, and how effectively this occurs at each surface. Nearly all LED structures rely on some form of layered arrangement, in which epitaxial growth processes are utilized to deposit several lattice-matched materials on top of one another to tailor the properties of the chip. A wide variety of structures is employed, with each material system requiring a different layer architecture in order to optimize performance. Most of the LED structural arrangements rely on a secondary growth step to deposit a single-crystal layer on top of a single-crystal bulk-grown substrate material. Such a multilayering approach enables designers to satisfy seemingly contradictory or inconsistent requirements.

A common feature of all of the structural types is that the p-n junction, where the light emission occurs, is almost never located in the bulk-grown substrate crystal. One reason for this is that bulk-grown material generally has a high defect density, which lowers the light generation efficiency. In addition, the most common bulk-grown materials, including gallium arsenide, gallium phosphide, and indium phosphide, do not have the appropriate band gap for the desired emission wavelengths.
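It is instructive to put numbers on the escape-cone geometry described above. The sketch below computes the critical angle and the fraction of isotropically generated light that falls within the escape cone of a single surface; the refractive indices are typical assumed values, and Fresnel reflection and internal absorption are ignored, so the estimate is more optimistic than the 1 to 2 percent figure quoted earlier:

    import math

    def critical_angle_deg(n_inside, n_outside):
        """Critical angle for total internal reflection, from Snell's law."""
        return math.degrees(math.asin(n_outside / n_inside))

    def escape_cone_fraction(n_inside, n_outside):
        """Fraction of isotropic emission inside one surface's escape cone:
        the cone's solid angle over the full sphere, (1 - cos(theta_c)) / 2.
        Fresnel losses and internal absorption are neglected."""
        theta_c = math.asin(n_outside / n_inside)
        return (1.0 - math.cos(theta_c)) / 2.0

    N_CHIP = 3.5  # assumed refractive index, typical of GaAs-family chips

    for medium, n_out in [("air", 1.0), ("epoxy", 1.5)]:
        print(f"chip into {medium}: critical angle "
              f"{critical_angle_deg(N_CHIP, n_out):.1f} degrees, "
              f"escape fraction {escape_cone_fraction(N_CHIP, n_out):.1%}")

Widening the escape cone from roughly 17 degrees (into air) to about 25 degrees (into epoxy) is the quantitative reason that an epoxy dome, with its intermediate refractive index, improves extraction over a bare chip in air.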
Another requirement in many LED applications is a low series resistance, which can be met by appropriate substrate choice, even in cases in which the low doping required in the p-n junction region would not provide adequate conduction. The techniques of epitaxial crystal growth involve deposition of one material on another that is closely matched in atomic lattice constants and thermal expansion coefficient, in order to reduce defects in the layered material. A number of techniques are in use to produce epitaxial layers. These include Liquid Phase Epitaxy (LPE), Vapor Phase Epitaxy (VPE), Metal-Organic Chemical Vapor Deposition (MOCVD), and Molecular Beam Epitaxy (MBE). Each of the growth techniques has advantages in particular materials systems or production environments, and these factors are extensively discussed in the literature.

The details of the various epitaxial structures employed in LED fabrication are not presented here, but are discussed in a number of publications. Generally, however, the most common categories of such structures are grown and diffused homojunctions, and single-confinement or double-confinement heterojunctions. The strategies behind the application of the various layer arrangements are numerous. These include structuring of p and n regions and reflective layers to increase the internal quantum efficiency of the system, graded-composition buffer layers to overcome lattice mismatch between layers, locally varying the energy band gap to accomplish carrier confinement, and lateral constraint of carrier injection to control the light emission area or to collimate the emission. Even though it does not typically contain the p-n junction region, the LED substrate material becomes an integral part of the device's function, and is chosen to be appropriate for deposition of the desired epitaxial layers, as well as for its light transmission and other properties.

As previously stated, the fraction of generated light that is actually emitted from an LED chip is a function of the number of surfaces that effectively transmit light. Most LED chips are categorized as absorbing substrate (AS) devices, in which the substrate material has a narrow band gap and absorbs all emission having energy greater than the band gap. Therefore, light traveling toward the sides or downward is absorbed, and such chips can only emit light through their top surfaces. The transparent substrate (TS) chip is designed to increase light extraction by incorporating a substrate that is transparent to the wavelength of emitted light. In some systems, transparency in the upper epitaxial layers will allow light transmitted toward the side surfaces, within certain angles, to be extracted as well. Hybrid designs, having substrate properties intermediate between AS and TS devices, are also utilized, and significant increases in extraction efficiency can be achieved by employing a graded change in refractive index from the LED chip to air. There remain numerous other absorption mechanisms in the LED structure that reduce emission and are difficult to overcome, such as the front and back contacts on the chip, and crystal defects. However, chips made on transparent, as opposed to absorbing, substrates can exhibit a nearly fivefold improvement in extraction efficiency.

Development of Multiple LED Colors

The first commercial light emitting diode, developed in the 1960s, utilized the primary constituents gallium, arsenic, and phosphorus to produce red light (655-nanometer wavelength).
An additional red light-emitting material, gallium phosphide, was later used to produce diodes emitting 700-nanometer light. The latter version has seen limited application, in spite of high efficiency, due to the low apparent brightness resulting from the relative insensitivity of the human eye in that spectral region. Throughout the 1970s, technological developments enabled additional diode colors to be introduced, and production improvements increased the quality control and reliability of the devices. Changes in the elemental proportions, doping, and substrate materials resulted in the development of gallium-arsenide-phosphide (GaAsP) diodes producing orange and yellow emission, as well as a higher-efficiency red emitter. Green diodes based on GaP chips were also developed.

The introduction and refinement of gallium-aluminum-arsenide (GaAlAs) during the 1980s resulted in rapid growth in the number of applications for light emitting diodes, largely due to an order-of-magnitude improvement in brightness compared to previous devices. This gain in performance was achieved by the use of multilayer heterojunction structures in chip fabrication, and although these GaAlAs diodes are limited to emission in the red (660 nanometers), they began to be used in outdoor signs, bar code scanners, medical equipment, and fiber optic data transmission.

Light Emitting Diode Color Variations

A major development occurred in the late 1980s, when LED designers borrowed techniques from the rapidly progressing laser diode industry, leading to the production of high-brightness visible light diodes based on the aluminum-gallium-indium-phosphide (AlGaInP) system. This material allows changes in the emission color through adjustment of the band gap, so the same production techniques can be employed to produce red, orange, yellow, and green LEDs. Table 1 lists many of the common LED chip materials (epitaxial layers and, in some cases, the substrate) and their emission wavelengths (or corresponding color temperatures for white light LEDs).

More recently, blue LEDs have been developed based on gallium nitride and silicon carbide materials. Production of light in this shorter-wavelength, more energetic region of the visible spectrum had long eluded designers of LEDs. High photon energies typically increase the failure rate of semiconductor devices, and the low sensitivity of the human eye to blue light adds to the brightness requirement for a useful blue diode. One of the most important aspects of a blue light emitting diode is that it completes the red, green, and blue (RGB) primary color family, providing an additional mechanism for producing solid-state white light through the mixing of these component colors.

Solid-state researchers have sought a bright blue light source since the development of the first light emitting diodes. Although LEDs utilizing silicon carbide can produce blue light, they have extremely low luminous efficiency, and are not capable of producing the brightness necessary for practical applications. Recent developments in Group III-nitride based semiconductors have led to a revolution in diode technology. In particular, the gallium-indium-nitride (GaInN) system has emerged as the leading candidate for the production of blue LEDs, and is also a primary material in the developing white LED market.
The GaInN material system evolved in the 1990s with the achievement of p-doping in GaN, followed later by the utilization of the GaInN/GaN double heterostructure for LED fabrication, and then by the commercial availability of high-brightness blue and green GaInN LEDs in the late 1990s.

White Light LEDs

The role of the gallium-indium-nitride semiconductor material system extends to the development of white-light diodes. The addition of bright blue-emitting LEDs to the earlier-developed red and green devices makes it possible to use three LEDs, tuned to appropriate output levels, to produce any color in the visible light spectrum, including white. Other possible approaches to producing white light, utilizing a single device, are based on phosphor or dye wavelength converters or semiconductor wavelength converters.

The concept of a white LED is particularly attractive for general illumination, due to the reliability of solid-state devices and the potential for delivering very high luminous efficiency compared to conventional incandescent and fluorescent sources. Whereas conventional light sources exhibit an average output of 15 to 100 lumens per watt, the efficiency of white LEDs is predicted to reach more than 300 lumens per watt through continued development. Figure 8 illustrates the luminous efficiency values for a number of LED types and conventional light sources, and includes the CIE (Commission Internationale de l'Eclairage) luminosity curve for the visible wavelength range. This curve represents the response of the human eye to an emitter of 100 percent efficiency. Some current LED material systems already exhibit higher luminous performance than most conventional light sources, and light emitting diodes are soon expected to be the most efficient emitters available.

White LEDs are certainly suitable for display and signage applications, but in order to be useful for general illumination (as hoped), and for applications demanding accurate and aesthetically pleasing color rendering (including illumination for optical microscopy), the manner in which "white" light is achieved must be seriously considered.

The human eye perceives light as being white if the three types of photosensory cone cells, located in the retina, are stimulated in particular ratios. The three cone types exhibit response curves that peak in sensitivity at wavelengths representing red, green, and blue, and the combination of the response signals produces the various color sensations in the brain. A wide variety of color mixtures are capable of producing a similar perceived color, especially in the case of white, which may be realized through many combinations of two or more colors. A chromaticity diagram is a graphical means of representing the results obtained from mixing colors. Monochromatic colors appear on the periphery of the diagram, and the range of mixtures perceived as white is located in the central region (see Figure 9).

Light that is perceived as white can be generated by several different mechanisms. One method is to combine light of two complementary colors in the proper power ratio. The ratio that produces a tristimulus response in the retina (causing the perception of white) varies for different color combinations.
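The additive mixing that underlies these white-light recipes can be computed directly, because the CIE tristimulus values of independent sources simply add in proportion to their powers; the chromaticity coordinates of the mixture then follow by normalization. The tristimulus values below are placeholders standing in for a blue and a yellow emitter, not measured LED data:

    def chromaticity(xyz):
        """CIE 1931 chromaticity coordinates (x, y) from tristimulus XYZ."""
        big_x, big_y, big_z = xyz
        total = big_x + big_y + big_z
        return big_x / total, big_y / total

    def additive_mix(xyz_a, xyz_b, power_a, power_b):
        """Tristimulus values of two sources combined additively,
        weighted by their radiant powers."""
        return tuple(power_a * a + power_b * b for a, b in zip(xyz_a, xyz_b))

    # Placeholder tristimulus values for a blue and a yellow source.
    blue = (0.15, 0.06, 0.79)
    yellow = (0.45, 0.48, 0.02)

    for ratio in (0.3, 0.5, 0.7):
        x, y = chromaticity(additive_mix(blue, yellow, 1.0 - ratio, ratio))
        print(f"yellow fraction {ratio:.1f}: (x, y) = ({x:.3f}, {y:.3f})")

Sweeping the power ratio traces a straight line across the chromaticity diagram between the two component colors; the mixture is perceived as white only for ratios that place the coordinates in the central region of Figure 9.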
A selection of complementary wavelengths is listed in Table 2 (Complementary Color Wavelengths), along with the power ratio for each pair that produces the chromaticity coordinates of the standard illuminant designated D(65) by the International Commission on Illumination (CIE, Commission Internationale de l'Eclairage). Another means of generating white light is to combine the emission of three colors that produce the perception of white when mixed in the proper power ratio. White light can also be produced by broadband emission from a substance that emits over a large region of the visible spectrum. This type of emission approximates sunlight, and is perceived as white. Additionally, broadband emission can be combined with emission at discrete spectral lines to produce a perceived white, which may have particularly desirable color characteristics that differ from those of white light produced by other techniques.

The combination of red, green, and blue diode chips into one discrete package, or in a lamp assembly housing a cluster of diodes, allows the generation of white light, or any of 256 colors, by utilizing circuitry that drives the three diodes independently. In applications requiring a full spectrum of colors from a single point source, this type of RGB diode format is the preferred technique.

Most white-light diodes employ a semiconductor chip emitting at a short wavelength (blue, violet, or ultraviolet) and a wavelength converter, which absorbs light from the diode and undergoes secondary emission at a longer wavelength. Such diodes, therefore, emit light of two or more wavelengths that, when combined, appear white. The quality and spectral characteristics of the combined emission vary with the different designs that are possible. The most common wavelength converter materials are termed phosphors, which exhibit luminescence when they absorb energy from another radiation source. The typically utilized phosphors are composed of an inorganic host substance containing an optically active dopant. Yttrium aluminum garnet (YAG) is a common host material, and for diode applications it is usually doped with one of the rare-earth elements or a rare-earth compound. Cerium is a common dopant element in YAG phosphors designed for white light emitting diodes.

The first commercially available white LED (fabricated and distributed by the Nichia Corporation) was based on a blue-light-emitting gallium-indium-nitride (GaInN) semiconductor device surrounded by a yellow phosphor. Figure 1 illustrates the cross-sectional structure of the device. The phosphor is Ce-doped YAG, produced in powder form and suspended in the epoxy resin used to encapsulate the die. The phosphor-epoxy mixture fills the reflector cup that supports the die on the lead frame, and a portion of the blue emission from the chip is absorbed by the phosphor and reemitted at the longer phosphorescence wavelength. The pairing of yellow phosphor emission with blue excitation is ideal in that only one converter species is required: the complementary blue and yellow wavelengths combine through additive mixing to produce the desired white light. The resulting emission spectrum of the LED (Figure 10) represents the combination of the phosphor emission with the blue emission that passes through the phosphor coating unabsorbed. The relative contributions of the two emission bands can be modified to optimize the luminous efficiency of the LED and the color characteristics of the total emission.
These adjustments can be accomplished by changing the thickness of the phosphor-containing epoxy surrounding the die, or by varying the concentration of the phosphor suspended in the epoxy. The bluish white emission from the diode is synthesized, in effect, by additive color mixing, and its chromaticity characteristics are represented by a central location (0.25, 0.25) on the CIE chromaticity diagram (Figure 9; Bluish White LED).

White light diodes can generate emission by another mechanism, utilizing broad-spectrum phosphors that are optically excited by ultraviolet radiation. In such devices, an ultraviolet-emitting diode is employed to transfer energy to the phosphor, and the entire visible emission is generated by the phosphor. Phosphors that emit over a broad range of wavelengths, producing white light, are readily available as the materials used in fluorescent lamp and cathode ray tube manufacture. Although fluorescent tubes derive their ultraviolet emission from a gas discharge process, the phosphor emission stage producing the white light output is the same as in ultraviolet-pumped white diodes. The phosphors have well-known color characteristics, and diodes of this type have the advantage that they can be designed for applications requiring critical color rendering. A significant disadvantage of the ultraviolet-pumped diodes, however, is their lower luminous efficiency compared to white diodes employing blue light for phosphor excitation. This results from the relatively high energy loss in the down-conversion of ultraviolet light to longer visible wavelengths.

Dyes are another suitable type of wavelength converter for white diode applications, and can be incorporated into the epoxy encapsulant or into transparent polymers. The commercially available dyes are generally organic compounds, chosen for a specific LED design by consideration of their absorption and emission spectra. The light generated by the diode must match the absorption profile of the converting dye, which in turn emits light at the desired longer wavelength. The quantum efficiencies of dyes can be near 100 percent, as in phosphor conversion, but dyes have the disadvantage of poorer long-term operational stability than phosphors. This is a serious drawback: the molecular instability of the dyes causes them to lose optical activity after a finite number of absorptive transitions, and the resulting color change limits the useful lifetime of the light emitting diode.

White light LEDs based on semiconductor wavelength converters have been demonstrated that are similar in principle to the phosphor conversion types, but which employ a second semiconductor material that emits a different wavelength in response to the emission from the primary source wafer. These devices have been referred to as photon recycling semiconductors (PRS-LEDs), and incorporate a blue-emitting LED die bonded to another die that responds to the blue light by emitting light of a complementary wavelength. The two wavelengths then combine to produce white. One possible structure for this type of device utilizes a GaInN diode as a current-injected active region coupled to an AlGaInP optically-excited active region. The blue light emitted by the primary source is partially absorbed by the secondary active region, and "recycled" as reemitted photons of lower energy. The structure of a photon recycling semiconductor is illustrated schematically in Figure 11.
In order for the combined emission to produce white light, the intensity ratio of the two sources must have a specific value that can be calculated for the particular dichromatic components. The choice of materials and the thickness of the various layers in the structure can be modified to vary the color of the device output.

Because white light can be created by several different mechanisms, utilizing white LEDs in a particular application requires consideration of the suitability of the method employed to generate the light. Although the perceived color of light emitted by various techniques may be similar, its effect on color rendering, or the result of filtering the light, for example, may be entirely different. White light created through broadband emission, through mixing of two complementary colors in a dichromatic source, or by mixing of three colors in a trichromatic source, can be located at different coordinates on the chromaticity diagram and have different color temperatures with respect to illuminants designated as standards by the CIE. It is important to realize, however, that even if different illuminants have identical chromaticity coordinates, they may still have substantially different color rendering properties (Table 3), due to variations in the details of each source's output spectrum.

LED Efficiency and Color Rendering Index

Two factors, referred to previously, are of primary importance in evaluating white light generated by LEDs: the luminous efficiency and the color rendering capabilities. A property referred to as the color rendering index (CRI) is utilized in photometry to compare light sources, and is defined as the source's color rendering ability with respect to that of a standard reference illumination source. It can be demonstrated that there exists a fundamental trade-off between the luminous efficiency and the color rendering ability of light-emitting devices, as illustrated by the values in Table 3. For an application such as signage, which utilizes blocks of monochromatic light, the luminous efficiency is of primary importance, while the color rendering index is irrelevant. For general illumination, both factors must be optimized.

The spectral nature of the illumination emitted from a device has a profound influence on its color rendering ability. Although the highest possible luminous efficiency can be obtained by mixing two monochromatic complementary colors, such a dichromatic light source has a low color rendering index. In a practical sense, it is logical that if a red object is illuminated with a diode emitting white light created by combining only blue and yellow light, the appearance of the red object will not be very pleasing. The same diode would be quite suitable for backlighting a clear or white panel, however. A broad-spectrum white light source that simulates the sun's visible spectrum possesses the highest color rendering index, but does not have the luminous efficiency of a dichromatic emitter.

Phosphor-based LEDs, which either combine blue emission wavelengths with a longer-wavelength phosphorescence color, or create light solely from phosphor emission (as in ultraviolet-pumped LEDs), can be designed to have quite high color rendering capabilities, with a color character similar in many respects to that of fluorescent lamp tubes.
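The luminous-efficiency side of this trade-off follows directly from the CIE luminosity curve mentioned earlier: a source earns lumens only to the extent that its power falls where the eye is sensitive. The sketch below uses a common Gaussian approximation to the photopic curve (a rough fit, adequate only for illustration) together with the standard peak of 683 lumens per optical watt at 555 nanometers:

    import math

    def v_lambda(wavelength_nm):
        """Gaussian approximation to the CIE photopic luminosity function:
        V(lambda) ~ 1.019 * exp(-285.4 * (lambda_um - 0.559)^2).
        A rough fit; tabulated CIE values should be used for real photometry."""
        um = wavelength_nm / 1000.0
        return 1.019 * math.exp(-285.4 * (um - 0.559) ** 2)

    def monochromatic_efficacy(wavelength_nm):
        """Approximate lumens per optical watt for a monochromatic source."""
        return 683.0 * v_lambda(wavelength_nm)

    for nm in (450, 555, 590, 650):
        print(f"{nm} nm: ~{monochromatic_efficacy(nm):.0f} lumens per optical watt")

A dichromatic blue-plus-yellow source concentrates its power near the peak of the curve and therefore scores high in lumens per watt, while a broadband source that renders reds faithfully must spend part of its power at wavelengths, such as 650 nanometers, where each watt buys relatively few lumens.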
The GaInN LEDs utilize blue emission from the semiconductor to excite phosphors, and are available in cool white, pale white, and incandescent white versions that incorporate different amounts of phosphor surrounding the chip. The cool white is the brightest, utilizing the least phosphor, and produces light with the most bluish color. The incandescent white version surrounds the blue-emitting chip with the most phosphor, has the dimmest output, and produces the yellowest (warmest) color. The pale white has brightness and color characteristics intermediate between the other two versions.

The long-anticipated availability of white LEDs has generated great interest in applying these devices to general lighting requirements. As lighting designers become familiar with the characteristics of the new devices, a number of misconceptions will have to be dispelled. One of these is that the light from a white LED can be used to illuminate a lens or filter of any color while maintaining the accuracy and saturation of the color. In a number of versions of the white LED, there is no red component present in the white output, or there are other discontinuities in the spectrum. These LEDs cannot be used as general sources to backlight multicolored display panels or colored lenses, although they function well behind clear or white panels. If a blue-based GaInN white LED is employed behind a red lens, the transmitted light will be pink in color. Similarly, an orange lens or filter will appear yellow when illuminated with the same LED. Although the potential benefits of LEDs are tremendous, consideration of their unique characteristics is necessary when incorporating these devices into lighting schemes in place of more familiar conventional sources.

Kenneth R. Spring - Scientific Consultant, Lusby, Maryland.
Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, The Florida State University, Tallahassee, Florida.
Make your research discoverable

Search engines like Google and Google Scholar, as well as indexing services like PubMed, are now the first port of call for researchers looking for articles to read or cite. The way we search has also changed dramatically in the past 10 years: users now search for 'key phrases' rather than full titles or single words. Once a search is performed, articles are quickly scanned on the basis of the title and abstract before the user decides whether to access the full text or move on. It is therefore essential for your paper to be correctly set up for discoverability, right from the start. Here are a few steps you can take to make your work more visible and, as a result, more likely to be cited.

Think as if you were searching for your article – what key phrases would you use to find your own article? Make a list.

Pick a clear and descriptive title – include the main key phrase(s) you have identified, and remember that your title should have meaning outside the context of the journal.

Include your key phrases in the abstract – abstracts are one of the most important elements in the process of discovery: they provide search engines with the data they need to find your article and rank it in the search results page. Remember that search engines can detect abuse too! Avoid too much repetition and focus on three main key phrases.

Use plain English and avoid jargon – keep in mind that discovery often happens by serendipity, and your article might be of interest to researchers in other fields or countries. Make sure they can understand it!

Keep it natural – Google will un-index your article if you go overboard on repetition of keywords. Just write naturally for your audience.

Be identifiable with an ORCID iD.

What is an ORCID iD and why is it important?

ORCID provides researchers with a unique identifier that can be kept throughout their career. It can be used in publications and grant applications. ORCID distinguishes between researchers with similar names, and helps ensure that publications are attributed and recorded correctly. It also helps researchers to comply with funders' open access requirements. Persistent identifiers, like an ORCID iD, are crucial as a way to find, link, and navigate the vast volumes of information available. Having an ORCID iD will support the discovery of your research and publications. BMJ journal submission systems support ORCID, allowing authors to enter their unique identifier.
A History of Italy in Brief

In many ways, the history of Italy is the history of the modern world. So many pivotal moments in our collective past have taken place in Italy that it can be considered Europe's historical keystone. In this section, learn about the great and not so great moments in Italian history, from the grandeur of Rome to the Renaissance, the Risorgimento to the battlefields of World War II.

Brief History of Italy

By 500 BC, a number of groups shared Italy. Small Greek colonies dotted the southern coast and the island of Sicily. Gauls, ancestors of today's French, roamed the mountainous north. The Etruscans, a group originally hailing from somewhere in western Turkey, settled in central Italy, establishing a number of city-states, including what is now modern-day Bologna. Little is known about the Etruscans except that they thrived for a time, creating a civilization that would pass down a fondness for bold architecture (stone arches, paved streets, aqueducts, sewers) to its successor, Rome.

According to legend, Rome was founded on April 21, 753 BC by Romulus and Remus, twin brothers who claimed to be sons of the war god Mars and to have been raised as infants by a she-wolf. Romulus saw himself as a descendant of the defeated army of Troy, and wanted Rome to inherit the mantle of that ancient city, if not surpass it. When Remus laughed at the notion, Romulus killed his brother and declared himself the first king of Rome.

Rome went through seven kings until 509 BC, when the last king was overthrown and the Roman Republic was formed. Rome then came to be ruled by two elected officials (known as consuls), a Senate made up of wealthy aristocrats (known as patricians), and a lower assembly that represented the common people (plebeians) and had limited power. This format of government worked well at first, but as Rome expanded beyond a mere city-state to take over territory not just in Italy but overseas as well, the system of government came under severe strain.

By the First Century BC, Rome was in crisis. Spartacus, a slave, led a massive revolt of slaves and the dispossessed against Roman rule. Rome was able to put down the rebellion, but at great cost, as the Republic dissolved into a series of military dictatorships that ended with the assassination of Julius Caesar. In 27 BC, after a long power struggle, Julius Caesar's grandnephew and adopted heir, Octavian, seized power and took the title Emperor Augustus. The Roman Empire was born.

For the next two hundred years, Rome thrived, ruling over a vast territory stretching from Britain and the Atlantic coast of Europe in the north and west to North Africa and the Middle East in the south and east. This Pax Romana, a time of peace, ended in 180 AD with the death of Marcus Aurelius, Rome's last great emperor. A combination of economic problems, barbarian invasions, domestic instability, and territorial rebellions, combined with a lack of strong leadership, resulted in the slow and gradual decline of Rome. In 380 AD, after three hundred years of persecution, Christianity became the official state religion of the Empire. By the end of the Fourth Century AD, the Roman Empire split into two. The East, based out of the newly-built capital of Constantinople, in what is now Turkey, thrived, eventually becoming the long-lasting Byzantine Empire. Rome, capital of the West, continued to decline. In 410 AD, Rome itself was sacked by barbarian hordes.
The Eastern Empire invaded but failed to restore order and had to withdraw. The Roman Empire in the West completely collapsed. For the next thousand years, Italy once again became a patchwork of city-states, with Rome, home to the Catholic Church, being the most powerful. This long period of quiet stagnation is often called the Dark Ages.

Prosperity did not return to Italy until the Fourteenth Century, when city-states such as Florence, Milan, Pisa, Genoa, and Venice became centers of trade. The influx of wealth, and increased trade contact with foreign lands, transformed Italy into Europe's premier center of culture. Funded by wealthy patrons, figures such as Leonardo da Vinci, Michelangelo, Dante, Machiavelli, and Galileo, among others, revolutionized the fields of art, literature, politics, and science. Italian explorers, such as Marco Polo and Christopher Columbus, introduced Italy and Europe to the rest of the world.

Italy remained a center of power until the Sixteenth Century, when trade routes shifted away from the Mediterranean and the Protestant Reformation cost the Catholic Church, based in Rome, its influence over much of Northern Europe. Weakened, the various Italian city-states became vulnerable to conquest by Spain, France, and Austria. Italy remained a patchwork of principalities controlled through proxy by various European powers until the Nineteenth Century, when the French emperor Napoleon III supported the unification of Italy as a way of creating a buffer state against his many enemies. With the backing of France, Italian nationalist Giuseppe Garibaldi led a popular movement that took over much of Italy, ending in 1870 with the fall of Rome and the complete unification of Italy.

Plagued by internal political divisions and with an economy devastated by war, the new Kingdom of Italy was no Roman Empire. In 1919, frustrated that Italy had received few gains despite having been a victor in the First World War, a politician named Benito Mussolini launched a movement that called for the restoration of Italy as a great power. In 1922, impatient with electoral politics, Mussolini led his supporters, known as Fascists, on a march on Rome to seize power directly through a coup. Spooked, the Italian king did not put up a fight and allowed Mussolini to become supreme ruler of Italy.

Mussolini spent the next twenty years consolidating power and building up the Italian economy, but he never gave up on the idea of restoring Italy as a great power. Calling himself "Il Duce" (the Leader), Mussolini dreamed of leading a new Roman Empire. In the 1930s, he indulged his dreams of conquest by invading Ethiopia and Albania. When the Second World War broke out, Italy at first remained neutral. However, once the Fall of France made it appear that Germany would win, Mussolini eagerly joined Hitler, a fellow Fascist and longtime ally, in the war effort and rushed to invade Greece, the Balkans, and North Africa. Overextended and unprepared for such a large-scale effort, Italy quickly found that it could not maintain its military position and had to ask Germany for help. Before long, Mussolini saw himself losing control of North Africa, the Mediterranean, and eventually his own country to the Allies. Fleeing Rome, Mussolini tried to set up a puppet state in Northern Italy but failed. Abandoned by a disgusted Hitler, Il Duce and his mistress were captured and executed by Italian partisans.
After the Second World War, Italy abolished the monarchy and declared itself a republic. With the strong support of the United States, Italy rebuilt its economy through loans from the Marshall Plan, joined the North Atlantic Treaty Organization, and became a strong supporter of what is now the European Union. Today, Italy is one of the most prosperous and democratic nations in Europe.
This video tutorial helps you learn the notes to play. If you want to learn Happy Birthday on the piano faster, and for free, you can use a piano app: connect your keyboard to your device and learn at your own pace while the app listens to how you play.

Happy Birthday is a classic tune and one of the easiest to play, so it is well worth adding to your repertoire. Follow the letter notes and learn the melody first:

G G A G C B | G G A G D C | G G G E C B A | F F E C D C

To prepare, locate middle C on the keyboard and place your hands over the keys with both thumbs (both 1s) on middle C. Name the notes C D E F G and play them on the piano using the right hand (fingers 1 2 3 4 5), both up and down. The melody lines up with the lyrics like this: HAP-PY BIRTH-DAY TO YOU / HAP-PY BIRTH-DAY TO YOU / HAP-PY BIRTH-DAY DEAR (NAME) / HAP-PY BIRTH-DAY TO YOU.

Once the melody feels comfortable, try playing with both hands, which will make the song sound fuller and better overall. Watch the video and play these notes with your left hand while playing the melody with the right: F C C F F B F C F. One passage asks you to play the G note twice with the fourth finger of your left hand, then quickly follow with the third finger on the A note. To respect the duration of each note, use the video tutorial of the song as a guide.

When you are ready to dress the song up, the easiest way is to start adding 7ths to the chords. So if the first chord is a C major, make it a C major 7. If it's a G major chord, it will become a G dominant 7, because the 7th note above G in the key of C is F, not F#; just make sure the 7th note stays in the key of C.

Free, easy sheet music for Happy Birthday is available, including a shared-hands version with lettered notes, an easy melody, a left-hand part, and chords. If you see two versions of the song, one is arranged with the melody shared between both hands for pure beginners, and the other has the melody in the right hand and the accompaniment in the left. Use either one to make a duet with a beginner piano student, or even have two students play together. Click the link to print the Happy Birthday piano sheet.
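For readers who like to tinker, here is a small Python sketch of my own (not part of the tutorial). It converts the lettered melody above into frequencies in hertz, which you could feed to any tone generator. The octave digits are my assumption, placed around middle C (C4) so the tune matches the usual arrangement:

```python
# Happy Birthday melody as lettered notes; the octave digits are assumed.
MELODY = ["G4", "G4", "A4", "G4", "C5", "B4",
          "G4", "G4", "A4", "G4", "D5", "C5",
          "G4", "G4", "G5", "E5", "C5", "B4", "A4",
          "F5", "F5", "E5", "C5", "D5", "C5"]

SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def freq(note: str) -> float:
    """Frequency in Hz (equal temperament, A4 = 440 Hz) of a note like 'G4'."""
    name, octave = note[:-1], int(note[-1])
    steps_from_a4 = SEMITONE[name] + 12 * (octave - 4) - 9  # A sits 9 semitones above C
    return 440.0 * 2 ** (steps_from_a4 / 12)

for note in MELODY:
    print(note, round(freq(note), 1))  # e.g. G4 392.0, C5 523.3, ...
```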
Sexual violence. It's a tough concept with an even tougher definition that includes acts of sexual assault, sexual abuse of both children & adults, and/or sexual harassment. Both men & women can commit sexual violence or have sexual violence committed against them. Sexual violence exists, but it's not your fault. Never your fault. It is a physical and emotional violation. It is a crime that takes away your right to protect your body and mind; your right to safety. The worst part is, even though it infiltrates every community here and around the world, it is so incredibly difficult to talk about, especially in an open and healthy way. Roadblocks are everywhere. Besides being an extremely uncomfortable topic, people's attitudes about sexual violence are often rooted in myths. Don't believe me? Check out the myths that still pervade today. A Safe Place. A Strong Community. The Your World project was created so that people like you and me could understand and talk about socially pervasive thoughts and attitudes related to sexual violence in our own community. We need proactive thinking & behaviours that challenge the attitudes & myths that allow sexual violence to continue. We need to create a community that recognizes sexual violence doesn't belong in our world. Contact us today for more information.
English - Foundation to Year 12

Foundation to Year 10 Australian Curriculum

The Australian Curriculum in English includes the Foundation to Year 10 curriculum and the senior secondary curriculum. The study of English is central to the learning and development of all young Australians. It is through the study of English that individuals learn to analyse, understand, communicate with and build relationships with others and with the world around them.

Implementing the Foundation to Year 10 Australian Curriculum

Schools, states and territories are responsible for implementation of the F-10 Australian Curriculum. Each state and territory has its own implementation plan, along with support programs for teachers. Further information about the implementation plans and programs of the F-10 Australian Curriculum: English for each state or territory can be found in the following summary of implementation, or by contacting your local school or education authority.

Writing of the Foundation to Year 10 Australian Curriculum

The writing of the F-10 Australian Curriculum: English commenced in 2009 with expert writers and advisory group members. The content for the F-10 Australian Curriculum: English was published in December 2010, following extensive consultation with a wide range of stakeholders, critical friend review and international benchmarking. It was developed with reference to the Shape of the Australian Curriculum: English (2009) and the Curriculum Design Paper v 2.2. Following validation of the F-10 achievement standards during 2011, the final F-10 Australian Curriculum: English (content and achievement standards) was published at the end of 2011 on the Australian Curriculum website www.australiancurriculum.edu.au

English F-10 Development Timeline

Senior Secondary Curriculum

Implementing the senior secondary Australian Curriculum

State and territory curriculum, assessment and certification authorities are responsible for how senior secondary courses are organised, and they will determine how the Australian Curriculum content and achievement standards are to be integrated into their courses. The state and territory authorities will also determine the assessment and certification specifications for those courses and any additional information, guidelines and rules to satisfy local requirements, including advice on entry and exit points and credit for completed study. For more information, contact your local state or territory education authority.

Writing of the senior secondary Australian Curriculum

The senior secondary Australian Curriculum: English was developed with reference to the Shape of the Australian Curriculum: English (2009) and the Curriculum Design Paper v 2.2. Australian Curriculum has been written for four senior secondary subjects within the English learning area. These are: English, English as an Additional Language or Dialect (EAL/D), Essential English, and Literature.

The writing of the senior secondary Australian Curriculum commenced in 2009 with expert writers and advisory group members. An initial draft of the senior secondary English subjects was published for national consultation in 2010. Consultation meetings were held throughout June and July 2010 and many groups made submissions. Following consultation and in response to that feedback, writing recommenced in May 2011 and a national forum was held in August 2011. The recommendations of all the stakeholders were incorporated into the documents, and further consultation with curriculum authorities and professional associations was undertaken prior to the release of the draft senior secondary Australian Curriculum for national public consultation between May and July 2012.
The consultation feedback (see Consultation Report), along with national and international feedback, contributed to further revisions and consultation with state and territory education authorities in August and September 2012. Final revised and quality-assured documents were approved by the Standing Council of Ministers in December 2012. The final senior secondary curriculum is published on the Australian Curriculum website www.australiancurriculum.edu.au.

The senior secondary Australian Curriculum for English incorporates:
- a statement of rationale and a set of aims
- content descriptions that specify what students are to be taught across four units (and four bridging units in EALD)
- achievement standards that describe the quality of learning expected of students at five levels for each pair of units (1 and 2; 3 and 4).

The senior secondary Australian Curriculum for English:
- has been subject to extensive and sustained consultation during its development
- has been reviewed against the curricula of leading nations during the development process
- sets challenging standards
- does not overload the curriculum but encourages the pursuit of deeper learning.

Senior Secondary English Development Timeline
Legendary Sikh Battle of Saragarhi 1897

You might have heard about the Legendary Battle of Thermopylae (one of the greatest and most unequal battles ever fought), waged between an alliance of Greek city-states, led by King Leonidas of Sparta, and the Persian Empire of Xerxes I over the course of three days in 480 BC. Unlike the Hollywood film '300', which fails to mention the 7,100+ Greek city soldiers, King Leonidas also had 300 of his finest Spartans. The Persians had a force of 300,000 men. The odds of the battle were heavily (approx. 40:1) against the Spartans. In August or September 480 BC, at the narrow coastal pass of Thermopylae, Greece, the Spartans and their allies fought for 2 days (it's well worth watching '300' to get a flavour of the fighting prowess of the Spartans). On the third day the Persians managed to find a mountain trail and encircle the Spartans. Fearing defeat, King Leonidas of Sparta allowed his allies to escape, and stayed (with his 300 men and 1,200 others) to fight to the death. So now we have 1,500 men against approx. 300,000 Persians (approx. 200:1 odds), which is not looking great. Needless to say, the Spartans were all slaughtered, but not before they killed 20,000 Persians! The Spartans' side lost approx. 2,000 men in total. That equates to a kill rate of approx. 10:1!

If you thought the Battle of Thermopylae, associated with the heroic stand of a small Greek force against the mighty Persian Army of Xerxes I in 480 BC, was legendary, then read about this last stand by the Sikhs at the Battle of Saragarhi on 12th September 1897. Saragarhi is the incredible story of 21 men of the 36th Sikh Regiment who gave up their lives in devotion to their duty. In keeping with the Sikh Khalsa tradition, they fought to the death rather than surrender. It has been mentioned as one of the most significant events of its kind in the world and was honoured by the British Parliament and Queen Victoria.

How many men did the 21 Sikhs face? 10,000+ (that's roughly 500:1). How many men did the Sikhs kill? Around 800 (that's roughly 40:1!). What was the Khalsa Sikhs' secret? Being Sikhs… with shouts of 'Bole So Nihal… Sat Sri Akal!'

The Battle of Saragarhi was fought during the Tirah Campaign on 12th September 1897 between twenty-one Sikh soldiers, born in the Majha region, of the 36th Sikhs (which later became the 4th Battalion of the Sikh Regiment of British India) and thousands of tribesmen. The 36th was a one-class Jat (farmer) Sikh battalion raised at Jalandhar and was the last to join the ranks of the elite Sikh Regiment, in 1887. Within one decade it won the Sikh Regiment immortal fame during operations on the Samana Ridge (1897). At the time the battalion was holding posts on the ridge; those at Saragarhi, Gulistan and Fort Lockhart served as communication links. A mass attack came on Saragarhi on 12th September 1897, and the 21-strong detachment fought one of the most unequal engagements in the history of warfare. The twenty-one-man detachment of the 36th Sikhs was responsible for defending the Saragarhi army post against 10,000+ Musalmaan Afghan and Orakzai tribesmen. The battle occurred in the North-West Frontier Province, which formed part of British India. The region is now named Khyber-Pakhtunkhwa and is part of Pakistan. The contingent of the twenty-one Sikhs from the 36th Sikhs was led by Havildar Ishar Singh. They all chose to fight to the death. The battle is not well known outside military academia, but is "considered by some military historians as one of history's great last-stands".
Sikh military personnel and Sikh civilians commemorate the battle every year on 12 September, as Saragarhi Day. Saragarhi is a small village in the border district of Kohat, situated on the Samana Range. In August 1897, five companies of the 36th Sikhs under Lt. Col. John Haughton were sent to the Khyber-Pakhtunkhwa, stationed at Samana Hills, Kurag, Sangar, Sahtop Dhar and Saragarhi. The British had partially succeeded in getting control of this volatile area; however, tribal Pashtuns attacked British personnel from time to time. Most of these forts had initially been built by Maharaja Ranjit Singh, ruler of the Sikh Empire, as part of the consolidation of the Sikh empire in Punjab, and the British added some more. Two of the forts were Fort Lockhart (on the Samana Range of the Hindu Kush mountains) and Fort Gulistan (Sulaiman Range), situated a few miles apart. Because the forts were not visible to each other, Saragarhi was created midway, as a heliographic communication post. The Saragarhi post, situated on a rocky ridge, consisted of a small blockhouse with loop-holed ramparts and a signalling tower.

A general uprising by the Afghans began there in 1897, and between 27 August and 11 September, many vigorous efforts by Pashtuns to capture the forts were thwarted by the 36th Sikhs. In this uprising, Mullahs (Musalmaan religious leaders) played a prominent role. It was the duty of the 36th Sikhs to hold the Gulistan and Lockhart forts. On 3rd and 9th September 1897, Orakzai and Afridi lashkars attacked Fort Gulistan. On both occasions the attacks were beaten back, with a relief column sent from the fort to assist in repulsing them. After both attacks were repulsed, the relief column from Fort Lockhart, on its return trip, reinforced the signalling detachment positioned at Saragarhi, increasing its strength to a total of 21: one Non-Commissioned Officer (NCO) and twenty troops of Other Ranks (ORs).

The Musalmaan Pashtuns were now uncertain as to what to do next. Both of their attacks on the Gulistan and Lockhart forts had failed, and they felt the shame of having to return home with so many men without a victory. The Pashtuns resolved to attack Saragarhi, which was a makeshift fort of stones and mud walls. They thought they could win an easy victory and retreat home with some honour after the recent defeats. In a renewed effort, on 12 September 1897, hordes of tribesmen laid siege to Fort Lockhart and Saragarhi, with the aim of overrunning Saragarhi while at the same time preventing any help from Fort Lockhart. The Commanding Officer of the 36th Sikhs, Lt. Col. Haughton, was at Fort Lockhart and was in communication with the Saragarhi post through heliograph. The defenders of Saragarhi, under the indomitable and inspiring leadership of their detachment commander, Havildar Ishar Singh, resolved to defend their post in the best tradition of their race and regiment. They were not there to hand over the post to the enemy and seek safety elsewhere. Havildar Singh and his men knew well that the post would fall, because a handful of men in that makeshift fort of stones and mud walls with a wooden door could not stand the onslaught of thousands of tribesmen. These plucky men knew that they would go down, but they had resolved to do so fighting to the last.
From Fort Lockhart, the troops and the Commanding Officer could count at least 14 standards, and that gave an idea of the number of tribes and their massed strength against the Saragarhi relay post (estimated at between 10,000 and 12,000 tribals). The odds: a staggering 500:1 against the Sikhs. From early morning the tribals started battering the fort. The Sikhs fought back valiantly, and charge after charge was repulsed by the men of the 36th Sikhs. The tribal leaders started to make tempting promises so that the Sikhs would surrender. True to the Musalmaan way, the Pashtuns thought they could trick and lie their way to victory, as they had tried to do in the past with the Sikh Gurus. But Havildar Singh and his men ignored them. For quite some time, the troops held their own against the determined and repeated attacks by the wild and ferocious hordes. A few attempts were made to send a relief column from Fort Lockhart, but these were foiled by the tribals. At Saragarhi, the enemy made two determined attempts to rush the gate of the post, and on both occasions the defenders repulsed the assault. While the enemy suffered heavy casualties, the ranks of the defenders too kept dwindling as the fire from the attackers took its toll and their ammunition stocks were depleted. Unmindful of his safety, Sepoy Gurmukh Singh kept signalling a minute-to-minute account of the battle from the signal tower in the post to Battalion HQ.

The battle lasted the better part of the day. The Orakzai and Afridi did the sort of thing you'd expect from an overwhelmingly powerful force assaulting a tiny outpost garrisoned by a force they outnumber roughly 500:1: they charged, looking to overrun the defenders by hurling wave after wave of their own men at the walls. It didn't work out for them. When repeated attacks failed, the enemy used more traditional tactics of subterfuge and set fire to the surrounding bushes and shrubs. Two of the tribesmen, under cover of smoke, managed to close in on the post's boundary wall in an area blind to the defenders' observation and rifle fire from the post holes. They succeeded in making a breach in the wall. This development could be seen from Fort Lockhart and was flashed to the post.

The Sikhs didn't have access to machine guns or rapid-fire rifles in 1897. These Sikh warriors were using single-shot, breech-loading rifles, and were firing them so fast that 10,000 trained warriors with guns somehow found themselves unable to push their way through. Meanwhile, when he wasn't taking potshots from the signal tower with his rifle, the garrison's signalman, Gurmukh Singh, was operating his signalling equipment and informing the nearest British outpost (just barely visible in the distance beyond the ridge) exactly what was going on, how many men the enemy had, and what sort of equipment they were carrying. It didn't take long for the enemy to find the weak point in the inner defences: a rickety wooden gate that was already on fire. Yet still, even when the tribesmen shot up the gate, stormed the wall, and broke through to the main building of Saragarhi, they blitzed through only to find a determined handful of Khalsa Sikhs standing there with fixed bayonets. A few men from those defending the approaches to the gate were dispatched to deal with the breach in the wall. This diversion by the enemy, and the defenders' reaction to it, resulted in a weakening of the fire covering the gate. The enemy now rushed the gate as well as the breach.
Thereafter, one of the fiercest hand-to-hand fights followed. One of Havildar Singh's men, who was seriously wounded and bleeding profusely, had taken charge of the guardroom. He shot four of the enemy as they tried to approach his charge. All this time, Sepoy Gurmukh Singh continued flashing the details of the action at the post. Besides this, the Commanding Officer of the 36th Sikhs and others at Fort Lockhart also saw the unique saga of heroism and valour unfold at Saragarhi. The battle had come too close for Sepoy Gurmukh Singh's comfort, so he asked Battalion HQ for permission to shut down the heliograph and take up his rifle. Permission was flashed back. He dismounted his heliograph equipment, packed it in a leather bag, fixed the bayonet on his rifle and joined the fight. From his vantage point in the tower he wrought havoc on the intruders in the post.

Details of the Battle of Saragarhi are considered fairly accurate, due to Gurmukh Singh signalling events to Fort Lockhart as they occurred:
• Around 09:00 am, around 10,000 Afghans reach the signalling post at Saragarhi.
• Sardar Gurmukh Singh signals to Col. Haughton, situated in Fort Lockhart, that they are under attack.
• Colonel Haughton states he cannot send immediate help to Saragarhi.
• The soldiers decide to fight to the last to prevent the enemy from reaching the forts.
• Bhagwan Singh becomes the first casualty and Lal Singh is seriously wounded.
• Soldiers Lal Singh and Jiwa Singh reportedly carry the body of Bhagwan Singh back to the inner layer of the post.
• The enemy breaks a portion of the wall of the picket.
• Colonel Haughton signals that he estimates between 10,000 and 14,000 Pashtuns are attacking Saragarhi.
• The leaders of the Afghan forces reportedly make false promises to the soldiers to entice them to surrender.
• Reportedly two determined attempts are made to rush open the gate, but both are unsuccessful.
• Later, the wall is breached.
• Thereafter, some of the fiercest hand-to-hand fighting occurs.
• In an act of outstanding bravery, Ishar Singh orders his men to fall back into the inner layer, whilst he remains to fight. However, this layer is breached and all but one of the defending soldiers are killed, along with many of the Pashtuns.
• Gurmukh Singh, who had communicated the battle to Col. Haughton, was the last Sikh defender and the only man of the little band still alive and unwounded. Taking his rifle, he placed himself in front of a doorway leading from the room into which the enemy had forced their way, prepared to sustain the fight alone, calmly and steadily. It is believed that when he ran out of bullets he fixed his bayonet and charged down into the fray shouting the battle cry of the Sikhs. He is stated to have killed twenty Afghans, the Pashtuns having to set fire to the post to kill him. As he was dying he was said to have repeatedly yelled the Sikh battle-cry "Bole So Nihal, Sat Sri Akal".

The tribals set fire to the post, while the brave garrison lay dead or dying with their ammunition exhausted. Next morning the relief column reached the post, and the tell-tale marks of the epic fight were there for all to see. When British troops reached the position later, they found 21 dead Sikhs and somewhere between 600 and 800 dead tribesmen.
The number is debated because when the British showed up there was a second round of fighting over the fort, and it was difficult to say how many enemies were killed between the two fights, but we do know that nearly every single Sikh rifleman, all of them excellent shots, was completely out of ammunition. They had started with 400 rounds each...

This episode, when narrated in the British Parliament, drew from the members a standing ovation in memory of the defenders of Saragarhi. The story of the heroic deeds of these men was also placed before Queen Victoria. The account was received all over the world with awe and admiration.

'The British, as well as the Indians, are proud of the 36th Sikh Regiments. It is no exaggeration to record that the armies which possess the valiant Sikhs cannot face defeat in war.' — Parliament of the United Kingdom.

'You are never disappointed when you are with the Sikhs. Those 21 soldiers all fought to the death. That bravery should be within all of us. Those soldiers were lauded in Britain and their pride went throughout the Indian Army.' — Field Marshal William Joseph Slim, 1st Viscount Slim.

All 21 valiant men of this epic battle were awarded the Indian Order of Merit Class III (posthumously), which at the time was one of the highest gallantry awards given to Indian troops and is considered equivalent to the present-day Vir Chakra. All dependants of the Saragarhi heroes were awarded 50 acres of land and 500 rupees. Never before or since has an entire body of troops, every man engaged, won gallantry awards in a single action. It is indeed a singularly unique action in the annals of Indian military history.

A tablet was erected in the memory of these brave men. The tablet reads:

"The Government of India have caused this tablet to be erected to the memory of the twenty one non-commissioned officers and men of the 36 Sikh Regiment of the Bengal Infantry whose names are engraved below as a perpetual record of the heroism shown by these gallant soldiers who died at their posts in the defence of the fort of Saragarhi, on the 12 September 1897, fighting against overwhelming numbers, thus proving their loyalty and devotion to their sovereign, the Queen Empress of India, and gloriously maintaining the reputation of the Sikhs for unflinching courage on the field of battle."

1) 165 Havildar Ishar Singh
2) 332 Naik Lal Singh
3) 834 Sepoy Narayan Singh
4) 546 Lance Naik Chanda Singh
5) 814 Sepoy Gurmukh Singh
6) 1321 Sepoy Sundar Singh
7) 871 Sepoy Jivan Singh
8) 287 Sepoy Ram Singh
9) 1733 Sepoy Gurmukh Singh
10) 492 Sepoy Uttar Singh
11) 163 Sepoy Ram Singh
12) 182 Sepoy Sahib Singh
13) 1257 Sepoy Bhagwan Singh
14) 359 Sepoy Hira Singh
15) 1265 Sepoy Bhagwan Singh
16) 687 Sepoy Daya Singh
17) 1556 Sepoy Buta Singh
18) 760 Sepoy Jivan Singh
19) 1651 Sepoy Jivan Singh
20) 791 Sepoy Bhola Singh
21) 1221 Sepoy Nand Singh

The battle has frequently been compared to the Battle of Thermopylae because of the overwhelming odds faced by a tiny defending force in each case, the defenders' brave stand to their deaths, and the extremely disproportionate number of fatalities inflicted on the attacking force.
New Test Could Lead to Effectively Customizing Treatments for CF Patients, Study Contends

The test finds which compounds are more effective in restoring the beating of patient-derived airway cilia, the hair-like structures that line the human airways and whose movement is disrupted in patients with CF. Researchers believe this tool may lead to personalized therapy for CF patients, as well as for other disorders in which cilia movements are disrupted. The findings, "Phenotyping ciliary dynamics and coordination in response to CFTR-modulators in Cystic Fibrosis respiratory epithelial cells," were published recently in the journal Nature Communications, and featured in a news story by Ziba Kashef from Yale University.

The thick mucus that is characteristic of CF obstructs breathing and disrupts the beating of the cilia covering the cells lining our lungs and upper airways. Cilia normally beat in a rhythmic, sweeping fashion toward the mouth, helping to move airway mucus along (to be swallowed or coughed out). This self-clearing mechanism, known as mucociliary clearance, helps keep harmful microbes and irritants away. As the thick mucus of CF patients accumulates on airway surfaces, it restricts the beating of the cilia, and these changes can be observed and quantified in the lab by specialized microscopy techniques.

In the study, researchers at the University of Cambridge and Yale School of Medicine took advantage of the disrupted ciliary movements observed in CF patients to create a test that screens for the therapies most effective at restoring those movements. If the defect in CF cells is partially corrected by a specific compound, then less thick mucus will be produced, and the cilia will beat at a rhythm closer to how they would on healthy cells. To observe and quantify these changes at high resolution, the test combines automatic high-speed video microscopy (multiscale differential dynamic microscopy, multi-DDM) with a new video analysis algorithm.

In the pilot study, researchers examined the cilia of airway epithelial cells isolated from patients with different CF mutations (F508del on one or both copies of the CFTR gene, the gene defective in CF patients), and compared those samples to normal cells. Researchers measured how the beat frequency and coordination of cilia changed in response to six different CFTR-modulating treatments: the CFTR corrector lumacaftor (VX-809), the FDA-approved CFTR potentiator Kalydeco (ivacaftor), and a combination of both, sold as Orkambi (lumacaftor/ivacaftor); and two other investigational CFTR correctors (C4 and C18), alone or combined. Of note, lumacaftor, Kalydeco, and Orkambi were all developed by Vertex Pharmaceuticals.

The assay was able to quantitatively identify the most efficient compounds for restoring cilia beating in each patient. By identifying which specific modulators best restore ciliary beating, researchers argue that this test could be a fast and efficient way to help predict treatment effectiveness for each patient individually. That is an important feature, "as patient-to-patient variation is an obstacle to therapeutic intervention and cannot currently be explained by mutation/s in the CFTR gene alone," the researchers wrote.

The team believes that such an approach may pave the way for personalized medicine, helping to tailor treatments for CF patients. Also, as problems in cilia dynamics are evident in other diseases, the test is not limited to CF but may also have applications in other disorders.
Yale University made available a microscopy video record of some of the cilia analyzed in the study. The video can be viewed here.
New York to London in Less Than Two Hours

If flying from New York (USA) to London (UK) in less than two hours sounds like science fiction, continue reading. On September 1, 1974, Major James V. Sullivan, 37 (pilot), and Noel F. Widdifield, 33 (reconnaissance systems officer), set a New York-to-London speed record, averaging just over 1,800 miles per hour (about 2,900 kilometers per hour) in the SR-71 Blackbird jet airplane. It took them exactly 1 hour, 54 minutes and 56.4 seconds to complete this cross-Atlantic journey. To date this record has not been broken by another jet plane.

The SR-71 Blackbird is a military spy plane capable of speeds in excess of Mach 3. It was the first true stealth (radar-evading) aircraft, with a body made of a mixture of titanium and plastic. While flying at top speed, the outside of the Blackbird gets really hot, up to 900 deg F (480 deg C), due to air friction. Its outside is painted black to dissipate this heat more efficiently. It was designed to fly at approximately 80,000 feet (24.3 km), where the air is thinner and where pilots can actually see the curvature of the Earth. In spite of this high flying altitude and speed, Blackbirds could take a sharp photograph of a golf ball on the surface of the Earth. Truly amazing! Only about 40 of these planes were ever made, and most of them are now grounded; only 2 or 3 are still used by NASA for research. At the time they were made, in the 1970s, their price tag was a mere $33 million. Even today, the only plane faster than the SR-71 Blackbird is the X-15; however, that plane is rocket powered. NASA has a brand new plane, the X-43, a combination rocket- and jet-propelled craft designed to fly at Mach 7; however, its first test flight failed last year.

About the Author

Anton Skorucak, MS

Anton Skorucak is a founder and publisher of ScienceIQ.com. Anton Skorucak has a Master of Science (MS) degree in physics from the University of Southern California, Los Angeles, California and a B.Sc. in physics with a minor in material science from McMaster University, Canada. He is the president and creator of PhysLink.com, a comprehensive physics and astronomy online education, research and reference web site.
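As a quick sanity check on the record figures above, here is a short Python sketch of my own (not from the article). The course length of about 3,461.5 miles is the figure widely cited for this record flight and should be treated as an assumption here:

```python
# Average speed of the 1974 New York-to-London record flight.
distance_miles = 3461.5                  # assumed measured course length
hours, minutes, seconds = 1, 54, 56.4    # recorded elapsed time
elapsed_hours = hours + minutes / 60 + seconds / 3600

average_mph = distance_miles / elapsed_hours
print(round(average_mph))  # ~1807 mph, consistent with 'just over 1,800 mph'
```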
Finnish energy supplier Fortum is entering the lithium-ion battery-recycling market. Using a process developed by Crisolteq of Finland, Fortum claims it can now recycle over 80 per cent of the materials in each battery on an industrial scale. The recycling is based on a hydrometallurgical process that first makes the batteries safe for mechanical treatment by separating plastics, aluminium and copper and feeding them into their own recycling processes. Cobalt, manganese, nickel and lithium are then recovered and returned to the battery manufacturers for reuse in the production of new batteries. In addition, Fortum is also testing second-life applications, using batteries that are no longer suitable for electric vehicles as stationary energy storage devices, among other things.

In February this year, Volkswagen announced a pilot recycling plant in Salzgitter, Germany, where the company says it is aiming for a long-term goal of recycling 97 per cent of all materials in each battery. Belgian company Umicore currently has the capacity to recycle the batteries of about 150,000 to 200,000 electric vehicles. Batteries can be and are being recycled; however, it is currently still cheaper to produce new lithium and cobalt. China wants to obligate manufacturers of electrified vehicles to recycle used batteries, and in the summer of 2018 it selected 17 cities and regions to launch a pilot programme for the recycling of used electric vehicle batteries.

Fortum itself estimates that the global market for battery recycling could be worth at least 20 billion euros ($23 billion) a year by 2025 as demand for electric cars takes off.
When you think about the medieval era, what comes to mind? Perhaps looming castles, gallant knights, damsels in distress, or praying monks? These aspects of life in the Middle Ages are only part of the picture. Vibrant and sophisticated cultures flourished everywhere, and the people who lived during this period, from the 5th to the 15th century, found time for amusement in the margins of their lives and their manuscripts. Surprisingly, playful images are most often found in religious books, where artists tended to populate the margins with humorous, even outrageous or sacrilegious imagery. The medieval mind loved to juxtapose the profound and the frivolous. Sometimes the artist’s playfulness had a serious intent: for example, to help readers remember a prayer or a passage from the Gospels. But often the artists were simply having fun, creating delightfully whimsical images for the entertainment of the reader. In these pages, praying monks become playing monks, knights battle with dice instead of swords, children shirk their winter duties to lob snowballs at each other, and damsels forget their distress and go out for an afternoon of butterfly hunting. Through these images, this exhibition explores a sense of whimsy and fun that is uniquely medieval, yet remarkably relevant to us today.
The oldest house on Bruges’ main square is certainly impressive to look at, as are most buildings along this popular stretch. Turn your gaze toward its roof, and you’ll notice two features many people fail to spot. On its front facade, just beneath the roof, you’ll see a massive compass that was installed in 1682. But this isn’t your typical compass. Instead of showing the magnetic North, it actually depicts the direction of the wind. If you pay attention, you’ll see a golden metal flag on the roof. It’s a weather vane that shows the direction of the wind, and it’s connected to the giant compass’ needle. Being able to see the way from which the wind blew was useful for merchants back when Bruges was one of Europe’s biggest harbors. It let them know whether their delivery sailboats would be delayed due to poor wind conditions. Keep looking at the building’s roof, and you’ll see another nifty scientific instrument that’s a bit more modern. This golden globe was used at the dawn of the railroad era to keep all the city’s clocks coordinated. It was installed because the advent of rail system meant clocks needed better accuracy to ensure the reliability of the train schedules. This clever device is actually quite simple. At noon, the sun’s shadow aligns with a small hole in the globe and falls upon a meridian line in the pavement. There were actually 41 such devices in Belgium, and this one is the last surviving one. If you look at the ground in the square, you’ll also notice a copper nail that shows the sun’s path. Know Before You Go It's free to look at from the main square. Check the copper nails on the pavement of the square as well.
Between 13 and 17 October 2014, the Climate Symposium will address "Climate Research and Earth Observations from Space/Climate information for decision making", bringing together over 500 global climate experts, policy makers and representatives from industry and international space agencies in Darmstadt, Germany.

Thursday, July 10, 2014

Taking place directly after the release of key elements of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report, the symposium and its follow-on activities are expected to benefit climate-related risk management and help to underpin the development of climate change mitigation and adaptation options. Climate change impacts natural and human systems on all continents and across the oceans through rising sea levels, heat stress, water-borne illnesses and an increase in severe weather phenomena. The symposium will benefit climate research, modelling and prediction by initiating the development of an international space-based climate observing system in response to the needs of the Global Framework for Climate Services (GFCS) and the Global Climate Observing System (GCOS).

On October 13, the symposium will be kicked off by policy makers from Germany and the European Commission, as well as managers of climate observation, research and assessment programs. Speakers include Ms Brigitte Zypries, German State Secretary for Economic Affairs and Energy, Mr Michel Jarraud, Secretary-General of the World Meteorological Organization, and Ms Barbara Ryan, Director of the Group on Earth Observations. Opening addresses will also be given by David Carlson, Director of the World Climate Research Program, and Julia Slingo, Chief Scientist at the Met Office (UK).

The Climate Symposium is organized by the WCRP and EUMETSAT, with the support of the European Commission, the European Space Agency, and the City of Darmstadt. Other sponsors are GFCS, GEO, JAXA, DLR, NOAA, CNES and NASA. For more details of the symposium program, visit the Climate Symposium web site. To read more about EUMETSAT's contribution to international climate change monitoring, follow the newly launched Climate blog.

EUMETSAT, the European Organisation for the Exploitation of Meteorological Satellites, is an intergovernmental organization based in Darmstadt, Germany, currently with 30 Member States (Austria, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey and the United Kingdom) and one Cooperating State (Serbia). EUMETSAT operates the geostationary satellites Meteosat-8, -9 and -10 over Europe and Africa, and Meteosat-7 over the Indian Ocean. EUMETSAT also operates two Metop polar-orbiting satellites as part of the Initial Joint Polar System (IJPS) shared with the US National Oceanic and Atmospheric Administration (NOAA). The Metop-B polar-orbiting meteorological satellite, launched on September 17, 2012, became the prime operational satellite on 24 April 2013. It replaced Metop-A, the first European polar-orbiting meteorological satellite, which was launched in October 2006. Metop-A will continue operations as long as its available capacities bring benefits to users.
The Jason-2 ocean altimetry satellite, launched on 20 June 2008 and exploited jointly with NOAA, NASA and CNES, added monitoring of sea state, ocean currents and sea-level change to the EUMETSAT product portfolio. The data and products from EUMETSAT's satellites are vital to weather forecasting and make a significant contribution to the monitoring of the environment and the global climate. With almost 40 years of data, EUMETSAT's operational meteorological satellites and the high-quality instruments they carry form an invaluable asset for climate monitoring and the understanding of climate change. EUMETSAT has a long-term perspective in maintaining satellite systems, and their role will become increasingly important with the next generations of the systems in geostationary and polar orbit.
The new model overcomes a long-standing barrier to accessing hair cells, the delicate sensors in the inner ear that capture sound and head movement and convert them to neural signals for hearing and balance. These cells have been notoriously difficult to treat with previous gene-delivery techniques. Researchers from Harvard Medical School and Massachusetts General Hospital in the US showed that the treatment leads to notable gains in hearing and allows mice that would normally be completely deaf to hear the equivalent of a loud conversation. The approach also improved the animals' sense of balance. The gene therapy carries the promise of restoring hearing in people with several forms of both genetic and acquired deafness, the researchers said. "To treat most forms of hearing loss, we need to find a delivery mechanism that works for all types of hair cells," said David Corey, professor at HMS. To achieve that, the researchers used the adeno-associated virus (AAV), which has already been used as a gene-delivery vehicle for retinal disorders. To super-charge AAV as a gene carrier into the inner ear, the team used a form of the virus wrapped in protective bubbles called exosomes. They grew regular AAV virus inside cells. Those cells naturally bud off exosomes - tiny bubbles made of cell membrane - that carry the virus inside them. The membrane wrapping around the virus is coated with proteins that bind to cell receptors. "Unlike current approaches in the field, we didn't change or directly modify the virus. Instead, we gave it a vehicle to travel in, making it better capable of navigating the terrain inside the inner ear and accessing previously resistant cells," said Casey Maguire, assistant professor at HMS. In lab-dish experiments, exo-AAV successfully penetrated 50-60 per cent of hair cells, the researchers observed. By contrast, AAV alone reached a mere 20 per cent of hair cells. To test the approach in living animals, the researchers worked with mice born without a gene critical for hair cell function. Such animals normally cannot hear even the loudest sounds and exhibit poor balance. The researchers injected exo-AAV preloaded with the missing gene into the inner ears of mouse pups shortly after birth. Post-treatment tests showed that the gene entered between 30 and 70 per cent of hair cells, reaching both inner and outer hair cells. A month after treatment, nine of 12 mice had some level of hearing restored and could be startled by a loud clap. Four could hear sounds of 70 to 80 decibel intensity, the rough equivalent of conversation in a loud restaurant. Treated mice also had notably improved balance, showing far less head tossing and running in circles, both markers of instability or disorientation.
Hawaii was the last state to be admitted to the Union and is the only one that lies entirely apart from the North American mainland. Geologically the islands are a collection of volcanic features. While other landmasses often shrink, the Hawaiian Islands are distinguished by their ability to grow, thanks to the continual production of lava, the substrate of the islands. In fact, considering the unique ecosystems of these volcanic and tropical islands, in combination with their remoteness from any other landmass, the collection of flora and fauna extant on the islands seems amazing. As impressive as this evolution has been, though, overdevelopment has made many of the native plants and animals endangered. Hawaii now has the dubious honor of having the longest list of endangered species in proportion to its size. As far as population goes, Hawaii leads the Union in multicultural residents and number of Asian Americans, with a surprisingly small Hispanic population, proportionately. Also, if you seek good, clean living, Hawaii now boasts the longest life expectancy of any other state. Besides English, the state recently added the Hawaiian language as an alternative official language, which is why many place names are now given both in English and in their Hawaiian version. Native Hawaiians comprise a subset of Polynesians. Hawaii's educational system is quite distinct as well: while other states have decentralized public school systems, or ones run by local municipalities, Hawaii's has remained governed by the state's single Department of Education. Pearl Harbor was, and still is, a key Pacific naval base. On December 7, 1941, the Japanese attacked it in a brutal and unexpected air strike. This singular event officially launched the U.S. into the midst of World War II. Today Pearl Harbor is one of the most popular tourist attractions in the state.
This log calculator (logarithm calculator) allows you to calculate a logarithm of any number with any arbitrary base. Regardless of whether you are looking for a natural logarithm, log base 2, or log base 10, this tool will solve your problem for you. Read on to get a better understanding of the logarithm formula and the rules you have to follow.

A logarithmic function is the inverse of the exponential function. In essence, if a raised to the power y gives x, then the logarithm of x with base a is equal to y. In the form of equations,

a^y = x is equivalent to log_a(x) = y

If you want to calculate the natural logarithm of a number, you need to choose a base equal to the number e ≈ 2.718. The natural logarithm is denoted with the symbol ln(x). Another popular base for logarithms is 10. The logarithm with base 10 is denoted as lg(x). It is used, for example, in our decibel calculator. If you want to calculate a logarithm with an arbitrary base, but are able to access only a natural logarithm calculator or a log base 10 calculator, you need to apply the following rules:

log_a(x) = ln(x) / ln(a)
log_a(x) = lg(x) / lg(a)

We listed some basic log rules for operations on logarithms below as well:

log_a(x*y) = log_a(x) + log_a(y)
log_a(x/y) = log_a(x) - log_a(y)
log_a(x^y) = y*log_a(x)

Let's assume you want to use this tool as a log base 2 calculator, say to find log base 2 of 100. Simply follow these steps:

1. Find the logarithm of 100 with base 10: lg(100) = 2.
2. Find the logarithm of 2 with base 10: lg(2) ≈ 0.30103.
3. Divide the two: log_2(100) = lg(100)/lg(2) = 2 / 0.30103 ≈ 6.644.
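The change-of-base rule is all you need to implement such a calculator yourself. Here is a minimal Python sketch (my own illustration, not the calculator's actual code):

```python
import math

def log_base(x: float, base: float) -> float:
    """log_a(x) computed via the change-of-base rule: ln(x) / ln(a)."""
    if x <= 0 or base <= 0 or base == 1:
        raise ValueError("x must be positive; base must be positive and not 1")
    return math.log(x) / math.log(base)

print(log_base(100, 10))  # 2.0
print(log_base(100, 2))   # 6.6438..., matching the worked example above
```

Note that Python's standard library can also do this directly: math.log accepts an optional second argument, so math.log(100, 2) gives the same result.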
A lottery is a procedure for distributing something (usually money or prizes) among a group of people by lot or by chance. It involves several elements: a pool of tickets or counterfoils from which winning numbers are extracted, the selection of winners by a random procedure, and payment to bettors of a sum that has been staked on the outcome of the drawing. Historically, lotteries have been a means of raising funds for private and public projects. They have been used to finance roads, libraries, churches, colleges, canals, and bridges. They have also been used to raise money for military campaigns and as an incentive for citizens to participate in community activities. In the United States, lotteries were first organized to help finance the Revolutionary War and later were used as a source of funds for state and local projects. They also played a significant role in financing colonial America's defense of Philadelphia and other cities. They have been used as an alternative to taxes, which were not popular at the time and which would have slowed down or even stopped the development of public projects. Alexander Hamilton wrote that the public would be willing to risk a small amount of money for a good chance of large gain. Many people buy lottery tickets as a low-risk investment, thinking that they have a great opportunity to win hundreds of millions of dollars. But the odds of winning are remarkably small, and that money could be better spent on other things such as retirement or college tuition. The government also benefits from the money that is spent on lottery tickets, and it receives billions of dollars in receipts from the lottery industry as a whole. These receipts are used to pay for the costs of running a lottery, but they could be going to fund other worthwhile projects such as schools and hospitals, which the general population would prefer. A lottery is a popular activity that many people enjoy, and it can be fun to play. Some people have a special knack for picking numbers, and they are able to win large amounts of money by picking specific combinations. If you have a knack for picking numbers, you might want to consider playing a local lottery. These games are often cheaper than big national ones, and they have better odds of making you a winner. Some people are able to win more than once by using a strategy that involves obtaining a lottery ticket for every possible number combination. Romanian-born mathematician Stefan Mandel was able to win 14 times by doing this. The strategy works by getting enough people together who can afford the cost of buying tickets that cover all the possible combinations. This can be done by a variety of methods, such as raising money through investors. There are a number of strategies to use when playing the lottery, but the one that will work best for you depends on how much you want to invest. If you're looking for a quick way to win money, try a state pick-3 game. This is a lot easier than trying to pick five or six different numbers.
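To get a feel for why the buy-every-combination strategy requires pooling money from many investors, it helps to count the tickets involved. Here is a short Python sketch of my own; the 6-of-49 format is an assumed example, as real games vary:

```python
import math

six_of_49 = math.comb(49, 6)  # distinct tickets in a pick-6-of-49 game
pick3 = 10 ** 3               # a straight pick-3: three ordered digits, 0-9

print(f"{six_of_49:,}")  # 13,983,816 tickets to cover every combination
print(f"{pick3:,}")      # 1,000 -> why pick-3 odds are so much friendlier
```

At even one dollar per ticket, covering the assumed 6-of-49 game would cost nearly $14 million, which is exactly why syndicates like Mandel's needed outside investors.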
The LDS Church has always had a keen interest in education–how, what, who, and why to teach. All of these issues were tackled by the early Church and principles were formed with varying degrees of success. Many of the formative principles have been set aside, however, in favor of modern methodology and the all-consuming role of public schools. As in so much that is “new and improved,” it is well to study the original model to determine what was intended at the outset. Often the passage of time and new add-ons that enter into the educational arena do little more than to complicate a doable, straight-forward approach to teaching. In Jack’s presentation, he addresses these issues.
Water is undoubtedly the most important element of human life. It is one of the few things that our body cannot do without, as approximately 60% of our body is made up of water. That being said, the importance of drinking adequate water has been stressed time and time again, and multiple rules and guidelines have been formulated for the mandatory water intake for humans. One rule which is drilled into our brains is the '8x8 rule', which means "Drink eight 8-ounce glasses of water a day." This rule is quite apt and easy to remember, but there's just one problem: it's outdated and somewhat misinterpreted. The Food and Nutrition Board in 1945 suggested that a person should consume one milliliter of water per calorie of food consumed. An average diet of 1900 calories per day therefore amounted to a total of 1900 milliliters (64 ounces) of water, giving rise to the '8x8 rule'. This interpretation overlooks the fact that the body also gets its share of water from the fruits and vegetables that we consume.

Keeping in mind changing lifestyles, the National Academies of Sciences, Engineering, and Medicine estimates that the human body needs:

About 15 cups (3.7 liters) of fluids per day for men.
About 11 cups (2.7 liters) of fluids per day for women.

Nutritionist Venu Adhiya Hirani says, "While the general belief is to drink eight to 10 glasses of water, it is advisable to drink 12 to 15 glasses of fluids, which includes water, tea, buttermilk, soup, etc. Many fruits and vegetables, such as watermelon and spinach, are almost 100 percent water by weight, which also contributes to the water intake."

Note: Your body will always remind you when it needs water. You can tell whether your water intake is adequate by the color of your urine. If your urine is colorless or light yellow, you can be assured that your body is adequately hydrated.

Speaking of urine, it is also important to know how much urine output is considered healthy and normal. An average person will urinate 6-7 times a day, but the frequency varies from person to person: anywhere from as few as 4 times a day to as often as 10 times a day. There's no absolute whatsoever, as it is heavily influenced by external factors like the weather, diet, intensity of physical exercise, health conditions, medications, etc. For example, a person with diabetes might urinate a lot more than average. As long as you are happy with your urine output, there's absolutely nothing to worry about.

Need to know more? Consult a health expert in less than 30 minutes on DocsApp. Help is just an app away!
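The 1945 guideline is simple enough to state as code. A tiny Python sketch of the arithmetic (illustrative only, not medical advice):

```python
ML_PER_CALORIE = 1.0   # 1945 Food and Nutrition Board suggestion
ML_PER_US_CUP = 236.6  # millilitres in an 8-ounce US cup

calories = 1900                          # example daily intake
water_ml = calories * ML_PER_CALORIE
print(round(water_ml / ML_PER_US_CUP, 1))  # ~8.0 cups: the origin of '8x8'
```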
<urn:uuid:e2085e5e-5eb0-44d5-af1f-6c7eb4889918>
CC-MAIN-2019-51
https://blog.docsapp.in/the-right-amount-water-intake-and-urine-output/
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488620.24/warc/CC-MAIN-20191206122529-20191206150529-00037.warc.gz
en
0.964462
586
3.375
3
The Evolution Deceit

The above verses indicate the existence of life forms unknown to people at the time of the revelation of the Qur'an. Indeed, with the discovery of the microscope, new living things too small to be seen with the naked eye were discovered, and people began to learn about the existence of these life forms, indicated in the Qur'an. Other verses which point to the existence of micro-organisms, which are invisible to the naked eye and generally consist of a single cell, read:

… He is the Knower of the Unseen, Whom not even the weight of the smallest particle eludes, either in the heavens or in the earth; nor is there anything smaller or larger than that which is not in a Clear Book. (Qur'an, 34:3)

… Not even the smallest speck eludes your Lord, either on earth or in heaven. Nor is there anything smaller than that, or larger, which is not in a Clear Book. (Qur'an, 10:61)

Micro-organisms, the members of this hidden world spread all over the planet, outnumber the animals on Earth twenty to one. These organisms, invisible to the naked eye, comprise bacteria, viruses, fungi, algae and Acarina (mites and ticks). They also constitute an important element in the balance of life on Earth. For example, the nitrogen cycle, one of the fundamental components of the formation of life on Earth, is made possible by bacteria. Root fungi are the most important element enabling plants to take up minerals from the soil. The bacteria on our tongues prevent us from being poisoned by food containing nitrates, such as salad stuffs and meat. At the same time, certain bacteria and algae possess the ability to perform photosynthesis, the fundamental element of life on Earth, and share that task with plants. Some members of the Acarina family decompose organic substances and turn them into foodstuffs suitable for plants. As we have seen, these tiny life forms, about which we have only learned with modern technological equipment, are essential to human life.

Fourteen centuries ago, the Qur'an indicated the existence of living things beyond those which can be seen with the naked eye. This is another spectacular miracle contained within the verses of the Qur'an.
<urn:uuid:030ed73a-6547-4ca0-99d7-aef813b6a9c2>
CC-MAIN-2013-48
http://www.evolutiondeceit.com/en/works/27497/The-existence-of-microscopic-life
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163047545/warc/CC-MAIN-20131204131727-00035-ip-10-33-133-15.ec2.internal.warc.gz
en
0.937408
497
2.671875
3
Tallest Mountain in the World Video for Kids

Mount Everest is the tallest mountain in the world. This mountain is over 60 million years old. It was formed by the movement of the Indian tectonic plate, and it grows about 4 mm higher every year due to geological uplift. Mount Everest is known as Sagarmatha in Nepal, a name often translated as ‘Forehead of the Sky.’ The height of this mountain is 29,035 feet. On May 29, 1953, Edmund Hillary and Tenzing Norgay became the first people to climb Mount Everest. The summit of this mountain is covered with deep snow all year long, and the temperature can drop to -80°F.

Fast Facts: –
- Mount Everest was surveyed by the British Great Trigonometrical Survey of India, long directed by Sir George Everest, and was identified as the world’s highest peak in 1852.
- Apa Sherpa and Phurba Tashi hold the joint record for the most Everest ascents, with 21 each.
- The wind can blow over 200 mph on the summit.
- Junko Tabei of Japan was the first woman to successfully climb Mount Everest, in 1975.
- Miura Yuichiro of Japan was the oldest person to climb Everest, reaching the summit at age 80 in 2013.
- There are 18 different climbing routes to reach the summit.
- There are approximately 200 dead bodies of climbers on the mountain and, surprisingly, these bodies serve as waypoints for other climbers.
- The average expedition to the summit of the tallest mountain in the world takes about 40 days.
- More than 200 people have died while trying to climb Mount Everest.
- Several thousand people have reached the summit.
- The Government of Nepal charges $70,000 for a permit for seven people to climb Mount Everest.
<urn:uuid:555cc486-fa4d-45d4-aee5-874c1de98b57>
CC-MAIN-2020-24
https://easyscienceforkids.com/tallest-mountain-in-the-world-video-for-kids/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419639.53/warc/CC-MAIN-20200601211310-20200602001310-00017.warc.gz
en
0.898213
654
3.28125
3
Wednesday, February 18. In South Carolina, Confederate General P.G.T. Beauregard warned against potential Federal attacks on either Savannah or Charleston: “To arms, fellow citizens!” In Virginia, a portion of the Confederate Army of Northern Virginia was transferred from Fredericksburg to positions east of Richmond to protect the Confederate capital from potential Federal attacks from the Peninsula between the York and James Rivers. In Kentucky, Federal authorities dispersed a suspected pro-Confederate Democratic convention. Skirmishing occurred in Tennessee and Kentucky.

Thursday, February 19. In Mississippi, Federals under General Ulysses S. Grant skirmished with Confederates north of Vicksburg. Skirmishing occurred in Virginia, Tennessee, and Missouri. Confederate President Jefferson Davis wrote to Western Theater commander Joseph E. Johnston that he regretted “the confidence of superior officers in Genl. Bragg’s fitness for command has been so much impaired. It is scarcely possible in that state of the case for him to possess the requisite confidence of the troops.” However, Davis was reluctant to remove Braxton Bragg as commander of the Army of Tennessee.

Friday, February 20. The Confederate Congress approved issuing bonds to fund Treasury notes. Skirmishing occurred between Federals and Indians in the Dakota Territory.

Saturday, February 21. In Virginia, two Federal gunboats attacked Confederate batteries at Ware’s Point on the Rappahannock River. In Washington, a public reception was held at the White House.

Sunday, February 22. To commemorate George Washington’s Birthday, the Central Pacific Railroad began construction on the transcontinental railroad project at Sacramento, California. Skirmishing occurred in Tennessee and Alabama.

Monday, February 23. Skirmishing occurred in North Carolina and Kentucky, and Union meetings were held at Cincinnati; Russellville, Kentucky; and Nashville, Tennessee.

Tuesday, February 24. On the Mississippi River, four Confederate vessels attacked the Federal gunboat Indianola. Among the attackers was Queen of the West, a Federal gunboat that had been captured and commandeered by the Confederates. Indianola was rammed seven times in the blistering fight, and Lieutenant Commander George Brown finally surrendered the ship, which he called “a partially sunken vessel.” This Confederate victory was a major setback to Federal river operations below Vicksburg.

Primary Source: The Civil War Day by Day by E.B. Long and Barbara Long (New York, NY: Da Capo Press, Inc., 1971)
<urn:uuid:acbea90c-30f1-49ff-b3f4-b353471f47c7>
CC-MAIN-2017-26
https://civilwarhistory.wordpress.com/tag/confederate-batteries/
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320491.13/warc/CC-MAIN-20170625115717-20170625135717-00582.warc.gz
en
0.943701
522
2.6875
3
Understanding the Cost and Benefits Of Installing Solar Panels

As the cost of utilities continues to rise, many homeowners are looking into making their residences fully dependent on solar power. As such, understanding the costs, benefits, and potential financial returns of moving to solar-panel power is extremely helpful and informative.

In the United States, the average household consumes approximately 10,600 kWh of electricity per year. This page covers some US prices alongside the main UK solar panel cost information. Continuously running appliances or those that draw heavily on electricity (like hot tubs) will raise this number considerably, as will the months when an air conditioner is in frequent or continual use. In the United States, electricity costs vary widely by state (as little as seven cents per kilowatt-hour in West Virginia, rising to 24 cents in Hawaii). For UK users, northern areas will consume slightly less electricity than elsewhere during the year (less in northern Scotland than Cornwall, for example). But these differences are negligible, and so solar energy remains a viable option for many households. The kinds of solar power systems used for United Kingdom residences range from 1 to 4 kW (kilowatts) and usually cost £2,000 to £8,100. The cost per watt (which includes parts, permitting fees, profit, labor, and overhead), however, has decreased massively in the last ten years.

The Cost Of Solar Panels

In the United States, in most states the price is between six and eight dollars per watt ($/W). As such, the cost of the actual solar panels themselves is only about thirty percent of the total price (followed by operational and system-balance costs, both at twenty percent). The average solar panel conservatively generates about ten watts per square foot, with a conversion efficiency of twelve percent. This translates into requiring about 100 square feet of solar panels for every kilowatt you want to generate. In order to maximize the output of your solar paneling, it is also important to know how many hours of sunlight per day you can expect, which varies widely by location (for US users, examples are highs of seven to eight hours a day in Arizona versus lows of three per day in Chicago). This will affect the size of the array: fewer panels are needed where sunlight is plentiful, and more where it is scarce.

An example of one family’s experience may be of illustrative help. They had a 4 kW system installed on their roof. (For reference, a 4 kilowatt system can generate up to 3,800 kWh a year in the south of England—the same amount of energy needed to turn the London Eye fifty times! In London, this kind of system will save nearly two tonnes of carbon dioxide every year.) So, assuming their electric bill costs £76.10/month, with a per-watt cost of £4.95, the total cost for their particular system would be £14,840.41. Further, they could receive various incentives and credits totaling over £7,001.63, making the actual total cost a bit over £7,838.78. (Since further credits, government incentives, and benefits can be received over time, the homeowner might actually pay even less than this.) With current FIT rates, homeowners can save about £266 per year. Homeowners are paid for the power they generate, whether or not they sell that power back to the grid.
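Putting the rules of thumb above into code makes the sizing arithmetic concrete. This is a rough sketch only: the ten-watts-per-square-foot figure comes from this article, while the 4 kW system size and four daily sun hours are illustrative assumptions, not recommendations.

```python
# Rough solar sizing from the article's rules of thumb.
WATTS_PER_SQFT = 10  # conservative panel output quoted above

def size_system(system_kw: float, sun_hours_per_day: float):
    """Return (panel area in sq ft, energy generated per day in kWh)."""
    area_sqft = system_kw * 1000 / WATTS_PER_SQFT  # ~100 sq ft per kW
    daily_kwh = system_kw * sun_hours_per_day
    return area_sqft, daily_kwh

area, daily_kwh = size_system(system_kw=4.0, sun_hours_per_day=4.0)
print(f"~{area:.0f} sq ft of panels, ~{daily_kwh:.0f} kWh per day")
# -> ~400 sq ft of panels, ~16 kWh per day
```

Note how the sun-hours term works in your favor: an Arizona roof with eight sun hours generates twice the daily energy of the same array getting four, which is why sunnier locations need fewer panels for the same output.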
Given the FIT rate (which will vary depending on when they sign up for it and other factors, such as their EPC rating and the size of their array; those installing after January of 2016 will get a considerably lower rate than those who installed the year before), you would receive about 4.39p per kilowatt-hour, which equals about £61 per year based on a 3 to 4 kW system. For the first year, this family would save an average of nearly forty pounds a month on electric costs (most months it would be higher). The threshold at which savings exceed costs is called “payback time”; a quick calculation is sketched at the end of this article. In this family’s case, they paid back the total cost of their system after nine years and six months.

How Many Panels?

It is important to determine how many panels can fit on your roof. Most homes in the UK can only support a 4 kW system, which equals around 15 panels. The more panels you have, the faster you earn your money back, as your system will generate more power. If you are unsure of how many panels your roof can support, most solar installer companies will come to your home and do a free inspection, and can recommend the approximate number of panels.

Branding is also profoundly important when considering which panels to buy. Higher-quality paneling tends to be more efficient, which means the panels are able to produce more energy (sometimes up to 14 percent more), which adds up over time. As mentioned above, the actual solar panels account for approximately thirty percent of total costs (not including installation or operational costs). Panels suitable for residential use and covering a range of one to four kilowatts will cost anywhere between £2,000 and £8,100 (though some estimates put the range at £5,000 to £8,000, assuming a VAT rate of 5 percent). It is very possible to purchase high-quality panels without paying an unnecessary amount of money. Ideally, homeowners should weigh cost per watt heavily, which measures cost relative to the electrical output of the panel. Given that panels have drastically decreased in price over the last decade, most homeowners will end up paying about £1.14 per watt for the panels themselves. You can earn a tariff for each kWh of electricity your system generates, and you can also earn the same for every kWh you export. Since costs vary widely between installers and products, you should ideally get quotes from at least three companies before you decide which to use. This will help maximize the money saved and also ensure you get the most efficient, highest-quality solar array for the price.

An inverter is a device which converts the direct current (DC) from the solar panels into the alternating current (AC) used by home appliances. By far the most common kind is the string or centralized inverter, which is the most cost-effective option in the United States. (The panels are arranged into “strings” which channel their energy into one inverter, hence the name.) It will need to be replaced before the 20- or 25-year solar panel warranty expires—this will cost you about £800 at current rates. Replacement can potentially be delayed by using micro-inverters, which cost more but offer higher power output and a lengthier warranty. Micro-inverters are becoming more popular among residential solar users, as they convert the energy at the panel level (in other words, right on the roof), without the need for a separate, centralized inverter.
Because centralized inverters can malfunction or otherwise generate less power due to shading from trees and other objects, the immediate conversion of energy provided by micro-inverters is making them a contender in the solar market. The conversion takes place at each individual panel, which allows the flow of power to be smoother and more reliable. Additionally, micro-inverters allow the homeowner to monitor the performance of each individual panel.

As far as installation costs are concerned, these will vary widely by county, as they include both labor costs and various permit and inspection fees. In addition, homeowners will need to account for the required building and electrical permits, any pertinent neighborhood covenant requirements, and approval from their local homeowners’ association, if they have one. It is best to choose a large and reputable solar installer, as they will likely be the most informed about all the permits and other hoops to jump through where you live.

Operational costs include monitoring, repair, insurance, maintenance, and overhead, which make up about twenty percent of the total cost. This amounts to about $4,000 to $8,000 for US users. However, this is the area with the greatest potential for cutting costs; not every family will need monitoring, and most will not need much maintenance either. Mostly, you just need to make sure the panels stay clean and are kept free of trees or other objects which might shade them. UK users have an added benefit: if they keep the panels tilted at a fifteen-degree angle, rainwater will naturally wash the panels and help maintain optimal performance.

Incentives For Solar Panels

There are lots of incentives available for those in the solar market. As such, the final total cost of your solar system depends heavily on where you live and the rebates, grants, and tax credits for which you qualify. Sometimes it is possible to cut costs by up to fifty percent, making solar energy even more affordable. In the United States, every solar household receives at least a thirty percent federal tax credit; UK users must formally apply to the Feed-in Tariff program to be eligible for various government incentives. US users can search the Database of State Incentives for Renewables and Efficiency (DSIRE) to determine the incentives for which they are eligible, while UK users can contact Ofgem for a list of energy suppliers authorized to handle applications and payments for the FIT program. Since some incentives are capped at certain power or cost thresholds, it is important to determine the system size for your home that maximizes these benefits. Your installer can help you determine this.

In recent years, new models and options have surfaced to reduce or even eliminate the up-front costs of transitioning to solar power. These options include power purchase agreements (PPAs), third-party ownership, and pay-as-you-go. These models are by far the largest reason why solar power has become so popular in recent years. Indeed, nowadays it is entirely possible for a household to switch to solar power for zero dollars down, and thus start saving money from the very first day. Leasing your system is a good way to do this; instead of paying the electrical utility company for its power, you lease the solar system and simply pay a fixed rate for the electricity the system produces.
Furthermore, the electricity rates that come with leasing are much cheaper than traditional utility costs, and are also typically locked in for fifteen years—in stark contrast to utility rates, which have been rising steadily for decades.

With concerns about the environment and the cultivation of renewable energy sources being a very public discussion in our day, along with the ever-present concern of rising home maintenance costs in an uncertain economy, more and more homeowners are choosing to switch their households to solar power. With recent technological advancements and an increasingly competitive, user-beneficial market, it seems that for many families, making the switch to solar power may be one of the smartest decisions they could ever make.
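Here is the payback arithmetic promised above, as a minimal sketch. The input figures are lifted from the worked example in this article (net cost after incentives, roughly £40/month in bill savings, about £266/year in FIT income) and are illustrative only; your own tariff, array size, and usage will change the result.

```python
# Payback time = net system cost / annual savings.
net_cost_gbp = 7838.78        # cost after incentives, from the example above
monthly_bill_saving = 40.0    # "nearly forty pounds a month"
annual_fit_income = 266.0     # FIT generation + export payments

annual_savings = monthly_bill_saving * 12 + annual_fit_income
payback_years = net_cost_gbp / annual_savings
print(f"Payback in about {payback_years:.1f} years")  # -> about 10.5 years
```

This lands near the nine and a half years quoted for the example family; the gap comes from the simplifications here (flat savings, no rising electricity prices, no panel degradation).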
<urn:uuid:c7a38188-59ce-447f-a6a2-a2f6f41a5d5d>
CC-MAIN-2021-43
https://www.expertsure.com/uk/home/cost-benefits-solar-panels/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00678.warc.gz
en
0.9535
2,261
3.078125
3
A Special Collections Exhibit: “1916 Easter Rising: To Strike for Freedom (Images from the Joseph McGarrity Collection)”

Five cases, densely packed with materials drawn from the McGarrity Collection housed in Falvey’s Special Collections, and one case with loaned artifacts provide a comprehensive view of the 1916 Easter Rising, which occurred in Ireland one hundred years ago on Easter Monday, April 24.

The backstory for the Easter Rising, the subject of the exhibit, dates from the English occupation of Ireland in 1169. Over many years and centuries, the Irish resisted and rebelled, but were always defeated. In 1801 England imposed “The Act of Union,” which annexed Ireland as part of the United Kingdom (Ireland, England, Scotland and Wales). The Home Rule party was created, as were the Irish Republican Brotherhood (IRB) and Sinn Féin (We Ourselves). There were rebellions in 1803, 1848, 1867 and 1916, all aimed at ending British rule. In New York, c.1867, the predecessor to Clan na Gael, an American Irish republican organization, was founded. Joseph McGarrity of Philadelphia (the donor of the McGarrity Collection) was a prominent member of Clan na Gael and a staunch supporter of the IRA.

Five cases are organized thematically: “Brothers, Rise! Your Country Calls,” “A Supreme Moment for Ireland,” “The Curse of the Irish Nation,” “Ireland for the Irish,” and “Who Fears to Speak of Easter Week?” These cases all contain materials from the McGarrity Collection, primarily books opened to show illustrations. Of particular interest in the “Brothers, Rise!…” case are a large photograph, “Joseph McGarrity, standing with gloves,” and a typed poem, “To the Fianna” [members “of a secret 19th century Irish and Irish-American organization dedicated to the overthrow of British rule in Ireland,” Merriam-Webster Dictionary], written by McGarrity in 1915. There is a photograph of the Na Fianna Eireann Congress of 1913 and a number of books, many open to display illustrations.

A Sinn Fein Rebellion Handbook: Easter 1916 (published in Dublin, 1917) and its map of Dublin occupy a prominent place in “A Supreme Moment for Ireland.” In “The Curse of the Irish Nation” there are again books opened to illustrations, a letter handwritten by Eamonn De Valera to Philadelphia (March 9, 1920) and a photograph of De Valera with the McGarrity family, c.1919. “Ireland for the Irish” displays books, a number of which feature women who were involved with the Irish nationalist cause. Two items of interest in “Who Fears to Speak of Easter Week?” are a copy of The Clan-na-Gael Journal (October 22, 1916), published in Philadelphia, and an article, “Editorial: Proclamation of the Irish Republic – 1916,” printed in The Irish People (April 10, 1982, p. 4), a newspaper published in the Bronx, New York.

The tall vertical case displays the proclamation, “Poblacht na hÉireann: The Provisional Government of the Irish Republic to the People of Ireland,” a small decorated Irish harp, an Irish Volunteers medal and a photograph of “Seán White, Co Derry, Staff Captain GHQ Dublin, Ireland.” On the bottom of the case are two framed copies “of excerpts of [handwritten] letter of provenance regarding the copy of the Irish Proclamation displayed above.” (From the accompanying placard.) These artifacts are on loan from an anonymous collector.

Anne Fitzpatrick, a history student, Laura Bang, and Michael Foight, with additional research provided by Craig Bailey, PhD, were the principal curators for this exhibit.
Joanne Quinn, team leader for Communication and Service Promotion, designed the graphics. The exhibit will remain on view until July 1. The digital exhibit is now live and can be viewed here.

On March 21 at 4:00 p.m. in the Speaker’s Corner, the Irish Studies Program, the Department of Theatre and Falvey will host “To Strike for Freedom, 100th Anniversary of the Easter Rising.” This event celebrates Irish culture in commemoration of the Easter Rising anniversary. Members of the Villanova community will present readings. The event is free and open to the public.
<urn:uuid:f6bd36ce-d1fc-4855-979e-c2c1118eb383>
CC-MAIN-2022-40
https://blog.library.villanova.edu/2016/03/14/a-special-collections-exhibit-1916-easter-rising-to-strike-for-freedom-images-from-the-joseph-mcgarrity-collection/
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00730.warc.gz
en
0.941528
960
3.515625
4
LINUX CLASSES - COMPRESSION, ENCODING AND ENCRYPTION

Can Linux Do File Encryption? Making Your Data Top Secret

Internet email is about as secure as sending a postcard--any postal clerk along the delivery path can read your message, since there's no envelope to protect it. So if you're concerned about your private email or files falling into the wrong hands, encryption is the solution. Encryption will scramble your message so that only the holder of the secret decryption key will be able to read it. The de facto standard for encryption is the PGP (Pretty Good Privacy) program, written by Phil Zimmermann. Actually, "pretty good privacy" is a pretty serious understatement. If you use a sufficiently long password key, the computing power required to crack the code that PGP uses becomes astronomically large.

Are You a Crypto Criminal?

PGP is such a good program that the United States government has classified it as a "munition" and made exporting it illegal, for reasons of national security. Apparently the feds are worried that the bad guys will be able to correspond in a way they can't decode. Although it is not illegal to send a message that has been encoded by PGP, you can't export (via email, FTP, or any other means) the PGP software from the United States, except to Canada (or from Canada, except to the United States), without a license from the federal government. The one strange exception to this rule is that printed books containing the PGP source code can be exported. It is also illegal to use PGP in some countries (it's legal in the United States), so if you are an evil terrorist or are plotting the overthrow of your government, check with your local authorities first before using PGP. The Crypto Law Survey at http://rechten.uvt.nl/koops/cryptolaw summarizes the legalities of PGP around the world. If you're interested in learning more about PGP, visit the International PGP Home Page.

Previous Lesson: Encoding and Decoding
Next Lesson: Accessing DOS Files

Comments - most recent first (Please feel free to answer questions posted by others!)

(12 Dec 2011, 08:21) better yet use steganography, encrypt a message inside a pic or music or video and send; to normal people they wouldn't know it is encrypted.

(24 Feb 2011, 20:11) GnuPG is the alternative for Linux. If you want it to run with the GUI and context menu, install "seahorse-plugins" and then you can right click and encrypt a file. You also need password and encryption keys for key management on Gnome. For KDE you can install kgpg and use that for all.

(05 Feb 2011, 08:46) looking for encryption software for linux

(02 Jul 2010, 21:57) Hi Bob... Are there any other encryption alternatives (non illegal) to PGP? Again, thank you for this great site!!!
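Picking up on the GnuPG tip in the comments above: GnuPG (gpg) is the standard open-source OpenPGP implementation on Linux, and it can be scripted. A minimal sketch, assuming gpg is installed and a key for the hypothetical recipient alice@example.com already exists in your keyring:

```python
# Drive GnuPG from Python via subprocess (no third-party modules needed).
import subprocess

def encrypt_file(path: str, recipient: str) -> str:
    """Encrypt `path` for `recipient`; gpg writes `path + '.gpg'`."""
    subprocess.run(
        ["gpg", "--yes", "--encrypt", "--recipient", recipient, path],
        check=True,
    )
    return path + ".gpg"

def decrypt_file(path: str, out_path: str) -> None:
    """Decrypt `path`; gpg will prompt for your private-key passphrase."""
    subprocess.run(
        ["gpg", "--yes", "--output", out_path, "--decrypt", path],
        check=True,
    )

if __name__ == "__main__":
    encrypted = encrypt_file("secret.txt", "alice@example.com")
    decrypt_file(encrypted, "secret-decrypted.txt")
```

The same two operations are one-liners at the shell (gpg --encrypt --recipient alice@example.com secret.txt, then gpg --decrypt secret.txt.gpg); the Python wrapper is only useful when encryption is one step in a larger script.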
<urn:uuid:aa2a2319-25e7-4ff9-aa12-1025a3507d94>
CC-MAIN-2017-09
http://lowfatlinux.com/linux-pgp-encryption.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00575-ip-10-171-10-108.ec2.internal.warc.gz
en
0.898922
744
3.109375
3
Certification of Persons The certification of persons is a tool intended to create confidence within the market and with the authorities and employers in the competence of certain individuals for the performance of certain activities. Confidence in the respective schemes for the certification of persons is generated through a globally accepted process of assessment and regular reassessment of the competence of certified persons described in international standard UNE EN-ISO 17024. The development of schemes for the certification of persons in response to rapid technological innovation and the growing specialisation of staff can offset differences in education and training and therefore be of assistance in the global labour market. The tool has also been proven to equip the markets of professional services with more transparent and symmetrical information, thereby allowing the clients of these professionals to make a more informed choice based on competence and increasing the transparency and competitiveness of these markets. Lastly, it is an effective self-regulation tool for unregulated professions that increases the level of requirements and helps prevent intrusion and fraud. The certification of persons has been used to varying degrees in different countries, both for the professions themselves and for training. In countries such as Germany, the Netherlands and the United Kingdom, for example, it has traditionally had a broader implementation and routine use than in Spain. The certification of persons is applicable to any category: people are certified in unique and varied activities and include health workers, welders, security staff, financial advisers, handlers of hazardous material, stockbrokers, engineers in diverse areas, gas appliance fitters, loss adjusters, real estate agents, verifiers of electrical installations, railway staff, project managers, operators of different types of equipment, etc. The "Accredited Bodies" section contains details of all organisations accredited by ENAC for the certification of persons. All ENAC documentation relative to accreditation criteria and procedures for product certification is available in the DOCUMENTS section. Examples of accredited activities - Welders of metallic materials - Plastic welders - Quality managers and technicians - Auditors of quality and environmental management systems - Project managers, project management professionals and technicians - Non-destructive testing operators - Sustainable building project advisers - Installers and experts in thermal buildings installations - Energy auditors and housing certification technicians
<urn:uuid:1a30443f-32a3-45d2-9eaf-9a818413dc23>
CC-MAIN-2018-47
https://www.enac.es/web/english/what-we-do/accredited-services/persons-certification
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741628.8/warc/CC-MAIN-20181114041344-20181114062859-00030.warc.gz
en
0.920629
453
2.625
3
stretch someone's patience

Definition of stretch someone's patience: to cause someone to lose patience. "Her bad behavior is stretching my patience (to the limit)."

Word by Word Definitions

stretch (verb): to extend (one's limbs, one's body, etc.) in a reclining position; to reach out, extend; to extend in length

stretch (noun): an exercise of something (such as the understanding or the imagination) beyond ordinary or normal limits; an extension of the scope or application of something; the extent to which something may be stretched

stretch (adjective): easily stretched, elastic; longer than the standard size

someone (pronoun): some person, somebody
<urn:uuid:b4cddcad-078f-493b-a00c-a6bf6f98cc50>
CC-MAIN-2017-34
https://www.merriam-webster.com/dictionary/stretch%20someone%27s%20patience
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.75/warc/CC-MAIN-20170823152006-20170823172006-00655.warc.gz
en
0.91711
165
2.859375
3
On the International Day of Peace this year, join us in our celebrations and in making resolutions for each of us to do our part in working toward environmental sustainability.

We cannot achieve the UN Sustainable Development Goals unless we work together to make this a better world for people, other animals and the environment. — Dr. Jane Goodall, UN Messenger of Peace, Founder of Roots & Shoots

START IN AUSTIN AND KEEP AUSTIN BEAUTIFUL

Nature provides many ways to find Peace within oneself and a community. There is research suggesting that simply standing barefoot on the Earth helps realign the rhythms of your body. But we have to take care of the Earth together to ensure it remains a viable source of Peace. Check out some upcoming ways you can pitch in and help Keep Austin Beautiful: http://keepaustinbeautiful.org/calendar/events

Leonardo DiCaprio's Address to the UN General Assembly: “Clean air and a livable climate are unalienable human rights. The people made their voices heard on September 20, 2014 around the world, and the momentum will not stop. The time to answer humankind’s greatest challenge is NOW.” – Leonardo DiCaprio, Actor and UN Messenger of Peace

Peace Day is a powerful day to do something in support of the environment. Consider coordinating a neighborhood clean-up, encouraging children to learn about people, animals and habitats in different parts of the world, starting a “Roots and Shoots” group in your school or neighborhood, planting a peace pole in a community space, or starting a neighborhood garden where youth can learn about growing food and good nutrition.

MORE ENVIRONMENTAL PROGRAMS AND ACTIVITIES TO INSPIRE YOU

Rising Youth for a Sustainable Earth (RYSE): Activating the Youth-Led Climate Movement

Roots and Shoots Youth and Environment Program: Roots and Shoots was founded in 1991 by the renowned Dr. Jane Goodall. It empowers youth to identify and respond to problems in their world. Roots and Shoots is now in over 120 countries across the world, and is based on the values of KNOWLEDGE, COMPASSION, ACTION.

Dr. Jane’s Message to Youth

Any Peace Day Austin activity, service, project, or event can also be part of the Compassion Games. Are you ready to challenge each other to make the world an even more compassionate place to live? Game On! Register here to play. Questions? Contact firstname.lastname@example.org.
<urn:uuid:0be1c083-7638-49f7-97c0-d4d9a4ecda68>
CC-MAIN-2020-50
https://www.peacedayaustin.org/environment
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00233.warc.gz
en
0.900811
535
2.828125
3
Most of us have heard about type 1 or type 2 diabetes, but type 3 diabetes barely puts a blip on the radar. Although discovered in 2005, this new condition is just beginning to pop up in the headlines of today’s science and medical news journals. Laypeople still have a lot to learn. When it comes to type 3 diabetes, Wikipedia doesn’t even have the answers. The relative newness of the disease leaves people concerned about their health searching for answers. Read on for a quick primer on diabetes mellitus 3 and how it may be affecting your health or the health of your loved ones.

During a study conducted at the Rhode Island Hospital and Brown Medical School, researchers made a groundbreaking discovery: the hormone insulin was not produced solely by the pancreas, as previously thought. After careful study of their subjects, the researchers discovered that the brain also produces small amounts of insulin. This discovery led to several more important revelations, one of which was the discovery of insulin’s effect on the brain. Among those effects is the development of diabetes mellitus 3.

Type 3 diabetes is a condition in which the brain does not produce enough insulin. In the absence of insulin, the brain is affected much the way the body is in type 1 or type 2 diabetes. In fact, diabetes mellitus 3 only occurs in people who already have either type 1 or type 2 diabetes.

Diabetes mellitus 3 is also known as brain diabetes, because the brain requires insulin to form new memories. Receptors on the brain’s synapses help facilitate the communication that creates new memories. The insulin produced by the brain wards off the amyloid beta-derived diffusible ligands (ADDLs) that destroy those receptors. In diabetes mellitus 3, the brain either doesn’t produce enough insulin for new memory formation or is resistant to the insulin it produces. Without insulin, the insulin receptors die, and without those receptors, the brain can’t form new memories. This inability to form new memories is what produces the type 3 diabetes symptoms, signs and difficulties that mimic those of Alzheimer’s and dementia. Sufferers experience the memory loss and confusion typical of both diseases. Because of the similarity of these diseases, doctors often have trouble diagnosing diabetes mellitus 3 unless they are specifically looking for it using Magnetic Resonance Imaging (MRI) scanning technology.

Diabetes mellitus 3 was only officially recognized as an illness in 2005, but doctors already know quite a bit about how to treat the disease. Much of that head start is thanks to the fact that the treatment for type 3 diabetes symptoms is very similar to the treatment for diabetes mellitus 2. One of the keys to treating and preventing the onset of diabetes mellitus 3 is exercise. Regular exercise three to five times a week, combined with a healthy diet, helps to maintain the healthy weight that wards off the disease. Obesity — especially in women — is a key factor in the onset of both type 2 and type 3 diabetes. Doctors also treat diabetes mellitus 3 with the same drugs used to treat type 2 diabetes, like regular doses of insulin and insulin-sensitizing rosiglitazone. These drugs slow and may even prevent further memory loss by protecting the brain’s neurons from the damaging ADDLs. Cholesterol buildup is another similarity between diabetes of all types and Alzheimer’s. Certain preliminary trials have found that the lipid-lowering drugs used to fight high cholesterol are effective in treating diabetes mellitus 3, and many type 3 diabetes sufferers are turning to them for relief.

Diabetes mellitus 3 is a newly discovered disease that leaves many questions still to be answered. But as we discover more about all types of diabetes, treatments are improving. If you or someone you know is suffering from the symptoms of type 3 diabetes, Mayo Clinic and Wikipedia searches aren’t enough. Contact your doctor as soon as possible to catch and treat type 3 diabetes in its early stage.
<urn:uuid:1f4968c6-0e51-4e97-ad30-d41b9ead07e7>
CC-MAIN-2015-35
http://dealingwithdiabetes.org/type-3-diabetes-attacks-your-brain/
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065375.30/warc/CC-MAIN-20150827025425-00302-ip-10-171-96-226.ec2.internal.warc.gz
en
0.934508
832
3.03125
3
From People for the Ethical Treatment of Animals (PETA) Archaic law forced shelters to sell animals to labs for experimentation. Following PETA's release of findings from a shocking eight-month-long undercover investigation of dog and cat experiments at the University of Utah, state legislators have overwhelmingly passed House Bill 107 (Animal Shelter Amendments), which amends an archaic state law so that government-run animal shelters will no longer be forced to sell homeless dogs and cats to laboratories upon request for use in cruel and deadly experiments. The new law also lengthens the required holding period for animals in animal shelters and mandates that animal shelters make greater efforts to find the guardians of lost animals. Gov. Gary R. Herbert signed the bill into law on Saturday. In the wake of the new law, PETA is urging Utah animal shelters to formally prohibit the sale of animals to laboratories and will be pushing for the University of Utah to stop purchasing cats and dogs from animal shelters once and for all. Until Saturday's amendment, Utah was one of only three states in the country that still forced animal shelters to engage in this practice. PETA's undercover investigation at the University of Utah, which was released just five months ago, revealed that more than 100 homeless cats and dogs from animal shelters in Utah are sold to the university every year for use in invasive, painful, and deadly experiments. "We congratulate the Utah legislature for acting so quickly after our investigation to recognize that dogs and cats at animal shelters aren't laboratory tools," says PETA Vice President of Laboratory Investigations Kathy Guillermo. "Now Utah's public animal shelters should focus on what they're supposed to: providing a haven for lost and homeless dogs and cats."
<urn:uuid:ad0ce0d6-8aa7-4fbb-897d-628c9c65a18d>
CC-MAIN-2014-23
http://www.all-creatures.org/articles/ar-pound.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274289.5/warc/CC-MAIN-20140728011754-00482-ip-10-146-231-18.ec2.internal.warc.gz
en
0.94218
348
2.65625
3
We studied waterfowl use of grass-sage stock ponds in north-central Wyoming during the 1988 and 1989 breeding seasons. Dabbling ducks, particularly mallards, were the most common breeders. Indicated breeding pair density averaged 2.7 pairs/ha of wetland surface, while brood density averaged 1.0 brood/ha of wetland surface. Waterfowl use and productivity were greatest on large (>3 ha), clear, deep ponds with grass shorelines and abundant submergent macrophytes. Pair use was positively correlated with water clarity, pond area, and macroinvertebrate diversity. Brood use was related to macroinvertebrate diversity, pond depth, and the Shoreline Development Index. We recommend that management priority be given to ponds deeper than 1 m, which hold a larger volume of clear water in which macrophytes can become established. Macroinvertebrates should be artificially introduced into ponds, and fencing should be used to improve ponds for waterfowl use and brood rearing.
<urn:uuid:92a8b204-18af-4739-93f4-136d303e91ec>
CC-MAIN-2017-34
https://pubs.er.usgs.gov/publication/70020661
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105922.73/warc/CC-MAIN-20170819201404-20170819221404-00235.warc.gz
en
0.936372
204
2.6875
3
August 25, 2011

‘Hidden’ Differences Of Chromosome Organization Become Visible

Why do different species have dissimilar sets of chromosomes? Why do differentiated species often conserve apparently identical chromosome complements? And why, while chromosome rearrangements can considerably change the course of species evolution, does certain variation among individuals and populations of some species persist indefinitely? Such questions motivate researchers to compare chromosomes in closely related species. To understand the nature of chromosome changes in the voles of the Microtus savii group, researchers from the Rome State University "Sapienza" launched a molecular cytogenetic study. Three of the five Italian forms of pine voles showed remarkable differences in the chromosomal distribution of two molecular markers. By analyzing these data and weighing them against previously obtained genetic information, the authors expect to improve the taxonomy of these rodents and to track the pathway of their chromosomal evolution.

The Italian pine voles have long been known as a "species complex," namely the Microtus savii complex. The group includes five "forms": "savii," "brachycercus," "nebrodensis," "niethammericus," and "tolfetanus," distributed throughout the Apennine peninsula. The most widely dispersed is "savii"; "brachycercus" lives in Calabria, "niethammericus" inhabits the southeastern part of the peninsula, and "nebrodensis" is restricted to Sicily. These ground voles have evolved at different times, either with or without chromosomal rearrangements.

The chromosomal distribution of specific genes and DNA sequences can help to distinguish between related species with very similar, apparently identical, chromosomes. By localizing such molecular "markers" on chromosomes, or so-called "physical mapping," researchers reveal differences that are normally invisible under the microscope. These differences indicate "hidden" processes of chromosome diversification.

Original source: Gornung E, Bezerra AMR, Castiglia R (2011) Comparative chromosome mapping of the rRNA genes and telomeric repeats in three Italian pine voles of the Microtus savii s.l. complex (Rodentia, Cricetidae). Comparative Cytogenetics 5(3): 247. doi: 10.3897/CompCytogen.v5i3.1429

Image Caption: This is the Italian pine vole, Microtus savii. Credit: Alexandra M.R. Bezerra
<urn:uuid:17426586-162c-47fd-a092-c56a7b7f8741>
CC-MAIN-2017-47
http://www.redorbit.com/news/science/2601574/hidden-differences-of-chromosome-organization-become-visible/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805687.20/warc/CC-MAIN-20171119153219-20171119173219-00116.warc.gz
en
0.8517
513
3.828125
4
W.J. BERANEK and G.J. HOBBELMAN
Delft University of Technology

Lattice models provide the opportunity to introduce the structure of a brittle material into its mechanical behaviour. The advantages and disadvantages of regular and irregular lattice models are discussed. In this paper, regular lattices are applied throughout. To improve unsatisfactory behaviour in certain directions, several adjustments have been worked out regarding the failure criterion and the post-peak behaviour. This has primarily been done for an ideal brittle material, and the adaptations required for masonry walls are shown.
<urn:uuid:fe09a8ac-16df-4a95-94c4-dfa68ce20569>
CC-MAIN-2021-21
https://www.masonry.org.uk/downloads/recent-development-of-the-lattice-model-for-in-plane-loading-of-masonry-walls/
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00138.warc.gz
en
0.935794
116
2.671875
3
Teen cannabis use linked to lower-IQ adults

Teenagers who become dependent on cannabis may pay for it in adulthood with a significantly lower IQ, reveals a long-term New Zealand study. Dr Madeline Meier, of Duke University in the US, and colleagues compared participants' IQs at age 13, before any cannabis use, with their IQs twenty-five years later. Their results appear in the Proceedings of the National Academy of Sciences. Tracking 1037 people born in Dunedin in 1972/3, they interviewed participants at 18, 21, 26, 32 and 38 years and recorded whether they were dependent on cannabis at the time. People who were cannabis dependent at 18 had lower IQs at age 38, and the IQ fall was most severe if they remained dependent into early adulthood, dropping by an average of 8 IQ points. For a person starting with average intelligence (IQ = 100), an 8-point fall would put them in the bottom 30 per cent of the population for IQ, the authors say.

Adult use may be less damaging

As expected when tracking a cohort of the general population, only a small number (52) became cannabis dependent in their teens, but they showed deficits affecting not only IQ but also memory, attention and speed of processing, the authors say. Importantly, a further 92 people became cannabis dependent as adults, but they did not show a significant IQ drop, suggesting that the detrimental effect is specific to cannabis use during adolescence. The authors were able to rule out alcohol, hard drugs, recent cannabis use and tobacco as being responsible for the IQ decline. And although cannabis users tend to drop out of school, lack of education was also excluded as a cause.

Adolescent brain 'vulnerable'

"We know that the brain is undergoing important critical developmental changes from adolescence through the early 20s," says Meier, who suspects that cannabis is damaging at this crucial time. "I think the study shows that cannabis use in adolescence can have long-term effects on mental abilities," she says, advising adolescents not to use it because their brains are "vulnerable and still developing". If people are determined to use it, we should be concentrating on delaying use until adulthood, she argues.

Commenting on the study, Dr Matthew Large of the University of New South Wales School of Psychiatry endorsed the research. "You don't get any better information than this," he says, adding that it is a large and well-conducted study. "It adds to a growing body of knowledge that cannabis is not an entirely benign compound," remarks Large. He says we need to be informed about the potential consequences of its use, particularly as it is popular amongst adolescents. "Twelve-year-olds in Australia are more likely to have smoked a joint than a cigarette," he says.
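The "bottom 30 per cent" figure can be checked with a couple of lines of code. A quick sketch, assuming IQ is normally distributed with mean 100 and standard deviation 15 (the usual convention; the article itself does not state the standard deviation):

```python
# Where does an IQ of 92 (100 minus the 8-point drop) sit in the population?
from statistics import NormalDist

iq_after_drop = 100 - 8
percentile = NormalDist(mu=100, sigma=15).cdf(iq_after_drop)
print(f"IQ {iq_after_drop} is at roughly the {percentile:.0%} mark")
# -> IQ 92 is at roughly the 30% mark
```

The cumulative distribution function puts IQ 92 at about the 30th percentile, which matches the authors' claim.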
<urn:uuid:74c3f775-3160-4288-a72a-4c3e0ba1a60d>
CC-MAIN-2017-43
http://www.abc.net.au/science/articles/2012/08/28/3576748.htm
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824894.98/warc/CC-MAIN-20171021190701-20171021210701-00660.warc.gz
en
0.968311
573
2.671875
3
An hour after Baum advanced eastward along this route, he received a report that the enemy was in the area. He halted and retraced his steps to a location Wasmus describes as being “one mile [east of] the place where [they] camped last night.” Finding the report to be false, Baum was prepared to proceed when he received a letter from Burgoyne instructing him to post himself on the Batten Kill and to await instructions. Wasmus reports that Generals Burgoyne and Phillips met with Baum; there is no account of what was said between the men.

We had hardly covered one mile in the woods when we went back again and made our camp one mile behind the place where we camped last night. The reason for this was a false report stating that the enemy, a few thousand men strong, had occupied a post not far from us. This afternoon, Generals Burgoyne and Phillips came to us, talked a long time with our Lieut. Colonel Baum, and returned to the army.
Julius Wasmus, August 12, 1777

The Return of Eunice Campbell Reid

When on our way home from Burgoyne’s camp we stopped several days at John McNeil’s. Whilst there a large party of Brunswickers, to the number of 30 or more, came and went into Mister McNeil’s…
<urn:uuid:becfc6fd-1fa1-4edf-bef9-39396985cf72>
CC-MAIN-2018-34
http://passageport.org/bennington/baum-site-9-batten-kill-encampment/
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210362.19/warc/CC-MAIN-20180815220136-20180816000136-00649.warc.gz
en
0.970727
311
2.71875
3
Harvard psychologist and researcher Nancy Etcoff, along with a biotechnology company, Deep Longevity, has identified a machine learning approach to human psychology by creating two digital models based on data from the Midlife in the United States study. The findings were reported in a research paper published in Aging-US. The first model is an ensemble of deep neural networks, which predicts the respondent’s chronological age and psychological well-being in the next ten years using the information from a psychological survey. The model illustrates the trajectories of the human mind as it ages. The model demonstrates that the respondent’s capacity to form meaningful connections, mental autonomy, and environmental mastery develops with age. It also suggests that the focus on personal progress is constantly declining; however, the sense of purpose in life only fades after the age of 40 to 50. The results from the first model add to the growing knowledge on hedonic adaptation and socioemotional selectivity in the context of adult personality development. The second model mentioned in the paper is a self-organizing map created to serve as the foundation for the recommendation engine for mental health applications. The unsupervised learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and then determines the shortest path toward mental stability for any individual. Deep Longevity has released a web service FuturSelf, to demonstrate this system’s potential. FuturSelf is a free online application that allows users to take the psychological test described in the original publication. At the end of the test, users receive a report with insights aimed at improving their long-term mental health. Also, users can enroll in a guidance program that provides them with a steady flow of AI-chosen recommendations.
<urn:uuid:24bc724a-427b-4ab4-8243-f3caab1acfbe>
CC-MAIN-2023-50
https://analyticsdrift.com/harvard-psychologist-identifies-machine-learning-approach-to-human-psychology/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00768.warc.gz
en
0.933046
354
2.890625
3
The face of adult education is evolving, with the Common Core Standards driving intense curriculum changes and computer-based testing challenging the traditional delivery methods of the GED Test. Beyond the GED Test, McGraw Hill is developing an alternative test called the TASC, and Educational Testing Service is introducing the HiSET exam. There are many other exam initiatives being bounced around too, so those are probably not the only names you will hear locally and nationally as we move closer to January 2014. The new testing options and CBT delivery are game changers in the field of adult education.

“On January 2, 2014, GED Testing Service will unveil a new assessment in all jurisdictions (except Canada) that ensures the GED® testing program is no longer an endpoint for adults, but a springboard for more education, training, and better-paying jobs. The new assessment will continue to provide adults the opportunity to earn a high school credential, but it goes further by measuring career- and college-readiness skills that are the focus of today’s curriculum and tomorrow’s success.” ~ from GED® Testing Service

This new series will reflect the academic modifications needed to keep the test current and will integrate the broad technological advancements and uses of technology in today’s society. As in series past, educators and testers will not only note the standard academic changes in content, but will also experience a change in format. A computer-based delivery format will be fully integrated into the GED® Test. In 2014, all testers sitting for the GED® Test will take the test on a computer at an official GED® Test center. The other players in the high school equivalency test arena are also introducing computer-based testing, but at a slower rate. HiSET will offer paper-and-pencil testing in 2014, as will McGraw Hill. However, both have plans to add CBT to their testing arsenals over the next few years. Computer-based testing is not going away; it may be delayed for some, but it is an inevitability.

For adult education programs across the nation, this means two things. One, students must be prepared with the background knowledge and skills needed to answer test questions correctly. Two, students must be equipped with the necessary computer skills to comfortably and successfully navigate the computer-based testing format.

In the video, the GED Testing Service® assures GED® candidates that they will need only “basic computer skills”. What are basic computer skills? Which ones will students need to be familiar with while taking the test on computer? The GED Testing Service® website gives these examples for testers and educators to get a feel for the kinds of computer familiarity and skills needed to navigate the test on computer.

GED® Test Tutorial http://www.gedtestingservice.com/GEDTS%20Tutorial.html
2014 GED® Item Sampler http://www.gedtestingservice.com/educators/itemsampler

i-Pathways, in its origins and development, has been and continues to be a leader in providing an internet-accessed, computer-based high school equivalency preparation program. Many of the i-Pathways lessons and activities correspond directly and secondarily to the computer literacy skills that will be needed for the CBT, no matter which testing product a program or state opts to use. Explore the units and lessons listed as examples of how i-Pathways prepares students for CBT. This is a document created by an i-Pathways program user.
Thanks to Doreen Balzarini and Cindy Lock of Illinois Valley Community College for creating this document. See the link to the Excel spreadsheet entitled i-Pathways Computer Skills and CBT.

NOTE: Adult educators should not confuse a student’s proficiency with digital devices with basic computer literacy. They are not the same. In 2011, the majority of GED® test candidates were 16-30 years of age. While 96% of people in that age bracket have a smartphone, only 70% have a laptop computer and even fewer, 55%, have a desktop computer. Something else to note: home usage of the PC is down 20% since 2008, according to a chart from a Morgan Stanley report examining the burgeoning tablet market. What has changed in the last few years? The growth of smartphones and tablets, says Morgan Stanley. As people use smartphones for simple computing tasks like web surfing, they use traditional PCs less.

Undoubtedly, many changes will need to take place in the classroom. Clearly, for many students, preparation will be key to successfully completing computer-based testing. Familiarity with technology in the form of a computer and mastery of certain computer skills will be an important part of each tester’s classroom experience. So how will programs go about choosing curricula that are relevant to the new GED® Test content while, at the same time, incorporating the kind of computer literacy skills that will be needed for computer-based testing now and in the future? Implementing or continuing to use i-Pathways, either as a supplement in your traditional GED® classroom or as a distance learning program, is a great choice that seamlessly addresses all of these areas. The i-Pathways team is ready to help you get ready for the future of adult education no matter what high school equivalency exam you use. Check out this blog entry for more info about how we align to the Common Core Standards and more. Contact Crystal Hack, i-Pathways Project Director, at firstname.lastname@example.org for more details on how to become an i-Pathways user at your program.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2012 July 9

Explanation: What did you do over your winter vacation? If you were the Opportunity rover on Mars, you spent four months of it stationary, perched on the northern slope of Greeley Haven and tilted so that your solar panels could absorb as much sunlight as possible. During its winter stopover, the usually rolling robot undertook several science activities, including snapping over 800 images of its surroundings, many of which have been combined into this 360-degree digitally compressed panorama, shown in exaggerated colors to highlight different surface features. Past tracks of Opportunity can be seen toward the left, while Opportunity's dust-covered solar panels cross the image bottom. Just below the horizon and right of center, an interior wall of 20-kilometer Endeavour Crater can be seen. Now that the northern Martian winter is over, Opportunity is rolling again, this time straight ahead (north). The rover is set to investigate unusual light-colored soil patches as it begins again to further explore the inside of Endeavour, a crater that may hold some of the oldest features yet visited.

Authors & editors: Jerry Bonnell (UMCP)
NASA Official: Phillip Newman. Specific rights apply.
A service of: ASD at NASA / GSFC & Michigan Tech. U.
Americans Drive Solo to Work, Despite Long Delays and Rough Roads

The patterns by which people, goods, and services move through our economy have changed drastically since the 1990s. Since 1980, the US population grew by 43%, the same amount that vehicle miles traveled grew in rural areas. But the number of miles driven in urban areas grew by 163%. Vehicle registrations outgrew population, increasing 66% over the same period.

Americans overwhelmingly commute to work alone in cars, with three-quarters (76.4%) of the estimated 153 million people going to work every morning in America driving solo (Fig. 122). Though driving (including carpools) as a share of all commuting has shrunk a few percentage points since the early nineties, there was a net increase of 39 million people using vehicles to get to work since 1993. In 2017, just 5% took public transit, and nearly 3% walked or bicycled. Working from home is becoming more common: from 2007 to 2017, the number of people reporting that they work from home increased from 5.7 million to 8 million, a 40% increase.

Roughly half of all urban collectors (roads with speed limits of 35-55 miles per hour), which carried some 223 billion vehicle miles in 2017, were rated unacceptably bumpy by a measure of road roughness (Fig. 120). One-quarter of urban arterials (roads with speed limits of 50-70 miles per hour), which carried some 1.1 trillion vehicle miles in 2017, were rated unacceptable. This urban road trend has held steady for the last decade. Rural roads remain in better shape than urban ones. Conversely, bridges have seen a significant decrease in structural deficiencies over time.

Since the 1990s, delays per commuter have increased, reaching a peak just before 2009. As of 2014, delays remained relatively level at 42 hours per commuter (Fig. 121).
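The growth figures quoted above are straightforward to reproduce. The short Python sketch below is illustrative only; the inputs are the numbers cited in this article, hard-coded rather than pulled from any live data source.

```python
# Illustrative check of the growth figures quoted in this article.
# All inputs are hard-coded from the text, not from a live dataset.

def pct_change(old: float, new: float) -> float:
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100.0

# Working from home grew from 5.7 million (2007) to 8 million (2017).
print(f"Work-from-home growth: {pct_change(5.7, 8.0):.0f}%")  # ~40%

# Solo drivers: 76.4% of an estimated 153 million commuters.
solo_millions = 0.764 * 153
print(f"Solo drivers: about {solo_millions:.0f} million")  # ~117 million
```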
It’s September, and our apple tree has been so laden with fruit this year that branches have actually cracked and broken off under the weight, as though taking Keats’ lines from Ode to Autumn –

To bend with apples the mossed cottage-trees
And fill all fruit with ripeness to the core…

far too literally. The lawn is now almost ankle deep in windfalls…

What is it about apples? Why are they so evocative? Why was the fruit of the Tree of the Knowledge of Good and Evil – not actually named in the Bible – assumed to be an apple? Why did the Firebird, in Russian folklore, steal golden apples from the garden of the Czar? Why did golden apples of immortality grow in the Garden of the Hesperides? Why was the Norse goddess Idun the keeper of golden apples which preserved the youth of the gods? Why was the Apple of Discord – with its inscription To the Fairest – an apple, and why were three golden apples so irresistible to Atalanta that she paused to pick them up and lost her race? (Mind you, that dress wouldn't help.)

The apple as the fruit of immortality, or perhaps equally of death, appears as a symbol in Celtic mythology too. Heralds from the Land of Youth might bear a silver apple branch, with silver blossom and golden fruit, whose tinkling music lulled the hearers to sleep – perhaps to everlasting sleep… And Arthur, after his final battle, went to the island of Avalon, the island of apples, to be healed of his mortal wound. Then of course there’s the apple given by the wicked Queen to Snow-White, one bite of which sends the little princess into a death-like sleep.

Apples are tokens of love and promises of eternity. In Yeats’s ‘The Song of Wandering Aengus’, the lovelorn Aengus seeks forever the beautiful girl from the hazel wood.

Though I am old with wandering
Through hollow lands and hilly lands
I find out where she has gone,
And kiss her lips and take her hands;
And walk among long dappled grass
And pluck till time and times are done,
The silver apples of the moon,
The golden apples of the sun.

But such an eternity is probably also the land beyond death.

Where do apples even come from, and why are they so ubiquitous? Why, even today, are so many varieties available - even in supermarkets, usually the home of homogeneity? I went into our local Sainsbury's the other day and counted eleven different named varieties of apple all on sale at once: Empire, Royal Gala, Red Delicious, Golden Delicious, Cox’s Orange Pippin, Russets, Granny Smiths, Pink Ladies, Jazz, Braeburns and Bramleys. By contrast, there were only four named varieties of pears, and everything else was generic – bananas, strawberries, oranges, etc.

Apples are related to roses, I’m delighted to tell you. According to a rather lovely book called ‘Apples: the story of the fruit of temptation’, by Frank Browning (Penguin 1998):

‘In the beginning there were roses. Small flowers of five white petals opened on low, thorny stems, scattered across the earth in the pastures of the dinosaurs, about eighty million years ago. …These bitter-fruited bushes, among the first flowering plants on earth, emerged as the vast Rosaceae family and from them came most of the fruits human beings eat today: apples, pears, plums, quinces, even peaches, cherries, strawberries, raspberries and blackberries.

‘The apple [paleobotanists believe]… was the unlikely child of an extra-conjugal affair between a primitive plum from the rose family and a wayward flower with white and yellow blossoms of the Spirea family, called meadowsweet.’

Isn’t that wonderful?
Apples as we know them today developed in Europe and Asia. The Pharaohs grew them. The Greeks and Romans grew them. And they keep. You can store apples overwinter, eat them months after you’ve picked them: fresh fruit in hard cold weather when there’s nothing growing outside. So perhaps you would think of them as life-giving, immortal fruit. They smell fragrant. They feel good too: hard-fleshed, smooth, a cool weight in the hand.

The medieval lyric Adam lay y-bounden provocatively celebrates the Fall of Man when Adam ate the forbidden fruit:

And all was for an appil
An appil that he toke
As clerkes finden
Written in her boke.

It ends on the mischievously subversive thought that if Adam had not eaten the apple, Our Lady would never have become the Heavenly Queen:

Blessed be the time
That appil take was!
Therefore we maun singen:
Deo gracias!

Here is a poem by John Drinkwater (surely the most poetically-named poet ever!) which captures some of those mystical coincidences of apples, eternity, sleep, moonlight, magic and death.

At the top of the house the apples are laid in rows,
And the skylight lets the moonlight in, and those
Apples are deep-sea apples of green. There goes
A cloud on the moon in the autumn night.

A mouse in the wainscot scratches, and scratches, and then
There is no sound at the top of the house of men
Or mice; and the cloud is blown, and the moon again
Dapples the apples with deep-sea light.

They are lying in rows there, under the gloomy beams
On the sagging floor; they gather the silver streams
Out of the moon, those moonlit apples of dreams
And quiet is the steep stair under.

In the corridors under there is nothing but sleep.
And stiller than ever on orchard boughs they keep
Tryst with the moon, and deep is the silence, deep
On moon-washed apples of wonder.

Apple tree: Author's garden
Atalanta racing Hippomenes: Willem van Herp, c1650
Adam and Eve: Lucas Cranach, 1537
Apple Tree: Arthur Rackham
Blenny or Bream: Who can tell?

These two different families of fish look similar in the photo, but it's the video that really showcases mimicry at work. During one of his innumerable dives, Ned DeLoach tracked a juvenile bream (Scolopsis bilineatus) to see if it would lead him to a fangblenny (Meiacanthus grammistes), a species this bream is known to mimic. Sure enough, the bream connected with a fangblenny, and Ned captured footage of the two side-by-side. The video helps you appreciate how effective mimicry really is — much more so than photos alone could. The bream not only looks like the fangblenny, but it also behaves like one. This type of mimicry is called Batesian mimicry, where a harmless species (the baby bream) imitates a harmful species (the fangblenny, which is venomous) to fool predators.

This isn't the first Batesian mimicry footage recorded by Ned DeLoach. We previously shared another video of a baby sole fish imitating a (get this!) flatworm! Make sure to also watch these videos to witness just how amazing mimicry in action can truly be.

Read more at our friend's website blennywatcher.com. Ain't nature cool?
The Eclipse-class star dreadnought is an extremely large warship, of which only two were ever made. Both were constructed to serve as Darth Sidious's personal flagship. They're 17 kilometers long, a bit shorter than the Executor-class ships, but much wider. Considering the initial statements about the size of the Executors, it's fairly obvious the Eclipse ships were meant to dwarf them.

Eclipses were among the most powerful warships to appear in Star Wars. They had enough firepower to engage entire Rebel fleets at a time, and their armor and shields were so strong they wouldn't need to think twice before ramming other ships (one of these rammed through the Galaxy Gun without taking any damage). In addition to their standard weapons and huge number of fighters, they also possessed gravity well projectors to keep enemies from escaping into hyperspace. The crown jewel of their features, however, was their own superlaser. Though not in the same league as the superlaser possessed by the Death Star, it was still powerful enough to sear continents and/or crack open a planet's crust. The resulting heat, and the superheated chunks of the planet's crust being ejected, would likely kill everything on it.
The famous family of Dilli or Dehli, or Delhi as the British called it, came to the Indian subcontinent in the seventeenth century, and they were not known as Khairis then. The ancestor who came to India was Khair ullah. Only when the family started to split up, going to various parts of the world, was it decided that a common name or title be given to it for purposes of identification. The name, or title, chosen was Khairi, and it has stuck ever since.

The author Saad Rashidul Khairi’s grandfather, Allama Rashidul Khairi, was very famous and earned a name for himself as a prose writer. He wrote about people, real people; since he was able to realistically portray the characters of the women of his age and wrote about their limitations and miseries, he was given the honorific ‘musawir-e gham’. Rashidul Khairi was also one of the first novelists, so to say, who started to write prose away from the traditional form, the dastaan. The novel, with its greater realism and less stylisation, was the product of European literary interaction with local literatures, and he was one of those who tapped into it and crafted a form out of it in Urdu prose.

His son and Saad’s father, Razik ul Khairi, who also championed the cause of women, was running a very successful publishing house. The magazines he published that were meant for women could only reach them through an act of subversion. It involved a bit of espionage and intelligence work, as women were supposed to access these magazines without the knowledge of the members of their family, howsoever close. When Razik migrated to Pakistan, the most cherished possession he smuggled with him was the addresses of the women readers who subscribed to his magazine.

But, as Razik moved to Pakistan, everything changed. The first section of this biography, regarding colonial India, is a realistic account of the lives of the ‘mussalman ashraaf’ in the twentieth century. It revolves around their values, customs, traditions, rituals, and economic activity. It offers a great portrayal of educational institutions and the atmosphere there. Even for people who moved to Pakistan and were unhinged from a lifestyle that spanned four to five centuries, that era seems like a chapter out of a fairy tale or a folk dastaan. The values and lifestyle were so different as to be called medieval; the hurried or rushed migration catapulted them into the here and now of the twentieth century.

Saad Rashidul Khairi had taken the public services exam in united colonial India, but everything was disrupted due to independence/partition. When he came to Pakistan, he was chosen for the Foreign Service after an interview. Then begins the second part of the book, while the third part deals with his post-retirement phase, when he worked for a Saudi newspaper and later came back to Pakistan to spend the remaining years of his life. His tenure at the foreign office was not particularly fulfilling, as he met with narrow-minded superiors and colleagues who were not really interested in the work they were doing. It is a revelation that he also worked as an undercover agent, in a cover posting, for the CIA. It gives us an insight into the very close relationship that Pakistan has had with the intelligence agencies of the United States (US) and the so-called free world. He was also an eyewitness to how the Soviets were offended when, after a warmer beginning, they were sidestepped by Pakistani politicians for a closer link with the US.
His stay in Saudi Arabia made him a critical observer; thus, he gives a very honest account of the lives and values of the Saudis. The state of the media was tightly controlled and the lives of the people equally so; this made them manipulate the system to deviate from it.

The entire story is extremely personal, a recounting of his experiences as an individual. It would have been better if he had also given his own views about the foreign policy that the state of Pakistan adopted right from the very beginning. He just hints at the change of policy with Ghulam Muhammad assuming the reins of power after the assassination of Liaquat Ali Khan. He could have mentioned the critical turns the foreign policy took and its impact, positive or negative, on the state. It would also expose his predicaments, leanings and proclivities, which are missing from the account. There are hints about Pakistan getting an invitation from the Soviet Union, the start of the cold war and deteriorating conditions in the post-war era, but these come only in passing and get lost in other details, like his cantankerous relationship with his bosses as well as the life of diplomats abroad, especially those from the third world. The book also gives a good account of the life patterns in the so-called Muslim world, for he had the privilege of serving in Iran, Lebanon, Egypt, Algeria, Syria, Yemen, Sudan, Kenya and Ethiopia, the last two with sizeable Muslim populations.

After seeking early retirement from the Foreign Service, Saad Rashidul Khairi wrote a book on Quaid-e-Azam. He has a style that is very effortless, in the sense that words or the turn of phrase do not stand on their own but are only a means of expressing the intention. This is a very good and positive ability in a writer, and it’s a pity he did not write more. Like his grandfather, he could have stood on the laurels of his writings alone.

Aap Beeti Jag Beeti
Author: Saad Rashidul Khairi
Publisher: Maktaba Daniyal, Karachi, 2018 (second edition)
Price: Rs 850
Donkey Gives Birth to 'Zedonk'
by Brian Thomas, M.S. *

For 25 years, donkeys and zebras at the Chestatee Wildlife Preserve have shared the same pasture while maintaining separate identities. When a donkey gave birth recently, zookeepers were surprised to see the baby "zedonk." Its head was donkey-like, but its legs displayed clear zebra stripes, meaning that a zebra stallion was the father. The zoo, located 45 minutes north of Atlanta, Georgia, posted on its website: "Zedonks are extremely rare and we are very excited about her birth. Mother and baby are doing well and are on exhibit in a spacious pasture." [1]

Also called zonkeys, the offspring of zebras and donkeys are only known to occur among captive animals. Their rarity is probably due to the fact that members of the horse kind are picky about appearance. For example, white horses are often shunned by an otherwise colorful wild herd.

Crosses other than zedonks also occur. A "zorse" results from the union of a horse and zebra (or a "hebra," if the father is a horse). A mule comes from the union of a horse and donkey. Because zebras, horses, and donkeys can interbreed, this is very strong evidence that they are descendants of a single created kind--the horse kind. [2] However, these zedonks, zorses, and mules are usually sterile, due to a mismatch in chromosome number. Donkeys have 62 chromosomes, horses have 64, and zebras have 44. The Przewalski's horse--considered by both evolutionists and creationists to be most similar to the ancient population that gave rise to all of modern horse kind--has 66 chromosomes, while another, called Hartmann's mountain zebra, has only 32. [3]

Because of the differing chromosome numbers, it might be contended that God created each of these horse-like varieties separately. They have evidently existed in their current general forms for much of earth's history. For instance, by the time of Joseph, horses were already distinct from donkeys, and both had been domesticated. [4] The favored son of Jacob, Joseph was a very powerful official in Egypt, according to the Bible. He was probably called Mentuhotep by Egyptians and was the famous vizier under Pharaoh Sesostris I of the 12th dynasty. Archaeologist and author David Down stated, "Sir Alan Gardiner assigns a date of 1971-1928 B.C. to Sesostris I, but by a revised chronology he would have been ruling when Joseph was sold as a slave into Egypt in about 1681 B.C." [5] This was likely not more than a thousand years after the great Flood.

How could just two individual horses on Noah's Ark have given rise to pure-breeding varieties in such a short time? There is plenty of evidence that all these horse forms came from an originating few. The most direct evidence comes from the fact that they can interbreed. This marks them as belonging to one created kind, since Genesis 1 clearly indicates that each life form was created to reproduce within a distinct kind. Also, many of--if not most--instances of biological change are known to occur rapidly, often in one generation. For example, researchers working in France were surprised to find that separate species of bees, which normally pollinate their own particular species of orchid, nevertheless happened to cross-pollinate two orchids, yielding a new orchid that attracted yet a third bee species. [6] And Galapagos Island researchers never expected to find a hybrid of land and marine iguanas, but it also appeared suddenly in one generation.
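The chromosome mismatch mentioned above can be made concrete with a little arithmetic. The Python sketch below is a deliberately simplified, hypothetical model: it assumes each parent contributes exactly half of its chromosome count, which is the textbook picture, and it only illustrates why some hybrids end up with totals that cannot form matched pairs. (Real hybrid sterility also involves mismatched chromosome sets failing to pair during meiosis, even when the total happens to be even.)

```python
# Simplified model: a hybrid inherits half of each parent's chromosomes.
# Counts are the ones quoted in this article.
PARENT_COUNTS = {"horse": 64, "donkey": 62, "zebra": 44}

def hybrid_count(parent_a: str, parent_b: str) -> int:
    """Chromosome total for a hybrid of two parent species."""
    return PARENT_COUNTS[parent_a] // 2 + PARENT_COUNTS[parent_b] // 2

for a, b, name in [("horse", "donkey", "mule"),
                   ("zebra", "donkey", "zedonk"),
                   ("zebra", "horse", "zorse")]:
    total = hybrid_count(a, b)
    note = "even total" if total % 2 == 0 else "odd total, cannot pair evenly"
    print(f"{name}: {total} chromosomes ({note})")
# mule: 63 (odd), zedonk: 53 (odd), zorse: 54 (even, but the two sets still mismatch)
```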
Chromosomes can change rapidly, too. [7] All of this points to a God who is smart enough to have designed creatures with the ability to undergo rapid changes in order to fulfill His purpose for them--to be fruitful and multiply and fill the world's rapidly changing environmental niches. Zedonks are rare, but they offer a glimpse into the creative mind of the Maker of all life.

1. Chestatee Wildlife Preserve website at chestateewildlife.com.
2. Today, there are nine members of the horse kind that have produced 17 known hybrids.
3. Wichman, H. A. et al. 1991. Genomic Distribution of Heterochromatic Sequences in Equids: Implications to Rapid Chromosomal Evolution. Journal of Heredity. 82 (5): 369-377.
4. "And they brought their cattle unto Joseph: and Joseph gave them bread in exchange for horses, and for the flocks, and for the cattle of the herds, and for the asses [donkeys]: and he fed them with bread for all their cattle for that year" (Genesis 47:17).
5. Down, D. and J. Ashton. 2006. Unwrapping the Pharaohs: How Egyptian Archaeology Confirms the Biblical Timeline. Green Forest, AR: Master Books, 82.
6. Thomas, B. New Orchid Arose Too Fast for Darwin. ICR News. Posted May 18, 2010, accessed July 29, 2010.
7. Wichman et al., 372: "Cladistical analyses show that some taxa radically reorganize their genome in a relatively short time frame, whereas others exhibit long periods of karyotypic stasis."

* Mr. Thomas is Science Writer at the Institute for Creation Research.

Article posted on August 4, 2010.
Hip Cartilage Injury

Cartilage injury can occur on the ball (or femoral head) or the socket (acetabulum). When the gliding cartilage is injured, it can cause further problems in the hip as well as hip pain. While it shares the common theme of cartilage injury with hip osteoarthritis, it is different: hip osteoarthritis is a more widespread injury, affecting both the ball and the socket and associated with bone spurring, while a cartilage injury is more focal, or isolated to one area of cartilage in the hip.

Symptoms of hip cartilage injury are not very specific:
- Groin or buttock pain
- Locking or catching sensation in the hip
- Pain with weight bearing or deep hip flexion

Femoroacetabular Impingement (FAI)

Acetabular cartilage injuries are most commonly caused by Femoroacetabular Impingement Syndrome (FAIS), commonly called hip impingement. The cam shape initially causes tearing of the labrum. After the labrum, repetitive contact of the cam deformity begins to cause shearing and impact injury of the adjacent hip gliding cartilage (chondrolabral junction). This often causes fraying of the cartilage and delamination, or separation, of the gliding cartilage from the acetabular bone. This can also occur as a lever-type injury, seen with pincer-type FAI, but this is less common.

Femoral head and acetabular cartilage can also be injured by trauma. Traumatic hip injuries such as hip dislocation/subluxation, femoral head fractures, and acetabular fractures can cause injury to the cartilage in the hip. Previous hip arthroscopy or other hip procedures, often performed by less experienced hip surgeons, can cause cartilage scuffing or injury. This is commonly discovered on revision hip arthroscopy performed by Dr. Faucett. A history of hip trauma or previous hip surgery is commonly associated with cartilage injuries.

Dr. Faucett will perform a physical exam to determine if you have FAIS findings. Some impaction tests and range of motion exams can also be helpful. It is difficult to diagnose a cartilage injury on physical exam, as many of the symptoms and exam findings are also seen in other hip conditions such as FAI and labrum tears.

X-rays are used to help screen for osteoarthritis and determine if there is irregularity in the bones around the hip joint. They are also useful for diagnosing FAI syndrome.

MRI can better image the cartilage layers in the hip joint, though it commonly underestimates the severity of the cartilage injury. It can also be useful for finding loose cartilage pieces in the joint.

Diagnostic Arthroscopy: A needle arthroscopy can be performed to look directly at the cartilage in the hip joint. This can be more accurate than an MRI, as Dr. Faucett can directly view the cartilage rather than interpret MRI results.

Limiting impact activities can lessen the injury to the cartilage. Avoiding hip impingement positions and activities will also limit the damage to the acetabular cartilage. Dr. Faucett may discuss the role of hip injections to help manage the pain and cartilage injury. Injections can include:

- Cortisone – This is a powerful anti-inflammatory and can lessen inflammation and pain in the hip. It can also be helpful for diagnostic purposes, to determine how much pain is coming from the hip joint.
- Viscosupplementation – There are many forms of hyaluronic acid which can be injected to treat cartilage injuries in the hip. Dr. Faucett will recommend which one would be most helpful for you. These can be injected every 6 months if needed.
- Platelet Rich Plasma – This is a technique to harness the body's own potential to heal itself. It is considered an investigational treatment. It works to reduce inflammation and to recruit repair (stem) cells to an injured area to repair it.
- Stem cells – Using bone marrow aspirate, bone marrow stem cells can be injected into the hip to encourage the damaged cartilage to heal itself. This is an investigational treatment at this time, and there are not very many studies looking at the effectiveness or durability of these biologic treatments.

In cases where surgery is needed, the cartilage injury can be treated arthroscopically and, in some cases, with open surgery. The following are some repair techniques Dr. Faucett might use to treat cartilage injuries:

When the gliding cartilage is injured through damage from femoroacetabular impingement or trauma, it may need to be repaired. One technique to treat mildly damaged cartilage (less than 50% of its thickness) is chondroplasty, a method to remove or debride the damaged cartilage and leave the remaining cartilage intact.

When the cartilage damage is deeper (>50%) but of a small surface area, a technique called microfracture can be used to grow new scar-type cartilage. This technique makes small holes in the acetabular bone to allow the patient's own stem cells to populate the cartilage defect and grow new cartilage.

In some cases, when a deep defect (>50%) covers a large surface area, a cartilage/bone transplant can be performed. This is called an "osteochondral transplant." Healthy intact cartilage is harvested from someone who has donated their tissues after their death. After the cartilage is deemed free of any infection or communicable diseases and of the highest quality, it can be transplanted into the patient's defect and secured in place using a biologic glue.

In some cases, when the cartilage delaminates but is otherwise healthy looking, the cartilage can be "glued" back to the acetabular bone using a substance called fibrin glue. This is a collagen epoxy that starts as a liquid and becomes a gel, creating stability between the layers and a conduit for repair cells to heal the delamination.

Total Hip Replacement

In some cases, the cartilage damage is too extensive and a total hip replacement might be recommended.

Patients undergoing these procedures typically need crutches for 6-8 weeks. Range of motion is important to help the cartilage heal. This is often achieved with home circumduction exercises, continuous passive motion (CPM) machines, and an upright exercise bike. After 6 weeks, the patient is allowed to increase weight bearing as tolerated, and then to progress to more aggressive strengthening exercises. The cartilage should be resilient enough to start impact exercises around 4-6 months.

Success rates range from 80% to 90%, with most studies showing good or excellent results beyond 2 years. The earlier the diagnosis is made, the higher the chance the patient can be successfully treated with hip arthroscopy. If the problem is recognized earlier, we can limit the amount and severity of the cartilage damage. The more severe the cartilage damage, the harder it is to achieve a good outcome after surgery.
How to Manage Pests
UC Pest Management Guidelines

Scientific name: Helicoverpa (=Heliothis) zea
(Reviewed 12/09, updated 11/12, pesticides updated 6/16)

DESCRIPTION OF THE PEST

Tomato fruitworm adults are medium-sized moths with a wingspan of about 1 to 1.3 inches (25-35 mm). They are pale tan to medium brown, or sometimes have a slight greenish tinge. The front wings are variously marked and usually have an obscure dark spot in the center and a lighter band inside a dark band around the tip. The hind wings are drab white and have a dark gray band around their tip; a diffuse light spot sits in the center of the dark band.

At hatching, tomato fruitworm larvae are creamy white caterpillars with a black head and conspicuous black tubercles and hairs. Larger larvae vary in color from yellowish green to nearly black and develop fine white lines along the body but retain the black spots at the base of bristlelike hairs. Older larvae also have patches of stubby spines on their body segments that are much shorter than the bristles and can best be seen with a hand lens.

Eggs are tiny, hemispherical, and slightly flattened on top, with coarse striations or ribs running from base to tip. They are easy to confuse with looper eggs, but looper eggs have fine striations. Fruitworm eggs are laid singly on both upper and lower surfaces of the leaves, usually in the upper part of the plant. When first laid they are creamy white but develop a reddish brown ring after 24 hours.

Soon after hatching, the larvae burrow into the fruit, usually near the calyx, and remain inside, feeding on the flesh. Infested fruit decay, turn red, and fall off the plant early, reducing yield. Larvae consume very little foliage.

Regular monitoring of pepper fields is important in detecting and managing this pest. Weed control, site location, and biological control are important in reducing the potential for damage. Insecticide treatment may be necessary when monitoring indicates a need.

These insects have a wide host range. Weed control in the area can help to reduce the population; however, the moths can fly great distances. Avoid planting peppers near field corn or garbanzo beans.

Tomato fruitworm eggs can be heavily parasitized by Trichogramma pretiosum. Experimental releases of Trichogramma have resulted in control of fruitworm on pole tomatoes. Parasitized eggs are completely black. When any eggs are found, they should be held in vials for several days to determine the level of parasitism. The parasitic wasp Hyposoter exiguae attacks fruitworm larvae and can reduce fruitworm populations considerably; however, the worm will often die inside the fruit, and the parasite cocoon remains in the fruit as a contaminant.

Organically Acceptable Methods

Cultural and biological control and sprays of Bacillus thuringiensis or the Entrust formulation of spinosad are acceptable for use in an organically certified crop.

Monitoring and Treatment Decisions

Start monitoring for tomato fruitworm at the seedling stage and continue through harvest. Inspect the upper part of the plants for fruitworm eggs. Examine the eggs closely with a hand lens to determine the stage of development of the larvae and check for parasitism. If necessary, treat within 2 to 3 days after the head capsule has formed. There are no treatment thresholds. Timing of sprays is critical because the worms enter the fruit shortly after hatching and are thus susceptible to the pesticide for only a brief period.
In peppers grown for fresh market consumption, where fruit aesthetics are paramount, treatments may be needed as soon as egg laying is documented.

UC IPM Pest Management Guidelines: Peppers, Insects and Mites
E. T. Natwick, UC Cooperative Extension, Imperial County

Acknowledgment for contributions to Insects and Mites:
W. J. Bentley, UC IPM Program, Kearney Agricultural Center, Parlier
W. E. Chaney, UC Cooperative Extension, Monterey County
R. L. Coviello, UC Cooperative Extension, Fresno County
C. F. Fouche, UC Cooperative Extension, San Joaquin County
C. G. Summers, Kearney Agricultural Center, Parlier
Region: South America
Countries: Brazil, Colombia and Peru
Cities: Iquitos (Peru); Leticia (Colombia); Manaus, Santarém, Belém do Pará and Macapá (Brazil)
Tributaries: Marañón, Japurá/Caquetá, Rio Negro/Guainía, Putumayo (left); Ucayali, Purus, Madeira, Xingu, Tocantins (right)
Primary Source: Andes Mountains (Peruvian Andes)
Primary Source Location: Nevado Mismi, Arequipa, Peru
Mouth Location: Atlantic Ocean
Length: 6,400 km
Width: 1.6 to 10 km at low stages (expanding to 48 km in the wet season); 240 km at the mouth (Atlantic Ocean)
Basin: approx. 7,050,000 km²
Average Discharge: 209,000 m³/s

The Amazon River is the second longest river in the world, after the Nile. It has the biggest drainage basin in the world and covers roughly 30% of South America. The Amazon got its name from the fierce warrior women of Greek mythology known as the Amazons. The Spanish soldier Francisco de Orellana, who in 1541 became the first European to explore the Amazon, gave the river its name after reporting battles with tribes of women warriors, whom he likened to the Amazons of Greek mythology.

The river rises in the Andes mountains of Peru and flows from west to east, crossing through Colombia and Brazil before draining into the Atlantic Ocean. The Amazon rainforest is the largest rainforest in the world and is home to a wide variety of flora and fauna.

By volume, the Amazon is the largest river in the world. The width of the river ranges between 1.6 km and 10 km at low stage but expands to 48 km during the wet season. Before entering the Atlantic Ocean, the river widens into a broad estuary about 325 km across. Because of its immense size, the Amazon is also known as the River Sea.
Saturday, May 2, 2015

Looks Like Another Endangered Turtle

Several photographs had to be taken of this reptile for positive identification. People say turtles are slow movers, but that's a myth. When it stopped moving, I took some quick portrait shots. When it wanted to go somewhere, it moved along pretty quickly.

This time, I remembered to take a photo of the belly. Scientists would call this part of a turtle's body the plastron. Don't ask why. As soon as I had taken one picture of the underside, I set the critter upright. Then, I backed off with the camera and watched the turtle move across the field.

Comparing the photos with images on ARKive helped. The images and descriptions for Mauremys mutica are similar to mine. The Yellow Pond Turtle is an endangered species. Now we need to take the research a bit further and see if this turtle is a subspecies. It may be a Ryukyu Yellow Pond Turtle (Mauremys mutica kami). Somebody in the scientific community will have to make that determination.

The Asian Yellow Pond Turtle is found throughout Asia and the Ryukyu Archipelago. Subspecies M. m. kami turtles exist only on the Ryukyu Islands.

Turtle Scientists, your comments are welcomed!
Cognitive Behavioural Therapy

This course is designed to train students who are new to the field, as well as existing professional therapists or coaches who wish to add Cognitive Behavioural Therapy (CBT) methods to their practice. CBT is a form of talking therapy which helps you identify and understand your patterns of thought, and to realise how you can alter those patterns in order to change how you feel. This is done by focusing on how you think about the things happening in your life and how those thoughts affect your behaviour.

This CBT course is designed to enable the graduate to meet the National Occupational Standards for Counselling relevant to counselling work within the context of CBT, by including comprehensive core counselling modules.

How does the course work?

The time you can dedicate to your studies, from home, is entirely up to you. That is why our distance learning courses are able to fit around your other commitments such as work or family life. You will be responsible for your study schedule and the motivation to successfully achieve this qualification. It is also recommended that you spend 1-2 hours of your study time completing the question paper, which will need to be sent to your tutor at the end of each lesson.

- What is CBT
- Levels of change
- Counselling 2: Basic Methods and Basic Counselling Skills
- Developmental Psychology
- Psychotherapy (psychodynamic methods and history)
- Cognitive Behavioural Change (CBT) Part 1
- Cognitive Behavioural Therapy Part 2
- Cognitive Behavioural Therapy Part 3
- Cognitive Behavioural Therapy – Practical methods
- Coaching – Philosophy
- Coaching – Planning
- Coaching – Mind Sets
- Referral and Assessing Seriousness
- CBT for Depression and Anxiety 1: Working with Minor Emotional issues
- CBT for Depression and Anxiety 2: Working with more serious Emotional issues
- CBT for Depression and Anxiety 3: Managing depression, NEAD, Early warning systems, Bi Polar, Personality disorders. Management and cooperation with professionals
- CBT for relationships
- Complementary CBT Techniques 1
- Complementary CBT Techniques 2
- CBT forms and methods for other issues
- Anger and Stress Management Training
- Professional Practice

This course does not require any previous experience or qualifications; it is available to all learners who wish to enrol. The approximate time required to complete this course is 360 hours.

At the end of this course, you will receive a Certificate of Achievement from ABC Awards and a Learner Unit Summary (which lists the details of all the units you have completed as part of your course). The course has been endorsed under the ABC Awards’ Quality Licence Scheme. This means that learndirect has undergone an external quality check to ensure that the organisation, and the courses it offers, meet certain quality criteria. Completing the course on its own does not lead to an Ofqual-regulated qualification but may be used as evidence of knowledge and skills towards regulated qualifications in the future. If you want to progress your studies in this sector, this unit summary can be used as evidence towards Recognition of Prior Learning. The learning outcomes of this course have been benchmarked at Level 5 against the level descriptors published by Ofqual, which indicates the depth of study and level of difficulty involved in successful completion.

After you complete this course, you will be able to work as a counsellor.
You could choose to spend your time with individual clients on a one-to-one basis, or choose to work with couples, families or groups. You could even choose to counsel clients face-to-face, over the phone, or via the internet.

You may choose to build your career in Cognitive Behavioural Therapy by continuing your training and education to become a psychotherapist, working either for a health agency or as a teacher or mentor; you could even go on to open your own private practice. Psychotherapists can earn up to £100,000+*/year.

*Source: Payscale.com, Oct 2012

Need proof of your English and Maths skills? Want to catch up and learn more English and Maths? You can, for free, when you sign up to this learndirect course.

Get your NUS Extra card

All professional development students are eligible for the NUS Extra card, which gives you access to over 200 UK student discounts with brands like Co-op, Amazon and ASOS. Apply and find out more at http://cards.nusextra.co.uk/

Our 3-year 0% loans are provided by our partners Deko. Loan applications are processed over the phone with a member of our team, and a decision can be provided within a matter of minutes. All loans are subject to status and a credit check.

Call now to speak to a member of the team: 0800 101 901.
Although I call this the Forest and the Trees Strategy, I learned about this strategy for generating ideas from Edward de Bono in his excellent book Serious Creativity. The strategy is simply this: any time you want to generate ideas, examine the concepts at a tree or a forest level.

Here’s an example: imagine you are a dentist and you think there’s a market for a new type of toothbrush. The problem is you don’t know how you would redesign the toothbrush. You brainstorm for a few hours but feel you aren’t getting anywhere with the concept of ‘toothbrush’, especially when it comes to generating ideas.

So, you look at the ‘trees’: smaller concepts within the concept of the toothbrush. A smaller concept? Brushing away food. You can take the concept of brushing away food and generate different ideas based on that one concept. For example, you use a broom to brush dust and debris into a dustpan, but you could instead use a vacuum. What if you used a vacuum to suck away food stuck on your teeth?

How about another small concept? A toothbrush is used to apply toothpaste. You can apply toothpaste in different ways: like a paintbrush, as if you were applying paint; as a liquid gel that fills all the gaps, like a drain cleaner; or like wall repair filler that is one colour when you apply it and a different colour after some time.

Don’t like looking at smaller concepts? Look at bigger concepts for additional ideas. A toothbrush is part of an overall hygiene program. Hygiene is a similar concept to maintenance. One type of maintenance people do? Getting rid of their old clothing. Clothing can be purchased on a subscription: companies send you a box of clothing, and you pay for what you keep. What if you signed up for a subscription model with your dentist? They send you toothpaste, toothbrushes, floss, etc.; since they know your teeth, they can send you products tailored to your health needs. Or another big concept is a toothbrush as part of a travel kit. There are miniature versions of products. Why not a miniature toothbrush that doubles as floss?

I call this the Forest and the Trees Strategy because sometimes you need to get in among the trees and see individual concepts; other times you need to step back and look at the overall concept from above to see the forest. Each view gives you multiple concepts to work with, and each concept gives you multiple perspectives (and ideas) that you can ‘translate’ back to your original concept.
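For readers who like to see a method written out as a procedure, here is one way to model the strategy. This is a hypothetical sketch of my own: the concept map, its entries, and the function names are invented for illustration and are not de Bono's notation. It treats each concept as a node with narrower ('tree') and broader ('forest') neighbours, and collects the ideas those neighbours suggest so they can be translated back to the original concept.

```python
# Toy model of the Forest and the Trees Strategy: every concept has
# narrower (tree-level) and broader (forest-level) neighbours, and each
# neighbour suggests ideas we can translate back to the original concept.
concept_map = {
    "toothbrush": {
        "narrower": ["brushing away food", "applying toothpaste"],
        "broader": ["hygiene program", "travel kit"],
    },
    "brushing away food": {"ideas": ["vacuum food off teeth instead of brushing"]},
    "applying toothpaste": {"ideas": ["paint-on applicator", "colour-change paste"]},
    "hygiene program": {"ideas": ["dental subscription box from your dentist"]},
    "travel kit": {"ideas": ["miniature toothbrush that doubles as floss"]},
}

def generate_ideas(concept: str) -> list[str]:
    """Collect ideas from one level down (the trees) and one level up (the forest)."""
    node = concept_map.get(concept, {})
    neighbours = node.get("narrower", []) + node.get("broader", [])
    return [
        f"{idea} (via '{neighbour}')"
        for neighbour in neighbours
        for idea in concept_map.get(neighbour, {}).get("ideas", [])
    ]

for idea in generate_ideas("toothbrush"):
    print(idea)
```

Nothing about the data structure matters here; the point is the traversal: step down to the trees, step up to the forest, and carry whatever you find back home.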
How to Become an Anesthesiologist

Anesthesiologists are doctors who design and implement plans to usher patients safely through surgery. An anesthesiologist will meet with you before surgery to assess your health and readiness, administer anesthesia to dull or eliminate pain, monitor your vital signs and adjust anesthesia during surgery, and oversee pain management and safe recovery after your procedure.

To become an anesthesiologist, you must start with a bachelor's degree. Your goal is medical school, so your undergraduate course of study should prepare you with a broad base of knowledge in the sciences and liberal arts, and you should take the Medical College Admission Test (MCAT) before graduation. Earning a high score on the MCAT and volunteering or completing internships in health care can increase your chances of being accepted by the medical school of your choice.

Medical school typically takes four years to complete. The first two years consist mostly of classroom and laboratory learning, and the last two years are spent learning clinical practice under the supervision of medical professionals in a variety of health care settings.

After medical school, prospective anesthesiologists complete a four-year anesthesiology residency. Medical school graduates in the United States are matched with residencies through a national system called the National Resident Matching Program (NRMP). Residents train with highly skilled medical school faculty to learn how to practice their chosen specialty. Some doctors follow their residency with a fellowship to train further in anesthesiology specialties like pain management, pediatric anesthesiology, or obstetric anesthesiology.

Are there any certification or licensure requirements?

After completing a residency program, an anesthesiologist can obtain a license to practice medicine in their state and pursue certification with the American Board of Anesthesiology (ABA). Not all anesthesiologists hold board certification, but a state license is required to practice medicine. Each state has its own requirements for physician licensure, but they generally involve completing medical or osteopathy school, spending at least one year in a residency program, and passing licensing examinations. The multi-step United States Medical Licensing Examination (USMLE) and the examinations administered by the National Board of Osteopathic Medical Examiners (NBOME) are typically used by states to license physicians.

To earn ABA certification, anesthesiologists take three exams:
- The BASIC exam is taken after the second year of residency and focuses on the scientific basis of anesthesiology practice.
- The ADVANCED exam is taken after completion of the residency and focuses on clinical and advanced aspects of anesthesiology practice.
- The APPLIED exam includes oral and clinical examinations and can be taken after candidates pass the ADVANCED exam. Anesthesiologists have seven years after completion of their residency to pass this exam.

How long does it take to become an anesthesiologist?

Counting four years of undergraduate study, four years of medical school, and four years of residency, it takes twelve years to become an anesthesiologist. Some medical students enroll in combined six-year undergraduate and medical school programs, which can reduce the time needed to begin a career. Additional time may be needed after completion of the residency to pursue fellowships or to achieve state licensure or board certification.
What does an anesthesiologist earn?

According to the Bureau of Labor Statistics, the average yearly pay for anesthesiologists in the United States was $232,830 in 2010. The median yearly pay was $407,292 in that year.

What are the job prospects?

Job growth for physicians is expected to increase by 24% between 2010 and 2020, faster than the average for all occupations during that period. More physicians, including anesthesiologists, will be needed to meet the needs of the aging baby boomer generation and to work in underserved low-income and rural areas. Anesthesiologists can enhance their job prospects by earning board certification and by pursuing advanced specializations through fellowships.

What are the long-term career prospects for anesthesiologists?

Anesthesiologists can continue to learn about advanced topics in their field as they practice and can pursue additional board certifications in specialties like critical care medicine, pain medicine, hospice and palliative medicine, sleep medicine, and pediatric anesthesiology. Another career option for anesthesiologists is to branch out into higher education and research. You can pursue grants to fund research that will contribute new knowledge to your field.

How can I find a job as an anesthesiologist?

Most of the United States' more than 29,000 anesthesiologists practice in physicians' offices, but anesthesiologists can also find work in hospitals, colleges and universities, and outpatient care centers. During their years of training, anesthesiologists make many professional contacts and may learn about job opportunities through their network. Anesthesiologists may also find jobs through physician recruiters, which are often used by hospitals and practices to fill openings. Professional publications and societies like the Journal of the American Medical Association and the American Society of Anesthesiologists offer job boards and other career resources.

How can I learn more about becoming an anesthesiologist?

There are many professional boards and societies associated with anesthesiology, including the American Society of Anesthesiologists, the Association of University Anesthesiologists, the Foundation for Anesthesia Education and Research, and the Society for Education in Anesthesia. Each of these organizations offers resources for those who want to learn more about the practice of anesthesia and how to become an anesthesiologist.
A Signer of the Texas Declaration of Independence
Secretary of the Treasury of the Republic
Born in Tennessee, 1795
Died on Caney Creek, October 12, 1836
Erected by the State of Texas

Section: Republic Hill, Section 1 (C1)
Reason for Eligibility: Veteran, War of 1812; Veteran, Republic of Texas; Signer, Texas Declaration of Independence; Interim Secretary of State and Secretary of the Treasury for the Republic of Texas
Born: February 26, 1795
Died: October 12, 1836
Reinterred: August 29, 1936

HARDEMAN, BAILEY (1795-1836). Bailey Hardeman, War of 1812 soldier, Santa Fe trader, mountain man, and a founder and officer of the Republic of Texas, the thirteenth or fourteenth child of Thomas and Mary (Perkins) Hardeman, was born at the Thomas Hardeman station or stockade, near Nashville, on February 26, 1795. His father was a prominent frontiersman who served in the North Carolina convention that considered ratifying the United States Constitution at Hillsboro, North Carolina, and in the Tennessee state constitutional convention of 1796. Bailey spent his early years in Davidson and Williamson counties, Tennessee. He was a store proprietor, deputy sheriff of Williamson County, and lawyer in Tennessee. At eighteen he served as an artillery officer in the War of 1812 under his father's friend Andrew Jackson in Louisiana. On June 19, 1820, he married Rebecca Wilson, also of Williamson County. The next year he joined his father and his brother John on the Missouri frontier west of Old Franklin. There he met William Becknell and became involved in the early Santa Fe trade.

Hardeman was in the Meredith Miles Marmaduke expedition to New Mexico in 1824-25. He and Becknell trapped beaver along the Colorado River north and west of Santa Cruz and Taos and narrowly escaped starvation during the winter of 1824-25. On his return trip to Missouri, he lost two horses and a mule to Osage Indian attackers, but his overall trading profits must have been considerable. He was able to finance the Santa Fe trading trip of William Scott in the summer of 1825. Several years later he endowed Hardeman Academy at Hardeman's Cross Roads (later Triune), donated lands to Wilson's Creek Baptist Church, and opened a tavern and store, all in Williamson County, Tennessee. A few years after his return to Tennessee he moved from Williamson to Hardeman County.

In the fall of 1835 he and his brothers Thomas Jones and Blackstone Hardeman and his sister Julia Ann Bacon, together with their families, numbering about twenty-five people in all, moved to Texas. Bailey and several other members of the family quickly joined the independence movement. Bailey's first involvement was to help secure an eighteen-pound cannon at Dimmitt's Landing near the mouth of the Lavaca River and haul it to San Antonio, an action that encouraged Gen. Martín Perfecto de Cos to surrender his forces on December 10, 1835. On November 28, while Hardeman was on the artillery assignment, the General Council of the provisional government appointed him to serve on a commission to organize the militia of Matagorda Municipality. After this, Hardeman's activities shifted from the military to the political arena. He was elected a representative from Matagorda to the convention at work on the Texas Declaration of Independence. He arrived at Washington-on-the-Brazos on March 1, 1836, and was selected to serve on the five-member drafting committee of the declaration.
After the convention approved the document, Bailey, along with two other members of the committee, was appointed to a twenty-one-member committee to draw up a constitution for the Republic of Texas. The resulting Constitution was approved in mid-March. Hardeman performed several other services for the convention, including membership on the militia and tariff-payment committees. Although he requested to be excused in order to rejoin the military forces, he was persuaded to assume other political duties. The delegates elected him secretary of the treasury. Concurrently with this position, he held the office of secretary of state when Samuel P. Carson left for the United States on April 2-3, 1836.

After the fall of the Alamo, Hardeman fled eastward with other cabinet members as the ad interim government moved from Washington to Harrisburg, and from Harrisburg to Galveston Island, in advance of approaching Mexican troops. The group reached Galveston in safety around the time of the battle of San Jacinto; after the Texas victory, Hardeman left the island to deliver supplies to the soldiers of the republic. As acting secretary of state he negotiated and signed two treaties: an open document honorably ending the war and providing for removal of Mexican soldiers from Texas, and a secret agreement in which Mexican general Antonio López de Santa Anna promised diplomatic recognition of the new republic. Hardeman was then appointed to go to Mexico City in order to help secure ratification of the open treaty.

His service to the republic was cut short by his death from congestive fever, probably on September 25, 1836, at his Matagorda County home on Caney Creek. He was buried there, but in 1936 his remains were moved to the State Cemetery in Austin. Bailey was survived by his wife and a son. A daughter had died at the age of eight in Hardeman County, Tennessee. Hardeman County, Texas, was named for Bailey and Thomas Jones Hardeman.

BIBLIOGRAPHY:
Sam Houston Dixon, Men Who Made Texas Free (Houston: Texas Historical Publishing, 1924).
Nicholas P. Hardeman, Wilderness Calling: The Hardeman Family in the American Westward Movement, 1750-1900 (Knoxville: University of Tennessee Press, 1977).
Louis Wiltz Kemp, The Signers of the Texas Declaration of Independence (Salado, Texas: Anson Jones, 1944; rpt. 1959).

Nicholas P. Hardeman, "HARDEMAN, BAILEY." The Handbook of Texas Online. [Accessed Wed Feb 12 16:35:20 US/Central 2003].
10 Toxic Bodies Of Water
The sight of a calm, pristine lake, river, or beach is enticing to many. Whether it's for fishing, swimming, boating, or just relaxing, bodies of water have always drawn people in. Sometimes, however, dipping one's toes in that water isn't at all recommended. Lakes, rivers, and lagoons can become a threat to both animal and human populations. Causes include industrial pollution, human waste, bacterial growth, and even Mother Nature's temper. Here are ten bodies of water that can cause some serious damage.
10 Blue Lagoon Not Suitable For Swimming
Looks can be deceiving, even when it comes to water. This is the case with the Blue Lagoon of Buxton, England. The "lagoon" is the disused Far Hill Quarry, which flooded and became a popular swimming spot. People are drawn to the brilliant blue water that looks as if a Caribbean oasis sits in the middle of Derbyshire. But in reality, the blue water is extremely toxic.
The turquoise color comes from chemicals leaching into the water from the limestone rocks. Calcium oxide, used as a part of the quarrying process, gives the lagoon a pH level roughly comparable to ammonia. One of the many signs warning visitors not to enter the water cautions that the high pH levels can cause skin and eye irritation, stomach problems, fungal infections, and rashes. On top of the chemicals, the lagoon has also been used as a dumping ground. Another sign posted near the area reads: "Lagoon known to contain: Car Wrecks, Dead Animals, Excrement, Rubbish."
Despite warnings about the toxicity and unsanitary conditions, families continue to flock to the Blue Lagoon. Children are allowed to swim with the simple warning not to dunk their heads or swallow any water. Locals wanted to drain the flooded quarry but were told that doing so would pose a threat to their water supply. In June 2013, the town council dyed the lagoon black to deter swimmers, but by 2015, the water had reverted to its turquoise color.
9 Lake Titicaca Kills Endangered Frogs
South America's largest lake has become contaminated by human and industrial waste. Lake Titicaca lies between Peru and Bolivia. It was one of the most sacred sites for the Incas, who believed the lake was the birthplace of the Sun. Now, heavy metals such as lead and arsenic pollute the water. Many of the industrial toxins come from El Alto, where 70 percent of factories operate illegally and are not monitored for pollution. In addition, more than half of the people living on the shores of Lake Titicaca do not have plumbing.
In 2015, an estimated 10,000 endangered frogs were found dead on the shores of Lake Titicaca and its connecting river. The Titicaca water frog is one of the largest aquatic frogs in the world. Because of its baggy skin, the frog is often referred to as the "scrotum frog." The cause of the massive die-off is thought to be the sewage and heavy metals that pollute the lake.
8 Pinto Lake Kills Sea Otters And More
Located in Watsonville, California, Pinto Lake has been referred to as the most toxic lake in the state, thanks to an abundance of blue-green algae. The algae, also known as cyanobacteria, feed on nitrogen and phosphorus. These chemical elements exist in sediments at the bottom of Pinto Lake. Bottom-feeding fish such as carp stir up the nitrogen and phosphorus, which are released into the water, feeding the algae blooms. These blooms produce a toxin called microcystin. Touching or ingesting microcystin can cause nausea, fever, and even liver failure.
The toxin has been linked to the deaths of birds, fish, sea otters, and dogs in the area. Signs are posted warning that any direct contact with the water is dangerous, and people are also cautioned not to eat any fish caught in Pinto Lake.
7 Buriganga River Suffocates Fish
The Buriganga River in Bangladesh flows through the capital city of Dhaka and is its primary water source. It is also the primary dumping ground for waste from the tanneries in Hazaribagh, a neighborhood in Dhaka. Hazaribagh is home to 95 percent of Bangladesh's leather tanneries. These tanneries dump an estimated 22,000 liters (5,300 gal) of toxic waste into the Buriganga each day. Tannery waste contains animal flesh and hair as well as numerous chemicals, dyes, oils, and heavy metals.
Bangladesh does have environmental regulations, but there has been a complete lack of monitoring and enforcement. Samples of wastewater taken from tanneries contain chemicals and toxins that greatly exceed the permitted amounts. In 2002, the government ordered Hazaribagh tanneries to relocate outside of Dhaka within three years. The deadline has been extended for over a decade.
Trash collects in heaps along the shores of the Buriganga, and the water is so polluted that all the fish have died. Especially in the slums of the city, many residents use the river for bathing, cooking, and even drinking. People who depend on the river's water often suffer from health problems such as headaches, diarrhea, and jaundice.
6 Karymsky Lake Boils Its Inhabitants
The Kamchatka Peninsula is located in far-eastern Russia. It contains a number of active volcanoes, as well as geysers and hot springs. One of the most active volcanoes is called Karymsky, which lies about 5 kilometers (3 mi) north of Karymsky Lake. The lake was created when a massive eruption of the volcano emptied out a magma chamber, leaving behind a caldera that filled with water. The caldera was thought to be dormant until it erupted in 1996.
The Karymsky volcano erupted first, around midnight on January 2. Later that afternoon, the lake began to explode. The underwater eruption caused steam and ash to spew into the air for about 18 hours. The erupted material landed back in the lake, creating a soup of sodium, sulfate, calcium, and magnesium. The water in the lake was actually boiling. The extreme temperatures and additional chemicals killed all life in the lake.
Before the eruption, Karymsky Lake was filled with clear water that had a pH level of 7.5. After the eruption, the water turned a yellowish-brown color and had a pH level of 3.2. The freshwater lake had become the biggest natural reservoir of acid water in the world. By 2012, Karymsky Lake had regained a pH of 7.54, and the water was once again clear. However, new hot springs that appeared during the eruption keep the lake three times saltier than it was before.
5 Matanza-Riachuelo River Poisons Residents
The Matanza-Riachuelo, whose name literally translates to "slaughter brook," runs through Buenos Aires, the capital city of Argentina. Having been used as a dumping site for waste and sewage, the river has become one of the most polluted waters in the world. The Matanza-Riachuelo is lined by urban slums that house close to five million people. Tanneries, chemical plants, and factories dump an average of 82,000 cubic meters (2.9 million ft3) of industrial waste containing hard metals and pesticides into the river each day. As a result, 25 percent of children living in these slums have lead in their bloodstreams.
There is a lack of plumbing in the riverside slums. Some homes have outhouses with drainage pipes that lead directly into the Matanza-Riachuelo. Residents suffer from undiagnosed skin conditions, respiratory problems, and gastrointestinal illnesses so severe they can lead to death. In 2005, an Argentine environmental minister vowed to have the Matanza-Riachuelo cleaned up within 1,000 days, adding that she would be the first to drink the water. Neither of those things happened.
4 Berkeley Pit Mass-Murders Snow Geese
On November 28, 2016, a large flock of geese landed on a small body of water in Butte, Montana, known as Berkeley Pit. It is estimated that roughly 10,000 geese landed in the water, and thousands of them died. Berkeley Pit is a former mine where nearly 300 million tons of copper ore were extracted between 1955 and 1982. The process created a deep pit that eventually filled with 275 meters (900 ft) of water. The water is full of arsenic, cadmium, cobalt, copper, iron, and other compounds.
The 2016 incident was not the first time Berkeley Pit became a massive snow goose grave. In November 1995, 342 dead geese were discovered floating in the pit. Postmortem examinations revealed that the deaths were caused by the water, which is acidic enough to liquefy the steel propeller on a motorboat. The birds had burns and sores on their tracheas, esophagi, and digestive organs.
As the water level in Berkeley Pit continues to rise, so do concerns about contamination. The water is expected to reach the level of Butte's groundwater by 2023. If a successful treatment plan is not implemented before then, the pit pollution will enter the local water source.
3 Yamuna River Dies In Delhi
The Yamuna River begins as crystal-clear water that comes from a glacier in the Himalayas. North of Delhi, the river is home to turtles, fish, crocodiles, and numerous aquatic plants. But the Yamuna that enters the city is not the same river that emerges. River water is diverted north of Delhi to supply one-third of the city's drinking water, and it is also siphoned off to irrigate rice fields. This leaves nearly dry riverbeds, which are replenished with pollution and sewage.
Data from a 2011 water quality report showed that water leaving Delhi contained over one billion fecal coliform bacteria per 100 milliliters. The standard for bathing is 500 coliform bacteria per 100 milliliters. More than five million Delhi residents live in illegal settlements that lack sewer service. They defecate in places that drain directly into the river. Industrial waste containing heavy metals and other pollutants is dumped into the river daily.
In Hinduism, the Yamuna is not just a river but a goddess. The sad state of the Yamuna bothers some believers, who say that the goddess is dying and in need of help. Others maintain that because the river is a goddess, she can never be polluted, no matter how badly she is physically mistreated. It might be debated whether or not the goddess is dying, but there is sufficient proof of the river harming mortal entities.
The 23-kilometer (14 mi) stretch of the Yamuna that runs through Delhi has no aquatic life. The tainted river is responsible for numerous cases of typhoid fever as well as an unusually high infant mortality rate. Heavy metals in the water leach into local fields and contaminate vegetables, causing children in the area to suffer and even die from arsenic and lead poisoning.
2 Lake Natron Mummifies Its Victims
Lake Natron is a salt lake located in northern Tanzania near the Kenyan border.
The lake is named for a chemical it contains, natron, which is a naturally occurring mix of sodium carbonate and sodium bicarbonate (aka baking soda). Lake Natron's chemical makeup is due to a unique nearby volcano. Ol Doinyo Lengai, or "Mountain of God," is the only active volcano on the planet that spills natrocarbonatites instead of silicates. The lava from Ol Doinyo Lengai is not as hot as lavas from other volcanoes. It also has a different appearance, resembling an oil spill more than flowing magma.
When the lava cools, it turns into a whitish powder. Rainfall runoff collects the ashy residue and deposits it in Lake Natron. This gives the water a pH level that fluctuates between 9 and 10.5. Temperatures in the lake can reach 60 degrees Celsius (140 °F) during warmer months.
Photographer Nick Brandt came across the lake while on a photo shoot for a book about East Africa. Brandt was "blown away" when he saw the hundreds of carcasses that littered the shore. The remains consisted largely of bats and migratory birds. Birds often crash into the shallow lake, which is less than 3 meters (10 ft) deep. It is thought that they become confused by the lake's highly reflective surface.
Once the birds die in the lake, their skeletons are preserved by the sodium carbonate in the water. The chemical, which was used in Egyptian mummification, prevents the carcasses from decomposing. When water levels recede, the remains wash onto shore in their preserved state, which is what Brandt came across. With the help of locals, Brandt collected a variety of corpses and posed them in lifelike positions to create an eerie photo series.
Despite the corrosiveness of Lake Natron, one species uses it as a popular breeding location. Flamingos have very leathery skin on their legs, which allows them to tolerate the salty water. Lesser flamingos, the smallest of their kind, build nests on top of salt crystal islands that appear when the lake is low enough. The nests are protected by their location in the middle of the lake. Predators such as cheetahs and baboons are deterred by the caustic water, which keeps the baby birds safe.
1 The Jacuzzi Of Despair Is An Underwater Menace
The Jacuzzi of Despair is an underwater lake located in the Gulf of Mexico. The lake lies 1,000 meters (3,300 ft) under water, on the seafloor. Underwater lakes, also known as brine pools, form when salts leach from ancient seabeds. The salt makes the water in one area so briny and dense that it doesn't mix with the surrounding seawater. This results in an underwater lake that has its own surface, shoreline, and current.
The Jacuzzi of Despair is a crater-like pool that rises 3.7 meters (12 ft) above the ocean floor. It was named for its warm temperatures. The water in the lake is around 18 degrees Celsius (65 °F), while the surrounding seawater is only 4 degrees Celsius (39 °F). Mussels thrive along the shoreline, but the extreme amounts of salt and methane in the lake itself are toxic to most sea creatures. The warmth of the lake attracts marine life, such as crabs, looking for food. If they fall into the Jacuzzi of Despair, they die.
While most sea creatures cannot survive the lake's conditions, researchers have found proof of microbial life in the Jacuzzi of Despair. Scientists believe these creatures, which have adapted to the underwater lake, may resemble life-forms that thrive on other planets.
What is Metabolic Syndrome?
Metabolic syndrome is a group of risk factors that increase the likelihood of developing heart disease, diabetes, and stroke. It is also known as insulin resistance syndrome. The five risk factors are:
- Increased blood pressure
- High blood sugar levels
- Excess fat around the waist
- High triglyceride levels
- Low levels of good cholesterol
The American Heart Association describes metabolic syndrome as a "cluster of metabolic disorders." These include high blood pressure, high fasting glucose levels, and abdominal obesity. Combined, all of these increase the risk of heart disease.
What Causes Metabolic Syndrome?
The main cause of metabolic syndrome is being overweight. Being overweight or obese causes insulin resistance, meaning the body doesn't respond properly to insulin, which makes it harder for glucose to enter the cells.
What are the signs and symptoms of the syndrome?
People with metabolic syndrome have:
- A high Body Mass Index (BMI) and waist circumference
- Blood test results that show low HDL cholesterol, high triglycerides, or high fasting blood sugar
- Acanthosis nigricans (a darkening of the skin in folds and creases, like the neck and armpits, that is a sign of insulin resistance)
Having these factors signifies a higher risk of cardiovascular diseases. There are also other factors that can increase the chance of metabolic syndrome:
- A family history of the condition
- Not getting enough exercise
- Having been diagnosed with polycystic ovary syndrome (in women)
Doctors may suspect the syndrome in people who are overweight or obese. They will measure BMI, waist circumference, and blood pressure, and order blood tests such as a lipid panel, a glucose test, or hemoglobin A1c.
How can it be treated?
Start by making positive lifestyle changes. Weight loss is key to preventing metabolic syndrome and other diseases; additionally, it can bring major improvements in blood pressure, blood sugar, and lipids. Families can work with their health care provider, a dietician, or a weight loss program like Options Medical Weight Loss. Some of the recommendations include:
- Eating more fruits and vegetables
- Choosing whole grains
- Exercising and being more physically active
- Avoiding smoking
- Limiting junk foods and processed foods
Don't let being overweight limit your lifestyle. Avoid preventable syndromes and diseases by taking care of yourself and seeing someone like Options Medical Weight Loss, Orland Park.
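Since BMI comes up above as one of the screening numbers, here is a minimal sketch of how it is computed: weight in kilograms divided by height in meters squared. The category cut-offs below are the standard WHO adult ranges, added for illustration only; they are not taken from this article, and no calculator replaces a clinician's assessment.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Standard WHO adult cut-offs (illustrative; not from the article).
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

b = bmi(95.0, 1.75)  # hypothetical example: 95 kg at 1.75 m tall
print(f"BMI = {b:.1f} ({bmi_category(b)})")  # BMI = 31.0 (obese)
```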
Self-editing Tips - Issues to Look For
A few years ago, a friend and I attended a seminar given by John Lescroart, a bestselling writer of crime and legal fiction. John insists that writers should be wordcraft experts. He checks every sentence of his books. Anyone who submits a manuscript to John for examination gets a series of letters in the margin to point out problems with the prose. Each letter stands for a specific problem. Here is his list of writing problems, with explanatory comments or examples.
John Lescroart's checklist for proper wordcraft:
P passive (such sentences often use was, were)
G grammar, punctuation (e.g., "Are you coming?" She asked.)
T telling (exposition) (such sentences often use was or were, or are explanations by the narrator)
W wrong word (e.g., there / their; assure / ensure; literally / figuratively)
R redundancy (e.g., the orphan had no father or mother). A sentence may also be redundant with earlier sentences, so it could be struck entirely.
U unreferenced or improper antecedent (usually for it, which, who)
A adverb or adjective unnecessary (e.g., "I wish you were dead," he said, meanly.)
E echo (the same non-trivial word or phrase used recently, i.e., in the same book)
F fake, negative description: saying what didn't happen instead of what did
X contradiction (e.g., Choosing to remain silent, he told her he loved her. Most contradictions will span more than one sentence.)
NV narrator voice ... should use proper English
? huh? (doesn't make sense)
I insults the reader (by telling obvious facts or giving unnecessary explanations, by explaining jokes, or by using illogical plot or narrative devices. An example of the last: a character says to his friend, "As you know, Bob, black holes are formed when..." If Bob already knows, the speaker wouldn't have explained it. The author should find another way to tell the reader, and only the reader, what the reader needs to know.)
In addition to problems with setting, plot, conflict, and characterization, these concepts are what I look for when I am editing a client's story. When you revise your draft, examine each sentence, checking it for the wordcraft problems that John Lescroart identifies.
While your risk for developing uterine cancer is dramatically lowered by the surgery, your risk for other gynecologic cancers — such as ovarian — may not be.
In late August, fans around the world were shocked by the unexpected death of "Black Panther" star Chadwick Boseman, who had not disclosed his four-year battle with colon cancer. His death shone a light on the fact that younger people, especially younger Black men and women, have a higher incidence of colorectal cancer — and a higher rate of death from the disease — than any other racial group in the United States.
What is the best way to get critical medical and health information to those who need it? Reach people where they are.
Whether your summer plans include biking, fishing, swimming or just working in the garden, you'll need to protect yourself from the sun's ultraviolet rays — UVA (long wave) and UVB (short wave).
Aging increases cancer risk in our bodies in several ways. The older we are, the higher the proportion of cells with mutations we acquire, and these cells create populations at high risk of recruiting cancer-initiating cells.
Even if you already have cancer, you can't let down your guard when it comes to prevention. In fact, cancer patients have even more reason to be on guard, because they usually have a higher risk for infection or for developing other types of cancer.
During the summer and warm weather season, it's important to remember that exposure to ultraviolet (UV) radiation from the sun can increase your risk of developing skin cancer.
Roswell Park's Christine Ambrosone, PhD, admits she may not have pursued the most conventional route to becoming a leading breast cancer researcher.
Triggers — the reasons why someone wants to smoke — are different for everyone who is trying to quit smoking. Try these strategies to control some of the most common smoking triggers.
"We are starting to cure melanoma, and it's very exciting. We're doing great things and hopefully people won't have to die from this diagnosis anymore."
Evidence has shown that e-cigarettes can be less harmful to a person's health in the short term when someone who regularly smokes completely switches to them, but they still deliver aerosols and other harmful chemicals.
"There's a lot of evidence that for someone who's overweight, losing even a small amount — five pounds, 10 pounds — can reduce the chances that they'll be diagnosed with cancer."
In the last 50 years window glass has gone from functional to ornate. Now, with 'smart' windows looming, driven by energy-efficiency demands, glass may be about to become functional once again.
The Environmental Protection Agency says an average household spends over 40 percent of its annual energy budget on heating and cooling costs. Office buildings now account for about one-third of all the energy used in the U.S., a quarter of which is lost through the failure of standard windows to retain heat in the winter or deflect heat in the summer. But new 'smart' window technology is poised to change that. While regular glass admits a constant amount of light, a 'smart' window can be tuned, or dimmed, permitting any amount of light to pass. At the turn of a knob, the amount of light that shines through windows can now be controlled, dialing in an estimated $11 billion to $20 billion a year in savings on heating, lighting and air-conditioning costs.
According to the EPA, even a $7 billion savings would equate to a reduction in carbon emissions at power generating plants equal to taking 336,000 cars off the road, and the energy savings would be enough to light every home in New York City.
With 'smart' windows, homes and office buildings have the potential to recover installation costs through money saved by decreasing energy lost as the air conditioning battles a hot summer sun, or by reducing indoor lighting requirements because the shades need no longer be pulled closed to block the sun.
'Smart' windows boast other benefits. They increase comfort, light and view and decrease condensation. Users are given control over their privacy and environment, and harmful ultraviolet rays are blocked, thereby eliminating the fading of furniture, carpets, drapes, artwork and other valuables. The cost of blinds, curtains and drapes is also slashed and in many cases eliminated.
By 2020, industry leaders believe windows will become active parts of building climate, engineering, information and structural systems. Scientists working at the Fraunhofer Institute in Germany, for instance, have developed a hybrid system that collects solar energy that warms the air on the glass facade of the building and funnels it through cavities in the walls and floors. The energy stored in this way can be fed into the building's interior heating system overnight.
Presently, three distinct 'smart' window technologies are positioning themselves for this endeavor, competing for shares of a global architectural glass market that produces an estimated 20 billion square feet of flat glass each year. Domestically, sales of residential window units have grown by about five percent a year since 1992 to over 50 million units. Commercial window sales have increased by approximately 11 percent annually during the same period to nearly 500 million square feet a year.
The new 'smart' technologies are liquid crystal, electrochromic, and suspended particle device (SPD). A few companies are working with liquid crystals, and a larger number are trying electrochromics. Only one has SPD technology. While none of the new technologies have yet established a serious market presence, even a one percent share of the global market would equate to 200 million square feet a year.
CLAP ON, CLAP OFF
Polymer dispersed liquid crystals (PDLCs), invented at Kent State University in 1983, found a major application in switchable windows, that is, windows that change from clear to opaque with the flip of a switch.
Using the same voltage as standard household appliances, multiple windows can be controlled from one switch and can be connected to a timer. Most uses of PDLCs, however, are confined to privacy applications, with popular uses in glass walls for offices, conference rooms, lobbies, and store fronts. Privacy glass also provides unique opportunities for use by homebuilders in bathrooms, entryways, family rooms, bedrooms, and skylights. In the opaque state, the glass diffuses direct sunlight and eliminates 99 percent of the ultraviolet rays responsible for the fading of carpets and curtains, although unfiltered visible light can also fade fabric.
PDLCs operate on the principle of electrically controlled light scattering. They consist of liquid crystal droplets surrounded by a polymer mixture sandwiched between two pieces of conducting glass. When no electricity is applied, the liquid crystal droplets are randomly oriented, creating an opaque state. When electricity is applied, the liquid crystals align parallel to the electric field and light passes through, creating a transparent state.
Liquid crystal technology has not been a commercial success. The windows are hazy because they scatter rather than absorb light, so there is a fog factor even when the device is in the transparent state. Also, while liquid crystals work well for interior privacy control, the technology is all-or-nothing, on or off - it can't be used as a shading device. It also tends to be a little expensive for most popular applications, running between $85 and $150 a square foot.
Another 'smart' window technology, perhaps with a brighter future than liquid crystals, is the electrochromic window, which also attempts to control the amount of daylight and solar heat gain through the windows of buildings and vehicles. As with liquid crystals, a small voltage is required, although in the case of electrochromics the voltage causes the windows to darken; reversing the voltage causes them to lighten. Unlike liquid crystals, however, electrochromic windows can be adjusted to control the amount of light and heat passing through them, a characteristic suggesting a variety of applications. For instance, a small photovoltaic cell can be used to sense the amount of sunlight, darkening the window when the sun is brightest, then gradually lightening the window as the sunlight diminishes, a feature attractive in Sunbelt regions.
Electrochromic windows consist of up to seven layers of material: the central three layers are sandwiched between two layers of a transparent conducting oxide material, and all five of these layers, which are of course transparent to visible light, are further sandwiched between two layers of glass. These windows function through the transport of charged ions from an ion storage layer, through an ion conducting layer, into an electrochromic layer. The presence of the ions in the electrochromic layer changes its optical properties, causing it to absorb visible light, with the result that the window darkens. To reverse the process, the voltage is reversed, driving the ions in the opposite direction, out of the electrochromic layer, through the ion conducting layer, and back into the ion storage layer. As the ions migrate out of the electrochromic layer, it brightens (or "bleaches"), and the window becomes transparent.
Electrochromic windows can also be used to help keep cars cool. An electrochromic sunroof could darken in direct sunlight but lighten at other times, providing function while keeping the car cool.
Conceivably, electrochromic rear or side windows in a vehicle could darken while the car is parked, keeping the car cool, and then lighten again once the car is started. So far the technology is used only in self-dimming rear-view mirrors that change from light to dark to prevent eyestrain and temporary blindness from the glare of headlights approaching from the rear, then reverse when conditions permit.
Unfortunately, the electrochromic process is slow, especially when compared to the newer SPD technology. It can take six seconds for something as small as an automobile's rear-view mirror to go from clear to dark, and it may take 10 seconds to return to clear. For something the size of a window, it may take six to 10 minutes to change. SPD windows, on the other hand, react in two seconds or less, regardless of the window size. "In certain areas, such as rear-view mirrors, SPD technology goes a lot faster than that," said Joseph M. Harary, executive vice president of Research Frontiers of Woodbury, NY, the lab that developed and now licenses SPD technology. "Plus, you can use a knob or rheostat to control an SPD window. You can't do that with electrochromics because there would be a six-minute delay - you'd never get the knob right. Most people want instant feedback to adjust their window properly, and SPD is the only one that will allow you to do that."
Electrochromic windows are also expensive, costing on the order of $125 per square foot.
Of the three 'smart' window technologies, SPD, in which the user can instantaneously control the passage of light through glass or plastic, appears to be the most promising in terms of cost and performance. SPD, though the newest of the window technologies, is actually the result of decades of research seeking a 'light valve' technology. Physicist Robert Saxe, founder and CEO of Research Frontiers, worked for 34 years and spent $28 million perfecting his light-valve glass technology. Windows in homes and office buildings, skylights and sun roofs - to say nothing of ski goggles and sunglasses, aviation instruments, automobile dashboard displays and bright, high-contrast digital displays for laptops and other electronic instruments - made with this new SPD technology can now be dimmed or brightened with electronic precision to suit individual needs, allowing an infinite range of adjustment between completely dark and completely clear.
SPD, which produces little or no haze in the transparent state, can be controlled either automatically by means of a photocell or other sensing or control device, or adjusted manually with a rheostat or remote control by the user. When used in conjunction with Low-E (low emissivity) glass, SPD can also be used to block ultraviolet light. Low-E coatings, sometimes called heat-smart, are microscopically thin, virtually invisible metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor (thermal transmittance) by suppressing radiative heat flow. Low-E coatings are transparent to visible light. Different types of Low-E coatings have been designed to allow for high solar gain, moderate solar gain, or low solar gain.
Still, clear insulated glass units (usually called IGUs) dominate commercial and residential glazing technology, although Low-E windows have slowly gained ground, having risen by about one percent a year recently to well over 30 percent residential and 20 percent commercial market share, according to figures from the American Architectural Manufacturers Association and the National Wood Window and Door Association.
A GLASS ACT
"It is really simple how SPD technology works," Harary said. "Basically there are millions of black, light-absorbing, suspended-particle devices (SPD) within a film placed between the glass layers. When the user applies a moderate voltage of electricity to the film, the SPDs line up and become perpendicular to the window, allowing more light and increased visibility until the window is completely clear. As the amount of voltage is decreased, the window becomes darker until it reaches a bluish-black color that allows no light to pass through it."
In March, ThermoView Industries, Louisville, became the first domestic manufacturer specializing in products for the $8 billion replacement window/door industry licensed to produce SPD ‘smart’ There are actually over 500 fabricators in the fragmented U.S. window industry, although the glass itself comes from only six U.S. flat glass manufacturers, including PPG Industries, LOF (Pilkington Libby-Owens Ford), AFG Industries, Ford, Guardian, and Cardinal. In spite of all the licensing activity, SPD windows have yet to appear on the market. Developing the technology and manufacturing processes has been long and difficult. "The technology is perfectly well-suited for window applications and what’s happening now is our licensees are scaling up to make large quantities of the materials needed for production," Harary said. "As you know, there’s a big difference in doing something on a lab scale and doing something on a commercial scale. We’ve successfully developed procedures to scale up the technology and we’ve licensed and trained companies like Hitachi Chemical and Dai Nippon Ink and Chemicals to make the basic emulsions, which are the liquid materials that eventually get turned into a film, the polymers and the particles that we use in our system." Those companies in turn are licensed to sell the emulsions to film makers, who are also licensees, who take the liquid emulsion, coat it onto a substrate and cure it into a film. The film is what goes into the window. "Initially you’ll probably see the film imbedded in the glass itself, say, where it gets laminated on the inside of a insulating glass window," Harary said. Eventually, though, retrofitting will be possible by merely laminating the film to an existing window, hooking it up to an electrical connection and, presto, a ‘smart’ window. "It’s an easy technology to work with because it’s a film," The retail price of the SPD windows have yet to be determined, although Harary’s educated guess is the addition of the particle film will not increase the cost of windows by more than about 20 percent, adding about $15 per square foot, which is considerable less expensive than either liquid crystal or electrochromic "But it’s up to the licensees, because they’re the ones setting the price," he said. The new SPD glass allows blocking of sunlight without curtains or blinds, the particles that block light from the outside would also block light from the inside, so the privacy one expects when bolting out of the shower for the phone or napping on the job would still exist. Harary compares the blockage to a one-way mirror. "Instead of having a reflective coating, you have something that's blocking the light. It's one of these technologies that's going to make people's lives more comfortable," he said.
DEFINITION--A continuing feeling of sadness, despondency or hopelessness with accompanying symptoms. Major depression occurs in about 1 in 10 Americans, but there is continued improvement in treatment.
BODY PARTS INVOLVED--Nervous system.
SEX OR AGE MOST AFFECTED--Both sexes, but more common in women; all ages.
SIGNS & SYMPTOMS
- Loss of interest in life; boredom.
- Listlessness and fatigue.
- Insomnia; excessive or disturbed sleeping.
- Social isolation; feeling not useful or needed.
- Appetite loss or overeating; constipation.
- Loss of sex drive.
- Difficulty making decisions; concentration difficulty; unexplained crying bouts.
- Intense guilt feelings over minor or imaginary misdeeds.
- Irritability; restlessness; thoughts of suicide.
- Various pains, such as headache or chest pain, without evidence of disease.
CAUSES
- A truly depressive illness has no single obvious cause. Some biological factors can play a part (physical illness, hormonal disorders, certain drugs).
- Social and psychological factors play a part.
- Inherited disorders may contribute (manic-depression runs in families).
- May relate to the number of disturbing events in a person's life at one time.
RISK INCREASES WITH
- Unexpressed anger or other emotion.
- Compulsive, rigid, perfectionist or highly dependent personalities.
- Family history of depression; alcoholism.
- Failure in occupation, marriage or other relationships; death or loss of a loved one.
- Loss of something important (job, home, etc.).
- Job change or move to a new area.
- Surgery, such as mastectomy for cancer.
- Major illness or disability.
- Passing from one life stage to another, such as menopause or retirement.
- Use of some drugs, such as reserpine, beta-adrenergic blockers or benzodiazepines.
- Withdrawal from mood-altering drugs, such as narcotics, amphetamines or caffeine.
- Some diseases, including diabetes mellitus, cancer of the pancreas and hormonal disorders.
HOW TO PREVENT
What To Expect
DIAGNOSTIC MEASURES--Medical history and physical exam by a doctor (sometimes a psychiatrist). Psychological testing.
APPROPRIATE HEALTH CARE
- Self-care for mild depression.
- Psychotherapy or counseling along with drug treatment appears to obtain the best results for more severe depression.
- Hospitalization or inpatient care at a treatment center may be required for severe depression.
- Rarely, electroconvulsive therapy.
POSSIBLE COMPLICATIONS
- Suicide. Warning signs include withdrawal from family and friends, neglect of personal appearance, mention of wanting "to end it all" or being "a burden to others", evidence of a suicide plan (such as buying or cleaning a gun), and sudden cheerfulness after despondency.
- Failure to improve.
PROBABLE OUTCOME--Spontaneous recovery in many cases, but professional help can shorten the duration and help you learn to cope in the future. Recurrence is common. The recovery rate is high, despite one's pessimism.
How To Treat
- If symptoms appear mild to moderate, try some self-care ideas: talk to friends and family; exercise regularly; eat a balanced, low-fat diet; avoid alcohol; maintain your normal routines (if overscheduling is a problem, though, try to slow down); see fun movies; learn relaxation techniques and practice them; take a vacation if possible; write down your feelings in a journal or diary; try to work out interpersonal problems (it's best, however, to avoid making major decisions at this time); stay as active as possible.
- Seek support groups. Contact social agencies for help. Call your local suicide-prevention hot line if you feel suicidal.
MEDICATION--Your doctor may prescribe:
- Antidepressant drugs (often tricyclics) to accompany therapy.
- Lithium for alternating mania and depression.
ACTIVITY--No restrictions. Maintain daily activities--even if you don't feel like it.
DIET--Eat a normal, well-balanced diet--even if you have no appetite. Vitamin and mineral supplements may be necessary.
Call Your Doctor If
- You have symptoms of depression.
- You feel suicidal or hopeless.
Non Woven Bag Material
The main material for making non-woven bags is non-woven fabric. Non-woven fabric, also known as nonwoven cloth, is composed of directional or random fibers. It is a new generation of environmentally friendly material: moisture-proof, breathable, flexible, light, non-combustion-supporting, easy to decompose, non-toxic, non-irritating, rich in color, low in price and recyclable. For example, polypropylene (PP) granules are used as the raw material and turned into fabric in a single continuous process of high-temperature melting, spinneret extrusion, web laying and hot-roll bonding. It is called a cloth because it has the appearance and certain functions of a cloth.
Non woven bag material has no warp and weft, so cutting and sewing are very convenient, and because it is light and easy to shape it is deeply loved by handicraft enthusiasts. It is a fabric that requires no spinning or weaving: textile short fibers or filaments are simply arranged, directionally or randomly, to form a web, which is then reinforced mechanically, thermally or chemically. It is not made of yarns interwoven and braided together; instead the fibers are bonded together physically, so when you handle the fabric you will find that you cannot pull out loose threads.
Nonwovens break through the traditional textile principle and offer a short process flow, a fast production rate, high yield, low cost, wide applicability and many sources of raw material. Depending on their composition (polyester, polypropylene, nylon, spandex, acrylic and so on), non-woven fabrics have vastly different styles.
One kind of non-woven fabric is made by forming polymer chips, short fibers or filament fibers directly into a web by air flow or mechanical means, and then reinforcing the web by spunlacing, needle punching or hot rolling before finishing. The result is a new fiber product with a soft, breathable, flat structure, whose advantages are that it sheds no fiber crumbs and is strong, durable and silky-soft. It is also a kind of reinforcing material, and it has a cotton-like feel; compared with cotton fabrics, non-woven bags are easy to form and cheap to produce.
Advantages of Non woven Bag Material:
Light. With polypropylene resin as the primary production material, its specific gravity is only 0.9, about three-fifths that of cotton, so the fabric is lofty and feels good.
Soft. It is made of fine fibers (2-3 denier), giving the product moderate softness and comfort.
Water-repellent and breathable. Polypropylene chips do not absorb water, their moisture content is zero, and the finished product repels water well; composed of 100% fiber, it is porous, with good air permeability, so it is easy to keep the cloth dry and easy to wash.
Non-toxic and non-irritating. The product is made of food-grade materials conforming to FDA requirements, contains no other chemical components, performs stably, is non-toxic and odorless, and does not irritate the skin.
Antibacterial and resistant to chemical agents. Polypropylene is a chemically inert substance and is moth-proof; it blocks corrosion by bacteria and insects present in liquids, withstands alkali corrosion, and the product's strength is not reduced by corrosion. The material resists mildew as well.
Good physical properties. Made by spinning polypropylene directly into a web and thermally bonding it, the product is stronger than general staple-fiber products; its strength is non-directional, with similar strength lengthwise and crosswise.
In terms of environmental protection, the raw material of most non-woven fabrics is polypropylene, while the raw material of plastic bags is polyethylene. Although the two substances have similar names, their chemical structures are far apart. The molecular structure of polyethylene is highly stable and extremely difficult to degrade, so a plastic bag needs 300 years to decompose. The chemical structure of polypropylene, by contrast, is much less robust: its molecular chains break apart readily, so it can degrade effectively and move to the next stage of the environmental cycle in a non-toxic form. A non-woven shopping bag can decompose completely within 90 days. Moreover, a non-woven shopping bag can be reused more than 10 times, and the pollution it causes after disposal is only 10 percent of that of a plastic bag.
Defects of Non woven Bag Material:
1) Compared with woven textile cloth, its strength and durability are poorer.
2) It cannot be washed like other fabrics.
3) Because the fibers are arranged in a certain direction, the fabric splits easily along that direction. Improvements in production methods have therefore focused on preventing such splitting.
Maintenance of Non woven Bag Material:
1. Keep it clean and wash it often to keep moths from breeding.
2. For seasonal storage, it must be washed, ironed and dried, then sealed in a plastic bag and laid flat in the wardrobe. Take care to keep it out of direct light to prevent fading. Ventilate, dust and dehumidify frequently, and do not expose it to the sun. Place mildew- and moth-repellent tablets in the wardrobe so that woolen items such as cashmere do not become damp, mildewed or moth-eaten.
3. When worn underneath, the lining of the matching coat should be smooth, and hard objects such as pens, key pouches and mobile phones should not be kept in the pockets, to avoid local abrasion and pilling. Try to reduce friction with hard objects (such as sofa backrests, armrests and table tops) and avoid snagging. Do not wear the same item for too long; stop or rotate it after about five days so that it can recover its elasticity and the fibers do not become fatigued or damaged.
4. If pilling occurs, do not pull the pills off forcefully; trim them off with scissors instead, to avoid damage that cannot be repaired once threads come loose.
Discovering the Ocean's Deepest Secrets: The Fascinating Lives of Abyssal Creatures Diving into the vast depths of our planet's oceans reveals a world that is as alien and breathtaking to us as any extraterrestrial landscape. The deep sea, often referred to as Earth’s final frontier, presents an environment so extreme that it pushes the limits of life itself. Humanity has always been captivated by what lies beneath the water’s surface, yet we know more about outer space than these deepest parts of our own seas. This article will embark on a fascinating journey to explore abyssal creatures - those enigmatic beings thriving in unimaginable darkness and pressure deep under the ocean surfaces. We invite you to delve deeper with us into this mysterious world, shedding light on secrets hidden within its profound depths. Unveiling Abyssal Creatures: Adaptations for Extreme Living In the realm of marine biology, perhaps nothing is as intriguing as the adaptations of abyssal creatures. These fascinating organisms, dwelling in the deepest parts of the ocean where sunlight cannot penetrate, have developed intriguing features to survive in their extreme environment. One of the striking adaptations observed in many abyssal creatures is bioluminescence. This attribute, rather than being a mere spectacle, is a survival technique. Marine animals use bioluminescence for a variety of purposes, including attracting prey, deterring predators, and communicating with potential mates. This natural light show is indeed one of the most captivating deep-sea adaptations. Another startling feature among some abyssal creatures is gigantism. This phenomenon, where species grow significantly larger than their shallow-water counterparts, is thought to be a response to the scarcity of food in the deep sea. The larger size allows these creatures to travel greater distances in search of sustenance, increases their lifespan, and aids in withstanding the crushing pressures of the deep sea. As we delve into the study of these abyssal creatures and their unique deep-sea adaptations, we are constantly reminded of the resilience of life. Even in the harshest conditions, life finds a way to not only survive but to thrive in the most surprising ways. The ongoing study of these creatures not only enriches our knowledge of marine biology but also gives us a glimpse into the ocean's deepest secrets. Gleaning Insights from Deep-Sea Exploration Technology The realm of the deep sea, home to the most enigmatic and elusive creatures, has always been a subject of exploration that has held our fascination. In recent years, advancements in deep-sea exploration technology have reshaped our understanding of these unfathomable depths. Notably, the development and use of submersibles have been integral in unveiling the mysteries of the abyss. Equipped with robust designs to withstand the extreme pressure and cold of the deep oceans, these submersibles, often aided by underwater robotics, have granted us unprecedented access to the ocean's deepest reaches. They are fitted with powerful lighting and high-resolution cameras, enabling scientists to capture images and videos of the unique organisms inhabiting these depths. Moreover, these advanced machines can collect samples from the ocean floor, providing tangible evidence to bolster our knowledge of deep-sea life forms. These technological advancements have not only enriched our understanding of the deep-sea ecosystem, but also highlighted the pressing need for biodiversity conservation. 
The previously unseen richness and complexity of life at great depths underscore the value of these ecosystems and the urgent requirement for their preservation. The data collected through these expeditions play a vital role in shaping policy and conservation strategies, aiding in the overall goal of biodiversity conservation at such depth levels. The discipline of oceanography has greatly benefitted from these technological leaps. With the aid of submersibles and underwater robotics, oceanographers are now able to study previously inaccessible areas of the ocean, gaining a more comprehensive understanding of oceanic processes, ecosystems, and biodiversity. This marks a significant stride in the realm of marine research and conservation, altering our perception and appreciation of the ocean's deepest secrets. The Lure of Bioluminescent Beings In the mysterious realms of the deep sea, bioluminescent organisms rule, using their ethereal glow to survive in the inky black darkness. Among the most well-known of these captivating creatures are the anglerfish and the lantern shark. The anglerfish, a creature straight out of a science fiction novel, utilizes a luminescent lure hanging from a protrusion on its head to attract unsuspecting prey into its gaping maw. This luminescent signaling not only serves as a deceptive hunting strategy but is integral to their survival in the deep, dark ocean. Equally compelling is the lantern shark, a small species of shark that uses luminescent photophores in its skin to camouflage itself from predators. By emitting a soft glow that matches the light filtering down from the surface, the lantern shark can effectively disappear in plain sight, a strategy known as counter-illumination. These fascinating adaptations illuminate the incredible resilience and ingenuity of life in the deep sea. The ecological implications of bioluminescence extend far beyond simple predation. It plays a significant role in communication among species, serving as a beacon for attracting mates in the vast oceanic depths where visibility is next to none. Luminescent signaling can also serve as a warning to potential predators, signifying toxicity or danger. This diverse array of functions underscores the complex and vital role bioluminescence plays in the deep sea ecosystem. In the grand scheme of things, these glow-in-the-dark organisms provide us with a deeper understanding of the complex and interconnected nature of life in the ocean's depths. Their existence challenges our perception of life's adaptability, reminding us that even in the most extreme conditions, life finds a way to flourish. Navigating Life Without Light: Food Strategies The abyssal creatures' existence in a realm devoid of light holds significant implications for the feeding strategies they adopt. The absence of photosynthesis, a vital process for energy production in most surface life forms, shapes the entire food web in this mysterious world. As sunlight cannot penetrate to these extreme depths, abyssal organisms have adapted to rely on other sources of nourishment. Predation, scavenging, and detritus feeding are the primary survival tactics employed. Many abyssal animals are opportunistic predators, seizing whatever prey comes within their reach. With food scarcity being a persistent issue, these creatures have evolved to become extremely efficient hunters, boasting an array of adaptations such as large mouths, sharp teeth, and bioluminescent lures to attract prey. 
Similarly, scavengers play an integral part in the abyssal ecosystem, feasting on the remains of dead creatures that sink from the surface waters. They often possess highly developed sensory organs to detect these 'marine snowfalls.' Lastly, much abyssal life, namely the detritus feeders, subsists on the organic matter that falls from the ocean's upper layers. This 'marine snow' comprises dead organisms, decaying plant matter, and fecal material, serving as a crucial energy source in this light-deprived environment. This unique adaptation allows detritus feeders to thrive where other species cannot and reinforces their indispensable role in the abyssal food web.
For anyone who deals with computer management, whether at the office or at home, understanding file extensions is necessary so that your work can be carried out effectively. Imagine if all the data on your computer had to be managed manually: it would be both time consuming and disk space consuming. When installing software in particular, file extensions are a must. One file extension you should know is File Extension TMP. This extension is used for temporary files that a program creates and will process fully later, and it is widely compatible because it is used by multiple programs and applications. Another is File Extension BIN. This extension is designed for binary data such as disc images, and it helps you deal efficiently with content like pictures, videos and games, much as photographs are commonly stored as .jpeg files. The last, but not least, file extension is file extension DMG, the disk image format used on Apple systems. All of these file types were created to ease your work with a computer, so it is high time you started making use of file extensions. To learn more about them, you can dig into ComputerFileExtension.com. More file extensions are documented there; just browse for the one that suits your needs for efficient computer work management.
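As a quick illustration of how software tells these file types apart, here is a minimal sketch in Python. The short descriptions are paraphrased from the discussion above, and the helper function itself is hypothetical, not part of any particular tool.

```python
# A minimal sketch of extension-based file identification.
from pathlib import Path

# Descriptions paraphrased from the article above (illustrative only).
EXTENSION_NOTES = {
    ".tmp": "temporary file created by many programs",
    ".bin": "generic binary data, e.g. disc images",
    ".dmg": "Apple disk image",
}

def describe(filename: str) -> str:
    """Return a short note for a file based on its extension."""
    suffix = Path(filename).suffix.lower()  # normalize ".TMP" -> ".tmp"
    return EXTENSION_NOTES.get(suffix, "unknown extension")

print(describe("report.TMP"))  # -> temporary file created by many programs
```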
Introduction to emotional intelligence

According to Horn (2013), although many people hold firm opinions about emotional intelligence, the number of definitions of the term may be equal to the number of authors who have addressed the issue. Emotional intelligence has been defined as "the potential to be aware of and use one's own emotions in communicating with oneself and others and to manage and motivate oneself and others through understanding emotions" (Wharam, 2009, p.11). Alternatively, Bradberry and Greaves (2009) define emotional intelligence as the ability of individuals to understand, express and control their emotions. According to Barrows and Powers (2009), quoting the Oxford English Dictionary, hospitality means "the reception and entertainment of guests, visitors or strangers with liberality and good will" (Barrows and Powers, 2009, p.4).

Chakraborty and Konar (2009) identify four important attributes of emotions: intensity, brevity, partiality and instability. Intensity relates to the strength of an emotion. Brevity relates to its duration. Partiality refers to the target, such as a person or object, towards whom or which an emotion is directed. Lastly, instability relates to the transient psychological and physiological processes experienced by people.

According to Bradberry and Greaves (2009), emotional intelligence accounts for 58% of performance in all kinds of jobs. However, Bradberry and Greaves (2009) do not explain the methods used to obtain this specific figure, a fact that compromises the value and validity of their claim. Moreover, according to Bradberry and Greaves (2009), the levels of personal competence are based on self-awareness and self-management, whereas social competence is based on social awareness and relationship management.

Four components of emotional intelligence. Source: Bradberry and Greaves (2009)

According to Sparrow and Knight (2009), emotions are associated with physiological changes such as acceleration of the heartbeat, changes in blood pressure, facial expressions, and changes in voice and manner of speaking. A serious weakness of the work of Sparrow and Knight (2009), however, is that the authors fail to explore the link between specific emotions and the corresponding changes.

Chakraborty and Konar (2009) specify the core emotions as fear, anger, disgust, sadness, surprise and joy, and convincingly argue that each individual has these core emotions, while individuals differ in the proportion of each core emotion and in their tendency to express it. MTD Training (2010), on the other hand, specifies fears and desires as the primary base of many other emotions. The major types of fears and desires are specified by MTD Training (2010) in the following manner:

| Fears | Desires |
| --- | --- |
| Fear of disapproval | Desire for wealth |
| Fear of rejection | Desire for happiness |
| Fear of failure | Desire for success |
| Fear of losing control | Desire for acceptance |
| Fear of dying | Desire for security |
| Fear of losing our jobs | Desire for certainty |
| Fear of offending others | Desire for pleasure |
| Fear of being alone | Desire for power |
| Fear of pain | Desire for growth |
| Fear of uncertainty | |

Major types of fears and desires. Source: MTD Training (2010)

Blell (2011), on the other hand, stresses the highly subjective and individual nature of emotions.
According to Blell (2011), each person is unique in terms of the nature and intensity of the emotions they experience, and this depends on a wide range of factors such as age, cultural background, personal traits and characteristics, and others.

According to Wharam (2009), the history of emotional intelligence dates back two thousand years, to when Plato wrote that "all learning has an emotional base". Wharam (2009) explains the different stages in the evolution of emotional intelligence as a social science, highlighting the prevailing ideas at each stage in a detailed manner.

Interestingly, Bradberry and Greaves (2009) do not recognize a connection between IQ and emotional intelligence. They argue that "there is no known connection between IQ and emotional intelligence; you can't simply predict emotional intelligence based on how smart someone is" (Bradberry and Greaves, 2009, p.18).

Barrows, C.W. & Powers, T. (2009) "Introduction to Management in the Hospitality Industry" John Wiley & Sons
Blell, D.S. (2011) "Emotional Intelligence: For the Authentic and Diverse Workplace" iUniverse
Bradberry, T. & Greaves, J. (2009) "Emotional Intelligence 2.0" TalentSmart
Caligiuri, P., Lepak, D. & Bonache, J. (2010) "Managing the Global Workforce" John Wiley & Sons
Chakraborty, A. & Konar, A. (2009) "Emotional Intelligence: A Cybernetic Approach" Springer Publications
Chapman, M. (2011) "Emotional Intelligence Pocketbook" Pocketbooks
Cole, G.A. (2004) "Management: Theory and Practice" 6th edition, Cengage Learning
Horn, C. (2013) "Emotional Intelligence: Personal Growth and Achievement through E.I. Skills Development" Wheatmark, Inc.
Khan, J.A. (2008) "Research Methodology" APH Publishing Corporation
Kumar, V.P., Ramesh, K. & Kumar, V.V. (2011) "Work Life Balance – A Quality of Life Technique" VDM Publishing
Klopping, L. (2012) "Work-Life-Balance" GRIN Verlag
Rosenberg, J.M. (2012) "The Concise Encyclopedia of The Great Recession 2007–2012" Scarecrow Press
Sparrow, T. & Knight, A. (2009) "Applied Emotional Intelligence: The Importance of Attitudes in Developing Emotional Intelligence" John Wiley & Sons
Verick, S. & Islam, I. (2010) "The Great Recession of 2008-2009: Causes, Consequences and Policy Responses"
Werner, S., Schuler, R.S. & Jackson, S.E. (2012) "Human Resource Management" Cengage Learning
Wharam, J. (2009) "Emotional Intelligence: Journey to the Centre of Yourself" John Hunt Publishing
Great Architect of the Universe

The Great Architect of the Universe (also Grand Architect of the Universe or Supreme Architect of the Universe) is a conception of God discussed by many Christian theologians and apologists. As a designation it is used within Freemasonry to represent deity neutrally (in whatever form, and by whatever name, each member may individually believe in). It is also a Rosicrucian conception of God, as expressed by Max Heindel. The concept of the demiurge as a grand architect or a great architect also occurs in Gnosticism and other religious and philosophical systems.

The concept of God as the (Great) Architect of the Universe has been employed many times in Christianity. An illustration of God as the architect of the universe can be found in a Bible from the Middle Ages, and the comparison of God to an architect has been used by Christian apologists and teachers. Saint Thomas Aquinas said in the Summa: "God, Who is the first principle of all things, may be compared to things created as the architect is to things designed (ut artifex ad artificiata)." Commentators have pointed out that the assertion that the Grand Architect of the Universe is the Christian God "is not evident on the basis of 'natural theology' alone but requires an additional 'leap of faith' based on the revelation of the Bible".

John Calvin, in his Institutes of the Christian Religion (1536), repeatedly calls the Christian God "the Architect of the Universe", also referring to his works as "Architecture of the Universe", and in his commentary on Psalm 19 refers to the Christian God as the "Great Architect" or "Architect of the Universe".

Masonic historians such as William Bissey, Gary Leazer (quoting Coil's Masonic Encyclopaedia), and S. Brent Morris assert that "the Masonic abbreviation G.A.O.T.U., meaning the Great Architect of the Universe, continues a long tradition of using an allegorical name for the Deity." They trace how the name and the abbreviation entered Masonic tradition from the Book of Constitutions, written in 1723 by the Reverend James Anderson. They also note that Anderson, a Calvinist minister, probably took the term from Calvin's usage.

Christopher Haffner's explanation of how the Masonic concept of a Great Architect of the Universe serves as a placeholder for the Supreme Being of one's choice is given in Workman Unashamed:

"Now imagine me standing in lodge with my head bowed in prayer between Brother Mohammed Bokhary and Brother Arjun Melwani. To neither of them is the Great Architect of the Universe perceived as the Holy Trinity. To Brother Bokhary He has been revealed as Allah; to Brother Melwani He is probably perceived as Vishnu. Since I believe that there is only one God, I am confronted with three possibilities. It is without hesitation that I accept the third possibility." —Christopher Haffner, Workman Unashamed: The Testimony of a Christian Freemason, Lewis Masonic, 1989, p. 39

The Great Architect may also be a metaphor alluding to the godhead potentiality of every individual. "(God)... That invisible power which all know does exist, but understood by many different names, such as God, Spirit, Supreme Being, Intelligence, Mind, Energy, Nature and so forth." In the Hermetic tradition, each and every person has the potential to become God; this idea or concept of God is perceived as internal rather than external. The Great Architect is also an allusion to the observer-created universe.
We create our own reality; hence we are the architect. Another way would be to say that the mind is the builder.

In Heindel's exposition, the Great Architect of the Universe is the Supreme Being, who proceeds from The Absolute at the dawn of manifestation. For a detailed discussion, see The Rosicrucian Cosmo-Conception.

The concept of the Great Architect of the Universe also occurs in Gnosticism, where the Demiurge, the Great Architect of the Universe and the God of the Old Testament, stands in opposition to Christ and Sophia, messengers of the gnosis of the true God. For example: Gnostics such as the Nasoræans believe the Pira Rabba is the source, origin, and container of all things, which is filled by the Mânâ Rabbâ, the Great Spirit, from which emanates the First Life. The First Life prays for companionship and progeny, whereupon the Second Life, the Ultra Mkayyema or World-constituting Æon, the Architect of the Universe, comes into being. From this architect come a number of æons, who erect the universe under the foremanship of the Mandâ d'Hayye or gnôsis zoês, the Personified Knowledge of Life.

James Hopwood Jeans, in his book The Mysterious Universe, also employs the concept of a Great Architect of the Universe, saying at one point: "Lapsing back again into the crudely anthropomorphic language we have already used, we may say that we have already considered with disfavour the possibility of the universe having been planned by a biologist or an engineer; from the intrinsic evidence of his creation, the Great Architect of the Universe now begins to appear as a pure mathematician." To that, Jinarajadasa adds his observation that the Great Architect is "also a Grand Geometrician. For in some manner or other, whether obvious or hidden, there seems to be a geometric basis to every object in the universe."

The concept of a Great Architect of the Universe also occurs in Martinism. Martinist doctrine is that the Great Architect must not be worshipped: Martinists hold that whilst it is possible to "invoque" Him, He is not to be adored.

- Hog, Erik. "The depth of the heavens: Belief and knowledge during 2500 years." Europhysics News (2004), 35(3), p. 78.
- Summa Theologica I, 27, 1, r.o. 3.
- Richards, Stephen A. (2006). "Thomas Aquinas (1225–1274)". Theology. Pelusa Media Group.
- Bissey, William K. (Spring 1997). "G.A.O.T.U." The Indiana Freemason.
- Leazer, Gary (2001). "Praying in Lodge". Masonic Research. Archived from the original on 13 August 2006.
- Morris, S. Brent (2006). The Complete Idiot's Guide to Freemasonry. Alpha/Penguin Books. p. 212. ISBN 1-59257-490-4.
- Slipper, Mary Ann. The Symbolism of the Eastern Star, pp. 35–36.
- "Nasoræans". Catholic Encyclopedia. New York: Robert Appleton Company, 1913.
- JOC/EFR (February 2006). "Quotations by James Jeans".
- "Mathematics and Mysticism". Wisdom's Frame of Reference. Advaita Vedanta, 2005-11-04.
- Jinarajadasa, Curuppumullage (1950-11-17). "Introduction to the third edition". Occult Chemistry.
- Aurifer (2005-09-11). "The Martinist Doctrine". Sovereign Grand Lodge of the Ancient Martinist Order.
How was Jesus' death a real sacrifice if He knew He would be resurrected?

God is omniscient. From the beginning He knew that Adam and Eve would sin and be separated from Him. Yet He also knew that He would provide a way for them to be reunited to Him (Genesis 3:15). Jesus, being one with the Father and the Holy Spirit, knew He would die and be resurrected (Mark 10:32–34; 14:27–28). In fact, that is exactly why He came to earth (Matthew 16:21; John 3:16–18; 10:10; 18:36–37; Philippians 2:5–11). Given Jesus' foreknowledge of His resurrection, and the fact that He was resurrected, some wonder if Jesus' death was a real sacrifice.

Sacrifice in a general sense is giving up something of value for something of more worth: for example, a sacrifice to a deity in exchange for a good crop, or the sacrifice a soldier makes on behalf of his country. Yet in the context of the Bible, the meaning goes much deeper. Sin separates people from God, and the wages of that sin is death (Romans 3:23; Hebrews 9:22). This refers not just to physical death, but also to spiritual death in which people are separated from God eternally. But God provided a means of forgiveness. Right after the fall, in Genesis 3:15, God spoke the first promise of a coming Savior. He also made clothes of animal skins for Adam and Eve, hinting at the fact that sin leads to death and the required sacrifice for atonement. Later God gave the Israelites a temporary sacrificial system in order to make atonement for sin. The people offered the best of their flock as animal sacrifices to satisfy the need for blood; however, the sacrifice of animals was not perfect either. It was a foreshadowing of the perfect sacrifice that Jesus would one day provide (Hebrews 10:1–18). The sacrifice for sin has always been Jesus' death on the cross, and the way to receive God's forgiveness has always been through faith in Him (Ephesians 2:8–10; Romans 4:1–25).

Jesus was the perfect sacrifice. He was fully God and fully human. He experienced all of the pain and temptation known to humanity and yet did not sin (Hebrews 4:14–16). When He died on the cross, He took the sins of the world upon Himself and experienced the wrath of God against those sins. Since He was innocent, His blood "offered for all time a single sacrifice for sins" (Hebrews 10:12; cf. Hebrews 10:1–18). His blood covers all those who put their trust in Him so the Father sees His sacrifice and not our sin. Second Corinthians 5:21 explains, "For our sake he made him to be sin who knew no sin, so that in him we might become the righteousness of God."

Although Jesus knew He would be resurrected, His death was still a real sacrifice. A positive outcome does not undermine the journey it took to get there. An Olympic athlete knows the hours of strenuous training that came before standing on the podium; the victory does not negate the real sacrifice of those efforts. The mother remembers the grueling pain of labor before she held her baby in her arms; holding her baby does not mean her pains were inconsequential. Jesus will never forget what He endured in order to redeem us. He willingly suffered great emotional and physical distress even though He was innocent. At any moment He could have removed Himself from the circumstances, but He chose to stay. His sacrifice was quite real.

Jesus, our Creator, the God of the universe, came to earth as a human baby. He humbled Himself into the lowly, helpless form of an infant and was born without fanfare.
He lived a life serving others, never expecting the worship and honor He deserved. He experienced pain and temptation as every other human does (Matthew 4:1–11). His sacrifice was not in His death alone; there was real sacrifice in His life.

Praying in the garden of Gethsemane, Jesus' soul was in anguish. He pleaded with God to take away the death that awaited Him: "Abba, Father, all things are possible for you. Remove this cup from me. Yet not what I will, but what you will" (Mark 14:36). Nonetheless, Jesus surrendered to God's will and willingly sacrificed His life. His friend Judas betrayed Him, and the rest of the disciples abandoned Jesus in His hour of greatest need (Mark 14:50). Even Peter, His right-hand man, denied knowing Him. Jesus was arrested, mocked, and spat on as He stood before the Sanhedrin and Caiaphas. Pilate condemned Jesus to death by crucifixion after the Jewish people chose to release a criminal in His place. The Roman soldiers scourged Him, lash after lash tearing away at His flesh. They taunted Him, forcing Him to wear a robe and crown of thorns. He had to carry His own cross tied across His shoulders. Nails were driven into His wrists and feet. Jesus hung on the cross for hours, struggling to lift His body to take each breath. He had been beaten to a pulp, was steadily losing blood, and was dehydrated. His muscles cramped and collapsed in exhaustion. Even so, Jesus called out with one final breath and gave Himself over completely to God: "Father, into your hands I commit my spirit!" (Luke 23:46). This was real sacrifice.

Jesus did rise again! He proved victorious over sin and death. But the fact of His resurrection makes the reality of the sacrifice of His life and death no less astounding. In fact, Jesus' resurrection is what proves He is who He claims to be and that His sacrifice fully paid the price for our sin. Jesus' sacrifice leads to the possibility of life for us (John 10:10). Those of us who have life in Christ are called to offer ourselves as living sacrifices to God, being transformed by the renewal of our minds through the work of the Holy Spirit (Romans 12:1–2; Philippians 2:12–13). We also have the privilege of sharing the good news of Jesus' death and resurrection and the reality of life in Him with others (Romans 10:9–15).

Is the death of Jesus Christ or His resurrection more important?
What is the passion of the Christ?
Why did Jesus have to suffer so badly? What is the reason for Jesus' suffering?
Who is responsible for Jesus Christ's death?
What is the significance of the blood of Christ?
Truth about Jesus Christ
Male fern root, also known as Dryopteris filix-mas, Bear’s paw, Knotty Brake, and Sweet Brake, has been used for centuries as a defense against harmful organisms. Beyond that, the male fern root also offers other health benefits for humans such as digestive support and detoxification. Origins of Male Fern Male fern is native to the temperate climates of Asia, Europe, and much of South and North America. The plant is highly adaptable and can grow well in both arid and fertile soils. The root of the male fern, usually harvested in early autumn, is dried for therapeutic purposes. As far back as 103 A.D., Greek and Roman physicians used the male fern root to help expel harmful organisms from the intestines and digestive tract. In fact, it is rumored that Louis XVI of France paid large sums of money to add this powerful fern to his own medicine chest. How Does Male Fern Root Work? The male fern root is loaded with cleansing compounds known as filicin and filmarone. These essential oils are responsible for eradicating harmful organisms by creating a harsh intestinal environment that is toxic to harmful intestinal organisms. Studies also show that male fern root’s oleo-resins cause harmful organisms in the intestines to become immobile, preventing them from attaching themselves to the interior lining of human intestinal walls. In both human and animal case studies, male fern root was shown to have a dramatic effect on eliminating harmful organisms both inside and outside of the GI tract. Health Benefits of Male Fern Root Male fern root is also rich in antioxidants, folic acid, phloroglucinol derivatives, and several other necessary trace essential oils which promote overall digestive health. Traditionally cited benefits of male fern root include: - Powerful astringent - Promotes harmful organism cleansing - Supports digestive health - Natural intestinal cleanser - Encourages normal liver function Supplementing With Male Fern Root Male fern root works to establish an environment in the body that is inhospitable to harmful invaders. Have you used it? I’d love to hear from people who’ve had first hand experience. Leave a comment below and share your tips with us. †Results may vary. Information and statements made are for education purposes and are not intended to replace the advice of your doctor. Global Healing Center does not dispense medical advice, prescribe, or diagnose illness. The views and nutritional advice expressed by Global Healing Center are not intended to be a substitute for conventional medical service. If you have a severe medical condition or health concern, see your physician.
How to determine the resistor for an LED light with a rechargeable lithium battery

Today we are going to use math and science to build a rechargeable LED light. For the science part, you will need a rechargeable lithium battery, a 150 ohm resistor, and a white LED (taken from a Christmas LED light bulb). For the math part, you will need Ohm's law (V = IR, or R = V/I). For more related topics, check out our website: https://educatetube.com/?p=3617
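As a rough worked example of the Ohm's law calculation described above, here is a minimal Python sketch. The battery voltage, LED forward voltage, and LED current below are assumed typical values, not figures taken from the original tutorial.

```python
# Minimal sketch: sizing a series resistor for an LED with Ohm's law.
# All three component values are assumptions for illustration.

V_BATTERY = 4.2   # volts: fully charged single-cell lithium battery (assumed)
V_LED = 3.2       # volts: forward voltage of a typical white LED (assumed)
I_LED = 0.020     # amps: 20 mA target LED current (assumed)

# Ohm's law, R = V / I, applied to the voltage the resistor must drop:
resistance = (V_BATTERY - V_LED) / I_LED
print(f"Required series resistance: {resistance:.0f} ohms")
# -> 50 ohms with these assumptions; a larger standard value such as the
#    150 ohm resistor mentioned above trades some brightness for headroom.
```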
The Clergy Daughters' School

The regime and conditions were reported to be very harsh, and it is believed that the school was portrayed as Lowood School in Charlotte Bronte's novel "Jane Eyre". In 1825 illnesses forced the school to move to the coast, and the Bronte girls returned home to Haworth in the heart of what is now known as West Yorkshire's Bronte Country. Maria and Elizabeth died shortly after, though of course Charlotte and Emily (along with their sister Anne) reached adulthood and became world famous for their novels (including "Wuthering Heights" and "Jane Eyre").

The school itself relocated again to nearby Casterton in 1830, and part of the old school building in Cowan Bridge is currently in use as a self catering holiday cottage, with the rest of the block being private residences.

Other Bronte related websites and web pages
- Bronte Country
- The Bronte Birthplace (in Thornton on the outskirts of Bradford in West Yorkshire's Bronte Country)
- The Bronte Parsonage (in Haworth in West Yorkshire's Bronte Country)
The Pangán Bird Reserve was created in 2002 through the interest of the Fundación ProAves in conserving forests of the South Pacific region in the Nariño foothills, on the western slope of the Western Cordillera, in the department of Nariño, municipality of Barbacoas. This area is considered one of the most biodiverse in Colombia, with a high number of endemic species, which makes it one of the highest-priority ecosystems for conservation. It has been affected by the rapid growth of surrounding communities and the expansion of various crops in the area. Because of its location, a binational corridor can also be created together with other protected areas and indigenous reserves, promoting the conservation of the habitat of about 45 endemic bird species such as the Pangán, the local name for the Long-wattled Umbrellabird (Cephalopterus penduliger). It is also classified by the Alliance for Zero Extinction as an AZE site (http://www.zeroextinction.org).

Currently the reserve spans an altitude range between 550 and 1,900 meters above sea level. It has more than 17,300 acres of directly protected conservation land and 20,613 acres indirectly preserved through an awareness process with people from the nearby villages, among which are Junín, San Francisco de Cuchirrabo, Mirador de Tajadas, El Gualte and El Gavilán.

Climate: The reserve has a minimum temperature of 12 °C and a maximum of 24 °C.

Biophysical characteristics: It is geographically located in the foothills of the western plain of the Nudo de los Pastos, in the basin of the Ñambí River, a tributary of the Telembí River; thanks to the purchase of new properties, the Yaguapí River is also being protected. Tropical rain forest and montane rain forest are both present.

In the El Pangán Bird Reserve there are approximately 360 species of birds, among which 21 are threatened, 2 endangered, 13 near threatened, 4 vulnerable, and 49 endemic (EBA 041; Stattersfield et al. 1998). Amongst herpetofauna there is a record of about 21 species of amphibians in 6 families and 17 species of reptiles in 7 families. Amongst mammals there is a record of about 36 species in 21 families, of which 10 are globally threatened. Likewise, there is a variety of diurnal Lepidoptera: 91 species distributed in 68 genera, 6 families and 14 subfamilies, of which 28 are endemic, 20 are rare or very rare, and 66 are common.

The Pangán Bird Reserve has a main lodge that can accommodate 45 people comfortably; it also has water service, grid electricity, a kitchen and a meeting room. The main cabin of the reserve is reached along a 2.7 km path in very good condition that allows you to observe the flora and fauna of this area. In addition, there are 4 trails through much of the reserve totaling 12 km, which are used for ecotourism and research. Besides the cabin on the reserve, we have a two-story house in Junín, which is used for short stays and as an educational center, as it has a conference room, two bedrooms, bathrooms, showers, and water and electricity.

El Pangán Bird Reserve – hut.

- Respect the natural values of the reserve, its plants, and its animals.
- You are not allowed to collect biological material.
- Follow the instructions of the reserve's personnel and use the established paths.
- Camping is not permitted on the reserve.
- Impermeable boots must be worn.

Admission and visits: Contact or visit email@example.com and firstname.lastname@example.org

With the support of:

Oil pipeline theft threatens biodiversity in Nariño
Friday 8 October 2010.
A major Ecopetrol oil pipeline running from the Amazon to the Pacific coastal port of Tumaco is being severely damaged as people drill holes into the pipeline to extract and process oil for producing cocaine. The resulting major oil spills run into the El Pangán Reserve, destroying forest and contaminating its once pristine rivers.

El Pangán Bird Reserve celebrates its 10th Anniversary
Wednesday 16 December 2009. Protecting over 11,900 acres of pristine foothill and subtropical forest on the Pacific slope of the Andes in southwest Colombia, El Pangán Bird Reserve guarantees the protection of 48 extremely range-restricted bird species and represents one of the most important protected areas for biodiversity in the Chocó region.

Young Conservationists help in El Pangán Reserve
Thursday 30 April 2009. Members of the ProAves Young Researchers group have been helping manage the El Pangán Bird Reserve in Nariño and support its outreach activities. The group is seeking donations of used binoculars.

New bird discovered, but feared extinct
Wednesday 25 March 2009. After over 120 years, a new Colombian bird for science has been discovered and described from Medellín by ProAves Council members. ProAves searched for the new subspecies, called Giles's Antpitta (Grallaria milleri gilesi), but it is feared extinct.

Ecological disaster: oil spill for cocaine
Wednesday 25 March 2009. A critical area for biodiversity and indigenous communities in the Chocó region has been devastated by an oil spill started by thieves stealing oil to produce gasoline for cocaine processing. Please help: support our forest guardians.
From the EPA website: "The Recommended Dietary Allowances (RDA, 1989) for water intake are 1.5 ml/kcal and 980 kcal/day for a child between six months and one year old. Thus, the RDA for a 10-kilogram child is equivalent to 1,275 ml of water/day."

From the Food & Nutrition Research Institute (FNRI) website: "For infants, a recommended intake of 1.5 mL/kcal of energy expenditure, which corresponds to the water-to-energy ratio in human milk, has been established as a satisfactory level for the growing" (scroll down to "Water and Electrolytes").

From the Webdietician website: "Recommendation by National Research Council: 1.5 ml/kcal/day. 1. Mahan, Kathleen L.; Escott-Stump, Sylvia. Krause's Food, Nutrition, & Diet Therapy, 10th ed. "Nutrition in Infancy" by Cristine M. Trahms, MS, RD, FADA. W.B. Saunders Co., Philadelphia."

I hope this answers your question.

Google web searches used: "recommended water intake" infants; "recommended water intake" "1.5ml"
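For illustration, the 1.5 mL/kcal ratio quoted above can be turned into a small calculation. This is a minimal sketch; the 850 kcal/day energy intake used in the example is an assumption chosen to reproduce the EPA's 1,275 mL figure, not a value stated in the sources quoted.

```python
# Minimal sketch of the water-to-energy calculation cited above.

WATER_PER_KCAL = 1.5   # mL of water per kcal (RDA, 1989, as quoted above)

def daily_water_ml(energy_kcal_per_day: float) -> float:
    """Return the recommended daily water intake in millilitres."""
    return WATER_PER_KCAL * energy_kcal_per_day

# Example: an infant with an energy intake of 850 kcal/day (assumed value)
print(daily_water_ml(850))  # -> 1275.0 mL/day, matching the EPA figure above
```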
Here is Karl Barth's commentary:

What does "knowledge of good and evil" mean? The expression is obviously too concrete to allow us to accept the interpretation of Wellhausen (Prolegomena, p. 300 f., etc.) that its reference is to science and our general knowledge of things. Indeed, if this were so, it would be impossible to see why God should prohibit this to man and thus prohibit progress from childish ignorance to culture, or why the saga should regard this progress as deadly. For this reason other writers (e.g., Delitzsch) have thought in terms of the problem of progress from childish innocence to moral decision. Knowledge of good and evil characterises intellectual maturity and moral decision.

Church Dogmatics 3.1, The Doctrine of Creation, p. 285

I found Julius Wellhausen's perspective intriguing (and ultimately misplaced), particularly because he is best known as the first main proponent of the Documentary Hypothesis. To me, equating this "knowledge" with modern academic knowledge is just reading into the text modern (particularly German modernist academic) preconceptions and concerns. Wellhausen's view reflects poorly on his scholarship.

So what does Barth himself think? He first emphasises how one's view should be shaped by other Old Testament passages about "good and evil". After a survey he concludes:

The question frequently raised whether we are to understand by "good and evil" what is morally right and wrong, or useful and useless, or pleasant and unpleasant, cannot be answered as though these were alternatives. The Old Testament concepts of tobh and ra' embrace all these things in the instances adduced. To know good and evil is to know right and wrong, salvation and perdition, life and death; and to know them is to have power over them and therefore over all things. The Genesis saga in its account of the fall, and in agreement with the rest of the Old Testament, undoubtedly tells us that man has seized this knowledge and power to his own undoing, and that he must now live in the possession of this knowledge and power.

Church Dogmatics 3.1, The Doctrine of Creation, p. 287
With benefits like greater reliability, increased energy efficiency and less maintenance for a lower cost of ownership, the transition from discharge lamp-based lighting fixtures to LED sources in the entertainment lighting industry was understandable if not inevitable. Then, as LED engines began to compete with discharge-based fixtures in terms of brightness, the writing seemed to be on the wall – it was surely only a matter of time before discharge-based lights would disappear from use completely. However, that hasn’t been the case. So, why are manufacturers still launching lighting products with discharge sources? “It’s really about what you want to accomplish optically,” stated John Dunn, National Sales Manager at Elation Professional’s headquarter office in Los Angeles. “The physics behind an LED source don’t make sense for all types of fixtures and some effects are still not achievable with LEDs. A great example is a long-throw beam fixture. When you need that pronounced beam of a narrow beam aperture product, you just can’t duplicate the centre intensity that a lamp and reflector gives you with an LED engine.” Roger Hamers, R&D head at Elation’s European office in the Netherlands, picked up the story: “It has to do with the construction of the lamp working with an internal reflector. The size of the lamp and reflector is key. With a discharge lamp, because it is relatively small, the focal point is very near the output of the light, which gives you a narrow, very intense beam. You could do the same with LEDs but because of each LED’s relatively low wattage, to get the same effect you would need a very large LED chip and quite a large reflector, so the fixture would be too big.” Discharge lamps involve an internal electrical discharge between two electrodes in a gas-filled chamber, and it’s at that point that the level of intensity is extreme. It’s that power that can be harnessed, isolated and thrown out as a collimated beam of light. When producing an ultra-narrow beam with an LED source, that concentrated intensity is missing and consequently the beam lacks density and doesn’t travel as far. “It’s at distances that you really begin to notice differences,” Hamers said. “With an LED engine, you don’t have the pronounced centre intensity that you are looking for in a long throw narrow collimator beam fixture.” Hamers explains that in the past, with a discharge lamp, a somewhat large reflector was needed because the electrical arc was relatively big. “Today though we have tiny discharge lamps with arcs of only millimetres. The Platinum lamp has a very small arc of only about 2mm. The light is concentrated in an intense beam, which lets you build a powerful light in a much smaller housing.” It’s that discharge lamp technology that Elation Professional has exploited so successfully, even as it developed and transitioned the majority of its product line to LED. Working with Netherlands-based Philips lighting to develop the Platinum lamp, the short-arc design opened up new possibilities for engineers. The result was a series of Platinum lamp luminaires in 2009 and culminating in the breakthrough IP65-rated Proteus Hybrid in 2017. Platinum lamp technology advanced even further with the introduction of Platinum lamp FLEX technology. “With the Platinum FLEX lamp, basically we have packed the advantages of LED into a discharge lamp,” stated Marc Librecht, Sales & Marketing Director at Elation Europe. 
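As a back-of-the-envelope illustration of the optics described above, the sketch below estimates beam divergence under a simple point-source and ideal-reflector model. The arc sizes and focal length are illustrative assumptions, not Elation specifications, and real fixtures involve far more complex optics.

```python
# Rough sketch of why source size matters for beam collimation:
# under a simple model, full divergence ~ source size / focal length.
import math

def divergence_deg(source_size_mm: float, focal_length_mm: float) -> float:
    """Approximate full beam divergence in degrees for an ideal reflector."""
    return math.degrees(source_size_mm / focal_length_mm)

FOCAL_LENGTH = 150.0  # mm, assumed reflector focal length (illustrative)

for name, arc in [("short-arc discharge (~2 mm)", 2.0),
                  ("large LED emitter (~10 mm)", 10.0)]:
    print(f"{name}: ~{divergence_deg(arc, FOCAL_LENGTH):.1f} degree beam")
# With the same optics, the 2 mm arc yields a beam roughly 5x narrower,
# which is the collimation advantage of a small, intense source.
```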
“Lamp life improved significantly – five times or more compared to traditional lamps – which has lowered lamp costs over the lifetime of a fixture. The discharge lamp/ballast allows it to be dimmed and set to hibernation mode when not in use. This lowers power consumption and reduces heat, which protects electronic components and reduces overall fixture maintenance. We are now reaping the advantages of both worlds with a new type of discharge lamp.” The Platinum FLEX lamp began a revolution in savings for discharge-lamps yet its development was always about more than economics. “It’s about what you want to accomplish optically,” Dunn reiterated. “In fixtures like our beamspot Smarty series that excel at aerial effects, you want more intense narrow beam optics that can define a beam or pattern and a short-arc discharge source gives you that. It also gave us the freedom to create smaller and lighter fixtures that have as much punch as much bigger models.” The latest fixture to benefit from the FLEX lamp technology is Elation’s Proteus Excalibur, a 0.8° beam fixture with MSD Platinum 500 FLEX lamp that launched last year. “Our goal was to create a really powerful beam moving head fixture that could replace xenon searchlights,” remarked Hamers. “We just weren’t able to create that high beam intensity at this level with an LED engine, not yet anyway.” So, will LED ever be a feasible replacement for long throw discharge lamp-based fixtures? Haitz Law, the principle that the amount of light generated per LED package increases by a factor of 20 each decade, tells us so. Elation’s R&D team has stayed on the forefront of the technology and has already developed products with LED engines that mimic the centre intensity of a discharge source. “We’re emulating that centre peakedness of a lamp in some of our LED products already,” concluded Dunn. “Proteus Lucius, Proteus Maximus, Artiste Mondrian, and Artiste Rembrandt all have a white LED engine designed to create that parallel collimated beam with the pronounced centre. The ratio is not as pronounced as with a discharge lamp, but they are very impressive at distances and a big step in the right direction.” Words: Larry Beck Photos: Elation Professional
Last updated: March 23, 2021

First Aid for Pregnant Women

Two lives are at stake when a pregnant woman goes into cardiac arrest or is choking. By understanding the physical changes brought about by pregnancy, you can respond appropriately to maternal emergencies. Here is a quick reference guide to first aid modifications for the mother-to-be. This guide also provides resources under each section if you are interested in the research that defines the problem and outlines what you need to know when a pregnant woman experiences a possibly deadly event.

Maternal cardiopulmonary resuscitation (CPR)

Although most characteristics of maternal resuscitation are similar to standard adult resuscitation, several aspects are uniquely different.

- Call 9-1-1 (or EMS) or direct someone else to call. Tell the operator that there is a pregnant woman in cardiac arrest. This alerts the EMS to take specific measures, such as sending additional providers. Immediate perimortem cesarean delivery (PMCD), or resuscitative hysterotomy, should be anticipated, at the site of the cardiac arrest, within four to five minutes of the arrest.
- Start CPR with the woman flat on her back in a supine position. According to the scientific statement by the American Heart Association (AHA), high-quality chest compressions occur when the pregnant woman is supine on a hard surface. If a backboard is used, care should be taken to avoid delays in the initiation of CPR, reduce interruptions during CPR, and prevent line or tube displacement (see "Intrapartum maternal cardiac arrest: a simulation case for multidisciplinary providers").
- One-person CPR as a bystander: Follow the basic life support (BLS) sequence of C-A-B (chest compressions-airway-breathing): push hard and fast in the center of the chest at a rate of at least 100 compressions per minute and a depth of 2 in (5 cm). Perform this in cycles of 30 compressions and two breaths. Deliver the chest compressions the same way for a pregnant woman as for a non-pregnant woman. (See two-person CPR for information regarding left uterine displacement.) [1]
- Two-person CPR: Use C-A-B-U (chest compressions-airway-breathing-uterine displacement) if two or more rescuers are at hand. Continuously perform manual left uterine displacement (LUD) when the uterus is felt at or above the umbilicus (approximately 20 weeks pregnant) to help restore blood flow to the heart. When a pregnant woman is supine, this action will reduce aortocaval compression, which is the compression of the inferior vena cava and abdominal aorta by the gravid (pregnant) uterus. Historically, a left lateral tilt of 30° has been used to displace the uterus; however, the AHA reports that tilting her body may shift the heart laterally and reduce the force of the chest compressions. Therefore, the AHA recommends the left lateral tilt only if manual LUD is unsuccessful [1]. Furthermore, a manikin study found that the left lateral tilt and manual uterine displacement are equally effective during chest compressions. The researchers of this study also mention that the compressions were easier to perform in the supine position. As a bystander performing one-person CPR, high-quality chest compressions are critical. Nevertheless, if you have a wedge immediately available (or another article that can act as a wedge, such as a stiff pillow), you can place this under the woman's right hip to attempt LUD.
- If recovered, the pregnant woman should be placed on her left side to increase blood flow to the heart and baby.
2015 AHA statement on cardiac arrest in pregnancy — key points to remember about cardiac arrest in pregnancy derived from the 2015 AHA statement on cardiac arrest in pregnancy. Frequent causes of maternal cardiac arrest in the US — this article discusses the common causes of cardiac arrest in a pregnant woman; these include heart failure, bleeding, amniotic fluid embolism, and infection. Data on pregnancy complications — these figures show trends from 1993 through 2014 of three serious pregnancy complications. Pregnancy mortality surveillance system — information about the pregnancy-related mortality ratio. Cardiac arrest in pregnancy: Out-of-hospital Basic Life Support (BLS) — a one-page algorithm for healthcare providers. Cardiac arrest in pregnancy: In-hospital Basic Life Support (BLS) — a one-page algorithm for healthcare providers. How to determine fundal height — this resource explains the significance of fundal height measurement. Physiological changes in pregnancy — this review highlights the important changes in the cardiovascular system during pregnancy, this includes the normal findings on an ECG. Cardiac disease and pregnancy — outlines the general guidelines for the management of heart disease in pregnant women. John Hopkins OB critical care training: Amniotic fluid embolism, massive transfusion protocol, and cesarean delivery — John Hopkins in-hospital training video. Healthy pregnancy — this article takes you through a healthy pregnancy week by week. Automated external defibrillator (AED) in maternal resuscitation The best way to save the baby is to save the mother. Rapid defibrillation, when indicated, can be life-saving. Use the AED as per standard protocol. The guidelines are the same for the pregnant patient as they are for the non-pregnant patient. Resume compressions immediately after the delivery of the electric shock. ACLS guide to defibrillation — an online guide to the history and types of defibrillation. How and when to use an AED — a step-by-step explanation from the National Health Institute of how and when to use an AED. Overview of AEDs — the Occupational Safety and Health Administration provides a list of resources related to AEDs. Choking when pregnant The universal sign of choking is the hands clenched around the throat; however, this signal may not be present. Other immediate indications include not being able to talk or difficulty breathing or wheezing. If the pregnant woman can cough forcefully, then she should keep coughing. If the woman cannot talk, cry, or laugh, then initiate a modified Heimlich maneuver. In this situation, you protect the developing fetus by using chest thrusts versus abdominal thrusts to dislodge the object. - If you are the only rescuer, initiate chest thrusts before calling 9-1-1 or emergency services. If another person is available, have that person call for help while you begin first aid. - For stability, position yourself behind the pregnant woman with one leg in between theirs. - Place your arms underneath each of the woman's armpits. - Place your fist—thumb side towards the woman with your knuckles pointing towards the sky—in the center of the chest between the breasts. - Deliver repeated chest thrusts to the woman, straight inward in a quick, sharp manner to compress the lungs. - Continue chest thrusts until the object is relieved or the woman becomes unconscious. If the woman becomes unconscious, follow the next steps. - Lower her to a supine position and make sure to call emergency services if not already done. 
- Perform 30 chest compressions, do a head tilt/chin lift, check for the object, and sweep it out if possible. Do not attempt a finger sweep if you cannot see the foreign body.
- Attempt a rescue breath. If there is no rise and fall of the chest, reposition the airway and attempt a second breath.
- If air does not fill the lungs, perform 30 chest compressions, check for the object again, and sweep it out if possible.
- Attempt another rescue breath. If you do not see a rise and fall of the chest, reposition the airway and attempt the breath again. Repeat this process until the airway is open.
- At this point, check for a pulse for a maximum of 10 seconds.
- If a pulse is present in the absence of normal breathing, continue rescue breathing at one breath every five seconds for two minutes.
- After two minutes, reassess the pulse and check for normal breathing.
- If there is no palpable pulse, begin full CPR until: (a) EMS arrives, (b) an AED arrives, or (c) the woman revives.
- Once revived, the pregnant woman should be placed on her left side to increase blood flow to the heart and baby.
- The pregnant woman should see her healthcare provider as soon as possible; internal injuries can occur.

More resources on choking when pregnant

Clearing the airway of a pregnant woman — video tutorial of the modified Heimlich maneuver performed on a pregnant woman.
Causes of choking — learn the causes and symptoms of choking.
Signals of someone choking — a guide that reviews the signs of choking and detailed information about how to handle the emergency.
Blockage of the upper airway — a summary of upper airway obstruction and its possible complications if not promptly treated.
Standard Heimlich maneuver — this resource provides a review of the standard abdominal Heimlich maneuver for someone (not pregnant) who is choking, including infants.

Note: A quick reference guide is not a replacement for CPR and first aid training. Get trained today and save a life!

Other potential pregnancy complications and emergencies

Spot the signs of early labor — review the signs of false labor, stages of labor, and management of labor pain.
Emergency childbirth — this article guides you through anatomy and physiology, prehospital care, field delivery, neonatal care, and postpartum care.
Vaginal bleeding in early pregnancy — know the difference between spotting and bleeding, when to worry, and what causes vaginal bleeding.
Premature Rupture of Membranes (PROM) — a review of what causes the "water to break" prematurely and how a healthcare provider manages the rupture.
Seizure — a summary of preeclampsia and eclampsia.
Gestational Diabetes — a thorough review of the causes, symptoms, risk factors, and complications of gestational diabetes.
Chronic disease in pregnancy — a glance into the risk behaviors and chronic disease of the mother-to-be.
Abdominal pain in early pregnancy — case study and commentary of a 34-year-old woman who was 14 weeks pregnant and presented to the emergency department with five days of nonspecific abdominal pain, nausea, and vomiting.

1. Jeejeebhoy FM, Zelop CM, Lipman S, et al. Cardiac arrest in pregnancy: a scientific statement from the American Heart Association. Circulation. 2015;132:1747-1773. doi: 10.1161/CIR.0000000000000300

This article is not all-inclusive. Please contact the author at firstname.lastname@example.org to suggest information you think would be helpful in an emergency situation.
It has always seemed to me that if the appropriate groundwork were responsibly laid, if the students were appropriately primed and invested in a lesson, the learning would come easy. It is with that in mind that I’d like to present the following hacks for increasing student engagement. Burn Your Podium While walking down the hall during my prep one day, I noticed a student asleep at his desk in a classroom and several others hunched in full-blown daydream position. As I walked past the door, I noticed the teacher was emphatically lecturing behind a podium. The next classroom I walked past was abuzz—students were working in small groups, apparently engaged in an active discussion, while the teacher moved about the room. No sleeping bodies. The third classroom resembled the first: a booming voice peeling back the layers of U.S. history for 20 drowsy kiddos. Two heads on desks. This piqued my interest. What do my colleagues see when they walk past my classroom? How many of my colleagues are suffering from podiumism? What’s the correlation between sleeping students and lecturing teachers? It didn’t take more than two passes down two long hallways to confirm the correlation—but it doesn’t take a rocket scientist to figure this one out. The hack? Burn your podiums and scatter the ashes! Or simply rely on the student-centered strategies that we know foster discussion and heighten active engagement: - Ask your students to collaborate with their peers in small groups. Groups of three to four encourage total participation—that is, they give every student a chance to speak, to share their ideas with their peers, and to contribute to the work. - Explore innovative discussion structures such as Socratic circles. Ask students to sit in two concentric circles—an inner circle that will discuss a topic, and an outer circle that will (a) assess and provide feedback for the inner circle, (b) provide content and questions to fuel the inner circle’s discussion, (c) hold a simultaneous discussion in a digital backchannel or chat room, and/or (d) provide real-time fact-checking. Students may squirm at first without your voice to guide their discussion, but I’ve yet to experience a Socratic circle that hasn’t taken a profound turn in 10 minutes or less. - Learning stations are a powerful way to transform content that would otherwise be delivered to students from behind a podium. Begin by considering which chunks of content are essential. Trace those chunks back to authentic sources—interviews, maps, hands-on experiments, film clips, nonfiction articles, and so on. Let each station provide students with truly immersive opportunities that showcase the thing you’re inspiring them to be passionate about. Always Choose Choice Choice is a major player in the psychology of human motivation, and even a little bit goes a very long way. Consider framing assignments in creative ways to bring choice into the equation: Read any two books from list X or one book from list Y; solve all of the even or all of the odd problems for homework; write a three-page narrative or create a script for a five-minute film. Integrate Popular Culture Popular songs, advertisements, artists, and television series have enormous power when harnessed for educational purposes. Why? Because a majority of your students will hold a strong prior interest in the pieces of popular culture you can now consider part of your pedagogical repertoire. 
A number of implementations of this come to mind: using Angry Birds to teach physics, Super Bowl commercials to teach Aristotle’s rhetorical triangle, and pop lyrics to teach close reading. The possibilities are limited only by the imaginations of the teachers who are dreaming up these lessons. What would Beyoncé and Daisy Buchanan discuss over mint juleps? Or lemonade? The skills students glean when working with popular culture seem effortlessly attained and are immediately applicable to more rigorous, complex, and real-world contexts. Make Authenticity Your Compass Authenticity is at the heart of our effective instructional strategies, including the highest levels of project-based learning (PBL). Imagine students who work collaboratively to solve problems pertinent to their community, who rally support and organize fundraisers to champion a cause they truly believe in, who pour more hours into a presentation they’re making to the principal to demonstrate the educational benefits of a field trip than they ever would have spent on a mere homework assignment. When crafting your next unit, whether it be an overarching implementation of PBL or a short lesson on probability, look to the community, to the school, and to student interests for authentic problems that will sustain engagement and drive participation. Turn Work Into Play Even the simplest games, those that teach rote facts and skills and are styled after Jeopardy!, can work wonders with student engagement. If you’re feeling particularly ambitious, consider gamifying your entire classroom (e.g., swapping out grades for points, or applying the game principles that James Paul Gee outlines in his book What Video Games Have to Teach Us About Learning and Literacy) or designing game-based units of study (e.g., creating simulations that immerse your students in the content they’re studying). When colleagues pass by your door, give them something to think about for their classes. But don’t actually burn your podium, please.
Leadership has been defined as a process through which a person influences and motivates others to get involved in the accomplishment of a particular task. This single definition, although universally accepted, fails to capture the particular paths and ways of the people who are deemed great leaders. All great leaders had something unique about them, and yet they were bound by a greatness that helped them lead masses to innovation and new ideologies. Since the earliest times known to humanity, masses have been led by effective leaders. Such men and women have been responsible for ushering their people into the more modern world as we know it now. Although times have changed, the contributions of these great leaders cannot be forgotten, and although practices and ways of doing things have changed as well, the ways of these great leaders cannot be overlooked. What made them great might still be applicable in today's day and age. Here is a look at some of the greatest leaders of all time and what made them great.

Mohandas Karamchand Gandhi, better known as Mahatma Gandhi, was born an ordinary boy with a determination to excel at what he did. After completing his law studies in London, he became the most important figure in the Indian freedom struggle against colonial rule. His policy of non-violence and protest through civil disobedience eventually succeeded when he led his country to freedom in 1947. His main characteristics were resilience, knowledge, people skills, a motivational approach and leading by example.

George Washington, known as the founding father of the United States of America, was the leader of the American Revolution and the first president of the US. He was a true visionary whose vision has endured for more than 200 years. What made Washington great was his foresight, vision, strategic planning and his ability to lead people to success.

Abraham Lincoln, the 16th president of the United States, is also one of the most well-known leaders of all time. He was in office during the American Civil War, during which he held the people together and kept the nation from breaking into smaller parts. He also ended slavery in the US by signing the Emancipation Proclamation. His greatest traits were his determination, persistence, beliefs and courage.

Although despised throughout the world, Adolf Hitler was one of the greatest leaders of all time. After becoming the chancellor of Germany in 1933, he was responsible for one of the greatest economic and military expansions the world has ever seen. He successfully invaded more than 10 countries with his brilliant strategy and meticulous planning. His oratory skills, propaganda and planning made him a leader par excellence.

One of the greatest leaders of all time, Muhammad led the spread of Islam in and around Arabia. His contribution to Islam was such that it has become the second-largest and the fastest-growing religion in the world today. He united a chaotic society in the name of morality and humanity and led his people out of severe persecution and mistreatment. He led his people through a number of migrations and to successful victories in wars against armies much larger than theirs. His greatest leadership qualities were his courage, leading by example, motivational approach, persistence and decision-making.

Mao was the leader of the Chinese Revolution and the founding father of the People's Republic of China.
He successfully endured and repelled the invasion by Japan during World War II and subsequently transformed the economy of China into one of the major industrialized economies of the world. Because of him, China is a world power and a potent rival to the dominant United States of America.

Nelson Mandela was the first South African president elected in fully democratic elections. Mandela was also the main player in the anti-apartheid movements in the country and served a lengthy prison sentence because of it. This did not stop Mandela; in fact, it motivated him to devote his life to uniting his country, and he successfully managed to do so after his release from 27 years in prison. His main characteristics were his determination, persistence, focus and will.

Easily one of the greatest military leaders of all time, Julius Caesar was also one of the best political leaders the world has ever seen. He led several campaigns with numerous victories and was single-handedly responsible for the expansion of the Roman Empire. He was also responsible for reforming the Roman government and thus laying the foundation of a great empire. His greatest traits were his decisiveness, boldness, eagerness, motivation, opportunism and strategic planning.

Fidel Castro was the leader of the Cuban Revolution and later went on to become the Prime Minister of Cuba. He also served as the President of Cuba from 1976 to 2008. He endured many crises, invasions and assassination attempts and took them in stride. His vision for Cuba still stands, and he proved to be an effective leader and commander. His traits of courage, strategy, hiring the right people and delegation of duties made him the leader he was.

Prime Minister of Britain from 1940 to 1945, Churchill led Great Britain against Nazi Germany during World War II. He teamed up with the Allies, which led to the defeat and downfall of Hitler. His tenure as British Prime Minister came at a time of fear and destruction caused by Hitler and his allies. Churchill was known for his fearlessness, determination, unyielding perseverance and undying devotion to his goal.
Fort Bullen was one of two forts at the mouth of the River Gambia, placed there in 1826 to stop slave ships from sailing out into the Atlantic. It stands on the north bank of the river and, along with Fort James on the south bank, constitutes a UNESCO World Heritage Site. Fort Bullen has been open to visitors for some time, and tourism officials hope the new museum will add to its attractiveness as a historic site.

The museum was financed by the British High Commission in The Gambia. The country used to be a British colony. The British Empire abolished the slave trade in 1807 and soon took steps to eradicate it throughout its domains. Of course, before that time the empire made huge profits from the slave trade, with the River Gambia being one of its major centers for trading in enslaved people. One hopes this aspect of British history isn't ignored in the new museum.

[Photo courtesy Leonora Enking]
PLUS TWO MATHS BOOK NCERT

The Class XII Maths Book, Part I and Part II, published by the National Council of Educational Research and Training (NCERT), is described below. Maths Part I opens with Chapter 1 – Relations and Functions and Chapter 2 – Inverse Trigonometric Functions. Free NCERT Solutions for Class 12 Maths are available chapter-wise in PDF format, prepared by expert teachers from the latest edition of the books and as per NCERT (CBSE) guidelines. The CBSE Class 12 chapter on Inverse Trigonometric Functions contains only two exercises.

The inverse trigonometric functions hold great value in calculus, for they serve to define many integrals. Matrices are considered one of the most powerful tools in mathematics: they simplify our work to a great extent when compared with other methods. There are 62 questions in total across the 4 exercises of this chapter, in which you will be delving deeper into the fundamentals of matrices and matrix algebra.

Chapter 4: Determinants
Chapter 3 explains matrices and the algebra of matrices, whereas in this chapter you will study determinants up to order three only, with real entries. The six exercises cover various properties of determinants, minors, cofactors, applications of determinants in finding the area of a triangle, the adjoint and inverse of a square matrix, consistency and inconsistency of systems of linear equations, and the solution of linear equations in two or three variables using the inverse of a matrix.

Chapter 5: Continuity and Differentiability
This chapter is the extension of the differentiation of functions which you studied in Class XI. You must have learnt to differentiate certain functions, like polynomial functions and trigonometric functions. This chapter explains the very important concepts of continuity and differentiability and the relations between them. You will be learning about the differentiation of inverse trigonometric functions. Furthermore, you will get acquainted with a new class of functions called exponential and logarithmic functions. There is a total of eight exercises in this chapter, so you will have to dedicate some extra time and effort to get well-versed with it.

Chapter 6: Application of Derivatives
In Chapter 5, you learned how to find the derivatives of composite functions, implicit functions, exponential functions, inverse trigonometric functions, and logarithmic functions. In this chapter, you will study the applications of the derivative in various disciplines such as engineering, science, and many other fields. For instance, you will learn how the derivative can be used to determine the rate of change of quantities and to find the equations of the tangent and normal to a curve at a point, among other uses. You will also use the derivative to find intervals on which a function is increasing or decreasing.
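To make the idea of the derivative concrete, here is a simple worked example (an illustration of the concept only, not an exercise taken from the NCERT textbook itself). The derivative of a function gives the slope of the tangent line to its graph:

\[ f(x) = x^2 \quad\Rightarrow\quad f'(x) = 2x \]

So at the point \((1, 1)\) the tangent line has slope \(f'(1) = 2\) and equation \(y = 2x - 1\). The sign of the derivative also identifies the intervals of increase and decrease: here \(f'(x) > 0\) for \(x > 0\), so \(f\) is increasing there, and \(f'(x) < 0\) for \(x < 0\), so \(f\) is decreasing there.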
There are five exercises in total, giving you detailed insight into the applications of derivatives. Finally, the derivative is used to find the approximate values of certain quantities.

Chapter 7: Integrals
Differential calculus is based on the idea of the derivative, which arose from the problem of defining tangent lines to the graphs of functions and calculating the slopes of such lines. Integral calculus addresses the problem of defining and calculating the area of the region bounded by the graph of a function. With eleven exercises in total, you will have to learn each and every topic and its related questions with sheer concentration.

Chapter 8: Application of Integrals
In this chapter, you will study some specific applications of integrals: finding the area under simple curves, as well as the area between lines and arcs of circles, parabolas and ellipses (standard forms only). There are two exercises in this chapter, in which you will also deal with finding the area bounded by the curves mentioned above.

Chapter 9: Differential Equations
In Differential Equations, you will study some basic concepts related to differential equations: general and particular solutions of a differential equation, the formation of differential equations, a number of methods for solving first-order, first-degree differential equations, and some applications of differential equations in different areas. There are six exercises in this chapter. Differential equations appear in a plethora of applications across other subjects and areas, so studying this chapter in detail will help you gain insight into many modern scientific investigations.

Chapter 10: Vector Algebra
Did you know that quantities that involve only one value (magnitude), which is a real number, are called scalars? Here the students also explore several real-valued functions and their graphs. First and foremost, you should imprint this in your mind: NCERT books are the biggest tool you can have to get well-versed with the fundamentals. We make it a point to provide students the best help they can get, and the solutions reflect just that.
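As with the derivative, a small worked example may help (again, an illustration rather than a problem from the textbook). The definite integral measures the area under a curve; for the region bounded by \(y = x^2\), the x-axis, and the lines \(x = 0\) and \(x = 1\):

\[ \int_0^1 x^2 \, dx = \left[ \frac{x^3}{3} \right]_0^1 = \frac{1}{3} \]

so the area of that region is exactly one third of a square unit.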
Does Differentiation Work? A Debate Over Instructional Practices

Differentiation, a teaching approach that focuses on the different instructional needs of each student, has sparked a heated debate in the education community. Last month, educational consultant Jim Delisle published a piece, "Differentiation Doesn't Work," arguing that the practice has a place in theory but cannot withstand implementation. In response, professor Carol Ann Tomlinson, a leading proponent of differentiation, published "Differentiation Does, in Fact, Work." Readers responded to both pieces in droves. Here's a compilation of comments, tweets, and blog posts in one Storify collection, as assembled by the Education Week Commentary team.
Scientific experts in fields concerned with the health and well-being of children and their mothers recognize breastfeeding as the best way to nourish infants and to promote the post-partum and possibly the longer-term health of women. This consensus led the world community more than 20 years ago to recommend that infants be breastfed exclusively for four to six months, with continued breastfeeding for two years or longer.

Benefits to the infant

The benefits to the infant fall into three broad categories: nutritional, immunologic, and behavioural. Reviews of the world's scientific literature by the Working Group reaffirmed the strength of the conclusions regarding the nutritional and immunologic benefits and led to an acknowledgement of the potential for significant behavioural advantages.

It is clear that the nutritional benefits to infants extend through the periods of exclusive and partial breastfeeding and possibly beyond the latter period. The high nutritional value of human milk conferred by the high bioavailability of its nutrients, the balance of specific nutrients, and other characteristics is of significant advantage to all infants. It is of most value, however, to infants of families with low economic resources. Human milk substitutes accessible to those infants generally are unsafe because of their inferior nutritional quality and frequent contamination with potentially fatal infectious agents. The bacteriologic safety and high nutritional value of human milk argue strongly for maximizing the intake of human milk, especially among infants of the poor. This is especially important when other foods are added to the infant's diet. Added foods are of inferior quality, are often bacteriologically contaminated, and generally displace human milk, thereby raising the risk of infection and malnutrition in children living in unsafe environments.

Although there are recent data that support the extension of exclusive breastfeeding beyond six months among some infants, the published data remain too limited to conclude that the period of exclusive breastfeeding should be extended universally beyond the current recommended period of four to six months. There is no controversy, however, regarding the important role of human milk in supplying essential nutrients to the child during the period of mixed feeding.

The major possible exceptions to the full adequacy of exclusive breastfeeding relate to vitamins K, D, and B12. Although there is some disagreement, most medical scientists continue to recommend vitamin K supplementation at birth for all infants, regardless of feeding mode, to prevent haemorrhagic problems in the newborn. Infants with limited exposure to sunlight, or those whose mothers had low vitamin D stores because of low vitamin D intake or limited sun exposure, should receive vitamin D supplements. Infants of strict vegetarians who do not eat eggs or milk products also run the risk of vitamin B12 deficiency and should be provided with an exogenous source of this essential nutrient.

Maternal malnutrition also may result in abnormally low levels of some nutrients in human milk. Generally, however, even women living under very harsh conditions will provide sufficient milk of adequate quality to breastfeed infants exclusively for four to six months.
More importantly, circumstances that lead to maternal malnutrition almost uniformly result in malnutrition and serious infectious morbidity among non-breastfed infants. The mortality rates of non-breastfed infants in these circumstances are estimated to be 12 times higher than those of breastfed infants.

The contamination of human milk by xenobiotics may present safety concerns in some circumstances. Situations in which environmental pollutants, such as heavy metals and organohalides, may contaminate human milk should be evaluated carefully; however, attention must always be given to the benefits and risks presented by the exclusion of human milk and the use of its substitutes. In these circumstances exposure of the infant obviously begins in utero, thus making it more imperative to attend to the contamination of the environment.

The major biological xenobiotic of concern is the human immunodeficiency virus (HIV), which is responsible for the acquired immunodeficiency syndrome (AIDS). Although it is clear that human milk may carry HIV, controversy remains as to the conditions that determine the infectivity of HIV in human milk. There is no controversy, however, that the benefits of human milk are much greater than the risks presented by HIV in areas characterized by high rates of infant mortality and malnutrition.

In addition to serving as the most reliable, safe, and nutritious food for infants, human milk provides unique immunologic benefits, which result in decreased rates of infection and other desirable outcomes. In the recent past, scientific evidence was limited to the likelihood that human milk conferred passive protection; that is, protection against infectious disease occurred because of the direct interaction between specific milk components and potential pathogens that threaten the infant. However, there was little evidence to support the hypothesis that human milk alters the development of the infant's immune system to provide active as well as passive protection. Data reviewed by the Working Group support the idea that both active and passive mechanisms likely account for the decreased infectious morbidity observed in infants in both economically developing and fully industrialized nations. Of particular interest were data from the United Kingdom demonstrating that breastfeeding for at least 13 weeks had protective effects against gastrointestinal and, to a lesser degree, respiratory infections that lasted beyond weaning.

Recent studies have demonstrated that breastfeeding enhances responses by the infant to infectious challenges. Enhanced responses were noted following the administration of parenteral vaccination with diphtheria and tetanus toxoids and Hib (Haemophilus influenzae type b)-protein conjugate, oral poliovirus, and "natural" infections with respiratory syncytial virus. The mechanisms responsible for these responses are the subject of intensive investigations. It is likely that both the high nutritional quality of human milk and its complex immune components (e.g., various growth factors, cytokines, and anti-inflammatory factors) are responsible for the improved immune function of breastfed infants.

The combined nutritional and immunologic protective effects of human milk against diarrhoeal disease result in a reduced incidence and severity of the disease. As a consequence, breastfeeding protects strongly against diarrhoeal mortality, especially in young infants.
Although partial breastfeeding is protective, maximal protection is achieved with exclusive breastfeeding. A reduction in the incidence of respiratory disease is less clearly established, but breastfeeding does appear to reduce the severity of respiratory illness, as reflected by hospitalization rates and mortality. There also is fairly consistent evidence that breastfeeding protects against otitis media, but the effect is less than that seen for diarrhoeal diseases. Evidence for protection against other infectious diseases is less clear, but nonetheless suggestive. Theoretical mathematical projections based on data obtained from the World Health Organization indicate that a 40% reduction in the prevalence of non-breastfeeding would result in a 50% reduction in respiratory deaths and a 66% reduction in diarrhoeal deaths worldwide in children 18 months old or younger.

Evidence also was reviewed which suggests that the immunologic benefits may last for a longer term. Investigators have reported that for years after breastfeeding has ceased, breastfed infants have a significantly lower risk than bottle-fed infants of developing type I diabetes, Crohn's disease, and lymphomas in childhood.

Behavioural benefits are more difficult to document. Although it is highly plausible that specific constituents of human milk enhance the infant's neural development and that suckling at the breast promotes desirable emotional ties between mother and infant, objective experimental evidence in support of the hypothesis that breastfeeding directly enhances the infant's behavioural development is limited. The usefulness of most published investigations is restricted by inadequate study designs, inappropriate evaluation tools available to or selected by the researchers, and an overly narrow focus on developmental outcomes, such as IQ scores and psychomotor indices. The narrowness of the focus excludes consideration of interactions between feeding mode and other potentially important modulators of behavioural development (such as reductions in morbidity) and disregards the processes that underlie development. Furthermore, very little attention has been given to the alternative possibility that breastfeeding may limit mental development through, for example, the transfer to the child of toxic substances in milk. This alternative is complicated by the confounding likelihood that the infant's exposure to toxicants is initiated during gestation, the period of maximal vulnerability.

Even when these caveats are acknowledged, previously breastfed children appear to have an advantage over bottle-fed children in developmental scales, IQ tests, and assessments of other specific cognitive outcomes. Among the most provocative observations are the positive effects on IQ of feeding human milk to premature infants. Although the workshop participants acknowledged controversial aspects of those observations, the need to replicate such studies was recognized widely. The consistency of the evidence argues strongly for evaluations with more robust designs and evaluation tools. Such investigations should permit inferences regarding the nature, degree, and persistence of the potential effects of breastfeeding or human milk feeding on behavioural development and the assessment of the modulation of putative effects by social, economic, and other environmental factors.
Benefits to the mother

Maternal benefits also fall into three broad categories: reductions in fertility, health benefits of a non-behavioural nature, and positive behavioural outcomes. The Working Group examined the first and second categories in greater detail than the third.

Generally, lactation is expected to help women maintain a healthy body weight when sufficient quantities of adequate food are readily accessible and to enhance the physiological efficiency of nutrient utilization under nearly all conditions. The hormonal changes that accompany lactation are expected to influence maternal behaviour in ways that support breastfeeding and promote mothering behaviours. Investigators also have suggested that successful breastfeeding is important to maternal self-efficacy and possibly social empowerment. These expectations likely are most relevant when maternal nutritional and social needs are met.

The mother's responses to breastfeeding have been studied much less than those of the infant. A principal limitation is that lactation seldom has been studied in the context of the complete reproductive cycle, which includes the nulliparous period, pregnancy, lactation, and the nonlactating, non-pregnant state that precedes a subsequent pregnancy. The significance of this omission stems from the likelihood that biological strategies for maintaining maternal well-being through the life cycle rely on a healthy physiological preparation for reproduction and adequate pregnancy intervals for maternal repletion. Interactions among the contiguous and interdependent stages within reproductive cycles and the biological effects of the distinct socio-economic, demographic, and environmental conditions in industrializing, newly industrialized, and post-industrialized settings are expected to modulate maternal responses to lactation.

Insufficient data were available to the Working Group for an assessment of the effects of lactation on the prevention of maternal obesity and nutrient depletion. Although obesity is of most concern in fully industrialized and newly industrialized nations, it is, ironically, a growing problem among some developing countries with large numbers of undernourished women of reproductive age. Similarly, the paucity of data makes it difficult to assess the global impact of lactation on nutrient depletion of the mother and its potential consequences for maternal and infant health.

Issues related to longer-term health outcomes, that is, osteoporosis and breast cancer, were addressed more confidently. Concerns that lactating women may be at greater risk of osteoporosis because of loss of calcium in milk have not been supported by recent studies conducted largely in affluent countries. Current evidence supports a preventive effect of breastfeeding against pre-menopausal breast cancer, but no association has been found between breastfeeding and post-menopausal disease.

Data reviewed by the Working Group reaffirmed the suppression of fertility by breastfeeding. The duration of the mother's infertility is directly dependent on her infant's suckling activity. Breastfeeding is most effective in decreasing fertility (and thereby facilitating longer, more desirable interpregnancy intervals) when infants are breastfed on demand and are provided no other sources of food or water. There also are data suggesting that the use of pacifiers may lessen the effects of breastfeeding on fertility by decreasing the infant's suckling activity.
The mean anovulatory period for non-breastfeeding women appears to be approximately 50 days. In breastfeeding women, anovulation may persist well into the second year post-partum. Infertility appears to be maintained by a suckling-induced disruption of the normal pulsatile pattern of luteinizing hormone (LH) release (essential for ovulation) and facilitated by an increased hypothalamic sensitivity to the negative feedback effects of oestradiol. The mechanisms responsible for these maternal responses to lactation have not been identified but are sufficiently reliable to have led a group of investigators to conclude that when women fully or nearly fully breastfeed and remain amenorrhoeic, breastfeeding provides more than 98% protection from pregnancy in the first six months post-partum. The programmatic implementation of this conclusion is known as the lactational amenorrhea method (LAM) of natural family planning.

Any biological or social factor that either promotes or interferes with the infant's suckling activity (such as a delay in the introduction of complementary foods or the inappropriate or premature introduction of supplementary or complementary infant foods) will, respectively, prolong or shorten the duration of infertility. Discussions on the control of milk synthesis were particularly relevant to these considerations. It is clear that milk synthesis is under autocrine (local) control. The frequency and degree to which the breast is emptied are the principal determinants of the quantity of milk that is produced. Generally, interference with the suckling activity of infants will be reinforced by a subsequent decrease in milk production. Under such conditions, feedback mechanisms will lead to progressive decreases in suckling, which, in turn, will disable mechanisms that disrupt pulsatile release of LH and eventually result in an earlier return of ovulation.

Demographic effects of breastfeeding

The Working Group examined the demographic effects of the impact of breastfeeding on fertility and infant mortality. It reviewed the impact of breastfeeding on one of the two principal proximate determinants of fertility, the rate of births. The other proximate determinant, the reproductive span (the interval between a woman's first ovulation and the time she either dies or becomes infertile), was not considered, because breastfeeding is not thought to influence it.

The effects of breastfeeding on the dynamics of birth intervals may be examined by dividing the birth interval into three parts: the post-partum period (the time between delivery and the resumption of both ovulation and sexual intercourse), the time between the end of the post-partum period and the next birth, and the period of the pregnancy associated with a live birth. Endocrine responses that make lactation possible prolong post-partum anovulation and amenorrhoea through mechanisms that have been reviewed briefly in the preceding section and regulate other reproductive functions (such as luteal function) through mechanisms that are understood less comprehensively. A semi-quantitative assessment of the impact of these effects on fertility suggests that a woman's lifetime fertility may be reduced as much as 50% by prolonged breastfeeding. The demographic impact, however, also will be influenced by the effect of infant-feeding practices on child survival.
Unlike the semi-quantitative assessments of the effects of the proximate determinants of fertility on population growth, the impact of the proximate determinants of child mortality on population growth has been more difficult to estimate. Six types of factors have been identified among the principal determinants of child mortality: maternal characteristics, environmental contamination, nutrient deficiency, injury, personal illness control, and the gestational age and development of the newborn. The first three are influenced greatly by breastfeeding.

The relations among breastfeeding, fertility, and child mortality are confounded by the socio-economic changes that often accompany changes in breastfeeding patterns. The socio-economic conditions that traditionally have led to decreases in the incidence and duration of breastfeeding tend, over the long term, to have a beneficial effect on the six factors listed above and to diversify and increase the use of contraceptive strategies for birth control. Nonetheless, if socio-economic changes are set aside and the impact of breastfeeding on population growth, through its effects on fertility and child mortality, is assessed, it appears that long-term breastfeeding (i.e., breastfeeding into the second year of the child's life) is likely to have only a limited effect on population growth. This, however, does not diminish the health benefits to both mother and infant anticipated from increased birth spacing and the nutritional and immunologic benefits discussed previously.

Current worldwide breastfeeding trends

The Working Group also reviewed data from demographic and health surveys conducted from 1990 to 1993. It is alarming that under-five mortality remains excessive by any measure in much of the world. For example, in 13 African countries for which data are available, mortality among children between one and four years of age ranged from 318 per 1,000 live births in Niger to 83 per 1,000 in Namibia. As in all regions, infant mortality in those 13 countries generally accounts for an increasing proportion of under-five mortality as the under-five mortality rate drops. It is very likely that improved breastfeeding practices will have a significant impact on child mortality in nearly all economically developing countries.

The term "breastfeeding practices" deserves emphasis, because the percentage of children born in the last five years who were ever breastfed ranged from 95% to 97% in the same 13 African countries for which mortality data are available. The percentages of children ever breastfed were similarly high (greater than 90%) in the Asian, South Pacific, and Latin American countries for which data are available. In most developing countries that were surveyed, substantially more than 50% of all infants were breastfed up to 12 to 15 months of age, and more than 25% were breastfed up to 20 to 23 months. The median duration of breastfeeding among children born in the last three years ranged from 17 to 28 months in the African countries that were surveyed. No economically developing country in the regions surveyed had a mean duration of breastfeeding below six months, and in most countries the mean durations were substantially above that level. Yet, consistently across all countries surveyed, the mean duration of breastfeeding was from 5% to nearly 100% greater in rural than in urban areas. In most countries, a minority of infants were fed only human milk through four months of age, although rates varied widely among those countries surveyed.
For example, 90% of Rwandan infants were reported to receive only human milk through four months of age. The rates in Tanzania, Kenya, Madagascar, and Namibia ranged from 17% to 47%, and the rates in Burkina Faso, Ghana, Malawi, Niger, Senegal, Nigeria, Zambia, and Cameroon ranged from 1% to 13%. Rates were similarly divergent in other regions of the world. The percentages of infants whose diets were restricted to only human milk and water were similarly divergent among countries but were substantially higher than the percentages of those receiving only human milk.

Sociocultural factors affecting breastfeeding

Breastfeeding is a learned, not an instinctive, behaviour. Desirable breastfeeding practices must be actively promoted and supported. Successful breastfeeding, therefore, is dependent upon social and cultural factors. Major shifts in breastfeeding practices in fully industrialized countries over the last 30 to 40 years and rural-urban differences in most economically developing countries provide the best evidence of the great influence of sociocultural factors on breastfeeding. The best predictors of breastfeeding practices in fully industrialized countries are sociocultural rather than biological. This also is increasingly true in the industrializing countries, especially in those that are urbanizing quickly. However, recognizing the importance of sociocultural factors in determining infant-feeding practices does not lessen the difficulty of understanding how specific sociocultural factors operate or may be measured adequately to explain variations within and between different infant-feeding patterns.

The sociocultural factors that have been examined most often are those that can be integrated easily into biomedical and epidemiologic models, such as religion, marital status, education, and kinship pattern. These often are included in assessments of knowledge, attitudes, and beliefs. Yet because infant feeding, and breastfeeding in particular, represents a wide range of highly emotional issues, it is often difficult to obtain reliable and valid data from informants in most studies. Other factors are less commonly studied, because they are more difficult to assess. For example, factors reflective of values, attachment, nurturance, and sexuality require interpretation from social science paradigms and are not as amenable to reductionist models. Nonetheless, all of these factors probably contribute significantly to the links among what people say they know, what they know, and what they practice.

As long-term, detailed ethnographic analyses have become increasingly available, a conceptual model has emerged that describes culture as an interaction between style and structure. Style refers to the manner of expression characteristic of an individual, a time, and a place. The application of this model is expected to increase understanding of the influence of sociocultural factors on breastfeeding. Infant-feeding styles communicate fundamental values, attitudes, and beliefs reflected in the interaction between mother and infant during feeding, in how breastfeeding is accomplished, and so forth. These styles of feeding are part of dynamic trends and fashions. Styles in turn are in a dynamic interaction with defined organizational and institutional structures, such as those related to health care, the economy, and governments, each with its own potential influence on infant-feeding choices.
An improved understanding of relevant styles and structures should enhance our ability to predict how infant-feeding choices will be affected by changes in sociocultural factors. Despite these limitations, a comparison of the effects of biological and sociocultural factors on measures of breastfeeding success (for example, prevalence and duration) strongly suggests that breastfeeding is biologically robust but highly susceptible to positive and negative sociocultural influences. The principal basis for this conclusion is that breastfeeding is sustainable under the wide range of biological conditions characteristic of affluent women in economically developed countries and of poor women in harsh environments in less economically developed areas. This is not true when breastfeeding is considered under an analogously wide range of sociocultural conditions relevant to breastfeeding. Although it would be a mistake not to recognize the cost that this characteristic presents to poor women (that is, to their biological well-being), it is equally fallacious to conclude that adequate breastfeeding can be accomplished only when all biological needs are optimally met.

Resources needed to protect, support, and promote breastfeeding

The information reviewed by the Working Group did not allow a prioritization of resources needed to protect, support, and promote breastfeeding. It did allow the group, however, to identify resources that would enhance the likelihood of successful lactation in nearly all settings. The paucity of quantitative information available to assess the relative importance of resources needed in specific settings represents a major research gap. The resources identified by the group fell into three broad categories: time, space, and sociocultural/economic support.

The physiological and sociocultural information reviewed by the group documented clearly that breastfeeding requires time of the mother. The two principal sources and sinks of time are the family and, when the mother also is employed outside the home, her employer. Because milk production is sustained by physiological processes dependent upon the regular removal of milk, time constraints that result in decreased or inefficient suckling will have a negative impact on milk production and eventually on the sustainability of adequate milk production. Time constraints imposed by employers have marked negative impacts on breastfeeding success because of adverse effects on suckling. Employment policies that recognize the importance of maternal leaves, temporary part-time employment options that do not adversely affect longer-term full-time employment opportunities, and opportunities for breastfeeding in the workplace represent complementary strategies to help establish and sustain adequate lactation.

Space is required to breastfeed. Differing perceptions of physical modesty, hygiene, and other concepts dependent upon cultural norms and relevant to infant feeding and maternal well-being will make diverse demands on the characteristics of spaces best suited for the protection, support, and promotion of breastfeeding. These demands apply to family residences, places of employment, and various sites where communities congregate, such as places of worship, businesses, and entertainment.

Sociocultural and economic support fall into two subcategories, tangible and intangible support.
Examples of the types of tangible support needed to obtain full benefits of breastfeeding are safe and adequate food for the mother and complementary infant foods for the period of mixed feeding when foods other than human milk are introduced to the infant's diet; fair labour compensation that recognizes the needs of families; and adequate housing and related services that protect, support, and promote the hygienic well-being of the family.

Examples of intangible support tended to centre around five social sectors: government, business, community, health professions, and educational and research institutions. Those which centre around government represent a wide range of issues. They extend from laws and policies that govern parental leaves to those that lead to differing urbanization trends. Parental leave policies are of obvious relevance; urbanization trends influence family support structures and employment patterns, which affect the protection, support, and promotion of breastfeeding.

Although the Working Group recognized the significant influence that the commercial sector plays in determining parental and family leave policies of specific countries, the negative impact of both overt and subtle inappropriate marketing practices by producers of infant foods received more focus. Strategies that have a negative impact on breastfeeding appear designed to decrease suckling at the breast, thereby causing decreased milk production, with increased dependence on human milk substitutes, and undermining maternal confidence in the ability to breastfeed and the general social support of breastfeeding. These strategies are implemented by such diverse activities as direct advertisement to the public and the now discredited distribution of human milk substitutes at little or no cost in health-care settings or directly to family residences. Other issues relevant to the commercial sector's employment policies and the impact of these policies on the time mothers have to breastfeed have been discussed previously.

The issue common to communities-at-large, health professions, and educational and research institutions is recognition of breastfeeding as the expected mode of feeding for all infants and, its corollary, the use of human milk substitutes only when specifically indicated. Although all agencies and institutions with interests in infant health recommend exclusive breastfeeding for at least the first four to six months, these recommendations are not commonly reflected in the practices of communities, health professionals, and educational and research institutions. Examples of the consequences of failing to make practices conform with recommendations are inappropriate management of lactation by health professionals who have received inadequate training, poor knowledge and attitudes of many young families relative to breastfeeding because of inattention to lactation in primary and secondary education, and a poor knowledge base for the improvement of lactation practices because of inadequate research support.

Conclusions and recommendations

The data reviewed by the Working Group provide a strong scientific base for the present recommendations. The present benefits of breastfeeding in all countries and the benefits that are projected when international recommendations are implemented more broadly are of great significance to individuals and organizations responsible for the implementation of scientific knowledge that is highly pertinent to infant and maternal health.
The Working Group urges the active protection, support, and promotion of breastfeeding by governments, communities, the commercial sector, educational and research institutions, voluntary organizations responsible for the promotion of maternal and infant health, and, in particular, health professionals and facilities. Especially relevant to this recommendation is the resilience of lactation in the face of harsh biological conditions and the fragility of breastfeeding in the face of inadequate sociocultural and economic support. These characteristics impose a special responsibility on all societies to safeguard the well-being of women by ensuring their access to a safe and adequate food supply throughout their life cycle and to provide adequate time, space, and sociocultural and economic support to women and their families to maximize the health of all children from infancy and the health of women throughout the reproductive cycle.
The Nunavut Land Claims Agreement (the Nunavut Agreement or NLCA) signed in 1993 provided the Inuit of Nunavut with a comprehensive compensation package of cash, lands and powers and, most importantly, the opportunity to control their own jurisdiction and their own public government in the Canadian federation. It is the realization of this latter commitment, with the establishment of the Nunavut Territory on April 1, 1999, that we are celebrating this year.

Unfortunately, 10 years on and roughly $10 billion later, Nunavut is still the most financially dependent and underserviced jurisdiction in the country and is still experiencing severe social problems. On its 10th anniversary, it is opportune to ask why Nunavut has not been able to address these problems or create the conditions necessary to free itself from complete financial dependence on the federal government.

The answer to this question is not simple. Obviously, the challenges were overwhelming, and 10 years is a very short period to meet them. But I would argue that a key part of the problem lies in the parallel public/Inuit governance structure created under the Nunavut Agreement. Essentially, there is a mismatch between responsibilities and capacity: the government is responsible for the delivery of public services such as health, education and employment but has few resources of its own and little power to carry out its duties. On the other hand, the land claims organization — Nunavut Tunngavik Incorporated (NTI) — was given ownership of all of the cash, lands, resource royalties and powers provided in the NLCA, but no responsibility for providing services to the people of the territory.

The story of how Nunavut came to have such a structure of governance is recounted elsewhere in this issue. Here I wish to discuss its impact on the Nunavut government. As I will explain, this de facto parallel system of governance keeps the Nunavut government wholly dependent on the federal government to finance even its basic operations. The degree to which Nunavut's citizens can enjoy a quality of life on par with all Canadians will ultimately depend on the willingness of NTI to support the public government.

The self-sufficiency and prosperity of any society depends largely on the financial resources available to its government to provide public programs and services, to generate economic activity and to create employment. In Canada, all governments seek financial independence because of the greater control it affords them to provide culturally or regionally appropriate services to their citizens. The development of natural resources is one of the key means by which provinces and territories achieve economic independence. A fundamental attribute of all Canadian provinces is the ownership of lands and resources, and the resulting royalties, within their borders. For instance, under its terms of union Newfoundland received the right to all lands, mines, minerals and resulting royalties. Furthermore, in 2005, it received the right to be the principal beneficiary of the oil and gas resources off its shores, without any penalty to its equalization payments. It was through this right that the Newfoundland and Labrador government was able to lift itself out of economic dependency.

Nunavut also has significant valuable natural resources. In fact, it is blessed with some of the world's most abundant sources of energy and mineral resources.
An estimated 10 percent of Canada's total oil reserves, 20 percent of its natural gas reserves and significant deposits of uranium, diamonds, gold and iron ore are contained within Nunavut's borders. In 1993, the NLCA divided the ownership of these resources between NTI and the Government of Canada. The agreement transferred to NTI the right to mines, minerals and royalties on 18 percent (356,000 square kilometres) of the territory, while Canada retained the remainder (minus the communities, which are a territorial responsibility). In addition, the Nunavut Agreement ensures that Canada pays resource royalties to NTI equivalent to 50 percent of the first $2 million of resource royalties earned annually and 5 percent of the rest.

In comparison, the Nunavut government owns no lands outside of its municipalities' borders, receives no royalties from the development of its natural resources, and has no means of its own to fund its public services. Consequently, the Nunavut government has remained almost wholly dependent on federal transfers. It receives approximately $1.145 billion annually in transfer payments and targeted funding programs, representing a full 90 percent of its total budget. In other words, the Nunavut Agreement effectively gave NTI one of the most important means through which the Nunavut government could ever hope to become financially independent from the federal government: resource development and resource royalties.

Moreover, while the Nunavut government received the relatively small sum of $150 million as initial start-up funding, almost 12 times that amount was actually provided to Inuit under the terms of the Nunavut Agreement. Between 1993 and 2007, Canada transferred $1.173 billion in compensation to the Nunavut Trust controlled by NTI and three regional Inuit associations (RIAs): Qikiqtaani, Kivalliq, and Kitikmeot. According to the deed of the Nunavut Trust, the money must be used for the general benefit of Inuit. The three RIAs control the investments made with the trust fund, whereas NTI controls the distribution of income made from those investments. While it is true that modest amounts of the investment income have been distributed as start-up funds for Inuit businesses, scholarships and hunter support programs, the trust has been used primarily to fund the operations of these four corporations and their subsidiary businesses; none of it has been used to help fund public services provided by the Nunavut government.

Under the terms of the Nunavut Agreement, "no Major Development Project may commence until an IIBA is finalized." This means that no mine, transportation corridor, hydroelectricity development or any other project can occur on the lands owned by NTI until an Inuit Impact and Benefit Agreement (IIBA) is concluded. An impact and benefit agreement is a legal contract between a developer and the landowner (in this case, NTI or one of the regional Inuit associations) to provide compensation for the use of their lands. Compensation can include lucrative service contracts for enterprises owned by NTI or the RIAs, training and scholarship funding, housing and recreation facilities or financial compensation.
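To make the royalty-sharing formula described above concrete, here is a minimal sketch in Python of the 50 percent/5 percent split (the function name and the sample figure are hypothetical illustrations for this article, not actual royalty data from the NLCA):

def nti_royalty_share(crown_royalties: float) -> float:
    """NTI's share of Crown resource royalties, per the formula described
    above: 50% of the first $2 million earned annually, 5% of the rest."""
    threshold = 2_000_000.0
    if crown_royalties <= threshold:
        return 0.5 * crown_royalties
    return 0.5 * threshold + 0.05 * (crown_royalties - threshold)

# A hypothetical year with $10 million in Crown royalties:
# 0.5 * $2M + 0.05 * $8M = $1.4 million payable to NTI
print(nti_royalty_share(10_000_000))  # 1400000.0

For comparison, recall the territorial government's roughly $1.145 billion in annual federal transfers noted above.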
On several occasions, the RIAs have become direct shareholders in mineral development projects. In fact, while these corporations are not for profit, they fund and own, in full or in part, several mining, exploration and construction companies, airlines, cruise lines and retail businesses. Every year, agreements for tens of millions of dollars in contracts are concluded between NTI and multinational mining interests such as Areva Resources, Baffinland Iron Mines Corporation and Agnico-Eagle. But it is difficult to ascertain the true value of these IIBAs, because the negotiations take place behind closed doors and their content is made confidential by NTI and the RIAs. The Nunavut government and the municipalities, whose citizens and services are most affected by major development projects, have no voice in the negotiations, and the Nunavut government has no influence over the distribution of benefits. Once they are concluded, only the government of Canada has access to the contents of IIBAs, not the Nunavut government or the general public.

If one compares the situation in Nunavut with those in similar jurisdictions, such as the Nunatsiavut land claims agreement in Newfoundland and Labrador, a clearer picture of the potential value of IIBAs emerges. For instance, agreements negotiated between the mining company Inco and the Inuit and Innu of Labrador included funding for a $15-million hospital, at least $1.2 million in recreation and wellness funding, and funding for scholarships and training. The socio-economic conditions in Nunatsiavut and the scale of mineral development are comparable to those in Nunavut. Thus, it seems the benefits that Nunavut could realize from major development are substantial.

Despite pleas from the Nunavut government and its municipalities, NTI and the RIAs have refused to provide any IIBA funding for areas of public government responsibility or to allow communities to be involved in the negotiations. In 2006, Leona Aglukkaq (now the federal minister of health, then Nunavut's minister of health and minister for the Status of Women Council) asked NTI and the RIAs to use IIBA negotiations to help fund women's shelters. Earlier that year, the municipality of Cambridge Bay asked NTI and the Regional Inuit Corporation to help fund its fledgling health and wellness programs with money from IIBAs. Similarly, the municipalities of Baker Lake, Kugluktuk and Cambridge Bay have asked to be part of IIBA negotiations in order to provide health and wellness programs with funding from major development projects. All met with no success. As a result, a real sense of injustice is growing among Nunavut's citizens, whose communities lack basic public health and wellness programs and functioning municipal infrastructure. The position taken by NTI and the three RIAs is that they are under no obligation to include communities in IIBA negotiations or to use compensation money to fund services provided by the Nunavut government.
In 2006, at a public hearing for a major mine, a panel of NTI and Kitikmeot Inuit Association officials told the municipality of Cambridge Bay that because the land claim states that IIBAs are for Inuit, they cannot provide funding that "would provide benefits that would go to non-Inuit, even though the majority of those benefits might go to Inuit." The problem with such a response is that NTI and the RIAs control the only legal mechanism afforded by the Nunavut Agreement for leveraging benefits for Nunavut's communities from major development projects. Given that Nunavut continues to suffer suicide rates 40 times higher than the national average, that approximately 50 percent of the population lives in overcrowded housing and that 76 percent of Inuit children do not finish high school, this unwillingness on the part of NTI and the RIAs to help fund community health and wellness programs for the very population the land claim was meant to benefit is, to say the least, questionable.

The ability to make exclusive decisions about all aspects of natural resource development is a central power enjoyed by provincial governments. It allows them to ensure that the negative social and environmental impacts of development are mitigated and that maximum economic benefits are realized by their communities and their citizens. Under the terms of the Constitution Act, 1867, provincial governments are given exclusive powers over the exploration, development, conservation and management of natural resources (powers that were also extended to the Yukon Territory in 2002). Historically, territorial governments have been given exclusive powers over lands and resources only after decades of negotiations.

In comparison, and unlike other territories, Nunavut has an agreement that provides Inuit with decision-making power over the "use, management and conservation of land, water and resources, including the offshore." However, rather than empowering the Nunavut government, this decision-making power was given to four independent comanagement boards, which regulate all aspects of natural resource development in Nunavut. Comanagement is a system of power-sharing commonly negotiated in Aboriginal land claims settlements to give greater decision-making authority to Aboriginal groups over Crown lands and resources. In theory, comanagement arrangements allow Aboriginal people greater decision-making authority over the lands, waters and resources on which they depend for their livelihood. Most of the major land claims in Canada, such as the Nunavik (Quebec), Nunatsiavut (Labrador), Inuvialuit (NWT) and Gwich'in (NWT) agreements, include comanagement arrangements for lands and resources.

In Nunavut, decision-making power over lands and resources is distributed among four independent comanagement boards: the Nunavut Impact Review Board, the Nunavut Planning Commission, the Nunavut Water Board and the Nunavut Surface Rights Tribunal. The members of each board are appointed equally by the federal government, the territorial government and NTI. Each of these boards issues its own authorizations; has its own office, infrastructure, staff and budget; and is funded by the federal government at $13 million annually. The intention of the comanagement model is to give greater decision-making powers to Aboriginal people over the management of their lands and resources.
For an Aboriginal group living in a province or territory and representing a minority of the population, comanagement would, in theory, create a more equitable distribution of control between its land claims organization and a more powerful provincial or territorial government. The situation in Nunavut is quite different, because there Inuit form the large majority of the population and have majority control of the territorial government. The comanagement system removes decision-making power from the public government and divides it among three interests (federal, territorial and Inuit). In effect, no one group is accountable for resource management decisions or has the power to make exclusive decisions on development. Without these critical decision-making powers, the Nunavut government has little influence over how Nunavut's natural resources are developed or what benefits its communities will derive. In practice, Nunavut's comanagement system results only in a lengthier and costlier process that fails to deliver any added benefits to Nunavut's citizens.

In fact, Nunavut's regulatory system has proven wholly unworkable and is a roadblock to the development of Nunavut's resources. Over the past five years, reports by the auditor general of Canada and the Department of Indian Affairs and Northern Development have documented the failure of the dozen or more land and resource comanagement boards set up in Nunavut and the Northwest Territories. They also report a lack of accountability, direction and guidance, and fundamental technical capacity issues. The mineral industry, the Nunavut government and the Government of Canada have all called for a major overhaul of Nunavut's land and resource management system. In 2007, the federal government commissioned Montreal-based lawyer Paul Mayer to conduct an inquiry into the feasibility of devolving more powers to the Nunavut government. His report concluded that one of the largest roadblocks to further devolution of land and resource powers was the regulatory system of comanagement boards, which did not create a climate that encouraged investment. Moreover, the report concluded that the land and resource comanagement boards "do not have the ability to manage the regulatory process," and that they "have no clear mandate and do not seem to understand where their jurisdiction begins or ends."

Ultimately, Nunavut's success as a legitimate equal in the Canadian federation will depend entirely on the support given to the Nunavut government by NTI. All of the powers and resources necessary for Nunavut to prosper have been negotiated for and received by Nunavut's land claims organizations. All that is needed is for Nunavut's leaders to finish the great nation-building project they started by empowering their own Nunavut government with the same powers and resources given to every other province in Canada, allowing it to provide for its citizens. To this end, I propose two short- and two long-term solutions for the future:

- First, NTI and the Nunavut government should negotiate targeted and conditional transfers that would allow the Nunavut government to use, for the benefit of Nunavummiut, the more than $1 billion held by the Nunavut Trust, as well as additional resource royalties earned from Inuit-owned lands. These conditional transfers would target priority social programs and services for Inuit.
Already, NTI produces an annual report on the status of Inuit culture and society and identifies priorities for health care, education, housing and other issues. NTI now has only to provide the Nunavut government with the money to help carry out these initiatives.

- Second, the regional Inuit associations and the Nunavut government should negotiate targeted conditional funding transfers to allow the Nunavut government to improve health and wellness programs in the communities most affected by major resource development. Most of these communities have already outlined their priorities during the consultation process for major development projects.

- Third, NTI, the federal government and the Nunavut government should restructure Nunavut's land and resource management system by eliminating the redundant comanagement boards. Over the past decade this system has proven incapable of managing Nunavut's regulatory process.

- Finally, NTI could provide the Nunavut government with the means to lift itself out of financial dependency and improve services to Nunavut's citizens by transferring its exclusive powers for the exploration, development, conservation and management of natural resources to the Nunavut government on the 356,000 square kilometres of land transferred to Inuit by the Nunavut Agreement.

In 2008, NTI launched a $1-billion lawsuit against the Government of Canada for a perceived lack of progress toward realizing the social, cultural and economic objectives of the Nunavut Agreement. In pursuing this course of action, NTI has failed to understand its own role in Nunavut's lack of progress. For 10 years, NTI has denied the Nunavut government access to the cash, lands and powers that the Nunavut Agreement provides and that could help the Nunavut government respond to the needs of its citizens. The Inuit of Nunavut have successfully negotiated for the right to sit as legitimate equals in the Canadian federation. No amount of legal redress will make a difference to Nunavut or its citizens if NTI cannot support the government it created. Ultimately, it is up to Nunavut Tunngavik Incorporated to decide whether it will use the benefits it won in the Nunavut Agreement to help lift Nunavut out of financial dependency. Only with NTI's support will the Nunavut government have any chance of providing its citizens with a standard of living on a par with that of all Canadians.
Data Validation in Excel 2013

In this article we explain how to set up data validation in Excel 2013. Data validation is a feature of Excel that prevents the user from entering incorrect data in a worksheet and ensures that only accurate information is entered. For example, a phone number field might be allowed to contain only numeric digits, and exactly 10 of them; it can't contain other text data. Data validation restricts the type of information that can be entered in a cell, ensuring that appropriate and accurate information is recorded. In an Excel worksheet we can set various types of validation, such as numeric validation, length validation, list validation and so on.

To set data validation on Excel cells, open a worksheet and select the cell range where you want the validation to apply. On the Data tab, in the "Data Tools" group, click "Data Validation". The Data Validation window will be shown, listing the available validation types.

From the Data Validation window you can choose among these types. In this example we are using list validation on the "Address" field, decimal validation on the "Salary" field, and length validation on the "Phone no" field. After specifying a validation rule, if we try to enter information that violates it, Excel shows an error message. In the address field, if we try to enter an address that does not exist in the list, it will not be accepted; only addresses defined in the list validation are allowed.
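The same kinds of rules can also be created programmatically. Below is a minimal sketch using the third-party openpyxl Python library; the sheet layout, the salary range, the city names and the file name are illustrative assumptions, not part of Excel itself. The rules are enforced when the saved file is opened in Excel.

```python
# pip install openpyxl
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.append(["Address", "Salary", "Phone no"])

# List validation: only addresses defined in the list are accepted
address_dv = DataValidation(type="list", formula1='"Delhi,Mumbai,Pune"', allow_blank=True)

# Decimal validation: salary must fall within a hypothetical range
salary_dv = DataValidation(type="decimal", operator="between",
                           formula1="1000", formula2="100000")

# Length validation: phone number must be exactly 10 characters
phone_dv = DataValidation(type="textLength", operator="equal", formula1="10")

for dv, column in [(address_dv, "A"), (salary_dv, "B"), (phone_dv, "C")]:
    ws.add_data_validation(dv)
    dv.add(f"{column}2:{column}100")  # apply the rule to rows 2-100

wb.save("validation_demo.xlsx")  # Excel enforces the rules on data entry
```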
Should essays be written in first person?

You can use first-person pronouns in your essays, but you probably shouldn't. But, like I said, it's complicated. My sense is that teachers usually tell their students to avoid "I" or "me" (or "we," "us," "my," and "our") because these pronouns are often used poorly.

Are essays written in first or third person? Most academic papers (exposition, persuasion, and research papers) should generally be written in the third person, referring to other authors and researchers from credible academic sources to support your argument rather than stating your own personal experiences.

Is it OK to use first person in academic writing? Do: use the first-person singular pronoun appropriately, for example, to describe research steps or to state what you will do in a chapter or section. Do not use the first-person "I" to state your opinions or feelings; cite credible sources to support your scholarly argument.

What tense should an essay be written in? In general, when writing most essays, one should use the present tense, using the past tense when referring to events of the past or an author's ideas in a historical context.

Can essays be in second person? One of the main rules of writing formal, academic papers is to avoid using the second person. Second person refers to the pronoun "you". Formal papers should not address the reader directly.

Why do we avoid second person POV in academic writing? Generally, it is best to avoid second-person pronouns in scholarly writing because they remove the distance between the reader and the writer. Instead, try to use first- or third-person pronouns to enhance clarity.

What is second person in writing? When writing in the second person, you address the reader directly. This type of writing feels personal to the reader. Use "you" and "your": "When you see a monster, you should tell them to tidy up."

Why are essays written in present tense? Literary works, paintings, films, and other artistic creations are assumed to exist in an eternal present. Therefore, when you write about writers or artists as they express themselves in their work, use the present tense.

What type of essay does not require evidence? An expository essay provides a clear, focused explanation of a topic. It doesn't require an original argument, just a balanced and well-organized view of the topic.

Why is third person used in academic writing? If you are working on anything formal, such as argumentative papers or research essays, then you should use third-person pronouns. This gives your work a sense of objectivity rather than personal opinion, which will make it look more credible and less biased.

How do you start a first person essay?
- Choose your topic. First-person essay writing can tackle any subject. ...
- Consider your voice. Before beginning their first draft, essay writers should consider the voice and tone of their essay. ...
- Jot down a rough outline. ...
- Write a rough draft. ...
- Go back and edit.

Can I use we in academic writing? In academic writing, first-person pronouns (I, we) may be used, depending on your field. ... Second-person pronouns (you, yours) should almost always be avoided.
Third-person pronouns (he, she, they) should be used in a way that avoids gender bias.

Is "us" a third-person word? Unlike first-person (I, our, we, us, ours) and second-person pronouns (you, your, yours), third-person pronouns in the singular are marked for gender: he and she, him and her, his and hers, himself and herself.

How do you start an essay in third person? Third-person pronouns include: he, she, it; his, her, its; him, her, it; himself, herself, itself; they; them; their; themselves. Names of other people are also considered appropriate for third-person use. Example: "Smith believes differently. According to his research, earlier claims on the subject are incorrect."

What is an example of third person? This perspective directs the reader's attention to the subject being presented and discussed. Third-person personal pronouns include he, she, it, they, him, her, them, his, her, hers, its, their, and theirs.

What are the 16 tenses in English?
- Simple Present Tense.
- Present Continuous Tense.
- Present Perfect Tense.
- Present Perfect Continuous Tense.
- Simple Past Tense.
- Past Continuous Tense.
- Past Perfect Tense.
- Past Perfect Continuous Tense.

How many English tenses do we have? There are three main verb tenses in English: present, past and future. The present, past and future tenses are divided into four aspects: the simple, progressive, perfect and perfect progressive. There are 12 major verb tenses that English learners should know.

How many tenses are there in total? There are three main tenses: past, present, and future. In English, each of these tenses can take four main aspects: simple, perfect, continuous (also known as progressive), and perfect continuous.

Do essays have to be in present tense? When you write an essay, an exam answer, or even a short story, you will want to keep the verbs you use in the same tense. ... It should appear in the present tense, "twists," or the other verbs should be changed to the past tense as well. Switching verb tenses upsets the time sequence of narration.

Should narrative essays be present tense? Tense: usually, narrative essays are written in the past tense. The present tense is mostly used to depict a typical situation. An essay narrating a significant past experience or event is usually written in the past tense.

How do you write a simple present essay? You can write in the present tense by simply using the root form of the word. However, if you're writing in the third-person singular, you need to add -s, -ies, or -es. First-person singular: I go swimming every day. Third-person singular: She goes swimming every day.

What is 4th person point of view? What is the 4th-person visual perspective? Traditionally it is considered omniscient. It's often associated with an objective deity who exists outside Earth, and thus this 4th point of view is portrayed as a global perspective which sees the world from above.

What is the third-person point of view? In third-person narration, the narrator exists outside the events of the story, and relates the actions of the characters by referring to their names or by the third-person pronouns he, she, or they.

What person should I write in?
- If you want to write the entire story in individual, quirky language, choose first person.
- If you want your POV character to indulge in lengthy ruminations, choose first person.
- If you want your reader to feel high identification with your POV character, choose first person or close third.
Everyone knows college isn't cheap. In fact, college has gotten so expensive that more and more prospective students and their parents are questioning whether the benefits of a higher education justify the cost. But the cost/benefit equation of a higher education matters to more than just the families affected. Because thanks to taxpayer assistance, in some measure we're all paying the cost of college.

Earlier this month, American Institutes for Research (AIR) launched a new website to help publicize college dropout rates and the tax burden they're creating. The folks at AIR, a non-profit that specializes in research on health and education, point out that fewer than 3 in 5 students who seek a four-year bachelor's degree wind up with one within six years. AIR's new website, CollegeMeasures.org, shows that colleges and governments nationally spend more than $3 billion a year on financial aid for new students who drop out after only one year of classes. For those that never return to school, that's money wasted.

The costs and benefits of getting a degree

Statistics continue to suggest that college grads make more money. The U.S. Department of Education points out that the median salary of someone with a four-year degree is significantly higher than that of someone with a high school diploma. There's data to back up that claim: here's a chart that breaks down average salaries by education from the 1980s through 2008. The median salary of someone with a high school diploma in 2008: $30,000. With a bachelor's degree, that number went up more than 53 percent, to $46,000.

But what about the cost? One way parents are asked to judge the quality of a public or private high school is how many of its students graduate and go on to college. Using the same yardstick with a university, one might expect that colleges with higher tuition would be more effective at graduating their students. Unfortunately, however, that's not the case. Here's a chart from CollegeResults.org comparing five-year graduation rates to the cost of in-state tuition at some of the largest public universities. As an example, the University of California at Berkeley graduates more than 86 percent of its students within five years at an average yearly tuition cost of $7,165. Compare that to the University of Florida, which graduates a somewhat lower 77 percent at an average annual rate of $3,257. So in terms of providing a student with a degree, UF gets comparable results for about half the price. But contrast both of those with Kent State University, which graduates people at a rate similar to community colleges (27 percent) but charges a much higher average of $8,430 a year in tuition.

If you're considering enrolling yourself or your kid at one of these schools, take a look at these sites, and look into how prospective schools do with academic advising, course offerings, and retention programs. CollegeMeasures.org makes it easy to look at a particular school's performance and compare it to others in the same state. CollegeResults.org is a little messier in providing the same kind of info, but it also lets you customize and tailor searches a little more to the specific schools and variables you're interested in.

Weighing the alternatives of community and career colleges

Not everybody wants or needs to attend a four-year public university. Check out our story, 5 In-Demand Jobs That Pay Well and Don't Require a 4-Year Degree.
Then take another look at the average salary chart from above, which notes that in 2008, an associate's degree boosted the average worker's salary 20 percent, from $30,000 to $36,000. A two-year commitment is a lot more manageable for some people, both in terms of time and cost.

But there are some downsides. Community colleges were recently knocked for overselling a "poorer-than-expected" education experience by higher education marketing and research firm Norton Norris. In a summary of a research report released Oct. 4 [PDF], the company points out that community colleges:
- have poor graduation rates
- have poor post-graduation employment rates
- have class waiting lists of up to two years
- often don't help with finding jobs
- don't track job placement

The full report [PDF] features direct quotes from a number of students about their lousy experiences at community colleges, and none-too-subtly suggests that career colleges are a better option for both the student and the taxpayer. A lot of those complaints, though, are clearly from people coming straight out of high school and not getting the support system they need, which community colleges can rarely afford to provide. Community colleges might make a lot of sense for the right type of person: someone who's already working, who at some level knows where their life is going and doesn't need much hand-holding, but wants to return to school and improve their earning power.

If you really don't know what to do in college and just feel like you "should be there," seriously evaluate your goals and talk to a school counselor. If you feel like you're wasting time and money, consider working full-time while you figure out what you really want to do. And if you really know what you want but think you just can't afford it, check out our story 6 Tips to Pay Less for a College Degree to learn about possible scholarships, loan forgiveness programs, and even a handful of schools that don't charge tuition. If you really want that degree, you can make it work.
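As a rough illustration of the tuition-per-graduate comparison made earlier, here is a back-of-the-envelope calculation in Python using the figures quoted above. It ignores fees, financial aid, dropout timing and everything else, so treat it as arithmetic, not analysis:

```python
# Tuition and five-year graduation figures quoted in this article
schools = {
    "UC Berkeley":           {"annual_tuition": 7165, "grad_rate": 0.86},
    "University of Florida": {"annual_tuition": 3257, "grad_rate": 0.77},
    "Kent State":            {"annual_tuition": 8430, "grad_rate": 0.27},
}

for name, s in schools.items():
    five_year_cost = 5 * s["annual_tuition"]
    # Expected tuition spent per degree actually produced
    cost_per_grad = five_year_cost / s["grad_rate"]
    print(f"{name}: ${cost_per_grad:,.0f} in tuition per graduate")
```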
We now perform mathematical calculations so often and so effortlessly with digital electronic computers that it’s easy to forget that there was ever any other way to compute things. In an earlier era, though, engineers had to devise clever strategies to calculate the solutions they needed using various kinds of analog computers. Some of those early computers were electronic, but many were mechanical, relying on gears, balls and disks, hydraulic pumps and reservoirs, or the like. For some applications, like the processing of synthetic-aperture radar data in the 1960s, the analog computations were done optically. That approach gave way to digital computations as electronic technology improved. Curiously, though, some researchers are once again exploring the use of analog optical computers for a modern-day computational challenge: neural-network calculations. The calculations at the heart of neural networks (matrix multiplications) are conceptually simple—a lot simpler than, say, the Fourier transforms needed to process synthetic-aperture radar data. For readers unfamiliar with matrix multiplication, let me try to de-mystify it. A matrix is, well, a matrix of numbers, arrayed into rows and columns. When you multiply two matrices together, the result is another matrix, whose elements are determined by multiplying various pairs of numbers (drawn from the two matrices you started with) and summing the results. That is, multiplying matrices just amounts to a lot of multiplying and adding. But neural networks can be huge, many-layer affairs, meaning that the arithmetic operations required to run them are so numerous that they can tax the hardware (or energy budget) that’s available. Often graphics processing units (GPUs) are enlisted to help with all the number crunching. Electrical engineers have also been busy designing all sorts of special-purpose chips to serve as neural-network accelerators, Google’s Tensor Processing Unit probably being the most famous. And now optical accelerators are on the horizon. Two MIT spin-offs—Lightelligence and Lightmatter—are of particular note. These startups grew out of work on an optical-computing chip for neural-network computations that MIT researchers published in 2017. More recently, yet another set of MIT researchers (including two who had contributed to the 2017 paper) has developed yet another approach for carrying out neural-network calculations optically. Although it’s still years away from commercial application, it neatly illustrates how optics (or more properly a combination of optics and electronics) can be used to perform the necessary calculations. The new strategy is entirely theoretical at this point, but Ryan Hamerly, lead author on the paper that’s recently been published about the new approach, says, “We’re building a demonstration experiment.” And while it might take many such experiments and several years of chip development to really know whether it works, their approach, “promises to be significantly better than what can be done with current-day electronics,” according to Hamerly. So how does the new strategy work? I’m not sure I could explain all the details even if I had the space, but let me try to give you a flavor here. The necessary matrix multiplications can be done using three simple kinds of components: optical beam splitters, photodiodes, and capacitors. That sounds rather remarkable, but recall that matrix multiplications are really just a bunch of multiplications and additions. 
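Since the whole scheme rests on that multiply-and-add structure, here is a plain-Python reminder of what a matrix multiplication unrolls into. This is a naive sketch for exposition, not an efficient implementation:

```python
# Each entry of the product C = A x B is a dot product of one row of A
# with one column of B: a run of multiplies feeding an accumulator.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for t in range(k):                # multiply...
                C[i][j] += A[i][t] * B[t][j]  # ...and accumulate
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```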
So all we really need here is an analog gizmo that can multiply two values together and another analog gizmo to sum up the results. It turns out that you can build an analog multiplier with a beam splitter and a photodiode. A beam splitter is an optical device that takes two optical inputs and provides two optical outputs. If it is configured in a certain way, the amplitude of light that it outputs on one side will be the sum of the amplitudes of its two inputs; the amplitude of its other output will be the difference of the two inputs. A photodiode outputs an electronic signal that is proportional to the intensity of the light impinging on it.

The essential thing to realize here is that the intensity of light (a measure of the power it carries) is proportional to its amplitude squared. That's key because if you square the sum of two light signals (let's denote this as A + B), you will get A² + 2AB + B². If you square the difference of these same two light signals (A – B), you will get A² – 2AB + B². Subtract the latter from the former and you get 4AB, which you will notice is proportional to the product of the two inputs, A and B. So by scaling your analog signals appropriately, a beam splitter and photodiode in combination can serve as an analog multiplier.

What's more, you can do a series of multiplications just by presenting the appropriate light signals, one after the other, to this kind of multiplier. Feed the series of electronic outputs of your multiplier into a capacitor and you'll be adding up the results of each multiplication, forming the result you need to define one element in the product matrix. Rinse and repeat enough times, and you have just multiplied two matrices! There are some other mathematical manipulations, too, that you'd need to run a neural network; in particular you have to apply a non-linear activation function to each neuron. But that can easily be done electronically.

The question is what kind of signal-to-noise ratio a real device could maintain while doing all this, which will control the resolution of the calculations it performs. That resolution might not end up being very high. "That's a downside of any analog system," says Hamerly. Happily, at least for inference calculations (during which a neural network that has already been trained does its thing), relatively low resolution is normally fine.

It's hard to know how fast an electro-optical accelerator chip designed along these lines would compute, explains Hamerly, because the metric normally used to judge such performance depends on both throughput and chip area, and he isn't yet prepared to estimate what sort of area the chip he is envisioning would require. But he's optimistic that this approach could slash the energy required for such calculations. Indeed, Hamerly and his colleagues argue that their approach could use less energy than even the theoretical minimum for a gate-based digital device of equivalent accuracy—a value known as the Landauer limit. (It's impossible to reduce the energy of computation to anything less than this limit without resorting to some form of reversible computing.)

If that's true for this or any other optical accelerator on the drawing board, many neural network calculations would no doubt be done using light rather than just electrons. With the remarkable advances electronic computers have made over the past 50 years, optical computing never really gained traction, but maybe neural networks will finally provide the killer app for it.
As Hamerly’s colleague and coauthor Liane Bernstein notes: “This could be the time for optics.”
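As a closing illustration, here is a tiny numerical check of the beam-splitter identity described above, with ordinary Python floats standing in for what would really be noisy analog amplitudes. The weight and input values are made up for the example:

```python
# The beam-splitter/photodiode trick: intensities go as amplitude squared,
# so (A+B)^2 - (A-B)^2 = 4AB recovers the product of the two inputs.
def analog_multiply(a, b):
    sum_port_intensity = (a + b) ** 2   # photodiode on the "sum" output
    diff_port_intensity = (a - b) ** 2  # photodiode on the "difference" output
    return (sum_port_intensity - diff_port_intensity) / 4.0

# A capacitor integrating the photodiode outputs acts as the accumulator,
# so a stream of signal pairs computes one dot product (one matrix entry):
weights = [0.5, -1.2, 2.0]
inputs = [1.0, 0.3, 0.7]
accumulated = sum(analog_multiply(w, x) for w, x in zip(weights, inputs))
print(accumulated)  # 1.54, matching the ordinary dot product
```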
The laws of quantum physics undoubtedly indicate that matter and antimatter should always be equal in their quantity. (...) What we call matter is simply everything that is composed mostly of quarks, which aggregate into protons (positively charged subatomic particles) and neutrons, together with electrons (which are negatively charged). Together, they form atoms. On the other hand, antimatter can also create atoms of the same characteristics and overall electric charge, with only one difference: their nuclei consist of antiprotons and antineutrons (built from antiquarks), while their shells are filled with positrons (the electron's positively charged counterpart). However, if you were to combine atoms of matter and antimatter, you would get a huge explosion of energy. The atom and the antiatom would completely annihilate. They are exact opposites.

The laws of quantum physics undoubtedly indicate that matter and antimatter should always be equal in their quantity. Whenever pure energy is converted into particles, a pair of matter-antimatter particles pops into existence. In accordance with that, one could posit that at the beginning of time there should have been exactly the same amounts of matter and antimatter, after which they would have annihilated, making this universe lifeless, composed of pure energy only. Nevertheless, all astronomical observations have clearly shown that matter dominates over antimatter. In fact, there is only matter in this cosmos. So far, no scientific theory has been able to reliably explain this outcome.

The Nature of Time

This is the question that cannot be answered without the involvement of consciousness, despite the fact that many physicists are very reluctant to take it seriously. In accordance with Einstein's theory of relativity, time is interchangeable with space within the space-time continuum. Time is only one of the dimensions in that continuum. (...) All particles of matter behave as antimatter particles when the arrow of time is reversed. It is easy to conclude that our consciousness is actually moving steadily along the fourth axis of the 4D space-time chart, hence giving us the impression that everything is changing. We are moving along that time axis, and everything else around us seems to be moving as well. That seemingly steady translation of our conscious focus along that one space-time coordinate is deeply ingrained in our subconscious. Nevertheless, the whole four-dimensional space-time continuum is static per se.

Direction of Time Resolves the Conundrum

Another important point is that all the particles of matter behave as antimatter particles when the arrow of time is reversed. That's what the equations tell us. In other words, if time started going backward, the particles of matter would become their opposites: the particles of antimatter. An electron would become a positron, a quark an antiquark, and so on. When we take into account the assumption of our consciousness moving in only one direction of time, it is easy to see that, for us, all matter would become antimatter if our consciousness suddenly began moving backward in time. Still, from the "vantage point" of the static four-dimensional space-time continuum, as there is no specific direction of consciousness's movement, the total amount of matter and antimatter is equal. Actually, it's zero, as matter and antimatter cancel each other out completely. Therefore, there's no matter-antimatter asymmetry at all.
We just have to take into account our own consciousness, and the puzzle is solved.
It takes water to make almost anything, from carpets to cosmetics to cars. Water has emerged as a critical issue for companies in response to increased water demand, climatic risks and potentially negative impacts on brand value. That means companies need to understand their water footprints throughout their value chains and develop standards and processes related to water quality and quantity, surface and groundwater contamination, and access to water by local communities.

Many companies have centrally developed water strategies that can be implemented on a global basis and achieve a significant impact. But water is unique: unlike carbon, all water issues are local, meaning companies must align global standards with local concerns.

In this hour-long webcast, held on the occasion of the publication of Ford Motor Company's annual sustainability report, you'll hear about one company's global water strategy, as well as a conversation with one of the world's leading experts in corporate water strategy.
If proton decay is real, all isotopes have a half-life; those considered stable in a world without proton decay would simply live much longer than the rest.

Is the half-life of every isotope the same?
The half-life of a specific radioactive isotope is constant; it is unaffected by conditions and is independent of the initial amount of that isotope. Each radioactive nuclide has a characteristic, constant half-life (t½), the time required for half of the atoms in a sample to decay.

Why do isotopes have different half-lives?
Some isotopes are stable indefinitely, while others are radioactive and decay through a characteristic form of emission. As time passes, less and less of the radioactive isotope will be present, and the level of radioactivity decreases. An interesting and useful aspect of radioactive decay is the half-life.

Does every element have a half-life?
All elements have half-lives because all elements can have radioactive isotopes. However, even the stable isotopes of an element can break down over…

What is an isotope's half-life?
Half-life is the length of time it takes for half of the radioactive atoms of a specific radionuclide to decay. A good rule of thumb is that, after seven half-lives, you will have less than one percent of the original amount of radiation.

What things have a half-life?
For example, uranium-232 has a half-life of about 69 years. Plutonium-238 has a half-life of 88 years. Carbon-14, which is used to find the age of fossils, has a half-life of 5,730 years.

Do elements have isotopes?
All elements have isotopes. There are two main types of isotopes: stable and unstable (radioactive). There are 254 known stable isotopes. All artificial (lab-made) isotopes are unstable and therefore radioactive; scientists call them radioisotopes.

Why do all elements have a half-life?
All elements can have radioactive isotopes, and every radioactive isotope has a characteristic half-life. Certain isotopes have extremely short half-lives, such that they decay at a very rapid pace.

How do you find the half-life of an isotope?
Measure how much of the isotope remains after a known time and solve N(t) = N₀(1/2)^(t/t½) for t½; equivalently, t½ = ln 2 / λ, where λ is the decay constant. (A worked code example appears at the end of this article.)

How do isotopes differ?
An isotope is one of two or more forms of the same chemical element. Different isotopes of an element have the same number of protons in the nucleus, giving them the same atomic number, but a different number of neutrons, giving each elemental isotope a different atomic weight.

What is the longest half-life?
The entire history of the universe is but a fleeting moment in time compared with the half-life of xenon-124. Clocking in at a staggering 1.8 × 10²² years, it's the longest half-life ever directly measured, and roughly 1 trillion times the universe's age (Nature 2019, DOI: 10.1038/s41586-019-1124-4).

What is half-life, explained?
Half-life, in radioactivity, is the interval of time required for one-half of the atomic nuclei of a radioactive sample to decay (change spontaneously into other nuclear species by emitting particles and energy), or, equivalently, the time interval required for the number of disintegrations per second of a radioactive material to fall by half.

Why do we use half-life and not full life?
Scientists measure the half-life of a substance because it tells them about the amount of radiation that a given substance will give off. Half-life is a fixed constant for every different substance, allowing experts to accurately predict the lifespan of a material.

Which isotope has the shortest half-life?
Hydrogen-7, at about 23 × 10⁻²⁴ seconds, has the shortest half-life.

What is the longest half-life of an isotope?
Bismuth-209 (²⁰⁹Bi) is the isotope of bismuth with the longest known half-life of any radioisotope that undergoes α-decay (alpha decay).

How do we know half-lives of elements?
Half-lives can be calculated from measurements of the change in the amount of a nuclide and the time that change takes. The only thing we know is that in the time of that substance's half-life, half of the original nuclei will disintegrate.

Why does uranium have a long half-life?
All isotopes of uranium are radioactive, with most having extremely long half-lives. Half-life is a measure of the time it takes for one half of the atoms of a particular radionuclide to disintegrate (or decay) into another nuclear form. Each radionuclide has a characteristic half-life.

Do all elements have only one isotope?
No. Most elements occur as several isotopes; a few, such as aluminum (standard atomic weight 26.981 5384(3)), have only one stable isotope.

What is the half-life of an isotope if 125 g?
The isotope I-125 is used in certain laboratory procedures and has a half-life of 59.4 days.

Why do all atoms have isotopes?
Atoms of the same element that contain the same number of protons, but different numbers of neutrons, are known as isotopes. Isotopes of any given element all contain the same number of protons, so they have the same atomic number (for example, the atomic number of helium is always 2).

What elements have no isotopes?
None: every element has isotopes, although some elements (fluorine, for example) have only a single stable isotope.

What is the half-life of an isotope?
Half-life is the time it takes for the number of nuclei of a radioactive isotope in a sample to halve.

What is the easiest way to calculate half-life?
Use the decay relation N(t) = N₀(1/2)^(t/t½): measure the starting amount N₀ and the remaining amount N after an elapsed time t, then t½ = t · ln 2 / ln(N₀/N). (See the code sketch at the end of this article.)

Which radioisotope is most stable, and what is its half-life?

Do isotopes of an element differ?
Isotopes are atoms with different atomic masses which have the same atomic number. The atoms of different isotopes are atoms of the same chemical element; they differ in the number of neutrons in the nucleus.

What characteristic of an element differs between isotopes?
Answer: The number of neutrons in the nucleus determines the specific isotope of an element. The only difference between isotopes of an element is the number of neutrons present.

Which of the following isotopes has the longest half-life?
Rubidium-87 has the longest half-life.

Which element has the quickest half-life?
Copernicium-285 has the shortest half-life, which is 5 × 10⁻¹⁹ seconds. The longest is definitely uranium-238, at over a billion years.

How long is bismuth's half-life?
Although bismuth-209 is now known to be unstable, it has classically been considered a stable isotope because it has a half-life of approximately 2.01 × 10¹⁹ years, which is more than a billion times the age of the universe.

What is the half-life of xenon-124?
Xenon-124 is one of those, though researchers had estimated its half-life at 160 trillion years as it decays into tellurium-124. The universe is presumed to be merely 13 to 14 billion years old. The new finding puts the half-life of xenon-124 closer to 18 sextillion years.

Do protons have a half-life?
Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67 × 10³⁴ years.

Which element has a half-life of 8 seconds?
Meitnerium's most stable isotope, meitnerium-278, has a half-life of about 8 seconds. It decays into bohrium-274 through alpha decay. The longer the half-life, the more stable the nuclide.

What is the difference between half-life and biological half-life?
The (physical) half-life describes radioactive decay; the biological half-life is the time the body takes to eliminate half of an ingested substance, radioactive or not.

What is the half-life of strontium-90?
The most common isotope of strontium is strontium-90. The time required for a radioactive substance to lose 50 percent of its radioactivity by decay is known as the half-life. Strontium-90 has a half-life of 29 years and emits beta particles of relatively low energy as it decays.

Are half-lives infinite?
In the extreme limit, all of the electrons can be stripped off of a radioactive atom. For such an ion, there are no longer any electrons available to capture, and therefore the half-life of the electron-capture radioactive decay mode becomes infinite.

Why do chemists use half-life?
Knowing about half-lives is important because it enables you to determine when a sample of radioactive material is safe to handle. The rule is that a sample is safe when its radioactivity has dropped below detection limits. And that occurs at 10 half-lives.

Why do they say half-life?
A half-life is the time taken for something to halve its quantity.
The term is most often used in the context of radioactive decay, which occurs when unstable atomic particles lose energy. Twenty-nine elements are known to be capable of undergoing this process.

How do they know the half-life of uranium-238?
In practice, very long half-lives are measured by counting: take a sample containing a known number of atoms N, measure its activity A (decays per second), and use t½ = N · ln 2 / A.

Does hydrogen have a half-life?
Ordinary hydrogen-1 is stable. The measured binding energy of the deuteron is 2.2 MeV. Hydrogen also exists as tritium, with a proton and two neutrons, but it is unstable, with a half-life of 12.32 years.

What is the most radioactive thing on Earth?
The radioactivity of radium must be enormous. This substance is the most radioactive natural element, a million times more so than uranium.

Can you touch uranium?
Because uranium decays by alpha particles, external exposure to uranium is not as dangerous as exposure to other radioactive elements, because the skin will block the alpha particles. Ingestion of high concentrations of uranium, however, can cause severe health effects, such as cancer of the bone or liver.

Which group contains elements that have no stable isotopes?
Astatine (At) is a radioactive chemical element and the heaviest member of the halogen elements, or Group 17 (VIIa) of the periodic table. Astatine, which has no stable isotopes, was first synthetically produced (1940) at the University of California by American physicist Dale R. Corson and colleagues.

How are isotopes of the same element alike?
Different isotopes of the same element have the same atomic number. They have the same number of protons. The atomic number is decided by the number of protons. Isotopes have different mass numbers, though, because they have different numbers of neutrons.

What eventually happens to all radioactive isotopes?
All radioactive atoms transform eventually into a stable isotope of either the original or a different element. The unit of measure for radionuclides refers to the rate at which radioactive decays occur in a sample.

Which of the following are isotopes?
Isotopes are elements with the same atomic number but different mass numbers. Hydrogen and deuterium are isotopes with the same atomic number but different mass numbers: hydrogen has atomic number 1 and mass number 1, whereas deuterium has atomic number 1 but mass number 2.

Which of the following isotopes is not a radioisotope?
Zirconium is an element with the symbol Zr, and its atomic number is 40. It is a lustrous transition metal, mainly used as a refractory. It is not radioactive.

Which element has the highest number of isotopes?
The element with the largest number of stable isotopes is tin (symbol Sn, atomic number 50), with 10 isotopes. Tin was first extracted and used in the Bronze Age (circa 3000 BC).

What is a half-life? Give an example of the half-life of an isotope, describing the amount remaining and the time elapsed after five half-life periods.
Explanation: Every half-life period (t½), the activity halves from the start of that period. So, after the second period, activity will be one half of one half, or one quarter of the original; after five half-life periods, 1/32 (about 3 percent) of the original remains. Example: carbon-14, if left by itself, has a half-life of 5,730 years (Wikipedia).

What is the half-life of cobalt-57?
Cobalt-57 decays with a half-life of 270 days by electron capture, and cobalt-60 decays with a half-life of 5.3 years by emitting a beta particle with two energetic gamma rays; the combined energy of these two gamma rays is 2.5 MeV (one has an energy of 1.2 MeV and the other an energy of 1.3 MeV).
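To make the two calculation questions above concrete, here is a small Python sketch of the standard half-life relations; the carbon-14 figures are just a worked example:

```python
import math

def half_life(elapsed_time, n0, n):
    """Solve N(t) = N0 * (1/2)**(t / t_half) for t_half."""
    return elapsed_time * math.log(2) / math.log(n0 / n)

def remaining_fraction(elapsed_time, t_half):
    """Fraction of the original sample left after elapsed_time."""
    return 0.5 ** (elapsed_time / t_half)

# Example: a 100 g sample decays to 25 g (two halvings) in 11,460 years
print(half_life(11_460, 100, 25))            # ~5,730 years (carbon-14)
print(remaining_fraction(5 * 5_730, 5_730))  # 0.03125, i.e. 1/32 after five half-lives
```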
Following the recent government release, "EU Food Safety Authority announcement that neonicotinoid pesticides should not be used on crops attractive to honey bees" (1), I thought I would consider more carefully the notion that "Only uses on crops not attractive to honey bees were considered acceptable", this following the recent EFSA report outlining the unacceptable risk of neonicotinoids to bees. I am specifically concerned that even restricting neonicotinoid use to non-flowering crops would not provide sufficient protection for other non-target species that may visit them, such as butterflies and beetles:

1. Pesticide risk assessment already tests on very few species as a representative sample of invertebrates. As it is, assessment only requires that pesticides are tested on:
- Daphnia magna (water flea)
- Apis mellifera (honey bee)
- four further species (according to the Chemicals Regulation Directorate "DATA REQUIREMENTS HANDBOOK", Version 2.2, June 2012)

2. Thus, EFSA has perhaps failed to take into account that testing on honey bees is assumed to safeguard butterflies (which do not only visit flowering crops), hoverflies, ladybirds, lacewings and a range of other species, whether or not they forage on flowering crops; indeed, various species may inhabit areas around foliage crops, for example. I am not aware of EFSA examining the data on these other species, as they have for honey bees. Therefore, I am concerned that it is not safe to simply assume that restricting the application of neonicotinoids to non-flowering crops would be sufficient to protect other non-target invertebrates. In addition, the EFSA report stated, "In some cases EFSA was unable to finalise the assessments due to shortcomings in the available data". If it is not possible to complete risk assessment for a chemical across the whole range of species and in all areas, is its use legal?

3. Note that the vast majority of invertebrates are beneficial or harmless, and given the uses of neonicotinoids, it is not merely a case of "what is applied on farm crops" either. They can be used on golf courses, in gardens, on lawns and potentially on council land. In some countries, they are even used on trees. They are mobile in soil and water, meaning they have the potential to trespass into areas not intended for pesticide use, both on land and in aquatic systems. They also persist in soil. Research has shown that even after usage has ceased, they have been taken up by plants and presented through flowers and nectar at levels toxic to bees (Bonmatin et al.).

I support the protection of bees, but we also need to consider the other "unsung heroes" of our ecosystem, and I remain in support of the Buglife position requesting a complete ban on neonicotinoids.

You can help by sending this letter to your government representative.
The B-24 was a heavy bomber aircraft used extensively during WWII. The B-24 had a high cruise speed and long range, and was able to carry large payloads. The Liberator was deployed by every branch of the US armed forces and by allied nations during the war. The planes were used for bombing campaigns in Western Europe and across the Pacific islands. The B-24 is considered the most-produced American military aircraft.

Long Range Bomber Concept

In 1938, Consolidated Aircraft Corporation was commissioned to produce B-17 Flying Fortress bombers for the United States Army Air Corps. Consolidated Aircraft's president, Reuben Fleet, believed his company could build a better bomber. His design called for a bomber with higher speed and longer range. One of the key differences with the B-24 was the shoulder-mounted Davis wing, which enabled the increased range and airspeed. The wingspan exceeded that of the B-17 by six feet, while the wing area was considerably smaller. Reports indicate that the wing design made the aircraft somewhat unwieldy and more susceptible to ice formation.

The fuselage could be described as "boxy". The central bomb bays were split lengthwise, with a narrow catwalk in between that measured only nine inches wide. The B-24 required a crew of seven to ten. A pilot and co-pilot handled the controls, along with a crewman who served as navigator or engineer. The bombardier and radio operator remained close to the cockpit. Gunners operated .30 or .50 caliber machine guns located on the sides and tail of the aircraft; armament varied between B-24 models.

Building the Liberator

To ensure that the B-24 bombers were ready for the war effort, five facilities were used for production. During the height of production, a B-24 was completed every 63 minutes. Reports from 1944 claim that pilots and aircrew would sleep near the facility, awaiting completion of their plane. The Willow Run location produced the greatest number of B-24s. Over ten variations of the B-24 were manufactured.

- CO: Consolidated Aircraft, San Diego
- CF: Consolidated Aircraft, Fort Worth
- FO: Ford, Willow Run
- NT: North American, Dallas
- DT: Douglas Aircraft, Tulsa

Video: Building the B-24

The B-24D was the first mass-produced variation. This aircraft was flown primarily in 1943-1945 over the Pacific Ocean. It was used in multiple bombing campaigns on Axis-controlled island chains.

Length: 67 feet 2 inches
Height: 18 feet
Wingspan: 110 feet
Empty weight: 36,500 pounds
Maximum takeoff weight: 65,000 pounds
Speed: 290 mph
Range: 1,800 nautical miles
Engine quantity: 4
Engine type: Pratt & Whitney R-1830-65 Twin Wasp, 1,200 hp each