10+ Facts About Kenya

Kenya is a country on the African continent; its capital is Nairobi. About 99% of its population is Black, wildlife such as lions and elephants roam widely, and the gap between rich and poor is stark, with a very small middle class. Some interesting facts about Kenya: 1. After decades of British colonial rule, Kenya gained independence on December 12, 1963. In 1964 it adopted a constitution and declared itself a republic. 2. Scientists believe that humans originated in this region, because some of the oldest human remains have been found here. 3. Kenya has two official languages, Swahili and English, but more than 60 local languages are spoken. 4. A small part of Lake Victoria, the world's second-largest freshwater lake, lies in Kenya. The lake actually spans three countries: 6% in Kenya, 45% in Uganda, and 49% in Tanzania. 5. Kenya is a major coffee producer and earns much of its income from it, but people here prefer tea and beer to coffee. 6. The currency of Kenya is the shilling; one shilling is worth about Rs. 0.68. 7. Kenya has little official public transport, and traveling by van often means squeezing 19 people into a van built for 9. 8. In Kenya, a dowry traditionally starts at 10 cows. 9. Kenya is the world's third-largest producer of roses and is sometimes called the 'Flower Garden of Europe.' 10. There are about 1,132 bird species in Kenya, and 342 of them can be seen in under 24 hours in a single park, a world record in itself. 11. Around 2.6 million people in Kenya run barefoot; you will find athletes in nearly every home. 12. A single American uses as many natural resources (air, water, trees, soil) as about 32 Kenyans. 13.
A lawyer from Kenya offered Barack Obama's family 50 cows and 70 sheep and goats; in return, he wanted to marry Obama's daughter. 14. Kenya has the world's strictest law on plastic bags: producing, selling, or using them can bring up to 4 years in prison or a fine of up to Rs 25 lakh.
Celebrating a Broken Climate "Pacemaker" | The Institute for Creation Research

There is strong geological evidence for one ice age. Yet secular scientists published an iconic technical paper that detailed multiple ice ages over long periods of time. Why did they miss the mark? How does science confirm the creationist position? And what major global event caused this frigid climate change? Other episodes in this series: How Theology Informs Science; Genesis Compromise Unravels the Bible; How Consistent Are Old-Earth Clocks?; Refuting a Favorite Old-Earth Argument. For more radio programs, click here.
Did You Know Every Parent is Bilingual?

"Don't talk to me like a baby!" You might be familiar with this phrase if you have an older child or have gotten into a spat with a partner or colleague. While baby talk can be construed as condescending when directed at an older individual, it is actually critical to the cognitive development and language learning of infants and toddlers. Linguists and child psychologists more often refer to baby talk as child-directed speech. Many aspects of child-directed speech facilitate language learning, such as the following:

High pitch and tone variation
These qualities characteristic of child-directed speech make it more stimulating, effectively making the words spoken more memorable.

Repetition
Children's first words are often the ones they hear most often, because repetition is a key component that drives memorization. Thus, the repetition common in child-directed speech helps children learn the language. When using child-directed speech, parents often say expressions like "woof woof" and "beep beep." This specific type of repetition, called reduplication, also helps with memorization and language learning.

Word placement
Sentences and phrases formed in child-directed speech tend to put the most important word at the end. For example, parents might say "oh look at the cute little doggy" instead of "there is a cute dog right over there." This isolation of the word "dog" helps children learn it, because they can separate the sounds of the word from the rest of the phrase.

When children imitate child-directed speech, they are actually imitating and learning proper grammar. One theory about language acquisition is that much of children's knowledge is innate. Specifically, some linguists have asserted that children are born with knowledge of syntactic structures and then use imitation to learn words to fit into those structures.
Complete foreknowledge of grammatical structure prior to birth seems unlikely, however, especially given that this structure is unique to every language. In fact, a closer look at child-directed speech reveals that it is far more properly structured than casual, fragmented conversation between adults. While children's capacity to learn may be innate, their language learning is in many ways an imitation game. Child-directed speech doesn't just exist here in the United States with English, but in a plethora of cultures and with a multitude of languages. Each and every parent around the world is fluent in both his or her native tongue and child-directed speech. This form of bilingualism is pertinent to infants' and toddlers' first language acquisition and cognitive development. Just like child-directed speech, learning a foreign language improves cognitive development in infants and toddlers. While parents adopt child-directed speech with ease, infants and toddlers can also adopt another language with ease. Children can learn more than one language at a time without conflating the two or hindering their progress toward fluency in their native language. In fact, children are noted to become more native-like speakers of a foreign language if they learn it at a very young age. As it happens, children who learn another language at a young age are said to be able to concentrate better in spite of outside stimuli, an important skill in an age when technology, among other things, has become a huge distraction.
In conclusion, while many people may not appreciate being spoken to like a baby, your infant or toddler loves it. Your child's engagement with child-directed speech makes it a useful tool for teaching words and proper grammatical structures. By aiding first language acquisition, child-directed speech improves a child's cognitive development, just as learning a foreign language can. While all parents are fluent in their native tongue and child-directed speech, not all parents are fluent in other foreign languages… cue Little Pim. Let us join you and your child on a path toward intellectual growth.

Musical Spanish Immersion Class in NYC

We're excited to launch our partnership with The Pineapple Explorers Club (based in NYC) for their Musical Spanish Immersion Class using our Entertainment Immersion Method® and language learning materials. If you're located near New York City, see below for more details or visit their website linked above: Classes begin MONDAY, JUNE 26th at 10 AM in Marcus Garvey Park (upper west corner below playground) & WEDNESDAY, JUNE 28th at 10 AM in Central Park (enter at 79th and walk south; the group will meet on the left just before the playground). Cost: $15 a child (cash or Venmo), or find them on KidPass!

Bilingual Baby: When is the Best Time to Start?

The benefits of introducing your baby to another language are well documented. In our rapidly globalizing society, knowing a second (or third) language provides an obvious edge over the competition in the job market. But what about its impact on childhood development? While some would suggest that over-exposure to a foreign language may cause a delay in speaking, this assumption is both unproven and outweighed by the benefits dual-language babies experience as they grow.
We know the many benefits, so the question soon becomes: "When do we start?" The answer is surprising. According to an article by the Intercultural Development Research Association, it may be most beneficial to begin second language exposure before six months of age. In a study by psychologist Janet Werker, infants as young as four months of age successfully discriminated syllables spoken by adults in two different languages. Dr. Werker's work also determined a possible decline in foreign language acquisition after 10 months of age. To give your child their best start, you must begin early. How is this so? The answer can be found in the complex world of the human brain. Our brains react uniquely to language learning at any age, even growing when stimulated by another language. While we can acquire a language at almost any stage of life, it is exceptionally difficult to do so outside of childhood. From infancy to age five, the brain is capable of rapid language acquisition. Even so, there are varying degrees of acquisition, even for children. After six months of age, infants begin distinguishing the sounds of their native tongue from those of other languages. Beyond six months, exposing your little one to a brand new language will pose a challenge. That is not to say that teaching your two-year-old French is a bad idea! It is merely to say that the earlier you begin teaching your child, the better. Though most babies won't utter their first words before eleven months of age, they develop complex mental vocabularies by piecing together "sound maps" from what they are exposed to. An infant who hasn't been immersed in another language during this delicate stage will not piece together adequate sound maps to differentiate another language. The reason for this is rooted in the brain at birth. Children are born with 100 billion brain cells and the branching dendrites that connect them.
The locations where these cells connect are called synapses, critical components in the development of the human brain. These synapses are thought to "fire" information from one cell to another in certain patterns that lead to information becoming "hardwired" in the brain. The synapses transmit information from the external senses to the brain via these patterns, causing the brain to interpret, develop, and learn from them. From birth to age three, these complex synapses cause infants to develop 700 neural connections per second. These synapses are critical in sound mapping, and by the age of six months, the infant brain has already begun to "lock in" these patterns and has difficulty recognizing brand new ones. Although your baby is born with all of the neurons they'll ever need, that doesn't mean they'll "need" all 100 billion. Infancy to the age of three is filled not only with rapid neural expansion, but also with neural "pruning," a process in which unnecessary connections are nixed and others are strengthened. Exactly which connections are pruned and which are cultivated is partially influenced by a child's environment. Synapses are cultivated or pruned in order of importance to ensure the easiest, most successful outcome possible for a functioning human being. If a function is not fostered during this stage, it is likely that the neural connections associated with it will fade. For the brain to see a skill as important, you must make it important. To put it plainly, if you only speak to your child in English, the infant brain sees no reason to retain a neural pathway for the little Mandarin it has heard. Babies learn about their environment at every age and are internally motivated from birth to do so. Your baby wants to learn and does so by exploring and mimicking the world around them. They're entirely capable of building a complex knowledge of Mandarin, Arabic, or Italian. So, why not feed their mind and start now?
Easy Ways to Introduce a New Language

You might find yourself overwhelmed by all the information and advice on how to introduce a new language to your child. There are many products out there, but your child's best teacher is you! These tips will help you get comfortable introducing the new language:

Keep it simple
One of the best ways to integrate a new language is by using it during your simple daily tasks. Babies and toddlers are constantly learning about the world around them. Using a second language during your bed or bath time routines is a perfect way to ease into your new bilingual journey.

Have the whole family join in
The more you use the preferred second language, the faster your child will pick it up. Encourage others in your family, adults and older children alike, to use the language too. Use holidays and family dinners as a platform to keep introducing the second language.

Repeat, repeat, repeat
Learning a new language is all about repetition. You might start feeling like a parrot, but it will pay off! As your child gets older, they might choose to speak only in English. No worries; just make sure that you repeat back what they said in the language you are trying to introduce. Repetition in your daily life is a simple tool that will have great results.

Make it fun
Raising bilingual children should be fun. Play games, sing songs, and embrace the silliness of it all. Keeping it fun is very important, because making mistakes is a part of learning and you don't want your child to feel discouraged. You can get more specific and learn traditional games and songs. Little Pim offers easy-to-use language learning products that you can integrate into your family life. Your child can start watching our award-winning series today! Get started on a fun, life-long journey!

Bonding With Your Child Through Your Native Language

Creating bonds is a very important part of raising children. It allows them to feel nurtured and loved.
Sharing your native language with your child is a great bonding experience that can have a life-long impact.

Family ties
Many parents raising bilingual children have ties to the language through family. Today's technology makes it easy to communicate with family far away. Your child will have the great advantage of communicating and forming bonds with the extended family. Being able to communicate not only broadens social skills, it expands the family tree. For a fun craft, build a family tree with your little ones when they are old enough to recognize names and photos of relatives. Talk about the relationships between family members and go over the relevant vocabulary in your native language, i.e. the words for mother, father, brother, sister, aunt, uncle, grandma, grandpa, etc.

Cultural traditions
Languages are more than just words; there is a lot of tradition within them. Through the words of your native language, your child will learn about food and traditional dishes, and about music and instruments. They will hear stories that have been told through generations and pick up the books of great writers. They will be able to understand and participate in these traditions. The bond between you, your child, and your family will have stronger roots.

The gift that keeps on giving
Sharing your native language with your child really is a gift. It will not only set up great advantages when he or she is an adult venturing out in the world, but will also instill a strong sense of self and an emotional connection to others. One day, your child will be in the position to pass down all of the great treasures that are wrapped inside the words of that second language. Need some help introducing your child to a second language? Little Pim makes it fun and easy to learn a new language with resources your child will love! Comment below if you have any questions!
Your Baby CAN be Bilingual

Experts around the globe agree that language learning begins at a young age. Adults that attempt to learn a new language often struggle, whereas small children have the unique ability to latch on to multiple languages at a time. However, many parents face a dilemma when it comes to deciding exactly when a child's exposure to another language should begin. It's a topic that poses many valid questions among parents and educators: "When should I begin teaching my child a second (or third) language?" "Should I wait until they can talk?" "Should I wait until they've mastered English?" "Will exposing them to too many languages at once cause communication difficulties later on?" Science has shown us the answer, and it's groundbreaking. Babies can learn multiple languages at a time with no delay in language development as a result. In fact, beginning multilingual exposure in infancy may give your child an edge over their peers later on. Oral vocabulary is critical for children as they achieve literacy in any language, and this advantage doubles when a child is fluent in more than one language. A University of Washington study determined that children exposed to other languages during the first year of life fared better in preschool, due in part to their greatly increased vocabularies. The bilingual children in the study were shown to understand written language at an earlier age than their peers. Children should be exposed to multilingualism as early as possible: 6-month-old babies can understand spoken language with great clarity, and infants as young as 7 months can understand and keep their languages separate. As children reach a verbal age, they commonly mix languages together, but this is not at all a bad thing. It is a common occurrence in young and old bilinguals alike, called "code switching." Code switching is an almost universal step for children as they learn to verbalize multiple languages correctly.
Children that mix their languages do so only temporarily, whereas adults that learn later in life commonly struggle with it. You can teach your child several languages at once without "damaging" them in any way. Considering that over 60% of the world population is multilingual in some way, it's easy to see that human beings are hardwired to know more than one language from the start. Learn more about the benefits and how to raise your children to be bilingual at Mom Loves Best. Read the full Mom Loves Best blog post on "How Your Child Can Benefit From Being Bilingual" by Jenny Silverstone for more helpful tips and information. Here at Little Pim, we have many products that encourage language immersion from an early age. Do you have any little polyglots running around? If so, let us know in the comments below!

Outstanding Information on Teaching Your Child Another Language

Teaching your child a second, or even third, language is exciting, stimulating, and fun, not to mention an experience that will bring you and your child closer. Moreover, the best part is that you will be doing a great service for your child. Approximately two-thirds of the world is bilingual, and in the United States alone, the number of children who speak a language other than English has increased to 21 percent. The benefits of learning another language are well documented; a few of them include:
- Increased intelligence
- More fluent verbal skills
- Greater memory ability
- Problem-solving savvy
- Improved cognitive skills
- Better reading/writing skills
- Larger worldview

As a parent, you may have a lot of questions about how, where, or when to begin the journey of introducing your child to a new language. Let's look at a few of the questions parents have.

When is the best time to teach my child?
Research shows that babies and toddlers are at the prime age for learning a second language. As astonishing as it sounds, the brain of a baby is wired for learning language.
The sounds of a language form patterns that the brain, acting much like a computer, encodes, decodes, and stores in memory. Before the age of six is ideal.

How can I possibly teach my child another language when I don't know the language?
This is probably the biggest concern and obstacle for a lot of parents, but with immersion-style videos, books, and entertaining material, your baby can begin learning the language whether you know it or not. Actually, you will learn right along with your child. Engaging videos are a must to attract the attention of a small child. Our Entertainment Immersion Method® engages a child's natural love of play and learning through repetition. Colorful books to touch, upbeat music, and flashcards all work to reinforce the language.

Where can I find a program that will effectively teach my child another language?
At Little Pim, we have developed a highly visual language-learning program that children fall in love with. One reason our program is effective is that children can relate to their "teacher," who happens to be the delightful, animated Little Pim panda bear. The books and videos host the adorable panda, so children come to know and love the little bear. They will look forward to learning. One child's parent is quoted as saying her son "loves the animations of Little Pim and often asks to watch them over and over again. He loves to yell the words he knows…" Teaching your child a second language has never been more fun. Choose from our 12 language sets to watch a free preview of Little Pim today!

Pokémon Go Guide for Parents with Young Kids

Everyone is going Pokémon crazy with the release of Nintendo's new app, Pokémon Go. As a parent of little ones, it's important to learn about the pros and cons of this app before letting your kids dive in on the fun.
We've been playing for almost a week - for research purposes only, we promise ;) - and have seen the big phenomenon hit the streets of Manhattan and across the country. You've probably heard the news regarding the potential dangers of playing the game, or perhaps you've downloaded the app yourself and can't get enough. We've compiled some great tips about how to make Pokémon Go a fun, safe, and educational game to play with your little ones.

Protect Your iTunes or Google Play Password from Your Kids
Pokémon Go is free to download, but there are in-app purchases of PokéCoins for different items in the "Shop." These purchases require you to log in to your iTunes or Google Play account, so be sure your kids are not able to do so by disabling in-app purchases or keeping your password safe, to avoid getting a huge bill at the end of the month. You and your family can still have all the fun for free, as long as you play wisely to collect more items from PokéStops.

This app requires cellular data
(Photo courtesy of J House Vlogs on YouTube.) Like many mobile apps, Pokémon Go uses your cell phone's data, so hopefully you have an unlimited data plan; otherwise you'll probably start receiving texts from your carrier warning you that you've used a majority of your data this month. If you're hitting the max data allowed per month, you may need to turn your data off until the cycle restarts. Also, this app will do a number on your battery life. Make sure you're fully charged before you head out the door, or carry a charger with you.

Make it Fun AND Educational
Playing the app is rather simple once you understand what to do. You play as a Pokémon trainer who collects Pokémon (cute little "pocket monsters" with unique traits and skills) outside.
The app connects to your GPS to show your location and the whereabouts of Pokémon in the wild, nearby PokéStops, and gyms where you can virtually battle other players. At the end of the day, you and your kids could be walking miles on this virtual scavenger hunt while discovering local landmarks and small businesses that you'd normally never visit. This provides a great opportunity for kids to get outside and explore, with your supervision of course. When you get to a PokéStop that is a historical landmark, spend time with your little ones reading about the landmark and start discussions about its history. Playing Pokémon Go during summer vacation can be a fun way to teach your kids about your local surroundings and to provide incentives to take trips to the library or museum for more typical summer learning. You can even use family trips to a local gym or PokéStop as an incentive for finishing a desired task or summer reading.

Always Be Aware of Your Surroundings
According to the App Store and Google Play store, the recommended age to play is 9+ due to a warning for "Infrequent/Mild Cartoon or Fantasy Violence." Our biggest concern is little kids roaming the streets while looking down at their device ("distracted walking") or being "lured" into a dangerous area, which is why we recommend that a parent or guardian always be present to supervise your children, especially young ones, when playing this app. Recent reports mentioned that players have used "lures" (a feature that draws more Pokémon to a location) to plan robberies or to lure children. Always look up when walking, and hold onto your kids when crossing a street or intersection. We recommend playing this game at your local park or in an area with little traffic. Another part of the game involves eggs that hatch into new Pokémon. When you collect an egg, you can incubate it by walking a certain distance (2 km, 5 km, or 10 km) to make it hatch.
We love that this feature gets you and your whole family outdoors walking instead of indoors on the couch. Different types of locations have different varieties of Pokémon, so you will have plenty of opportunities to explore fun spots with your kids. For example, when you visit a body of water such as a lake or river, you will see more water Pokémon.

It's a Great Way to Make New Friends
Parents playing the app with their little ones will quickly notice they aren't the only ones. When walking to a PokéStop, or to a local museum or library that put out a lure to gather people for an event, you will most likely make a connection with another family. Since school is out, now's the perfect time to get out there and meet other parents and children who have similar interests. It's also a great opportunity to connect with your local area's small business owners and support them by buying the family ice cream or a delicious pizza pie! Due to the game's diverse players, you're probably going to meet a bunch of families who are also raising bilingual children. This gives your kids a great opportunity to practice speaking their second language with other children their age.

Language Learning with Pokémon Go
Here at Little Pim, we're all about making language learning fun, easy, and effective for young children. We thought of ways to tie language learning into the game to keep their brains active all summer long. You can have your kids count the number of steps to catch a Pokémon in the foreign language they are learning. If the Pokémon is farther away, help them out with the bigger numbers, and eventually they will learn all the numbers in the new language. This app also nudges you to learn the metric system: the distance to walk to hatch your eggs is given in kilometers, which you can convert to miles. A recent article by MentalFloss pointed out that according to Google Trends, searches for "how far is 2 km" and "how far is 5 km" spiked after July 6.
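If you'd like to work out the conversion with your kids (or double-check Google), here is a minimal sketch in plain Python; the constant is the standard definition of the international mile, and the function name is just our own:

```python
KM_PER_MILE = 1.609344  # one international mile is exactly 1.609344 km

def km_to_miles(km: float) -> float:
    """Convert a distance in kilometers to miles."""
    return km / KM_PER_MILE

# The three Pokémon Go egg distances:
for km in (2, 5, 10):
    print(f"{km} km is about {km_to_miles(km):.1f} miles")
```

So a 10 km egg works out to roughly a 6.2-mile walk.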
Create your own flashcard set with a Pokémon Go theme. Choose vocabulary words that you encounter while playing the game, i.e. street, library, tree, ball, catch, throw, as well as all the related animal names you can think of. If your child is learning Japanese with Little Pim, teach them the 1st Generation Pokémon's Japanese and English names.

Explore New Cultures
Here in New York City, we have an extraordinary mix of different cultures within walking distance. For example, you can take a family trip over to Koreatown with your little language learners to get a glimpse of Korean culture and enjoy the delicious cuisine at an authentic restaurant. Perhaps you'll run into a nice family of native Korean speakers who are also playing the game, sparking up a conversation so your child can practice speaking Korean. Head over to Little Italy to catch some Pokémon and practice your Italian by pronouncing the various food and restaurant names. Enjoy some delicious Italian cuisine while in the area. Learn more about NYC's ethnic neighborhoods from BusinessInsider to begin exploring this summer, whether you're a local or just visiting.

Have Fun and Be Safe
Outdoor play and social interaction are great for kids, but can also present risks. As a parent of little ones, we recommend you supervise your child's cellphone use and play this fun game by their side. Make it a family activity and take the opportunity to teach your kids about "stranger danger" and the risks of "distracted walking." We hope you enjoyed reading this guide and wish you the best of luck in "catching them all!" If you have any other tips for parents playing Pokémon Go with their kids, please comment below. Don't forget that you can also take Little Pim with you during summer vacation with our digital downloads, available in 12 languages. Your kids will be speaking a new language in no time with our unique approach.
Learn more on our website or contact us during business hours. Enjoy the rest of your summer and stay safe!

A Simple Guide: Which language is best for my child to learn?

Choice is an incredible gift. For parents, however, it can also be paralyzing. When our choice concerns our children's education, we catalog every possible option, outcome, success, and worst-case result, don't we? Little Pim applauds parents who want desperately to choose what's best for their child. We recognize that this deliberation is firmly rooted in love, so we not only gift you with choices, we also equip you with helpful tools for choosing. With multicultural awareness and the need for well-rounded children stronger than ever, we're thankful for your interest in at least one of the 12 language programs we offer. You've likely had the thought: which language is best for my child to learn? The following guide should help you confidently navigate your choice, along with this important note: children aged 0-6 have brains best suited for learning up to three languages at once! If you can't choose one, why not consider two or more? Your child will thank you later. What a unique potential to influence our world!

Little Pim's Twelve Language Programs:

Spanish
As the 2nd most common language in the United States, Spanish is one of the simplest languages for English-speaking children to learn and one of the most useful languages in the world for travel. There are over 414 million Spanish speakers in the world. Spanish lends itself well to learning other Latin-based languages in the future, such as French and Italian; these languages all have Indo-European roots and share some characteristics that are present in Spanish but not English. Knowing Spanish can open up many job opportunities for your little ones, especially in the United States in the healthcare and education industries.
Check out LeapFrog's blog to learn about 10 benefits of teaching your child Spanish. Did you know that French is the most widely studied language in the world? As the official language of over 29 countries, French is highly utilized in the world of higher learning, literature, culinary arts, and fashion. It is also recognized as an official language of the United Nations. There are also many words in the English language that have French origins, such as "rendezvous" or "cinema." French is also one of the foreign languages our founder, Julia Pimsleur, chose for her two boys, Emmett and Adrian. Adrian speaks fluent French and Emmett speaks some Spanish, French and Hebrew. Mandarin Chinese It's the most widely spoken language in the world! An increasing demand for Mandarin-speaking employees is just one reason to start your child early! Spoken by over 1 billion people worldwide, Mandarin is an official language of the United Nations. Mandarin Chinese is tonal, which means that pitch is used to distinguish its lexical or grammatical meanings. The earlier a child begins to learn this language, the easier it is for them to pick up on the differences in tone and begin employing them correctly. The latest trends we’ve seen at Little Pim are new parents choosing to teach their child Mandarin alongside romance languages like Spanish and/or French. As the official language of the former Soviet Union, Russian is still spoken in 15 European and Asian countries. Russian is spoken by almost 280 million people worldwide, and is an official language of the United Nations. It is the fifth most frequently spoken language in the world. International political developments and growing business opportunities with multinational companies have led to increased demand and opportunities for Russian speakers. The Russian alphabet is easy to learn and only has 33 letters. 
It is a Cyrillic script, which is a writing system used for alphabets across Eastern Europe, as well as North and Central Asia. The Russian alphabet is wonderfully phonetic, making it even easier than English, as the letters have a consistent pronunciation. Italian remains one of the top 5 languages studied in US colleges. Over 7,500 US businesses do business with Italy, which hosts over 1,000 US firms. If your child is a musician or music lover, he or she will love learning Italian. Did you know that Italian is the language with the highest number of words for naming food, restaurants, dishes, and produce? For more reasons to learn Italian, check out The Italian Academy's article on the "Top 10 Reasons to Learn Italian." As the 10th most spoken language in the world, German shares roots with English. Phew! There are thousands of closely related words known as "cognates." Why not try this language, long associated with academia and science? Knowing German also increases business opportunities, as Germany is the #1 export nation in the world. Almost every nation in the world includes some aspect of Japanese culture and commerce. Tourists flock to Japan annually, drawing from its offerings and influence. Japanese is the 9th most spoken language in the world, with 128 million speakers. Japan has the 2nd largest economy in the world, which leads to increased demand for Japanese-speaking experts. Learning Japanese may also inspire your child to learn the other Asian languages we offer, such as Korean or Mandarin Chinese. As an official language of the United Nations, Arabic is the most widely spoken Semitic language. Arabic is spoken by roughly 300 million people. Many English words have Arabic roots, such as 'candy' and 'spinach.' Yum! According to one report, "In the last 15 years, U.S. government agencies have expressed a much greater need for Arabic speakers to address the complex political, military, and economic questions surrounding U.S. 
engagement in the Middle East and North Africa." Over 10 million people speak Hebrew daily. Worldwide, millions more study Hebrew for both religious and cultural reasons. If you or your little ones plan to travel to Israel, learning Hebrew will definitely come in handy as it's the national language. Israel is also one of the fastest-growing high-tech economies in the world. Learning Hebrew can be easy and fun, especially with Little Pim by your side. Welcome to the language of the Southern Hemisphere! Because this language is rarely studied, speaking it is an incredibly marketable skill. Did you know that Portuguese is the 6th most spoken language in the world, with 215 million native speakers? By learning Portuguese, your kids will have a much easier time picking up any of the other romance languages like Spanish, French, or Italian since they all have Latin roots. Korean is currently growing in popularity due to South Korea's powerful economy, geopolitical importance, art and culture. There are over 80 million Korean speakers in the world and the Korean culture is like no other. Many people choose to study Korean because they fell in love with the culture. Korea is famous for K-pop music and Korean dramas. For more reasons, check out our blog post on why your child should learn Korean. Little Pim's most popular language program outside the United States is our English/ESL program. After Chinese and Spanish, English is the world's third most spoken language with over 335 million speakers worldwide. Learning a second language can be fun, easy, and effective with Little Pim. Language learning should always be a positive experience and cannot be rushed. Remember to praise your little ones for speaking in the second language. Teaching your child a foreign language can be a great way to give your child a head start and prepare him or her for the global economy. For more extensive explanations, you can read further here. 
And of course, please do not hesitate to comment below or contact us with any questions. Why Bilingualism is Crucial to Your Child's Future The world is getting smaller and smaller. Jet liners, bullet trains, the internet and new international markets are blurring the lines on our old maps. Our future is changing. The world that our children grow into isn't going to be the one we or our parents grew up in. That's why it's time to take the future seriously. Parents, grandparents and teachers need to put on their "game faces" and have a serious talk about bilingualism. When a child is bilingual, their mind opens up to an entirely new world. We know that in this ever-changing global economy, those fluent in more than one language have better odds at a brighter future. The United States has seen a rapid change in language and culture over the last century that has facilitated the growth of professional bilingualism in the public and private sectors. To put it into layman's terms: bilingualism = jobs. Translators have always been an important component at every level of government and business. But translating isn't the only profession that requires the mastery of another language. Today, educators and medical professionals often find themselves in situations that require the use of a language other than their native tongue. Complex global affairs have caused leaders to identify a need for bilingual talent within the government. Corporate outsourcing has increased the amount of multilingual interactions in the business world. Many nations around the world are rising as economic superpowers - such as Russia, China, and India - and learning the languages of such nations increases the desirability of any potential hire. You must be asking: are these things relevant to my child now? Foreign language careers are on the rise. When your bright-eyed three-year-old graduates from college, she'll enter into a job market in which multilingualism is a highly sought after skill. 
Research done by Korn/Ferry International found that over 66% of North American recruiters felt that being bilingual would become extremely important over the next 10 years. Today, many HR departments require eligible candidates to be bilingual. If you look on any job posting website, you will likely see hundreds of jobs - even part-time work - that require bilingual candidates. Language learning should start young. Adults can learn languages, but as our brains mature they tend to over-analyze. This makes it incredibly difficult for many adults to pick up a second language. Young children don't have this problem. According to a study at MIT, children go through a "sensitive period" for language learning that lasts until puberty. Between birth and five years of age, the human brain is hard-wired for learning multiple languages. After age five, this critical window begins to close and it gets much harder to acquire a new language and a good accent. Language learning is proven to "feed the mind." Learning another language gives kids an educational edge over monolingual peers. Longitudinal studies at Harvard suggest that language learning "increases critical thinking skills, creativity, and flexibility in children." Speaking more than one language can help kids with planning and problem solving. It also helps children with attention and cognition. According to Psychology Today, children in bilingual environments perform better on standardized tests and have better academic performance in general. To give your kids a leg up in a competitive educational environment as well as the job market, it's imperative that language immersion starts now. Getting your child started in language learning can give them the skills they need for a secure future. At Little Pim, we're here to help you through that journey by giving you the tools that you need. 
If you have questions about how Little Pim could benefit your child, or about the benefits of language learning, don't hesitate to contact us or comment below today.
Aliens may have travelled to Earth already without us noticing, according to a NASA scientist. Silvano P. Colombano, a computer scientist at NASA Ames Research Centre, says humans may not have even noticed the extraterrestrials' arrival on Earth because they may look vastly different from what we expect. Colombano says the aliens could have arrived on the planet in a form other than carbon-based organisms, meaning they would go undetected. In a research paper, he wrote: "I simply want to point out the fact that the intelligence we might find and that might choose to find us (if it hasn't already) might not be at all produced by carbon-based organisms like us. The size of the 'explorer' might be that of an extremely tiny super-intelligent entity," reports The Independent. "If we adopt a new set of assumptions about what forms of higher intelligence and technology we might find, some of those phenomena might fit specific hypotheses, and we could start some serious enquiry." Colombano also says it is worth reconsidering what civilization may look like across the universe. He said scientists should "consider further that technological development in our civilisation started only about 10,000 years ago and has seen the rise of scientific methodologies only in the past 500 years". He added that because of this, humans may have a "real problem predicting technological evolution even for the next thousand years, let alone six million times that amount." Scientist Dr Maggie Aderin-Pocock also said extraterrestrial life could look very different than expected. 
Basing her theory on how life began on Earth, she believes such creatures would be large marine-type animals with metallic skins and orange backsides. They would suck chemicals from the atmosphere and communicate using pulses of light along their spines.
Return to Learning Lab Cranial Cruciate Ligament Tears and Bracing in Dogs Tearing of the cranial cruciate ligament (CCL) is one of the most common injuries in dogs and it commonly leads to decreased use of the limb, pain, and subsequent arthritis. This condition is typically referred to as cranial cruciate ligament disease and is caused by a combination of factors including aging, obesity, poor physical condition, and genetics. It is important to note that partial tearing of the CCL will lead to a full tear over time, and dogs that have a tear in one knee have a 40%-60% chance of developing the condition in the contralateral knee. Torn ligaments do not heal and cannot be repaired completely. The PawOpedic™ Custom Stifle Orthosis is the perfect solution for pets with CCL injuries when surgery is not an option, as a preventative measure, or as a postoperative support device. How Does it Work? The CCL is a main stabilizer of the knee joint (stifle). It runs across the stifle, attaching the femur to the tibia. The CCL holds the tibia in place and prevents internal rotation and hyperextension. Without the CCL intact, the surrounding ligaments and musculature are all that is left to prevent the tibia from translating forward and internally rotating. PawOpedic's Custom Stifle Orthosis can help prevent this unwanted movement by applying corrective forces that hold the tibia in the proper position in relation to the femur. Figure 1. Comparison of unbraced CCL tear on left and braced CCL tear on right. Notice the forward translation of the tibia in relation to the femur in the unbraced injury.
Algebraic Expressions You may be asked to solve some simple algebraic equations, mostly with ratios and proportions, and to convert between various units. Recall from Section 1: Ratios and Proportions, that a fraction is one way to write a ratio. Let’s say that there is a ratio that is expressed as a fraction, and you would like to know what that same ratio would be with a different denominator. This will be useful for comparing different ratios with one another and for presenting ratios in an easily understood manner. Here is an example: We’ll start with a ratio of 2.1:10. This can be written 2.1/10. Now, for the moment, let’s assume that this represents the number of disease cases in a population of 10 people. It can be awkward to think of fractions of people. You would never walk into a room and see 2.1 people standing there talking about the weather. For this reason, in Epidemiology, it is conventional to avoid fractions of people. To do this, you just increase the population size under consideration (the denominator). This is very easy to do if you increase the denominator size by a factor of ten. In that case, all you need to do is move the decimal one place to the right in both the numerator and the denominator (multiply the fraction by 10/10). Like this: 2.1/10 = 21/100 You could keep going, if you wanted to: 2.1/10 = 21/100 = 210/1000 = 2100/10,000 = 21,000/100,000 Similarly, if you wanted to decrease the size of the denominator by a factor of ten for some reason, you would just move the decimal one place to the left in both the numerator and the denominator (multiply the fraction by 0.1/0.1). Like this: 21/100 = 2.1/10 = 0.21/1 = 0.21 What if the ratio does not have a factor of ten as its denominator? For example, let’s say we have a ratio of 8:20,000 and you need to know how many that is per 100,000. 8/20,000 = x/100,000 Here, we can’t just move the decimal around. Happily, there is a simple method we can use to convert this fraction, and solve for x. 
Some people call it the ‘flying x’ method, or ‘cross multiplication’, because you multiply across the ‘equals’ sign in an ‘x’ pattern. To do this, first multiply the numerator of one fraction by the denominator of the other. This number goes on one side of the ‘equals’ sign. Like this: 8/20,000 = x/100,000 8 * 100,000 = ? 800,000 = ? Then, you do the same thing again with the remaining numerator and denominator. This number goes on the other side of the ‘equals’ sign. Like this: 8/20,000 = x/100,000 8 * 100,000 = 20,000x 800,000 = 20,000x All that is left to do is solve for ‘x’. To do this in our example, we divide both sides by 20,000. So, we get 8/20,000 = x/100,000 8 * 100,000 = 20,000x 800,000 = 20,000x x = 800,000/20,000 x = 40 8/20,000 = 40/100,000  Let’s look at two more examples: 8/21,463 = x/100,000 8 * 100,000 = 21,463x 21,463x = 800,000 x = 37.27 8/21,463 = 37.27/100,000 Remember, if this were something like cases/population size, we would want to avoid fractions of people, so we might write it like this: 8/21,463 ≈ 37/100,000 Here’s the second example: 0.53/73 = x/100,000 73x = 0.53 * 100,000 73x = 53,000 x = 726 0.53/73 = 726/100,000 This method works just as well if you want a different denominator. Here’s an example: 1/8 = x/60 8x = 60 x = 7.5 1/8 = 7.5/60  Frequently, you will have to consider units of measurement (like centimeters, people, or bushels) in your calculations. Sometimes, it will be important to have all parts of an equation in the same units. To do this, you may have to convert between units. For example, you may need to convert part of your equation from ounces to grams. The easiest way to do this is to multiply the part of the equation that you need to convert by a fraction that is equal to one. Recall that a number divided by itself is equal to one, like 7/7, 21/21, or 50,000/50,000. Recall also that multiplying any part of an equation by one does not change its value at all. What does all that mean for us here? 
Just this: If there are 5 ships in a flotilla, then 5 ships/1 flotilla = 1, and 1 flotilla/5 ships = 1 Here’s a real example: There are 0.035 ounces in a gram, so 1 g = 0.035 oz and… 1 g/0.035 oz = 1 = 0.035 oz/1 g You can treat units like numbers. They can be multiplied or divided. This means they can also be cancelled. If you multiply two fractions, and they both contain the same units, except that one fraction has the unit in the denominator and the other in the numerator, they cancel each other out. Like this: oz/person * g/oz = g/person The ounces cancel each other out. So, let’s say each person in a group got 8 ounces of steak, and we need to know how many grams each person got. 8 oz/1 person * 1 g/0.035 oz = 8 g/0.035 people This is a very awkward fraction, so we change the size of the denominator: 8/0.035 = x/1 0.035x = 8 x = 228.6 so…each person got 228.6 g of steak. Let’s try another example. 60 inches/1 measuring stick = x cm/measuring stick We know from a table that 1 inch = 2.54 cm. So… 60 in/1 stick * 2.54 cm/1 in = 152.4 cm/1 stick = 152.4 cm/stick This works just as well for denominators greater than 1, or for when you need to convert the denominator. Here’s an example with a denominator greater than 1: 1 forest = 40 trees 20 forest/45 glade = x trees/45 glade 20 forest/45 glade * 40 trees/1 forest = 800 trees/45 glade, and if you want… 800 trees/45 glade = 17.8 trees/glade or 178 trees/10 glade Remember how to get that? An example converting the units of the denominator: 40 dots/1 inch = x dots/cm 40 dots/1 inch * 1 inch/2.54 cm = 40 dots/2.54 cm = 15.75 dots/cm This method of unit conversion is very helpful when setting up equations. If you do it this way, you can check to see if your equation is set up correctly by looking to see if you end up with the units you want. Here is an example using made-up money conversions: Let’s say 2 pounds = 1 dollar, and 1 pound = 150 yen, but you need to know how much 20 dollars is in yen. 
You could blithely start out with this equation: 20 dollars * 1 dollar/2 pounds * 1 pound/150 yen = x Is this the right equation? Let’s check the units… dollars * dollars/pounds * pounds/yen = dollars2/yen We get ‘dollars squared over yen’, not plain old yen. So, it can’t be right. Let’s try again using our cancellation method. We want to go from dollars to yen via pounds. So, we start with dollars and convert to pounds. dollars * pounds/dollars = pounds Then we convert pounds to yen. pounds * yen/pounds = yen When we put this together, we get dollars * pounds/dollars * yen/pounds = yen This is what we want, so we just plug in the numbers and multiply it out. 20 dollars * 2 pounds/1 dollar * 150 yen/1 pound = 6000 yen/1 = 6000 yen
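The two techniques in this section (cross multiplication and unit cancellation) can be sketched in a few lines of code. This is a minimal illustration using only the worked numbers above; the function names `rescale_ratio` and `convert` are my own, not part of any standard library:

```python
def rescale_ratio(numerator, denominator, new_denominator):
    """Solve numerator/denominator = x/new_denominator for x by
    cross multiplication: numerator * new_denominator = denominator * x."""
    return numerator * new_denominator / denominator

def convert(quantity, unit, factors):
    """Multiply through a chain of conversion factors, cancelling units.
    Each factor is a tuple (value, to_unit, from_unit), read as
    'value to_unit per from_unit'. A factor whose from_unit does not
    match the running unit cannot cancel, so we raise an error."""
    for value, to_unit, from_unit in factors:
        if from_unit != unit:
            raise ValueError(f"cannot cancel: have {unit!r}, factor is per {from_unit!r}")
        quantity *= value
        unit = to_unit
    return quantity, unit

# Worked examples from the text:
print(rescale_ratio(8, 20_000, 100_000))            # 8/20,000 = 40/100,000
print(round(rescale_ratio(8, 21_463, 100_000), 2))  # 37.27 per 100,000

# 2 pounds = 1 dollar, 1 pound = 150 yen; convert 20 dollars to yen:
print(convert(20, "dollars", [(2, "pounds", "dollars"), (150, "yen", "pounds")]))
```

Note how the unit check in `convert` plays the same role as inspecting the units by hand: setting the chain up backwards (e.g. starting with the yen factor) fails immediately instead of silently producing "dollars squared over yen."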
District: Safed Population 1948: 1030 Occupation date: 01/05/1948 Military operation: Yiftach Jewish settlements on village\town land before 1948: None Jewish settlements on village\town land after 1948: None Mallaha Before 1948 The village was located on the north bank of Wadi Band, a seasonal watercourse which flowed into the northwest corner of Lake al-Hula. This wadi was filled by water from the spring of ‘Ayn Mallaha, which lay to the south of the village and which was one of the most copious springs in Palestine, yielding between 1,800 and 2,700 cubic meters of water per hour. The village of Mallaha lay along a highway that led to Safad and Tiberias. In 1157, following a breakdown in relations between Damascus and the Crusader Kingdom of Jerusalem, Mallaha was close to the site of a battle between the armies of Nur al-Din ibn Zangi (also known by his first name Mahmud) and King Baldwin III (in command of the Templars), in which the Muslims decisively defeated the Crusaders. Their king escaped, however, together with a small bodyguard. The Syrian Sufi traveler al-Bakri al-Siddiqi, who journeyed in Palestine in the mid-eighteenth century, passed by a village named al-Mallaha, which may have been the present village. The American biblical scholar Edward Robinson observed in 1838 that al-Mallaha lay northwest of Lake al-Hula. The modern Mallaha had a roughly rectangular configuration that stretched from north to south. Its entire population was Muslim. Agriculture was the mainstay of the village’s economy. In 1944/45 a total of 1,761 dunums was allocated to cereals. Occupation and Depopulation Israeli forces seized Mallaha at the very end of Operation Yiftach (see Abil al-Qamh, Safad District), on 25 May 1948. They induced the villagers to flee by carrying out a campaign of psychological warfare. 
However, a direct attack, which may have included the use of mortars, cannot be ruled out, since most of the psychological warfare was conducted some ten days before the reported date of evacuation. Furthermore, Zionist forces directed mortar fire against a number of other neighboring villages at this time in the context of Operation Yiftach. Israeli Settlements on Village Lands There are no Israeli settlements on village lands. The settlement of Yesud ha-Ma’ala, founded in 1883, is some 5 km to the southeast. The Village Today The sandy hill on which the village was situated is completely overgrown with tall grass, cactuses, and weeds, as well as an assortment of fig, eucalyptus, and date-palm trees. Amidst the overgrowth, stone rubble from destroyed houses can be seen. The surrounding land is cultivated by the settlement of Yesud ha-Ma’ala.
Barak and Yehudah's Paradigm (Ideas #22) Leadership is the art of motivating people to do what they would not otherwise be inclined to do. Axiomatically, Jewish leadership is almost an oxymoron. We are a stiff-necked people for good and for bad. Jews are not prone to bending their will and as a result, many a Jewish leader has had more than their fair share of heartache. In the three parshiot dealing with Yosef, we have a most striking contrast between effective and ineffective leadership. Yehudah's natural leadership abilities are dramatically magnified by his brother Reuven's seeming ineptitude. Reuven's awkward attempts to provide leadership meet with limited success. Yehudah, however, is able to bend the will of his brothers, his father and even the viceroy of Egypt. Great insight can be gained by comparing the way Reuven and Yehudah try to convince their father to allow them to return to Egypt with Binyamin. Yakov's indifference to Reuven's plea is easily understood, given the assurances Reuven gives his father. If Reuven's plan fails and Binyamin is lost, Yakov is given the right to inflict poetic justice (midah k'neged midah) on Reuven by killing his sons. It is hard to imagine why Reuven would believe that Yakov would be motivated by the thought of killing his own grandchildren. In contrast, Yehudah provides convincing arguments, personal responsibility and an appeal to the unity of the group he is trying to lead. Compared to Reuven, who speaks of my children vs. your children, Yehudah speaks about the survival of all three generations together. Knowing that Yakov cares greatly for his children and grandchildren, Yehudah points out that if permission is not granted to go to Egypt, the entire nation will die. Not only does Yehudah stress his personal responsibility to his father, he later lives up to it when Binyamin is arrested: Yehudah's immediate response is to offer himself up as a captive in place of the brother he has sworn to protect. 
More than anything else, however, Yehudah is a master of timing. Reuven's idea of timing is to address a situation as soon as it arises. Yehudah knows that one must be patient, that silence is better than speaking to men unwilling to listen and consider alternatives. As soon as the brothers return from Egypt without Shimon, Reuven tries to convince Yakov to take the necessary risks and send them back with Binyamin. Yakov was certainly still in shock at the loss of Shimon, and certainly in no mood to be convinced into taking more risks. Yehudah, however, bides his time and waits for the famine to get more pressing - he knows to wait for his father to calm down and realize the questionability of his own obstinacy. Later, in the midst of a discussion between Yakov and his sons that seems to be going nowhere, Yehudah senses his cue. The very futility of that discussion gives proof to Yehudah's assertion that the status quo is untenable. Yehudah knew that the most convincing proofs would not have worked until that point in time when Yakov was ready to listen. Timing was also what allowed Yehudah to save Yosef from the pit in the first place. Whereas Reuven strikes an uneasy compromise with his brothers, Yehudah is able to get his brothers' full endorsement. Rashi marks the difference between the brothers' responses to Reuven and Yehudah, by noting that the brothers' "shmeeya" of Yehudah denotes acceptance, something absent from their earlier acquiescence to Reuven. As in the previous example, Yehudah is only able to convince his brothers once they calmed down and their guilt feelings started to seep in. The brothers had to be ready to listen. While Ehud Barak's initial moves seemed impressive (see Ideas #12), he is now showing that he lacks Yehudah's most powerful leadership tool - timing. Indeed, Barak mounted arguments that, like Yehudah, spoke of national unity and personal commitment. He distinguished himself from Rabin by showing sincere appreciation for the settlers. 
He avoided Rabin's unfortunate decision to vilify his opponents on the right and instead sought to explain why his policies were for the benefit of all Israelis. So far, so good. In August, Barak could have convinced a majority of Israelis to make great sacrifices for the sake of peace. I believe that the further concessions now being discussed would have been doable (and even productive) several months ago. That was then. Yehudah would have realized that by now, the window of opportunity (or insanity for those of you on the right) has been shut. Barak himself articulated the gut reaction of most Israelis when under attack - no negotiations and no new concessions. Right or wrong (and I think it's right), we are not in a mood to be generous to people who resort to violence when they cannot get what they want. Our emotions are still frayed by the random killing and maiming of Jews endorsed by Palestinians across political lines. Right or wrong, we are not ready to listen. Like Yehudah would have, Barak should have realized that silence is better than trying to convince people at a time when they cannot listen. The timing has been lost and what is left is to wait for better opportunities in the future. Perhaps Yehudah had the luxury of not having election deadlines, but ultimately it makes no difference. A true leader will always be able to save the day when the timing is right. When the timing is not right, a skilled leader bides his time. Otherwise he ends up becoming like Reuven: at best gaining resentful compromises and at worst becoming totally irrelevant.
What Is A Skirt Conveyor Belt? Apr 22, 2019 A skirt conveyor belt is a PVC conveyor belt fitted with corrugated sidewalls (skirts), mainly to prevent loose material from falling off the sides. With the development of domestic industry, the food, tobacco, medicine, agriculture, electronics and other industries have gradually adopted lightweight skirt conveyor belts, which are made by high-frequency welding of thermoplastic elastomers such as PVC and PU. They are flexible to process and light in application, offering many varieties, high performance, light weight, multiple functions, long life, a large conveying angle, a wide range of uses, a small footprint, and a large conveying capacity. With no transfer points, they reduce civil construction investment and maintenance costs, and they reach conveying angles that ordinary or patterned conveyor belts cannot. Types: 1. Conveyor belt without partitions 2. Conveyor belt with partitions Product structure: 1. A high-strength, highly wear-resistant base belt with greater lateral stiffness and longitudinal flexibility. 2. High-strength hot-vulcanized rubber corrugated sidewalls. 3. Horizontal partitions that prevent material from sliding down. Daily maintenance: 1. Keep the conveyor belt clean and avoid direct sunlight; 2. Store it rolled, not folded; 3. Different conveyor belts should not be used together; 4. The running speed of the conveyor belt should not be too fast.
November 6, 2014 by ACE Physical Therapy and Sports Medicine Institute Treating Trigger Finger by ACE Physical Therapy and Sports Medicine Institute Tips for Trigger Finger: • Trigger finger affects women more than men, usually between the ages of 40 and 60. • Using ice, gentle stretching and massage to the nodule can usually reduce the symptoms significantly. • If you have a trigger finger, attempt to move it to full extension slowly. Forceful pulling will irritate the tendon nodule further. • Using a splint to prevent finger flexion can help to reduce the inflammation. • Seek advice and treatment from a Physical Therapist if you have the symptoms of a trigger finger. If your finger or thumb catches or locks in a bent position, you could be suffering from trigger finger. This common condition can develop in people whose jobs or hobbies require repetitive finger and thumb movements. It also commonly affects people with conditions such as rheumatoid arthritis, gout or diabetes. People suffering from trigger finger may experience the finger(s) getting “stuck” in a certain flexed or bent position. What Causes Trigger Finger? Trigger finger can result from tendon damage. Finger movement relies on a group of muscles that originate either near the elbow (extrinsic) or in the palm area of the hand (intrinsic). Extrinsic muscles have very long tendons that course their way from the distal portion of the forearm through the palm aspect of the hand to varying parts of the different fingers and thumb. When a tendon moves, it slides through a tight tunnel or tendon sheath that extends the length of the finger. Overuse can cause tendon damage, causing tendinitis and pain. In severe cases the tendon can thicken, forming a nodule at the “mouth” of the tendon sheath tunnel. This makes it difficult for the tendon to slide through, so it gets “stuck” in a flexed position. This is known as a trigger finger.  
The person suffering from trigger finger symptoms might experience a "popping" sensation when the nodule passes through the mouth of the tendon sheath tunnel. The nodule is usually painful to palpation, or touch.

The most common cause of trigger finger is extensive use of one's hands. Repeated grasping of objects during work, sports, or hobbies can lead to the development of finger flexor tendinitis and nodules. People with diabetes and rheumatoid arthritis experience trigger finger symptoms more frequently than those who do not suffer from these conditions. Women between the ages of 40 and 60 suffer from trigger finger more often than men of the same age group.

Treating Trigger Finger

Normally a doctor will need to examine the affected finger. Initially, treatment focuses on reducing the inflammation and pain using the standard approach to treating any other form of tendinitis. This begins by applying ice, heat, and massage; doing gentle exercises; using a brace; and taking NSAIDs. If this is unsuccessful, the doctor might give the person a steroid injection and refer them to Physical Therapy.

The Physical Therapist introduces gentle exercises to help increase blood flow to the tendons. The exercises help to "re-align" the new tendon fibers as they heal from the tendinitis. Massage and stretching will begin to reduce the size of the nodule and restore the normal gliding action of the tendon. The therapist might use various modalities such as ultrasound, laser, electrical stimulation, ice massage, and iontophoresis to reduce the inflammation in the tendon. In the worst-case scenario, a surgeon has to perform minor surgery to "open" the mouth of the tendon sheath, which makes it easier for the enlarged nodule to pass through it. Physical therapy following surgery is geared at reducing the size of the nodule and restoring normal motion and strength of the finger flexors.
Trigger finger is a common condition that can affect people who use their hands extensively. The nodules that form in the tendon tissue can become large enough that they do not pass through the mouth of the tendon sheath tunnel. This condition, with symptoms of pain at the nodule, restricted motion of a given finger, and a possible "popping" sensation as the finger moves into full extension, can usually be treated successfully with Physical Therapy.
By CRG Staff

In September of 2009, Malcolm J. Casadaban, a molecular genetics professor at the University of Chicago, died from exposure to a form of bubonic plague, only 12 hours after he was admitted to the hospital. Dr. Casadaban had been studying Yersinia pestis, the plague bacterium; however, university officials maintain that the strain had been weakened to make it safe for research. The source of Dr. Casadaban's illness was not discovered until it was too late. The revelation that he had died from exposure to the same plague bacteria he had studied for eight years led to a slew of questions. If the bacteria was supposed to be safe for research, how could this have happened? After eight years of studying the bacteria, why did Dr. Casadaban only contract it now? And if he contracted the disease in the laboratory and then went out into the world, might others have caught it? The Centers for Disease Control and Prevention are investigating the incident along with the University of Chicago, but details about Dr. Casadaban's death and his experiments have been held tightly by the University, even from his own family. Dr. Casadaban's family is still trying to learn what led to his death, and answers from the University and the CDC have remained far from forthcoming. One can hardly help but wonder: are the investigators stumped, or is there something they are keeping to themselves?
date: 23 April 2019

Abstract and Keywords

The aim of the chapter is twofold: explaining the prerequisites for providing a phonetic/phonological annotation of speech data; and presenting the different systems that can be used to encode the phonetic and phonological events present in the speech signal. Since phonetic/phonological annotation can be seen as the assignment of a label to a specific unit in the data, the segmentation of the speech signal and the assignment of labels are crucial tasks in the annotation process, regardless of the system chosen. As to the presentation of the systems, a distinction can be made between systems that are primarily designed to represent the segmental dimension of the speech signal and those that encode prosodic events such as stress, phrasing, and tonal or intonational patterns. In this chapter, we explore the advantages and limitations of the systems presented by considering the different types of speech data that one may want to annotate (standard data or non-standard data such as acquisition data, pathological speech, etc.) as well as the amount of knowledge about the language spoken that the annotator needs to have in order to successfully transcribe speech with the system in question (whether its phonology is known, etc.).

Keywords: transcription systems, corpus annotation, prosodic transcription, segmental transcription, levels of representation, phonological representation, phonetic representation, orthographic transcription, audio annotation
Nutrition for the Body

All living things need food to survive. Since low-calorie diets are popular for promoting weight loss, one of the main concerns with these diets is the risk of losing muscle mass, a negative outcome in terms of weight loss, health risk, and functional strength. Foods to avoid are red meat, corn, and rye. Limit foods high in unhealthy saturated fat (red meat, cheese, butter, and ice cream) and trans fats (processed products that contain partially hydrogenated oil), which increase your risk for disease. "Eat a number of colorful fruits and vegetables every day, and vary the types of proteins you eat," Solomon says. Here are interesting things that happen to your body when you start eating healthily. Further, your body will shed all the excess water it retained from the high sodium intake and highly processed foods you were eating before. Vitamin C is essential for the growth and repair of all body tissues, helps heal cuts and wounds, and keeps teeth and gums healthy. During treatment for breast cancer, some people may need more protein than usual. Illnesses often occur if you suffer from a lack of vitamins. By monitoring body composition, dietitians can design nutrition programs to maximize muscle maintenance and growth. Benefits: acts as a disease-fighting antioxidant; may help to maintain a healthy immune system. Unsaturated fats are important for your body as they provide essential fatty acids your body cannot make.
Chapter 0: Instructor's Guide to Integrating Concepts in Biology

How does this book help your students achieve their learning potential?

How do people learn? What is the best way to retain information? A study led by Daniel Udovic at the University of Oregon13 compared two introductory biology courses: one was an active-learning course where students constructed their own knowledge, and the other was a traditionally taught lecture course. Udovic and his colleagues measured the mean percentage change in a pretest versus a posttest for each of the two courses (Figure 1). The test covered basic concepts in evolution, natural selection, ecosystems, communities, and populations. Changes in individual performances between pretest and posttest are plotted on the y-axis. Purple bars are means for the active-learning course, and teal bars are means for the traditional course. Class sizes were 61 for the active-learning course and 62 for the traditional course.

Figure 1. Average change in test scores organized by type of question. Changes are the difference in individual performances between pretest and posttest. Purple bars are averages for the active-learning course, and teal bars are averages for the lecture course (+ 1 SE). * = p < 0.05; ** = p < 0.01; *** = p < 0.001; unmarked bars, p > 0.05. (Figure 1 from Udovic et al., 2002, by permission of Oxford University Press and AIBS.)
These results are supported by other studies, as well as a meta-analysis of active learning studies.11,12 Years of research have shown how people learn best: 1) people learn best if they are actively engaged in constructing their own knowledge, 2) people retain information better when new material is directly related to information they already know through previous study or their world experience, and 3) comprehension is greater when people are interested in the material.15 These insights and findings are not new; thousands of years ago, an ancient Chinese Confucian philosopher is credited with this educational advice: "Tell me and I'll forget. Show me and I'll remember. Involve me and I'll understand." Active learning allows students to construct their own knowledge, which enhances acquisition and retention of information and concepts. Prior knowledge and interest are leveraged through Ethical, Legal, and Social Implications readings in each chapter. ICB enables your students to achieve their full learning potential by helping them to control their own education and encouraging them to "discover" content and concepts for themselves by analyzing real data in the context of thought-provoking research questions. ICB encourages your students to construct their own knowledge using published figures and tables. The data are from peer-reviewed scientific research as they appeared in the original publications. In traditional textbooks, the words are presented as fact, and figures are used merely to illustrate the words. ICB uses figures to supply the facts while words help your students extract the essential elements from the experimental data. In short, your students will construct their own knowledge so that they can learn and retain the information. As they gain knowledge in biology, your students will find that they can learn more and retain new information more easily. ICB uses case studies as context to help your students connect to the new information.
You can reinforce major concepts by covering fewer examples in more depth so that your students can spend more time learning and less time memorizing. The text will guide you and your students in interpretation and analysis, and will help them contextualize their new knowledge into a framework that we call the five Big Ideas. The ready-to-use PowerPoint files make it easy for you to implement this approach to learning in your everyday classroom sessions. Publishing Information Citation: Paradise, C. (2015). How does this book help your students achieve their learning potential?. Retrieved from
The Men of Jane Eyre Essay

In many works, gender relationships play a significant role. In Charlotte Bronte's Jane Eyre, the main character has, to state it mildly, an interesting relationship with males. The novel is considered a bildungsroman. A bildungsroman is a novel that tells the story of a child's coming of age, so to speak. It is the narration of the maturation, including all childhood experiences, situations, and the emotions that follow from them. Knowing this, the audience can ascertain that Charlotte Bronte's life involved many disheartening situations and relationships with men. In the novel there are no significant, completely positive male characters. Having viewed some biographies of the author, I feel it is safe to say that this is consistent with Bronte's real life. Being a male, I must state that the novel is upsetting in that it appears at first glance to be quite feminist. However, if that is how her life truly transpired, who am I to judge her novel's intention? A motif is a recurring theme, structure, or literary device used in a given work. The goal of this essay is to observe the motif of gender relationships in the early part of this novel through the male characters. I will specifically analyze Jane's relationships early in the novel with John Reed, Mr. Brocklehurst, and Mr. Rochester. The aim is to show the male influence to deny Jane's desire for equity and dignity. The first relationship the audience views Jane have with someone from the opposite gender is with her cousin, John Reed. Jane's relationship with John can be described in one word: intimidation. Early in Jane's life it is evident that she can be intimidated rather easily. Yet she had enough intestinal fortitude to lash out from time to time when she felt the line had been crossed. For example, on page 22 Jane states, "John had not much affection for his mother and sisters and an antipathy to me.
He bullied and punished me; not two or three times in the week, nor once or twice in the day, but continually: every nerve I had feared him, and every morsel of flesh on my bones shrank when he came near." Here Bronte shows John's complete and total disrespect for people, specifically women. His constant hitting and abusive attitude toward Jane brought her to conclude on page 22, "I was bewildered by the terror he inspired." Obviously John is not a good soul. This is noticeable even at an early age. As we find out later in the novel, according to Mrs. Reed, he does not turn his life for the better. On page 232 she states, "John gambles dreadfully and always loses – poor boy! John is sunk and degraded, I feel ashamed for him when I see him." He is even suicidal. The lack of a strong, positive male influence early in life is blatantly obvious to the reader. This brought Jane to become, as seen on page 23, "habitually obedient to John," as becomes common with men in general throughout much of the novel. This, coupled with Mrs. Reed's negative influence, "instigated some strange expedient to achieve escape from insupportable oppression" (pg 27). Thus, the reader goes on through the novel as Jane moves on to the next stage in life: the Lowood School. And with that experience with the Reeds, the audience can observe foreshadowing of her future relationships with men. To be more specific, she moves on to the next oppressive male while at Lowood: Mr. Brocklehurst. Mr. Brocklehurst's relationship to Jane and the novel in general can also be described in merely one word: authority. Excited to be out of her oppression at the Reeds', Jane comes to encounter a new male figure impeding her life. He is everything one can imagine to be corrupt in an institution of education. He dictates what the students eat and how they dress, when and as he permits. He is a dictator promoting a life of moderation, the mundane, and limited access to material things.
On page 72 he states, "my plan in bringing up these girls is, not to accustom them to habits of luxury and indulgence, but to render them hardy, patient, and self-denying." Some may consider these pious ideals; however, he is hypocritical, giving his family innumerable luxuries. His daughters wear dresses of excessive extravagance, and he dines on lavish meals. He dislikes Jane from the start, based on what he has been told by Mrs. Reed and on Jane's tendency to speak her mind. He punishes her, labeling her a liar. On page 76, he punishes her thus: "let her stand half an hour longer on that stool and let no one speak to her during the remainder of the day." Jane is chastised by Brocklehurst, leaving her believing she is not accepted or trusted by anyone at Lowood. She accepts these things, as Brocklehurst is her superior authority. The fact that men always seem to be Jane's superiors is a common theme throughout Jane Eyre. She is always obeying them or answering to their requests without so much as the tiniest gripe. When speaking to Helen Burns, Jane gets the true description of Brocklehurst as seen by most. Helen states on page 78, "Mr. Brocklehurst is not a god; nor is he even a great and admired man; he is little liked here; he never took steps to make himself liked. Had he treated you as an especial favourite, you would have found enemies." Mr. Brocklehurst is not a good man; he is simply another obstacle Jane has to overcome to gain equality, which is one of her aims in life. After being at Lowood for eight years, Jane decides it is better to branch out. She finds herself a governess position at Thornfield estate, the proprietor being a man named Rochester. Upon first glance, Mr. Rochester appears similar to the other males that have been involved in Jane's life: condescending and sometimes insulting.
Jane states on page 126, "'Let Miss Eyre be seated,' said he; and there was something in the forced stiff bow, in the impatient, yet formal tone, which seemed further to express, 'What the deuce is it to me whether Miss Eyre be there or not? At this moment, I am not disposed to accost her.'" He does not appear impressed with her presence or anything Jane has to offer. For instance, her piano playing, as he states on page 130: "You play a little I see; like any other English school girl: perhaps rather better than some, but not well." His condescending insights on her piano playing (as well as her sketches) are somewhat insulting, but he appears interested enough to string her along, so to speak. Jane finds Rochester "changeful and abrupt" (133). Once finishing the book, this kind of interaction between Jane and Rochester can be seen as a type of courting. But Jane's tendencies remain the same. For instance, when Rochester says "come" she obeys without argument, as is the case with the other men in the novel. Eventually, Jane falls in love with Rochester, but does not express it to him. She acts as she normally would around him. She is complacent in the fact that she accepts his superiority complex. It is not until after Rochester unexpectedly asks her to marry him that the audience can see a change in Jane. Once learning of Rochester's original wife, Bertha, Jane comes to her senses. She stops living her life according to men and leaves Thornfield; the reader can see that this was "the last straw." She will not settle for second place any longer, and certainly not in something as special as marriage. She leaves, but eventually comes back to marry Rochester only when she is considered an equal. Throughout this novel, Jane battles with dominating men. The characters analyzed above are examples of men who believe women are inferior and treat them accordingly.
All Jane wants in life is to be treated as an equal; she struggles with this throughout her life because she has to conform to male superiority at some points just to get things accomplished. But that is exactly where Jane's internal struggle lies. It is as though the audience can visualize Jane questioning herself throughout the novel: "Why am I listening to him? Is it because he is a man?" Finally, she comes to grips with what she wants in life and gains it, although half her life is spent trying to attain the dignity and equality she so dearly wants. Jane overcomes this when she goes back to Rochester as an equal not only in regards to love and respect, but also financially. I shall end with a quote from page 116 that summarizes Jane's feelings:
Causes And Effects Of Global Warming (Essay Sample)

Causes and Effects of Global Warming

There are diverse challenges impinging on our environment. Destruction of forests, pollution, and the depletion of the earth's natural resources are a few of them. Nonetheless, none of these problems is as serious as global warming, as it affects everyone and everything. Global warming refers to the rise of the temperature within the atmosphere as a result of carbon dioxide emissions (Haldar, 2011). Individuals must take matters into their own hands by taking serious action toward stopping this catastrophe, and must recognize the magnitude of the issue before things go from bad to worse. The future relies solely on the actions we take toward slowing global warming. Accordingly, it is important to understand the causes as well as the consequences of global warming.

Causes of Global Warming

Scientists have linked global warming with an increase in greenhouse gases in the air generated by human activities, for instance deforestation and the combustion of non-renewable sources, in other terms fossil fuels, among others. These activities generate huge quantities of greenhouse gases, which are the primary cause of global warming. According to White (2017), greenhouse gases absorb heat within the earth's atmosphere to keep the earth warm enough to sustain life; this natural process is referred to as the greenhouse effect. In the absence of these gases, the planet would be too cold to sustain human beings and other living things. The greenhouse effect depends on a stable balance of the main kinds of greenhouse gases. Nonetheless, whenever unusually high levels of these gases accumulate in the atmosphere, more heat gets trapped, resulting in an enhanced greenhouse effect.
Man-made emissions have been linked with rising greenhouse gas levels that are increasing global temperatures and triggering global warming.

Greenhouse Gas Emissions and the Enhanced Greenhouse Effect

Greenhouse gases are generated naturally as well as through human activities. Unfortunately, greenhouse gases produced by human activities are accumulating in the air at a far higher rate than any natural process can remove them. Overall levels of greenhouse gases have increased tremendously since the Industrial Revolution. Just a few groups of human activities are causing the concentrations of the major greenhouse gases, including carbon dioxide, fluorinated gases, methane, and nitrous oxide, to rise (White, 2017). Most human carbon dioxide emissions come from the combustion of non-renewable sources, for instance petroleum and coal, to power automobiles, generate electricity, and provide heat. Other important sources come from industry, such as cement manufacturing. Methane is generated by humans during the production and use of non-renewable sources and through livestock and crop farming (White, 2017). Fluorinated gases are used in cooling, refrigeration, and manufacturing applications. Nitrous oxide emissions originate from the use of synthetic fertilizers, fossil fuel burning, and livestock manure management. For the last seven decades, deforestation has been an enormous human activity, and converting forests to farms poses a danger in terms of greenhouse gas emissions. For decades, humans have burned and cleared forests with the aim of opening land for cultivation. This has a serious impact on the ecosystem, releasing carbon dioxide into the air while simultaneously reducing the plants that could remove carbon dioxide from the environment (White, 2017).
When forests are cleared, soil disturbance and increased rates of decay in the converted soils generate carbon dioxide emissions. Additionally, this increases soil erosion and nutrient leaching, which may further reduce the region's capacity to act as a carbon sink.

Effects of Global Warming

Melting of Glaciers

The melting of glaciers creates a plethora of problems for mankind and other living things on the globe (Goldman, Kumagai, & Robarts, 2013). As a result of increased global warming, the sea level will rise, which will result in flooding and create havoc for human life. Moreover, it will endanger numerous species of living things and thereby upset the balance of the environment. Glaciers in the Arctic are fading away and flowing into the major oceans. Rising temperatures pose a great threat to wildlife as well as to entire ecosystems in these areas. With glaciers melting at immense rates, a series of events is being set in motion that cannot be reversed.

Climate Change

Unbalanced weather patterns have already begun showing their effects. Increased precipitation in the form of rain has been experienced in polar and sub-polar regions. Further global warming will cause additional evaporation, which will trigger more rain. Plants and animals cannot easily adapt to increased rainfall. Plants might die and animals might migrate to different regions, which may throw the whole ecosystem out of balance (Casper, 2010). While it might be flooding in the savannah, serious drought is occurring elsewhere on the globe. As temperatures increase, the incidence of drought has risen in the western United States (Casper, 2010). With no precipitation and with heat waves, entire forests have started to vanish, including thousands of trees in Colorado's Rockies.
Large-scale evaporation will become the primary cause of droughts in numerous places, especially Africa. Even though Africa is already under the heavy pressure of a water calamity, increased global warming could make the situation worse and trigger malnutrition. As temperatures become warmer, they may affect the health of human beings and the illnesses they are vulnerable to. With the rise in rain, illnesses such as malaria are prone to spread (Haldar, 2011). The globe will become very warm, so heat waves are likely to increase, which may have a severe effect on humans.

Frequent Wildfires

While wildfires can result from natural causes, with the extra carbon dioxide in the air and hotter summers, the evidence is vivid: more frequent wildfires keep surfacing in varying numbers every year (Haldar, 2011). The rate at which they burn is higher than before, and with the emission of carbon dioxide into the air, people's lives are in danger and wildlife suffers greatly. Every time there is a wildfire, the oxygen levels responsible for combating the dangerous levels of carbon dioxide being emitted into the atmosphere decrease.

Severe Precipitation

There is insurmountable scientific proof that global warming leads to increased precipitation. Global warming also generates conditions that may result in more powerful summer storms and hurricanes. Cities located in coastal regions encounter more problems as precipitation causes serious flooding.

Global warming is primarily caused by greenhouse gases that occur naturally and are either directly or indirectly produced by humans. Among the sources of human-generated gases are deforestation and the burning of fossil fuels. Some effects of global warming include droughts, severe precipitation, frequent wildfires, diseases, and climate change.

• Casper, J. K. (2010). Changing Ecosystems: Effects of Global Warming.
New York: Infobase Pub.
• Goldman, C. R., Kumagai, M., & Robarts, R. D. (2013). Climatic Change and Global Warming of Inland Waters: Impacts and Mitigation for Ecosystems and Societies. Chichester, West Sussex, UK: John Wiley & Sons Inc.
• Haldar, I. (2011). Global Warming: The Causes and Consequences. New Delhi: Mind Melodies.
• White, D. (2017). The Causes of Global Warming. Retrieved from:
Anicuts affect Mahanadi's flow

By India Water Portal on 27 Apr 2018

Gopal Nishad, a fisherman in his early 40s, is frustrated that there is hardly any fish left in the Mahanadi's basin at Pitaibandh due to the lack of water in the basin. This basin is located near Rajim-Nawapara in Chhattisgarh, the proposed site for the fourth anicut on the Mahanadi. He reminisces about the good old days when he, along with his brother, used to catch plenty of fish from the Mahanadi. He is upset about the river's current situation and blames the state government for it. The flow of the river has been adversely affected by the three anicuts built within a 15-km radius of Rajim-Nawapara in the river basin. Anicuts are small weirs built to divert water from rivers into canals dug on their banks. They are built to provide water for a whole lot of things, including irrigation, groundwater recharge, arresting soil erosion, and reducing flood peaks. Rajim-Nawapara is a religious place at the confluence of the Mahanadi, Sondur, and Pairi rivers. The state government has been organising a kumbh mela here since 2006. As per the residents of Rajim, the anicuts on the Mahanadi have mainly been built to store water for the Rajim kumbh extravaganza. They believe that there are political motives behind building them and that three anicuts within 15 km of Rajim-Nawapara make no sense scientifically. The Mahanadi is the lifeline for more than one lakh residents of not only Rajim and Nawapara, the twin cities bordering Gariyaband and Raipur districts of Chhattisgarh, but also of the villages on the periphery like Lakna, Pitaibandh, Ravad and Navagaon. The residents say that none of the objectives of building anicuts was fulfilled by the state government. Instead of commissioning an impact assessment study of the three existing anicuts, the government has proposed to build one more anicut at Pitaibandh that will further hamper the basin and the livelihoods of people.
They have also been opposing and questioning the kumbh mela because the mela and the pilgrims it attracts affect their lives adversely. "Around 5000 people in Rajim were diagnosed with jaundice and other water-borne diseases in 2015. The water quality is deteriorating every year. People in the area are not even getting treated water for drinking," says Ajay Jain, a resident. Recent studies from the Western Ghats show that small dams markedly reduce the diversity of fish communities in rivers. Moreover, it's the ecologically sensitive or specialised species, such as endemic and migratory fish, that tend to be hit the hardest. Ecological changes to the river and forest impact local human communities as well, especially those whose livelihoods depend on the river. Fish catches decline, local irrigation cycles and water mills are disrupted, water-use rights and access to the landscapes become restricted, and drinking water sources are compromised. "It is becoming tough for the fishing community to sustain itself after the anicuts were built on the Mahanadi. You will be surprised to know that in the last 15 years, the government has erected three anicuts within 15 km of Rajim-Nawapara without doing any study. The three anicuts on the river have reduced the flow of the river and its availability for fish to sustain. Out of the 500 fishermen households that were involved in fishing, only eight or 10 households are catching fish now," says Rajuram Nishad, a senior fisherman in Pitaibandh. The lack of water has affected farming, too. The government, in its letter to the divisional commissioners, has asked them to discourage farmers from going for summer paddy due to the shortage of water. The water stored in these anicuts is also not enough to sustain the agricultural needs of farmers in these areas. As per the farmers of Ravad, Pitaibandh and Navagao, there were more than 500 farmers who were directly dependent on the Mahanadi for agriculture.
"Almost 400 farmers have left the agriculture profession since 2006 because of the lack of water availability in the region. They are now working as labourers for sustenance," says Raju Nishad, a farmer and a resident of Ravad. These residents do not see any visible action from the state government to improve their drinking water condition and are concerned about the declining groundwater table. They feel the need to decommission the anicuts and ban festivals like the kumbh mela on the river bank. Here is a photo essay that provides a glimpse of the dilapidated state of the Mahanadi and its basin.
The Free Energy Overunity Motor

An overunity motor is something people have been fascinated with, and have been working towards, since the 17th century. It is essentially a perpetual motion device that produces more energy than it consumes. The main obstacle has always been friction: friction is the number one robber of energy from all types of generators, and one of the hardest things for engineers to reduce.

Some people claim to have already built an overunity motor using electromagnetic energy. A motionless electromagnetic generator could, in theory, exceed 100% efficiency because it has no moving parts and therefore no friction. The Magniwork generator and the Adams motor are two of the more popular of these devices, and as you can imagine, both have many devoted followers as well as skeptics.

Every time someone announces in the newspaper or on television that they have created, or are very close to creating, this type of motor, the scientific community meets the claim with a great deal of skepticism. They are right to be skeptical, because a generator of this type would violate the first law of thermodynamics, the law of conservation of energy, which states that energy cannot be created or destroyed. So whenever a scientist or inventor claims to have created a motor or generator that is more than 100% efficient, it seems impossible.

We are living in an exciting time, because a true overunity motor would revolutionize our society, create free energy for everyone, and even end our foreign oil dependence. Please visit our website for more information on the overunity motor, or on how to build your own Magniwork perpetual generator.
Ayn Rand Anthem Essay

It is an affliction to be born with powerful intellectual capacity and ambition in Ayn Rand's apocalyptic, nameless society in Anthem. Collectivism is ostensibly the moral guidepost for humanity, and any perceived threat to the inflexible, authoritarian regime is met with severe punishment. The attack on mankind's free will and …

Is a man better off conforming with evil, or escaping from the chains that hold him back from being an individual? In the novel Anthem, written by Ayn Rand, the narrator lives in a dystopian society where people must refer to themselves as the great "WE", because individuality is the prominent sin. The story is written …

Anthem, by Ayn Rand, is a very unique novel. It centres on individualism and makes the reader think about how people can conform to society and do as they are told without knowing the consequences and results of their decisions. It also teaches the importance of self-expression and the freedom that comes along with being …

Imagine a world where people are only expected to live to 45 years old. In today's society, there are countries that experience this. In the novel Anthem, by Ayn Rand, there are many factors, like lifestyle, government, medicine, and education, that lead to this. There are a couple of ways in which the world in the novel is …

A captivating novelette in which a man's priority is to serve only his brothers, Ayn Rand's Anthem illustrates a society that has suffered the ghastly consequences of collectivism. She depicts an oppressive culture in which the word "I" is unheard of and men belong to the collective "We." Men's lives are determined through …

The collectivist society in which Equality 7-2521 lives is similar to the Nazi and Communist states of the twentieth century.
The rulers of this society do not permit any individual to think freely; all must subordinate themselves to the state. "Collectivism," Ayn Rand notes, "means the subjugation of the individual to the group …"
Consequences of Synthacaine as a Research Chemical

09/07/2013 17:11

Synthacaine is a research chemical used mainly in new medical research. Research chemicals are supplied to research industries and handled by scientists and research chemical experts. Many nations have passed strict laws against the personal use of research chemicals, yet these substances are still widely offered by research chemical suppliers on their online web-stores. People are drawn to stimulant compounds, and some are always searching for new stimulants and like to experiment on themselves. Nowadays Synthacaine and other stimulants can easily be obtained from online web-stores. Many suppliers sell with quality assurance, but only people aged 18 and over may deal with them online. Synthacaine is a brown-coloured powder, and online suppliers also sell it in capsule form. Compounds such as Synthacaine and 5-MeO-DALT are not intended for personal consumption and should not be consumed as a drink or in capsules. These powders can affect health and can cause coma, depression, body pain, headache, nausea, colds and many other short-term and long-term health problems. In some overdose cases, people have even lost their lives.
Cannabis, or marijuana, refers to products derived from the Cannabis sativa plant, which is typically bred for its potent trichomes. These sticky glands contain high amounts of a substance commonly referred to as THC, or tetrahydrocannabinol, which is well known for its psychotropic effects. The controversy surrounding the use of cannabis dates back a very long time. In the 1930s, growing concerns about the dangers of abuse led to cannabinoids being banned for medical use in the US and many other countries worldwide. In some countries, the laws on possession of cannabis are very severe. These laws may mean that we have long been denied a potentially excellent natural treatment with applications ranging from anxiety relief to potential cancer breakthroughs.

A comprehensive review published in 2007 into the potential health benefits of cannabis stated that, despite the possible risk of mild addiction, the therapeutic value of this herb was far too valuable to simply ignore. (1) The same review goes on to say that cannabis has been used for its medicinal benefits for millennia and that it has the ability to treat a wide range of illnesses including cancer, anorexia, pain and inflammation. It can also be used for a number of neurodegenerative disorders such as Alzheimer's disease, Parkinson's and Tourette's syndrome.

Given that cannabis has so much medical value, the question that I'm sure we would all like answered is why governments, including the British government and the federal government of the US, are so keen to ban it even for medicinal use. The answer, according to some, is that they can see its huge market potential and want a part of it. It is also very real competition for major pharmaceutical companies, and losing out on their bottom line is not something the pharmaceutical industry finds attractive.
The cannabis plant species contains a unique group of carbon compounds often referred to as phytocannabinoids. THC, or tetrahydrocannabinol, is the main ingredient responsible for the psychoactive 'high', but the plant also contains numerous other medicinal compounds including cannabinol, cannabidiol, cannabigerol and cannabichromene. Some of these cannabinoids are believed to have significant anti-inflammatory and pain-relieving activities without the psychoactive effects of THC.

Both cannabis and hemp oil are derived from the same species, Cannabis sativa, but hemp oil is actually quite different to cannabis oil. Hemp is a tall-growing variety of the plant commonly used for industrial purposes such as topical ointments, fiber and paper. Unlike cannabis oil, hemp oil contains THC only in trace amounts and is not considered to have the same medicinal value as cannabis oil. Both hemp oil and cannabis oil also contain cannabidiol, or CBD, which has medicinal properties; however, there is far less of this compound in hemp oil than in cannabis oil. Cannabis oil is around 100 times more potent than hemp oil, meaning the dose of hemp oil necessary to have an equivalent effect would be extremely high.

Research into cannabis essential oil is strictly regulated and we are in the early stages of its development. We can only hope that in the future more research will be done and that regulatory bodies become more accepting of its potential to treat people safely. The health benefits of cannabis oil stem from these medicinal applications. Here are the top health benefits of cannabis oil:

Cannabis oil is often suggested for people who suffer from chronic pain and inflammation, and occasionally for emergency pain relief. This is the reason why people who have been diagnosed with cancer turn to cannabis-related products, including cannabis oil, when they need relief from the pain of chemotherapy or the disease itself.
One of the major historical uses of the cannabis plant has been to ease pain and inflammation; indeed, there is evidence that it has been used for thousands of years for these purposes. There is modern evidence that cannabis and cannabis oil are effective in relieving pain by inhibiting neural transmission in the body's pain pathways. Cannabis oil has the potential to alleviate chronic pain as well as inflammation, which is why many cancer patients choose to take it while undergoing chemotherapy. (7) Many people take cannabis to deal with severe rheumatism and arthritis as well as other chronic pains, and other studies have demonstrated that it can be used to ease neuropathic pain. It is considered safe when taken in appropriate doses, and studies have found that it is largely well tolerated.

The cannabinoids in cannabis oil, such as THC, help to control seizures by attaching to the brain cells that are responsible for regulating relaxation and controlling excitability. There is some evidence, from small-scale studies and anecdotal reports, that the cannabidiol content of cannabis oil may be helpful in preventing seizures and could be a novel treatment for epilepsy.

Animal studies have demonstrated that treatment with cannabis oil may prevent numerous cardiovascular diseases including atherosclerosis, heart attacks and strokes. A British study published in 2014 found that the results of the animal studies were applicable to human heart conditions. (6) The researchers demonstrated that cannabinoids could cause blood vessels to relax and dilate, paving the way to improved circulation and reduced blood pressure. This is exciting news, and we can only hope that further studies will be forthcoming to prove that cannabis oil has major implications for heart health.

Cannabis oil may also help people who need to increase their weight following illness or because of an eating disorder such as anorexia nervosa.
Cannabis oil can stimulate the body's digestive system and induce hunger in those lacking appetite. While cannabis oil can stimulate appetite by helping release hormones responsible for hunger, cannabinoids can also stimulate certain hormones responsible for hunger suppression. So, conversely, depending on which hormones are stimulated, cannabis oil might also be effective in reducing appetite and controlling obesity. To do this effectively, it is necessary to manipulate the cannabinoids to stimulate the appropriate hormones. Basically speaking, cannabis oil may in the future be effective in treating both obesity and eating disorders like anorexia. For those who would like to read more about the weight control potential of cannabis oil, you can click on the link at the bottom of the article. (1)

Cannabis oil is definitely worth exploring further, as it can relax a troubled mind and stimulate the release of pleasure hormones. This combination of effects can reduce stress and pave the way for feelings of calm and general wellbeing. The cannabinoids found in the oil are responsible for these positive emotional effects on the body's nervous system. Several recent studies have demonstrated the potential value of cannabis oil in stress relief as well as related issues like insomnia. An Israeli study published in 2013 demonstrated that treatment with cannabinoids following a traumatic experience might help control the emotional responses to that event and prevent stress-related responses. Researchers found that cannabinoids were effective by minimizing stress receptors in the hippocampus, the area of the brain responsible for emotional response. (2) A more recent review published in 2015 found that cannabis treatments were effective in reducing anxiety and restlessness in military veterans suffering from PTSD.
Whether inhaled or orally administered, cannabis oil produces a wide range of positive nervous system effects, including increased feelings of pleasure and calm, and this review of 4 trials found that cannabis was able to significantly improve certain symptoms of PTSD including nightmares, insomnia and anxiety. (3) While there is a need for further research, the studies done so far, as well as a large body of anecdotal evidence, are very promising for the future use of cannabis oil to treat anxiety, stress and sleeping disorders.

Asthma is a common respiratory disease affecting up to 300 million people the world over. It is responsible for numerous deaths each year, and the search for a natural and effective treatment has been ongoing for many years. Cannabis has actually been used to treat asthma for thousands of years, including in the traditional Chinese and Indian systems. Cannabis oil may be an effective natural treatment for asthma because of its natural anti-inflammatory and analgesic effects and, in particular, its ability to dilate the bronchial tubes, which allows more oxygen to flow. During the early 1970s there were several studies that investigated the bronchodilatory effects of cannabis for people suffering from asthma, many of which showed very positive results, although these studies used cannabis cigarettes rather than oil. (4, 5)

Early reports of research have shown that the active ingredients in cannabis oil can reduce tumour size and have preventative effects on cancer, and some say that the oil makes it easier to beat cancer for those suffering from the disease. There is a considerable amount of excitement regarding the ability of cannabis oil to cure cancer, and it quite understandably often makes headlines. Unfortunately, these headlines are often overly optimistic and can be misleading for both patients and their families.
Scientists have discovered that many different cannabinoids have a range of positive effects under laboratory conditions. These effects include:
• Triggering cancer cell death in a process known as apoptosis
• Preventing the division of cells
• Preventing new blood vessels from growing into tumours

So far, the most positive effects have been seen when using purified THC in combination with cannabidiol, a cannabinoid that counteracts the psychoactive effect of THC.

Cannabis oil can be applied topically to the skin to promote a healthy, glowing complexion. When applied topically, cannabis oil can help stimulate the shedding of older, dead skin cells and promote the growth of new ones to replace them. Cannabinoids can help promote the production of lipids, which help fight chronic skin conditions including acne and psoriasis. It is also possible that cannabis oil can help prevent signs of aging like wrinkles, skin spots and blemishes, because it is high in natural antioxidants that help fight the cellular damage caused by free radicals. By reducing the amount of stress that we feel, cannabis oil can also help prevent skin diseases that tend to break out during times of anxiety, like eczema or rosacea.

It sounds odd, but cannabis oil may be a great tonic for your hair. Of course, I am not talking about smoking it but actually applying it to the hair in the same way you would apply any other oil. Here are just some of the ways that your hair can benefit from cannabis oil:
• It stimulates hair growth. If your hair is thinning, cannabis oil contains important fatty acids that can help to stimulate its growth. These same fatty acids can also add sheen to your hair, helping it look healthier and shinier.
• It nourishes the scalp. Cannabis oil contains gamma-linolenic acid, which helps to nourish and moisturize your hair. For those of you with dry hair, cannabis oil may be just what you need.
It also helps protect the scalp and combat skin issues like dandruff.
• It strengthens the hair and follicles. Cannabis oil contains lots of protein, and because hair is mainly made up of protein, it can help strengthen and revitalize weak or damaged hair strands. It also strengthens the hair follicles and improves elasticity.
• As a shampoo or a conditioner. Simply add some cannabis oil to your shampoo to give your hair an overall treat. Cannabis oil is rich in vitamin E, which can stimulate hair growth and revitalization. Because of its abundant fatty acids, you may even be able to skip your normal conditioner.

There is a certain amount of evidence demonstrating the ability of cannabis oil to treat eye conditions like glaucoma and macular degeneration. Glaucoma is a serious optic nerve disease that can lead to loss of vision and even blindness. It is caused by a build-up of fluid within the eye, which puts too much pressure on the retina, lens and optic nerve. While numerous factors may contribute to nerve damage in people suffering from glaucoma, intraocular pressure, or IOP, is definitely related. The good news is that the American Glaucoma Society says cannabis can reduce IOP both in people suffering from glaucoma and in those without the disease. Unfortunately, its effects are temporary, and patients would need to use cannabis oil every few hours to sustain them, so for now more research is necessary.

There are several ways to use cannabis oil for medicinal purposes. Many people who use the oil ingest it directly by way of an oral syringe, often combined with another liquid to mask its potency. The appropriate dosage and frequency depend on several factors, including the patient's tolerance to cannabis and the condition they are trying to treat. In general, it is recommended that you start out taking very small doses and increase the dose over time as your body gets more used to it.
Cannabis oil can also be applied topically, a popular option for treating pain from conditions like arthritis. The advantage of topical application is that you experience many of the pain-killing effects with far less chance of any psychotropic side effects.

Despite all of the potential benefits of cannabis oil, there are still precautions that you should be aware of:
• Cannabis oil may have a negative effect on your cognition. It can affect your ability to concentrate, memorize, think and learn.
• Cannabis oil can cause drowsiness, and operating machinery or driving a vehicle may be difficult.
• Mixing cannabis oil with other medication like painkillers, anti-anxiety drugs, muscle relaxants and antidepressants is not advised. Cannabis oil can cause drowsiness, which is exacerbated by many medications.
• Pregnant women or women trying to conceive should not use cannabis oil. Some evidence suggests that women who used cannabis at the time they conceived or during their pregnancy had an increased risk of giving birth to a child with low birth weight or birth defects.
• Nursing mothers should not use cannabis oil.

Laws differ from country to country. For those of you reading this from the UK, despite changing attitudes, you still cannot buy cannabis oil as easily as many people feel you should be able to. It is not available in the essential oil section of your local store, for example, but it is not impossible to come by. As we mentioned earlier in the article, it is now legal for medical reasons in numerous American states, though its cultivation and use remain illegal in many others. While cannabis still has a certain amount of stigma attached to it, the evidence is mounting that it has an important role to play in human health. From well-designed scientific studies to a mountain of anecdotal evidence, the signs are extremely promising.
Cannabis oil may help you to overcome a range of issues, from arthritis pain to cancer. Please let us know if you have ever used cannabis oil, what you used it for, and how effective you found it to be.

Supporting research:
(1) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3202504/
(2) http://www.ncbi.nlm.nih.gov/pubmed/23433741
(3) http://www.ncbi.nlm.nih.gov/pubmed/26195653
(4) http://www.nejm.org/doi/full/10.1056/NEJM197305102881902
(5) http://thorax.bmj.com/content/31/6/720.abstract
(6) http://www.ncbi.nlm.nih.gov/pubmed/24548820
(7) http://www.cmaj.ca/content/182/14/E694.abstract
cinder cone

noun, Geology. A small, conical volcano built of ash and cinders.

Science definition: cinder cone [ sĭndər ] A steep, conical hill consisting of glassy volcanic fragments that accumulate around and downwind from a volcanic vent. Cinder cones range in size from tens to hundreds of meters tall.
Characters in Frankenstein

Victor Frankenstein: Frankenstein is the narrator of the book. He is fascinated with science, discovers the secret of life, and creates an intelligent monster. He is scared of his own creation and keeps it a secret. He soon realizes how helpless he is in preventing the monster from ruining his life.

The Monster: Frankenstein created a hideous monster who is intelligent and sensitive. All who see him are scared, and the feeling of abandonment makes the monster seek revenge on his creator.

Robert Walton: Walton's letters open and close the book. He is the one who picks Frankenstein up off the ice, helps nurse him back to life, and listens to his story. He records Frankenstein's story in a letter written to his sister Margaret.

Alphonse Frankenstein: Alphonse is Victor's father. He is very sympathetic towards his son and helps him in times of pain. He always encourages Victor to remember the importance of family.

Henry Clerval: Henry is Victor's friend who nurses him back to health when he gets sick. He then begins to follow in Victor's footsteps in becoming a scientist.

Elizabeth Lavenza: Elizabeth is four or five years younger than Victor and is adopted into the Frankenstein family. Victor's mother rescues her from a cottage in Italy and takes her into her home. Her dying wish is for Victor to marry Elizabeth.

William Frankenstein: William is Frankenstein's youngest brother. The monster strangles him in the woods in order to get back at Victor for abandoning him. The death of William makes Victor very sad and guilty for creating the monster.

Justine Moritz: Justine was a young girl adopted by the Frankenstein family. She was blamed for the death of William because the monster slipped William's watch into her pocket. She was executed even though the crime was committed by the monster.
Elementary Algebra
Published by Cengage Learning
ISBN 10: 1285194055, ISBN 13: 978-1-28519-405-9

Work Step by Step
In order to add or subtract fractions, we start by making a common denominator (the bottom of the fraction). Thus, we multiply $\frac{2}{1}$ (which is the same as 2) by $\frac{6}{6}$ to obtain: $-\frac{12}{6}-\frac{5}{6}=-\frac{17}{6}$.
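The common-denominator step can be written out in full. Assuming the original exercise is $-2-\frac{5}{6}$ (which the worked answer implies but does not state), the derivation is:

```latex
\begin{align*}
-2 - \frac{5}{6}
  &= -\frac{2}{1}\cdot\frac{6}{6} - \frac{5}{6}
     && \text{rewrite } 2 \text{ over the common denominator } 6\\
  &= -\frac{12}{6} - \frac{5}{6}
     && \text{both fractions now share denominator } 6\\
  &= \frac{-12 - 5}{6}
     && \text{combine the numerators}\\
  &= -\frac{17}{6}.
\end{align*}
```

Multiplying by $\frac{6}{6}$ changes the form of the number but not its value, which is why the result is equivalent to the original expression.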
Religious Beliefs and Spirituality in North Korea

North Koreans were traditionally practitioners of Buddhism and Confucianism. In the 18th century, Christian missionaries came to the land and spread Roman Catholicism. However, since the leadership of Kim Il Sung began in the 1950s, North Koreans have not practiced religious activities, as the Juche ideology has been inculcated in them. The Juche ideology, or "self-reliance," places supreme belief in the human being. This ideology has been taught to every citizen in North Korea: young and old, men and women, and even children. It is also taught in schools so that students learn it while they are still young. As a result of this deep inculcation, citizens have come to treat Kim Il Sung as a Supreme Being, as evidenced by their thanking him every single day, in everything they do. According to local stories, Kim Il Sung and his family descended from the heavens onto Mt. Paektu, where they transformed into human beings and came to lead the country of North Korea. People who fled the country to China, South Korea, and elsewhere say they were required to place portraits of Kim Il Sung and his son Kim Jong Il on the best walls of their houses. Families especially loyal to the Party even bow down to the portraits. More importantly, they are required to worship Kim Il Sung with all their hearts, even after his death. For North Koreans, this strong belief in Kim Il Sung has become a religion in a sense.
Bio Huma Netics, Inc.
The Value of Humic Substances in the Carbon Lifecycle of Crops: Humic Acids, Fulvic Acids, and Beyond
By Larry Cooper, with Rita Abi-Ghanem, PhD
© 2017, Bio Huma Netics Inc. Publication No. HG-170215-01

Everyone who works in agriculture is aware of the basic lifecycle of crops: plants are seeded, they are nourished and they grow, they are harvested, and what is not consumed by (us) higher life forms is returned to the soil, where it is broken down through mineralization and by microorganisms so that it can nourish the next cycle of crops. That relatively simple scenario is created by a wonderfully complex interchange of chemical, physical, and biological actions that scientists are still struggling to fully understand after 10,000 years of practical farming.

As with everything else on this planet, the story of humic substances begins and ends with carbon. All life on this planet is carbon-based: humans, animals, plants, insects, microorganisms . . . carbon is essential to building everything biological and keeping it powered and in working order. Plants pull carbon from the air (as carbon dioxide) and, through a series of reactions, merge the carbon with energy from sunlight (photosynthesis) and hydrogen from water, eventually creating the carbon-rich organic compounds required throughout their metabolic pathways. A very important characteristic of carbon as an element is its unique capacity to modify itself and, through functional group extensions, combine with many other elements to form shorter and longer carbon chains, rings, and complex organic compounds as required. 1

What Are Humic Substances?
When plants end their life cycle, their components are decomposed through mineralization and the aid of microorganisms and returned to the soil as organic matter.
About 70% of soil organic matter is humus, a brown to black complex of variable carbon-containing compounds that is slow to decompose under natural conditions and can persist in the soil for several hundred years. The humic substances that, in turn, make up humus are relatively large organic carbon-chain complexes composed of carbon, oxygen, hydrogen, nitrogen, and sulfur. These humic substances, which contribute to the brown or black color of surface soils, can be divided into three major categories: humin, humic acids (HAs), and fulvic acids (FAs). 2 (See Fig. 1.) These are functional categories based largely on molecular size and solubility in water adjusted to different pH conditions. 3

Humins are very large molecules (molecular weight of 100,000 to 10,000,000 Da) that are not soluble in water at any pH level and are, consequently, very slow to break down. Within soil, humin improves structure, water-holding capacity, and stability. Humin also functions as a cation exchange system that aids the soil's ability to store plant nutrients.

Humic acids have a smaller molecular size than humins (molecular weight of 50,000 to 100,000 Da, with thousands of carbon rings) and are soluble in water under alkaline conditions. Because other elements readily bind to humic acid molecules in a form that can be easily absorbed by …

The following article was originally published in the January 2017 issue of AgroPages Magazine. Humic substances play an important role in soil fertility and crop yield. This article provides a basic overview of what humic substances are, how they are created, and how they work. Discussion is provided on how to add humic content to crop soil, including the use of commercial products such as the Huma Gro® line of carbon-rich organic acids.

Figure 1. Chemical Properties of Humic Substances 4
Necropolis Radimlja is the most famous site of stećci, medieval tombstones, in Herzegovina, and one of the most valuable monuments of the medieval period in Bosnia and Herzegovina. An estimated 60,000 stećci are found within the borders of modern Bosnia and Herzegovina, and another 10,000 in what are today Croatia, Serbia, and Montenegro. Appearing in the 11th century, the tombstones reached their peak in the 14th and 15th centuries before disappearing during the Ottoman occupation. They were a common tradition among Bosnian Church, Catholic and Orthodox followers. The epitaphs on them are written in the Bosnian Cyrillic alphabet, "Bosančica", and the alphabets belonging to the Bosnian Church and the medieval Kingdom of Bosnia. The necropolis has 133 tombstones, of which 63 are decorated. About 30% take the form of slabs, 25% of chests, and 25% of chests on a base. Their most remarkable feature is their decorative motifs, many of which remain enigmatic to this day: spirals, arcades, rosettes, vine leaves and grapes, suns and crescent moons are among the images that appear. Figural motifs include processions of deer, the kolo dance, hunting and, most famously, the image of a man with his right hand raised, perhaps in a gesture of fealty. The stećci were nominated for the UNESCO World Heritage List as a joint cultural heritage by the four countries in 2009.
Wool and the Wool Trade

During the Middle Ages monastic orders ran large tracts of land in the Lake counties. They kept sheep for their wool, which was a major commodity, so they must also have used horses and ponies to carry the resulting wool crop to market each year.

[Photo: grey Heltondale mares, courtesy of Barbara Müller]

It is likely that the monks' ponies were of varying types. It is also possible there may have been an Irish Connemara influence in the area: Norsemen who had settled in Ireland later migrated to Northern England, and could have brought their own type of pony with them. Cistercian abbeys in particular are said to have been fond of keeping grey animals, the colour being recognised as a badge of their ownership (Richardson). Travel was possible at all times of the year, even through winter, as shown by the itineraries of various kings from John onwards. Hindle says: "the baggage train, comprising from ten to twenty carts and wagons, containing everything from the treasury to the king's wardrobe, had to move about with the king and must have required adequate roads. The kings were almost constantly on the move, and there are few recorded complaints about the condition of the roads." He also notes that "after the Black Death had reduced the population, there was a shortage of labour and people started to move to find better paid work." Whatever their colour, large numbers of ponies must have been in use to carry wool from the monastic lands to the ports. "There are ... records of Newcastle merchants buying Cumbrian wool for export in 1397 and again in 1423, 1427 and 1444." (Postan, cited in Williams) "Similarly there is evidence that ... Bristol merchants were shipping Kendal cloth to Spain ...
Southampton Brokerage books referred to Kendal traders by name in the autumn of 1442, and record that between November 1492 and March 1493, eleven Kendal traders made a total of 14 journeys to Southampton, carrying packs of cloth."

Working for the monasteries

11th- and 12th-century monastic work for ponies could include pack work carrying wool, woollen goods, and local metal ores; shepherding; and wolf hunting by professional "wolvers" whose job was to protect the sheep on the "sheepwalks". Pack ponies were called "capuls". Kirkstall Abbey had horse-breeding "ranches" in the Slaidburn (Lancashire) area up until the Dissolution. A local farmer has deeds for his farm that actually mention that horses were kept rather than sheep or cattle, as horses were able to escape the predations of wolves (D Higham of Slaidburn, in an email to the author, 2002). Some of the pack bridges and great lengths of wall on the fells are believed to have been built by monastic landowners. These boundaries, and years of energetic shepherding on a daily basis, laid the foundations of the "heafed" or "hefted" flocks which still pass on the knowledge of their traditional territory from mother to lamb each year.

Grey colour, monastic owners: cause or effect?

It has been asserted by various authors that the Cistercian order - the "White Monks" - used white or grey horses and ponies. This is possible; it would be rather like a company owning a fleet of cars such as Fords or Vauxhalls, a sign of corporate ownership, but not an exclusive one. It would have been an easy objective to achieve because grey horses were common at that time, but it does not mean, of course, that there were no grey horses or ponies outside monastic ownership. That the colour grey was common in the general equine population of Northern England in the early 16th century can be shown by studying a list of 252 horses that were returned to Northern soldiers in 1513 after the Battle of Flodden: 95 were grey.
It was easily the most frequent colour of all, and the count did not include "white" (Dent). These horses were not from monastic stock; they belonged to the farmer-soldiers who were being "demobbed" after Henry VIII's Scottish campaign. The Dissolution of the monasteries did not take place until 1540, a generation later, so we cannot maintain the idea that the monks had cornered the market in greys until, suddenly, at the Dissolution, they all came back into general ownership. There were clearly plenty of greys in secular ownership before then, so the claim does not make sense chronologically. If the monks bred horses and ponies, they would have found it easy to breed greys if much of their stock was grey; to breed a grey foal you have to use a grey mare or stallion. But greys were common elsewhere too, and if a secular owner bred a grey he might sell to a neighbour just as easily as to a Cistercian abbey.
Provided by: dict_1.12.1+dfsg-8_amd64

dict - DICT Protocol Client

  dict [options] [word]
  dict [options] dict://host:port/d:word:database
  dict [options] dict://host:port/m:word:database:strategy

dict is a client for the Dictionary Server Protocol (DICT), a TCP transaction-based query/response protocol that provides access to dictionary definitions from a set of natural language dictionary databases. Exit status is 0 if the operation succeeded, or non-zero otherwise. See the EXIT STATUS section.

-h server or --host server
  Specifies the hostname for the DICT server. Server/port combinations can be specified in the configuration file. If no servers are specified in the configuration file or on the command line, dict will fail unless fallback support was compiled in (./configure --enable-dictorg, which is disabled by default). If IP lookup for a server expands to a list of IP addresses, each IP will be tried in the order listed.

-p service or --port service
  Specifies the port (e.g., 2628) or service (e.g., dict) for connections. The default is 2628, as specified in the DICT Protocol RFC. Server/port combinations can be specified in the configuration file.

-d dbname or --database dbname
  Specifies a specific database to search. The default is to search all databases (a "*" from the DICT protocol). Note that a "!" in the DICT protocol means to search all of the databases until a match is found, and then stop searching.

-m or --match
  Instead of printing a definition, perform a match using the specified strategy.

-s strategy or --strategy strategy
  Specify a matching strategy. By default, the server default match strategy is used. This is usually "exact" for definitions, and some form of spelling-correction strategy for matches ("." from the DICT protocol). The available strategies are dependent on the server implementation. For a list of available strategies, see the -S or --strats option.
-C or --nocorrect
  Usually, if a definition is requested and the word cannot be found, spelling correction is requested from the server, and a list of possible words is provided. This option disables the generation of this list.

-c file or --config file
  Specify the configuration file. The default is to try ~/.dictrc and /etc/dictd/dict.conf, using the first file that exists. If a specific configuration file is specified, then the defaults will not be tried.

-D or --dbs
  Query the server and display a list of available databases.

-S or --strats
  Query the server and display a list of available search strategies.

-H or --serverhelp
  Query the server and display the help information that it provides.

-i dbname or --info dbname
  Request information on the specified database (usually the server will provide origination, descriptive, or other information about the database or its contents).

-I or --serverinfo
  Query the server and display information about the server.

-M or --mime
  Send the OPTION MIME command to the server. NOTE: the server's capabilities are not checked.

-f or --formatted
  Enables formatted output, i.e. output convenient for postprocessing by standard UNIX utilities. No, it is not XML ;-) Error and warning messages such as "No matches...", "Invalid strategy...", etc. are sent to stderr, not to stdout. With -I, -i, -H and similar options, formatted output looks like:
    host<TAB>port<TAB>strategy1<TAB>short description1
    host<TAB>port<TAB>strategy2<TAB>short description2
    host<TAB>port<TAB>database1<TAB>database description1
    host<TAB>port<TAB>database2<TAB>database description2

-a or --noauth
  Disable authentication (i.e., don't send an AUTH command).

-u user or --user user
  Specifies the username for authentication.

-k key or --key key
  Specifies the shared secret for authentication.

-V or --version
  Display version information.

-L or --license
  Display copyright and license information.

--help
  Display help information.

-v or --verbose
  Be verbose.

-r or --raw
  Be very verbose: show the raw client/server interaction.
  Specify the buffer size for pipelining commands. The default is 256, which should be sufficient for general tasks and be below the MTU for most transport media. Larger values may provide faster or slower throughput, depending on MTU. If the buffer is too small, requests will be serialized. Values less than 0 or greater than one million are silently changed to something more reasonable.

--client text
  Specifies additional text to be sent using the CLIENT command.

--debug flag
  Set a debugging flag. Valid flags are:
    verbose  The same as -v or --verbose.
    raw      The same as -r or --raw.
    scan     Debug the scanner for the configuration file.
    parse    Debug the parser for the configuration file.
    pipe     Debug TCP pipelining support (see the DICT RFC and RFC 1854).
    serial   Disable pipelining support.
    time     Perform transaction timing.

The configuration file currently has a very simple format. Lines are used to specify servers, optionally with options, for example:

  server { port 8080 }
  server { user username secret }
  server { port dict user username secret }

The port and user options may be specified in any order. The port option is used to specify an optional port (e.g., 2628) or service (e.g., dict) for the TCP/IP connection. The user option is used to specify a username and shared secret to be used for authentication to this particular server. Servers are tried in the order listed until a connection is made. If none of the specified servers are available, and the compile-time option (./configure --enable-dictorg) is enabled, then an attempt will be made to connect on localhost and at the standard port (2628). (This option is disabled by default.) We expect that the default server will point to one or more DICT servers (perhaps in round-robin fashion) for the foreseeable future (starting in July 1997), although it is difficult to predict anything on the Internet for more than about 3-6 months.
EXIT STATUS
  0   Successful completion
  20  No matches found
  21  Approximate matches found
  22  No databases available
  23  No strategies available
  30  Unexpected response code from server
  31  Server is temporarily unavailable
  32  Server is shutting down
  33  Syntax error, command not recognized
  34  Syntax error, illegal parameters
  35  Command not implemented
  36  Command parameter not implemented
  37  Access denied
  38  Authentication failed
  39  Invalid database
  40  Invalid strategy
  41  Connection to server failed

dict was written by Rik Faith and is distributed under the terms of the GNU General Public License. If you need to distribute it under other terms, write to the author. The main libraries used by this program (zlib, regex, libmaa) are distributed under different terms, so you may be able to use the libraries for applications which are incompatible with the GPL -- please see the copyright notices and license information that come with the libraries for more information, and consult with your attorney to resolve these issues.

If a dict: URL is given on the command line, only the first one is used; the rest are ignored. If a dict: URL contains a specifier for the nth definition or match of a word, it will be ignored and all the definitions or matches will be provided. This violates the RFC, and will be corrected in a future release. If a dict: URL contains a shared secret, it will not be parsed correctly. When the OPTION MIME command is sent to the server (-M option), the server's capabilities are not checked.

FILES
  ~/.dictrc             User's dict configuration file
  /etc/dictd/dict.conf  System dict configuration file

SEE ALSO
  dictd(8), dictzip(1), RFC 2229

15 February 1998                                               DICT(1)
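For scripts that wrap dict, the URL forms from the synopsis and the exit-status table above translate directly into code. The sketch below is a minimal Python illustration; the helper names dict_url and explain_exit are invented for this example and are not part of the dict package, and the mapping simply transcribes the table.

```python
# Hypothetical helpers for scripting around dict(1).

DICT_EXIT = {
    0: "Successful completion",
    20: "No matches found",
    21: "Approximate matches found",
    22: "No databases available",
    23: "No strategies available",
    30: "Unexpected response code from server",
    31: "Server is temporarily unavailable",
    32: "Server is shutting down",
    33: "Syntax error, command not recognized",
    34: "Syntax error, illegal parameters",
    35: "Command not implemented",
    36: "Command parameter not implemented",
    37: "Access denied",
    38: "Authentication failed",
    39: "Invalid database",
    40: "Invalid strategy",
    41: "Connection to server failed",
}

def dict_url(host, port, word, database="*"):
    # Build a URL in the d:word:database form shown in the synopsis;
    # "*" asks the server to search all databases.
    return f"dict://{host}:{port}/d:{word}:{database}"

def match_url(host, port, word, strategy, database="*"):
    # Build a URL in the m:word:database:strategy form for match queries.
    return f"dict://{host}:{port}/m:{word}:{database}:{strategy}"

def explain_exit(code):
    # Translate a dict exit status into the wording of the table above.
    return DICT_EXIT.get(code, "Unknown exit status")
```

A wrapper could run dict via subprocess, then pass the return code to explain_exit before deciding, for example, whether to retry against another server on code 41.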
re: fascist dweebery "Just after the Civil War, some former Confederate officers, fearing the vote given to African Americans by the Radical Reconstructionists in 1867, set up a militia to restore an overturned social order. The Klan constituted an alternate civic authority, parallel to the legal state, which, in its founders' eyes, no longer defended their community's legitimate interests. In its adoption of a uniform (white robe and hood), as well as its techniques of intimidation and its conviction that violence was justified in the cause of the group's destiny, the first version of the Klan in the defeated American South was a remarkable preview of the way fascist movements were to function in interwar Europe. It is arguable, at least, that fascism (understood functionally) was born in the late 1860s in the American South." —Robert O. Paxton, The Five Stages of Fascism
The Border Collie has been called many different names throughout history, such as 'Working Collie', 'Old Fashioned Collie', 'Farm Collie' and 'English Collie'. James Reid, secretary of the International Sheep Dog Society, officially named the breed the Border Collie in 1915. The name was aptly given to the breed as it worked the borders between England and Scotland, in particular Northumbria. Interestingly, in old Gaelic the word 'collie' was a rural term for anything useful, hence Collie dog, or useful dog.

Life expectancy: 12-15 years
Height: Dogs 48-55 cms (19-22 inches); Bitches 45-53 cms (18-21 inches)
Weight: 11-29 kgs (25-65 lbs)

The Border Collie's coat is abundant, long and soft. The climate in the border regions could be bitter, so a hardy dog was needed: Collies have a double coat in order to bear the cold. Border Collies are medium-sized dogs and can weigh anywhere between 11 and 29 kgs (25-65 lbs). Their bone structure can be lightweight through to heavy-boned, and they have a distinctive long tail reaching to at least the hock, which is carried low, or raised when excited, and is never carried over the back. Ear carriage can be pricked, dropped or one of each. Highly intelligent, with an instinct to herd and work, the Border Collie needs training and mental stimulation from an early age, as some under-challenged Collies can become neurotic. Border Collies are also known for being incredibly loyal. Black and white is the most common colouring, but Border Collies can also come in red and white, tri-colour, liver, blue merle, red merle, and yellow or white with small amounts of black or red.

© Vision Online Publishing
What asthma relievers are available?

An asthma reliever is a drug that provides relief from asthma symptoms, and relievers are the most commonly used asthma medication. During an attack the airways constrict, so short-acting relievers are taken to relax the smooth muscle. Relievers do not reduce the underlying inflammation associated with asthma and therefore do not prevent asthma attacks.

Beta-2 agonists: Beta-2 agonists are the standard asthma therapy. They act on molecule-sized receptors on the muscle of the bronchioles: the medicine fits the receptor like a key fits a lock and stimulates the muscle to relax. Examples of those which act for a short time (three or four hours following a single dose) are salbutamol and terbutaline. These medicines are inhaled from a variety of delivery devices, the most familiar being the pressurised metered-dose inhaler (MDI). They are used when required to relieve shortness of breath. Short-acting bronchodilators should be used on an "as needed" basis to overcome attacks. Long-acting bronchodilators work by keeping the airways open for several hours and are taken regularly, whether asthma symptoms are present or not. Longer-acting beta-2 agonists include salmeterol and formoterol. Their action lasts over 12 hours, making them suitable for twice-daily dosage to keep the airways open throughout the day.

Anticholinergics: Inhaled anticholinergic drugs open the breathing passages, similar to the action of the beta-agonists. Nerve impulses cause the airway muscles to contract, narrowing the airway; anticholinergic medicines block this effect, allowing the airways to open. The size of this effect is fairly small, so it is most noticeable if the airways have already been narrowed by other conditions, such as chronic bronchitis. Inhaled anticholinergics take slightly longer than beta-agonists to achieve their effect, but their effect lasts longer.
An anticholinergic drug is often used together with a beta-agonist drug to produce a greater effect than either drug can achieve by itself. Ipratropium bromide (Atrovent) is the inhaled anticholinergic drug currently used as a rescue asthma medication.

Theophyllines: Theophylline acts as both a bronchodilator and an anti-inflammatory. It usually is used in addition to beta-2 agonists and other anti-inflammatory drugs. Theophyllines are given by mouth and are less commonly used in Britain because they are more likely to produce side effects than inhaled treatment. They are still in very wide use throughout the world.

All three types of reliever can be combined if necessary.
10 Environmental Impact Facts That Make Solar Energy The Positive Choice For Everyone:

1. Solar energy is energy produced by the sun. The sun belongs to all of us: it always has and it always will.

2. Solar energy is abundant. In fact, the earth will never run out of it. Even in the winter months most parts of the earth receive sufficient light from the sun's rays.

3. The sun's energy is clean: unlike fossil fuels and nuclear power, solar power does not cause harm to humans or animals. It does not leave a "carbon footprint."

4. Solar energy is quiet. That means solar panels do not create noise pollution.

5. Solar electric use does not contribute to the greenhouse effect. The greenhouse effect is a problem caused by non-renewable fossil fuels that contribute to climate change.

6. An advantage of home solar power is that it allows users to begin lessening their "carbon footprint" immediately. And if a homeowner does not want to generate 100% of their energy from solar power right away, they can start smaller with a partial solar system that generates some of it. Solar energy allows a user to make a positive environmental impact in stages.

7. The homeowner who moves to home solar begins using a self-sustaining solar system to generate his or her own home power, and begins to realize there are additional self-sustaining uses for solar energy. The electricity a home solar system creates for a household can also be used to charge an electric car: a transportation source that further lessens the amount of nonrenewable, polluting fossil fuel burned for personal vehicles.

8. Solar energy is entirely renewable.

9. The realities of solar panels for home use mean the once remote location of some small (or large) vacation house is no longer remote in any disadvantageous way.
Anyone who has ever considered owning a place, say, in the mountains, but worried that such a "remote" location would place an additional strain on the energy grid, need no longer be concerned. The realities of solar energy make that "remote" location no longer remote in terms of energy availability. And an electric vehicle, powered by solar electricity generated with solar panels, is an environmentally friendly, non-polluting transportation source that takes a person there cleanly, in the most environmentally friendly manner possible.

10. Solar power is flexible. The flexibility of solar panel installation gets the solar user thinking in an entirely more environmentally creative way about this clean electric power source. That leads to an environmental-solutions perspective, rather than the conventional way of looking at things that needlessly wastes the earth's decreasing resources and adds to the fossil fuel problem. Instead, the solar user will now be part of the renewable energy solution.

Request a free solar estimate today!
He was a prolific writer, churning out novels, poetry, plays, and essays. He was widely admired in the Netherlands in his own time for his writings, as well as for his status as the first internationally prominent Dutch psychiatrist. Van Eeden also incorporated his psychiatric insights into his later writings, such as the deeply psychological novel "Van de koele meren des doods" ("The Cool Lakes of Death"). Published in 1900, the novel intimately traced the struggle of a woman addicted to morphine as she deteriorated physically and mentally. Van Eeden sought not only to write about, but also to practice, such an ethic. He established a communal cooperative called Walden, taking inspiration from Thoreau, in Bussum, North Holland, where the residents tried to produce as much of what they needed as they could themselves and to share everything in common, and where he took up a standard of living far below what he was used to. This reflected a trend toward socialism among the Tachtigers; another Tachtiger, Herman Gorter, was a founding member of the world's first Communist political party, the Dutch Social-Democratic Party, in 1909. In his early writings, van Eeden was strongly influenced by Hindu ideas of selfhood, by Boehme's mysticism, and by Fechner's panpsychism. In his later life, he became a Roman Catholic. Van Eeden also had an interest in Indian philosophy: he translated Tagore's Gitanjali.
The Goose That Lays the Golden Egg

Putting a pinch of salt on your chips makes them taste better than before. And putting a second pinch on makes them taste better still. Putting a tablespoon of salt on your chips, however, doesn't make them taste even better again. It renders them inedible. Why? Because quantities matter. Too much of a good thing becomes a bad thing. Don't kill the goose that lays the golden eggs.

"Scaling up"

"Scaling" something up merely means making it larger than it was before. This could mean blowing a photograph up to twice its original size so that it can be seen from further away, cooking an extra-large chilli so that you can feed more people, or hiring more staff - with more people on the job you can process orders quicker and grow your business. The thing with scaling, though, is that there is almost always a knock-on effect.

Ships, printing presses, and atomic bombs

Scattered throughout human history are key moments where, all of a sudden, things that had previously scaled up slowly began to scale up incredibly rapidly, and what always happened next was that the world was changed dramatically. From the 14th century onwards, the increasing size and reliability of naval technology made it more possible than ever to travel to, trade with, and conquer the far reaches of the globe. Cue modern colonisation. In 1439, Gutenberg's printing press made it possible to reproduce the written word at scale, enabling - amongst other things - peasants to read the Bible for the first time. Cue the Reformation. On August 6th 1945, the US proved to the world that something no larger than a traditional bomb could now wipe out an entire city. Cue the end of World War II and the start of the Cold War.

Just because something can be scaled doesn't mean it should. What are you trying to accomplish? Answer that first, and then look at the different parts of your operation that it's possible to scale up.
Scaling up the right things gives you more time and energy each day with which to focus on the critical parts of what you do – the parts that you and only you can do. It helps to eliminate annoying distractions, and reduces genuine waste. But scaling up the wrong things – whilst possibly making you incredibly successful in a worldly sense – could kill the very goose that lays the golden egg. In a search for more, more, more, you might inadvertently destroy the very essence of what it is you’re doing. And if that’s gone, what’s the point? If all that matters to you is making a profit, for example, then the modern world offers you untold avenues for scaling up. You can buy influence, you can lobby governments, you can cut your production costs by exploiting sweatshop workers who have no choice… That doesn’t mean you should, though. If you’re trying to do something different than that, something better, something with a little more to it, something richer, something deeper… Embrace and enable your humanity We are entering a technological age where it is becoming more and more possible – and more and more critical – to treat each other properly. To acknowledge that we’re all in this shit together – this weird and wonderful gift called life. We are able to do this because machines can and will do more and more of the menial tasks traditionally performed by humans. The point of scaling up technology is to enable our humanity, not to destroy it. Some things benefit from being streamlined, made more efficient, scaled up… but not our humanity. We need to be free to express, to learn, to grow. And we can’t do that if we’re constantly trying to measure ourselves against arbitrary standards, or do this thing faster, or do this other thing more efficiently. Human beings weren’t designed for the efficiency computers were designed for. Let the computers do what they do. And let yourself be a human, warts and all. Your inherent humanity is the goose that is laying the golden eggs. 
Don’t kill it. Enable it. Leave a comment
Navigational view of the brain thanks to powerful X-rays

[Figure: A "navigational map view" of a section of a mouse brain. Like the map view of an Earth imaging program, this image of a brain section takes cues from actual imaging performed with highest-energy X-rays at a synchrotron and turns them into a graphic depiction. The imaging concentrates on a meso-scale of the brain analogous to the map view of Google Earth. The scale could be useful in studying how the brain computes. Credit: Georgia Tech / Eva Dyer]

If brain imaging could be compared to Google Earth, neuroscientists would already have a pretty good "satellite view" of the brain, and a great "street view" of neuron details. But navigating how the brain computes is arguably where the action is, and neuroscience's "navigational map view" has been a bit meager. Now, a research team led by Eva Dyer, a computational neuroscientist and electrical engineer, has imaged brains at that map-like or "meso" scale using the most powerful X-ray beams in the country. The imaging scale gives an overview of the intercellular landscape of the brain at a level relevant to small neural networks, which are at the core of the brain's ability to compute.

Neural forest for the trees

Electron microscopy already captures neuronal details in impressive clarity. Functional magnetic resonance imaging (fMRI) makes great visuals of brain structures and broad neural signaling.

[Figure: The highest-energy X-rays in the country are generated at Argonne National Laboratory's (ANL) Advanced Photon Source synchrotron. Brain imaging done there was converted into a graphic depiction of the brain at the meso-scale, a level that could be useful in better understanding brain signaling. Credit: Argonne National Laboratory / R. Fenner]

So, why do researchers even need mesoscale imaging?
"FMRIs image at a high level, and with many microscopes, you're zoomed in too far to recognize the forest for the trees," Dyer said. "Though you can see a lot with them, you also can miss a lot." "If you look at brain signaling on the level of individual neurons, it looks very mysterious, but if you take a step back and observe the activity of a population of hundreds of neurons instead, you might see simpler, clearer patterns that intuitively make more sense." In an earlier study, Dyer discovered that hand motion directions corresponded with reliable neural signaling patterns in the brain's motor neocortex. The signals did not occur across single neurons or a few dozen but instead across groups of hundreds of neurons. Mesoscale imaging reveals a spatial view on that same order of hundreds of neurons. Megamap dreams The researchers have also been able to couple their new meso-level imaging technique with extremely detailed electron microscopy. And that has the potential to take them closer to a kind of Google Earth for the brain by combining mesoscale or map-like views with zoomed-in or street-like views. "We have begun doing X-ray tomography on large brain tissues, then we've gone deeper into specific tiny regions of interest in the same tissue with an electron microscope to see the full connectome there," Dyer said. The connectome refers to the total scheme of the hundreds of individual connections between neurons. The researchers hope to someday be able to switch from a mesoscale view to close-up view, a bit like Google Earth. Zeroing in then zooming in "I think what we're going to need in neuroscience is this ability to traverse across different scales," Dyer said. She envisions a future multi-scale imaging technology that is useful in understanding neurological diseases. "We want to be able to tell somebody researching a disease what the underlying anatomy of their lab sample is in an automated way," she said. 
"You could navigate using this mesoscale view to get the context of where the damage is."

[Figure: Eva Dyer in her office at the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University; Dyer is based at the Georgia Institute of Technology location. She is looking at a meso-scale image of a mouse brain section she created in collaboration with Bobby Kasthuri at Argonne National Laboratory's Advanced Photon Source synchrotron. Credit: Georgia Tech / Rob Felt]

Then the user could zoom in on a blocked artery or destroyed tissue, analogous to the way satellite imagery can zoom in on traffic jams to see what's causing them.

From X-ray to graphic image

Like a navigational map, the final images in the study were colorful, clear, mesoscale graphic depictions. They were based on the X-ray tomography, but a lot was involved in getting from the X-ray to the image. First, the thick section of brain was rotated in the high-energy X-ray beam, and the result was transformed into an image analogous to the output of a CT scanner. Then structures and characteristics were identified by humans and algorithms before being computed into three-dimensional, color-coded vasculature and cell bodies. The details of individual cells were very basic: in neurons, the nuclei were often visible in the X-ray tomography image, and axons wrapped in myelin (white matter) sometimes appeared as well.

Pragmatic computation

The new mesoscale imaging of brain samples also has pragmatic advantages. It may be possible to examine minuscule brain regions piece by piece with electron microscopes and then compute them together into a complete image of the brain, but it's hardly practical. "Producing a three-dimensional map of just a cubic millimeter of the brain with an electron microscope requires processing petabytes of data," Dyer said.
By contrast, the researchers need only 100 gigabytes of data to compute a one-cubic-millimeter image of brain tissue using mesoscale X-ray tomography scans of thicker brain sections. But the researchers' goal is to not have to slice the tissue at all: "Eventually, we want to be able to image whole brains, as is, with this method to see the entirety of their neural networks and other structures."

More information: Eva L. Dyer et al, Quantifying Mesoscale Neuroanatomy Using X-Ray Microtomography, eNeuro (2017). DOI: 10.1523/ENEURO.0195-17.2017

Citation: Navigational view of the brain thanks to powerful X-rays (2017, October 18), retrieved 23 April 2019 from https://medicalxpress.com/news/2017-10-view-brain-powerful-x-rays.html
Assignment: Fundamentals Of Database Systems

1. What is structured data and unstructured data? Give an example of each from your experience with data that you may have used.
2. Give a general definition of information retrieval (IR). What does information retrieval involve when we consider information on the Web?
3. What is meant by navigational, informational, and transactional search?
4. What are the different phases of knowledge discovery from databases? Describe a complete application scenario in which new knowledge may be mined from an existing database of transactions.
5. What are the goals or tasks that data mining attempts to facilitate?

1. Structured data are data stored in a strict, predefined format. For example, data stored in a relational database must conform to a rigid schema, so they are structured data. Unstructured data, by contrast, follow no such format or structure when stored, which limits how they can be processed. Examples are a text file containing free-form data, or HTML files with embedded content.

2. As Gerard Salton put it, information retrieval (IR) is "the discipline that deals with the structure, analysis, organization, storage, searching, and retrieval of information". In general, IR is the process of retrieving information from a collection of documents in response to a query provided by a user. IR mainly deals with unstructured or semi-structured data. When we consider information on the Web, IR also involves gathering (crawling) and indexing a very large, constantly changing collection of pages before queries can be answered.

3. In the case of web searches, there are three types of search:
• Navigational search refers to reaching some particular site or piece of information quickly from the user's query. An example is typing 'facebook' into Google in order to reach the Facebook site.
• Informational search refers to finding out the latest information on some topic. For example, searching for research activities on IR.
• Transactional search refers to reaching a site in order to carry out some further interaction there. For example, searching in order to open a Facebook account.

4. The six phases of knowledge discovery from databases are:
1. Selection of data
2. Data cleansing
3. Enrichment
4. Data encoding or transformation
5. Data mining
6. Reporting

Consider the example of a transaction database for a retailer. The database contains information about the consumers, such as name, address, contact number, items purchased, quantity, price, and total amount. New knowledge can be retrieved from this database through KDD. The stages are:
1. In data selection, sets of information on some item or entity are selected; for example, customers from a particular geographical area.
2. During data cleansing, the format of the data is checked; for example, whether all ZIP codes are in the same, correct format.
3. During enrichment, data from other sources, such as social media and demographics, are added to the data.
4. During data transformation, different encodings can be used to shorten or compact the data formats.
5. Data mining is used to find patterns based on different factors.
6. All results are reported in understandable formats.

5. The goals of the tasks that data mining facilitates are:
• Prediction of the future behavior of some data.
• Identification of data patterns.
• Classification of data into different partitions or categories.
• Optimization of limited resources such as space, time, and cost, and maximization of output variables such as profit.

Cellary, W., Morzy, T., & Gelenbe, E. (2014). Concurrency Control in Distributed Database Systems. Elsevier.
Elmasri, R., & Navathe, S. B. (2013). Fundamentals of Database Systems. Pearson.
Manning, C. D., Raghavan, P., & Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press.
Mullins, C. S. (2013).
Database Administration: The Complete Guide to DBA Practices and Procedures. Addison-Wesley Professional.
Özsu, M. T., & Valduriez, P. (2011). Principles of Distributed Database Systems. Springer.
Rahimi, S. K., & Haug, F. S. (2010). Distributed Database Management Systems. John Wiley & Sons.
Silberschatz, A., Korth, H. F., & Sudarshan, S. (2011). Database System Concepts (6th ed.). McGraw-Hill.
Zaki, M. J., & Meira, W., Jr. (2014). Data Mining and Analysis. Cambridge University Press.
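The structured/unstructured distinction in answer 1 can be made concrete in code. The following is a small illustrative Java sketch (the class and field names are invented for this example, not taken from any real system): the Customer class mirrors a relational row with a fixed schema and can be queried field by field, while free text has no schema and can only be scanned for terms, which is exactly the situation information retrieval deals with.

```java
// A structured record: fixed fields with fixed types, like a row in a
// relational table. (Names here are illustrative only.)
class Customer {
    final String name;
    final String zip;
    final double totalAmount;

    Customer(String name, String zip, double totalAmount) {
        this.name = name;
        this.zip = zip;
        this.totalAmount = totalAmount;
    }
}

public class DataKinds {
    // Structured data can be queried precisely, field by field.
    static boolean inArea(Customer c, String zipPrefix) {
        return c.zip.startsWith(zipPrefix);
    }

    // Unstructured data (free text) can only be scanned for terms,
    // which is the kind of matching IR systems generalize.
    static boolean mentions(String document, String term) {
        return document.toLowerCase().contains(term.toLowerCase());
    }

    public static void main(String[] args) {
        Customer c = new Customer("A. Sen", "700001", 125.50);
        String doc = "Customer reported an earthquake-damaged shipment.";
        System.out.println(inArea(c, "7000"));           // field-level query: true
        System.out.println(mentions(doc, "earthquake")); // term scan: true
    }
}
```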
Monosodium Glutamate (MSG)

Q: What is 'MSG' and is it harmful?

A: Monosodium glutamate (MSG) is a compound added to foods to enhance natural flavors. It's made of glutamate, water and sodium. As a building block of proteins, the amino acid glutamate is found naturally in a wide variety of foods; glutamate is also produced by the human body. Glutamate from MSG is metabolized in our system in the same way as naturally occurring glutamate. American consumption of MSG is estimated to be about 0.55 grams per day, though Taiwanese intake averages roughly 3 grams daily. A U.S. government committee on food additives evaluated MSG in the late 1980s and concluded that the substance does not represent a health hazard for the general population. In one study, adult men consumed diets containing up to 147 grams of MSG daily (200-300 times higher than normal consumption) for up to 42 days, and researchers observed no adverse reactions to this dosage (Bazzano et al., 1970). Surprisingly, MSG contains less sodium than table salt: MSG is 14% sodium, while the salt shaker holds 40% sodium. Therefore, MSG can actually be used instead of salt to obtain the same palatability. So why the bad reputation? The late 1960s saw a flurry of reports that consumption of large amounts of MSG causes flushing, lightheadedness, facial pressure and chest tightness. This cluster of reactions has been dubbed Chinese Restaurant Syndrome (CRS). However, well-designed experiments of the past decade have not been able to attribute CRS symptoms to MSG. It is possible, however, that some individuals may in fact be sensitive to MSG after ingesting more than 3 grams in the absence of food. Asian cuisine is not the only place you will find MSG! Even Italian dishes contain it, and some highly seasoned restaurant meals may contain up to 5 grams.
MSG is available for purchase in the spice section of many supermarkets, and on packaged food labels you can find the ingredient listed as "monosodium glutamate."
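The sodium comparison above (MSG is 14% sodium, table salt is 40% sodium) is easy to check with a little arithmetic. The following Java sketch uses only the percentages quoted in the text to compute the grams of sodium delivered by a given dose of each seasoning:

```java
public class SodiumContent {
    // Sodium fractions quoted in the text above.
    static final double MSG_SODIUM = 0.14;  // MSG is 14% sodium
    static final double SALT_SODIUM = 0.40; // table salt is 40% sodium

    // Grams of sodium delivered by a given dose of seasoning.
    static double sodiumGrams(double doseGrams, double sodiumFraction) {
        return doseGrams * sodiumFraction;
    }

    public static void main(String[] args) {
        // The average US intake of 0.55 g MSG/day from the article:
        System.out.printf("Sodium from 0.55 g MSG:  %.3f g%n",
                sodiumGrams(0.55, MSG_SODIUM));   // prints 0.077
        System.out.printf("Sodium from 0.55 g salt: %.3f g%n",
                sodiumGrams(0.55, SALT_SODIUM));  // prints 0.220
    }
}
```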
Think Twice Before You Add Salt to Your Food

'Namak swaad anusar' ("salt to taste"). This is a well-known phrase heard in all cookery shows, but it seems our Indian taste buds demand much more than what is needed. As per a study, we consume five times as much salt as our bodies need, making us prone to a host of diseases. Normal table salt contains sodium bicarbonate, monosodium glutamate and sodium lactate besides sodium chloride. Our body needs salt for functions such as maintaining blood pressure, transmission of nerve impulses, maintaining pH balance and facilitating proper digestion. Its deficiency can often lead to muscle cramps. On the other hand, as per the WHO, high sodium consumption (>2 grams/day) and insufficient potassium intake (less than 3.5 grams/day) contribute to high blood pressure and increase the risk of heart disease and stroke. Now, as per reports, Indians on average consume dangerously high levels of up to 10.98 g of salt every day.[1] In fact, babies should not be fed salt at all, because they need just about 1 g (0.4 g of sodium) until their first birthday, and the required amount of sodium comes from breast milk or formula. Babies who are fed salt end up consuming far more than what is required by their bodies. Most mothers feel their children do not eat well because the food is bland and tasteless; however, in turn they end up giving more than the amount necessary for the child's age, and put too much pressure on kidneys that are not equipped for this kind of strain. Too much sodium in the diet can lead to excess calcium excretion by the kidneys, which in turn causes kidney stones. It can also lead to dehydration, obesity and hypertension in adulthood. What's more, an estimated 2.5 million deaths could be prevented each year if global salt consumption were reduced to the recommended level.
Owing to the gravity of the condition, a group of Chennai-based doctors, together with experts from IIT-Madras and IIT-Bombay, will be part of an international meet on salt awareness, to be held in Chennai and Mumbai in November this year. They will deliberate on different methods to convince the government to bring in legislation and create public awareness.

Cutting salt out of your diet not only helps you live longer; let us look at some of the benefits:

*One of the main health benefits of not eating salt is that it normalizes the pressure in the body.

*Excess salt in your food will only lead to dehydration. If you want to stay hydrated in summer, the best thing to do is cut back on this ingredient in your food.

*Increases Weight Loss: One of the best health benefits of not eating salt is that it helps in cutting fat faster. Those who don't consume too much salt will lose weight more quickly.

*More Energy: Though it may sound strange, not consuming salt will actually help make you feel more active and energetic. This is another health benefit of not eating salt.

*Reduces Hypertension: A low or no intake of salt helps to keep your blood pressure intact. It also reduces the tension strain in your body.

*Destroys Taste Buds: Over a period of time, salt kills your taste buds. To avoid this, increase your intake of fruits and vegetables, which is much healthier.[2]

*Reduces Risk of Stroke: Salt is one of the main culprits of stroke. Minimal salt in food maintains normal blood pressure.

*Keeps the Bones Healthy: Too much salt consumption destroys calcium in bones. This can lead to osteoporosis, especially in senior citizens. Therefore it is advisable for older people to avoid salt entirely.

*Avoids Body Bloating: Many women experience body bloating during their monthly menstrual cycle. The reason: salt intake. One of the health benefits of not eating salt is that it helps discourage bloating.
*Kidney Disorders: Excess salt consumption increases blood pressure and leads to increased excretion of calcium by the kidneys, thus forming stones.

Go a week without salt and you will know the wonders for yourself.

By Madhuri Shukla Thaker
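The WHO figures quoted above (sodium above 2 g/day and potassium below 3.5 g/day both raise the risk of high blood pressure) can be turned into a simple check. This is an illustrative Java sketch that only encodes the numbers cited in the text; it is not a medical tool:

```java
public class IntakeCheck {
    // WHO figures quoted above: sodium above 2 g/day and potassium
    // below 3.5 g/day both contribute to high blood pressure.
    static boolean sodiumTooHigh(double sodiumGramsPerDay) {
        return sodiumGramsPerDay > 2.0;
    }

    static boolean potassiumTooLow(double potassiumGramsPerDay) {
        return potassiumGramsPerDay < 3.5;
    }

    public static void main(String[] args) {
        // 10.98 g of salt (the Indian average cited above) at 40% sodium:
        double sodium = 10.98 * 0.40; // about 4.39 g of sodium per day
        System.out.println(sodiumTooHigh(sodium)); // prints true
    }
}
```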
The batteries generate 2.33 V each

There is no doubt that talent is locked away in several areas of Nigeria, waiting to be discovered. Just recently, a group of Nigerian school students made interesting discoveries and invented impressive machines. Reports reaching anaedoonline reveal that a village school located in Awka, Anambra state, has gifted kids who have no access to electricity, the internet or world-class amenities, yet they built a generator which runs on water, a bio-digester, a torch light powered by potatoes and a rocket launcher.

Anambra school children built a generator which runs on water

The creation of a generator which runs on water would be among the first of its kind in the world. Interestingly, the machine generates 610 watts of electricity and was tested for about five hours, during which it powered a bulb. Photos of the inventions of these brilliant minds have been making waves online, with many marvelling that students with such limited exposure could bring such ideas to life. There would be no stopping them if they had access to better, quality education.

Anambra school created a digester which converts waste to useful products

Part of their invention was also a digester which breaks down waste, including plastics, into different useful and valuable components that can be used for other things. The students also made batteries from discarded Dettol bottles; each cell generates more than 3 watts of electricity. They also made a rocket launcher. Surprisingly, reports state that the kids have never heard of the fourth industrial revolution.
Do You Know What Good Debt Is?

It may sound counter-intuitive, but good debt is actually a certain type of liability. While there are numerous bad debts, like auto loans and credit cards, there are also good debts that can help grow the value of an asset and increase your net worth. Here are some examples of good debts, according to Investopedia:

• Education - Education and success have always gone hand in hand. In general, the more education an individual has, the greater that person's earning potential. An investment in a technical or college degree costs a bit up front, but it is a smart long-term investment and should pay for itself within a few years through increased earning potential.

• Real Estate & Mortgages - A mortgage is a good debt for two reasons. First, the interest on the mortgage is tax deductible and can save you a significant amount of money each year when tax season comes around. Second, you can leverage more money when purchasing real estate: a $300,000 house may sound like a lot of money, but certain loan programs allow for just 3.5% down, which is only about $10,500.

If you want to invest in a mortgage, this is it... Don't let these historically low interest rates slip away. Find the right home loan for you with Petra Cephas.
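The 3.5%-down arithmetic in the mortgage example above can be sketched in a few lines of Java. This is a hypothetical helper for illustration, not from any lender's software:

```java
public class DownPayment {
    // Minimum cash needed at a given down-payment rate.
    static double downPayment(double price, double rate) {
        return price * rate;
    }

    public static void main(String[] args) {
        // The example from the text: 3.5% down on a $300,000 house.
        System.out.printf("$%.0f%n", downPayment(300_000, 0.035)); // prints $10500
    }
}
```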
Mr Jones Animal Farm essay prompts

There are a number of conflicts in Animal Farm: the animals versus Mr. Jones, Snowball versus Napoleon, the common animals versus the pigs, Animal Farm versus the neighbouring humans; but all of them are expressions of the underlying tension between the oppressors and the oppressed. Start studying Animal Farm Questions: learn vocabulary, terms, and more with flashcards, games, and other study tools. Mr. Jones, the often-drunk farmer who runs the Manor Farm before the animals stage their Rebellion and establish Animal Farm, is an unkind master who indulges himself while his animals lack food. Animal Farm is short and contains few words that will hamper the reader's understanding. The incidents in the novel keep the idea of rebellion against man alive. On Midsummer's Eve, Mr. Jones becomes too drunk to feed or care for the animals. He warns them that there may be other animal traitors in their ranks. Animal Farm Critical Essays: Mr. Jones attempted to retake the farm by force, but the animals were waiting for him. What is George Orwell's message in the novel Animal Farm? Animal Farm Essay. Length: 1,147 words (3.3 double-spaced pages). Rating: Excellent. Open Document. From Mr. Jones, a cruel farmer who feeds his animals too little and works them too hard, to Napoleon, a pig who will have you killed for a bottle of liquor. Through the stupidity, narrow-mindedness and pure cowardice of some animals we view... Get free homework help on George Orwell's Animal Farm: book summary, chapter summary and analysis, quotes, essays, and character analysis, courtesy of CliffsNotes. Animal Farm is George Orwell's satire on equality, where all barnyard animals live free from their human masters' tyranny. Inspired to rebel by Major, an old boar, the animals on Mr. Jones's farm... Animal Farm Quotes and Prompts for Essay: flash cards containing bullet points for the introduction, conclusion, paragraphs and quotes of an Animal Farm critical exam essay. STUDY. PLAY.
Introduction. Abuse of Power; Plot; Setting; Characterisation; Portrayal of Mr Jones. Quote: lazy, harsh. Quote 2. Together with the rest of the animals, they succeed in driving Mr. Jones off the farm, which is subsequently renamed "Animal Farm." They develop several laws about the equality of animals and the inferiority of human practices. Suggested Animal Farm essay topics and questions to write about: Written by George Orwell, Animal Farm tells the tale of a group of animals who launch an idea with good intentions but whose leaders quickly become power-hungry dictators. How is the February Revolution comparable to the overthrowing of Mr. Jones? Discuss the song "Beasts of England." Writing Assignments for Animal Farm by George Orwell (option for A, B, C grade on check for reading): Mr. Jones / Czar of Russia; Seven Commandments / Communist Manifesto. Write a personal reflection essay on why you think you failed to understand this story; give three reasons. Animal Farm: A Student Essay. In Animal Farm by George Orwell, Squealer convinces the animals that Snowball had actually fought alongside Mr. Jones against them. We see the rousing effect Old Major's song "Beasts of England" has on the animals and how it prompts them to overthrow the tyrant Farmer Jones and create Animal Farm. Animal Farm begins on Manor Farm in England. After Mr. Jones, the neglectful owner of the farm, has drunkenly shut the animals away and gone to sleep, the animals all assemble in the barn to hear... Unit: Animal Farm, 6 Writing Prompts: 1. I could never follow a dictator. 2. Peer pressure plays a big role in my life. 3. You can always trust...
Java applet Tutorial

Showing and manipulating pics via MemoryImageSource

1.01 Introduction

Welcome to my first Java tutorial. This one deals with a very simple zoomer. We load a picture into memory and zoom in and out until the user aborts the execution. For output we use a very simple and unsophisticated method, only good enough for demonstration purposes. Have fun... (((you can get the complete source here)))

1.1 Create Applet and Thread

First we need to create a new class, inherited from java.applet.Applet (which is inherited from java.awt.Panel), with the interface Runnable (not absolutely necessary for the applet, but for the thread). An applet needs (in contrast to an application) no main() method, because the init() and start() methods manage the execution of the applet. And we have to import some classes for loading and processing the pictures.

import java.awt.*;
import java.awt.image.*;

public class bild extends java.applet.Applet implements Runnable {
  // applet source code here
}

We also need some variables for zooming and some for saving the picture in memory.

int width=216;                            // width of picture
int height=288;                           // height of picture
int zoomfaktor=3;                         // zoom level
int zstart=1<<zoomfaktor;                 // zoom start value
int zstop=zstart*3;                       // zoom stop value
boolean quit=false;                       // if quit==true execution of applet stops
Image zoomimage;                          // contains the picture
Image display;                            // buffer for output
int zoomimageAr[]=new int [width*height]; // array for saving the zoomed picture
int displayAr[]=new int [width*height];   // array for output
MemoryImageSource mic;                    // ties the output array to an image source
Thread hThread=null;                      // the thread

The thread is created by calling the method start(), which is done the first time the browser shows the applet or when the execution is continued. The method stop() aborts the execution (e.g. when the browser is closed), and the thread is stopped.
public void start(){
  if (hThread==null){
    hThread = new Thread(this);
    hThread.start();
  }
}

public void stop(){
  if (hThread != null){
    hThread.stop();
    hThread = null;
  }
}

1.2 Initialization

A constructor is not necessary, because we do our initialization in init(), which is always called when the browser loads the applet. But we have to use a MediaTracker to supervise the loading of the resources (pictures, sounds, ...), because we don't know when they are completely loaded. The method getCodeBase() returns the path to the applet, so there is no need to use absolute paths. By using getImage(URL, String) we can load and process GIF and JPEG pictures. To get the color information of every single pixel we copy the data with a PixelGrabber from the Image variable to our (previously defined) array. The array has to be defined as an array of int, because every pixel uses 4 x 8 bits of color information; the highest byte is the alpha value (for loaded pictures this value is always 0xFF, opaque). The other three bytes represent the color values for red, green and blue. With the help of our MemoryImageSource object we later (after processing) re-convert the array back to an Image.

public void init(){
  MediaTracker tracker = new MediaTracker(this);   // create MediaTracker
  showStatus("Loading Image...");                  // status bar shows "Loading..."
  zoomimage = getImage(getCodeBase(),"razor.jpg"); // load the picture
  tracker.addImage(zoomimage,0);                   // adds the picture to the MediaTracker
  try{
    tracker.waitForID(0);                          // wait for the picture to load
  }
  catch (Exception e){}                            // catch "loading error" exception
  resize(width, height);
  makear();                                        // convert Image to array
  mic=new MemoryImageSource(width, height, displayAr, 0, width);
  showStatus(" ");
}

public void makear(){
  PixelGrabber grabber=new PixelGrabber(zoomimage.getSource(), 0, 0, width, height, zoomimageAr, 0, width);
  try{
    grabber.grabPixels();
  }
  catch (Exception e){}
}

1.3 The Calculation

The method run() is called by the thread and contains the actual code.
Not much to do here: the new picture is calculated by ZoomIn(int) and displayed with repaint(). Then we stop the thread for 30 ms to slow down the demo. The next tutorial will have automatic time measurement to get better control of the speed (the same execution speed on different PCs). One problem of Java is that it manages the memory usage on its own, and some methods and objects are quite generous with memory. The garbage collector does its work in the breaks. But in my experience it doesn't use these breaks very wisely, which is why the applet's performance gets worse and worse after some time of running, until you get the feeling that it is formatting your HDD. To prevent memory from getting too low, we call gc() ourselves every time we complete a zoom cycle. ZoomIn(int) does the actual calculation of the next image, but optimization is currently very low. The method is synchronized, as paint() is, to prevent simultaneous output and calculation of the same picture (which distorts the displayed picture). To display the calculated image (saved in displayAr) on the screen, we copy the data from the array to the Image object by using createImage(MemoryImageSource).
Here is a flow chart which shows the way of the image until output:

             getImage             PixelGrabber            Calculation
 razor.jpg ---------> zoomimage --------------> zoomimageAr ----------> displayAr
                                                                            |
                                                                            | MemoryImageSource
               drawImage               createImage                          v
 to screen <----------- display <--------------- mic <----------------------+

public void run(){
  int zaehler=1;
  int n=zstart;
  while (!quit){
    if (n==zstop) zaehler=-1;
    if (n==zstart) zaehler=1;
    n+=zaehler;
    ZoomIn(n);
    repaint();
    try{
      Thread.sleep(30);                   // slow down the demo
    }
    catch (Exception e){}
    if (n==zstart) System.gc();
  }
}

public synchronized void ZoomIn (int n){
  int x=0;
  int y=0;
  int xbuf=0;
  int ybuf=0;
  int i2buf=0;
  int mult216[]=new int [height];         // to save one multiplication
  for (int i=0;i<height;i++) mult216[i]=i*width;
  for(int i2=0;i2<=height-1;i2++){
    x = mult216[(((i2<<zoomfaktor)+xbuf))/n];
    for(int i1=0;i1<=width-1;i1++){
      y = (((i1<<zoomfaktor))+ybuf)/n;
      displayAr[i2buf+i1]=zoomimageAr[x+y]; // copy the scaled source pixel
    }
    i2buf+=width;
  }
  display = createImage(mic);
}

1.4 Output to screen

To get our new picture on screen as quickly as possible, we override the paint(Graphics) and update(Graphics) methods. paint(Graphics) is obvious (because we need our own customized output), but the update(Graphics) method is important, because normally Java would first clear the screen and then repaint it, which is very slow and results in heavy flicker. Besides, we overwrite the whole picture, so no deleting/clearing is necessary.

public synchronized void paint(Graphics g){
  g.drawImage(display, 0, 0, this);
}

public void update(Graphics g){
  paint(g);
}

1.5 The Rest

The remaining methods take care of the mouse actions:

public boolean mouseEnter(Event e, int x, int y){
  showStatus("Erster Test von Boris");    // "First test by Boris"
  return true;
}

public boolean mouseExit(Event e, int x, int y){
  showStatus(" ");
  return true;
}

public boolean mouseDown(Event e, int x, int y){
  if (!quit){
    quit=true;                            // a click aborts the execution
  }
  return true;
}

1.6 Epilogue and Preview

That's all! Our funky zoomer is done. The zooming algorithm isn't very good (quick and dirty), and neither is the output method MemoryImageSource.
In the flow chart you can see how many steps are needed to produce one picture, and many of those steps are very time- and performance-consuming, which results in the low speed. But for simple effects (like this one) this method is OK. In the next tutorial I will use another method for output and I will show how to use the good old 8-bit palette. As usual I'm thankful for any kind of suggestions, ideas, comments and code improvements, because my Java skills are far from perfect.

©©©© 18.Juli.00 Tobias von Loesch
©©©© 08.Aug.00 Oldskool
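The tutorial pauses a fixed 30 ms per frame, and the author mentions that the next instalment will add automatic time measurement so the demo runs at the same speed on different PCs. One common way to do that is to measure how long the frame's own work took and sleep only for the remainder of the frame budget. The sketch below illustrates that idea; it is my own illustration, not the tutorial author's code:

```java
public class FrameTimer {
    // Sleep only for whatever is left of the frame budget after the
    // frame's own work, so the loop runs at a steady rate on fast and
    // slow machines alike.
    static long remainingMillis(long frameBudgetMillis, long elapsedMillis) {
        long rest = frameBudgetMillis - elapsedMillis;
        return rest > 0 ? rest : 0; // never sleep a negative amount
    }

    public static void main(String[] args) throws InterruptedException {
        long budget = 30; // ms per frame, as in the tutorial
        long start = System.currentTimeMillis();
        // ... calculate and draw one frame here ...
        long elapsed = System.currentTimeMillis() - start;
        Thread.sleep(remainingMillis(budget, elapsed));
        System.out.println("frame done"); // prints frame done
    }
}
```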
“No matter how many times she tried to imagine that scene with the yellow light that she knew had been there, she had to struggle to visualize it. She was beaten in the dark, and she had remained there, on a cold, dark kitchen floor.” Liesel subconsciously thought. What author Markus Zusak is doing here is using colours purely as symbols. In reality the scene is lit with yellow; however, in Liesel’s mind, and in my own mind as I read this, darkness and the colour black are what register. This is because Liesel has just come to the realisation that she will never see her birth mother again. This makes me imagine a dark, black, bleak scene, as black is commonly associated with death, mourning and sadness – all feelings which Liesel is currently going through. “Even Papa’s music was the colour of darkness,” Liesel thought. This represents how nothing, not even Papa and his accordion playing, could change things, as he is not her real parent – her real parents are gone. In reality, however, the scene is lit yellow. “The dark, the light. What is the difference?” thinks Liesel. Here Zusak is showing us that, while the mourning goes on, Liesel has already lost her family and is nonetheless willing and prepared to move on and carry on – to reach the happy yellow despite the sad darkness that surrounds her. This is a key message Zusak is portraying via Liesel.

Join the conversation! 1 Comment

1. These observations are very helpful. Consider the relationship between Liesel’s perception of the world here (via colour) and that of the narrator, Death. There’s a strong parallel. Why do you think this is? What’s the benefit of using colour as a symbol?
The Most Common Uses Of FPGA Technology

The use of field programmable gate arrays is very widespread; they can be found in many different items today. An FPGA is a kind of programmable chip, a device that is designed to be programmed and used within an electronic device. When they are reprogrammed, no hardware needs to change. They have configurable components that are easy to use, and they can be used in many different applications. There are also many benefits to using these compared with similar alternatives. Here are the most common uses of FPGA boards that can be found in popular products today.

What Is An FPGA Board?

Field programmable gate arrays are devices that are specifically designed to be used in certain products. They are designed with logic blocks, which allows them to be quickly programmed, and also reprogrammed, using different types of software. They have what are called flip-flops, which make them very easy to utilize. They also have varied architecture: this includes hard blocks, logic blocks, and different types of three-dimensional architecture which can be used in many situations.

Why Were These Initially Created?

These were initially made by companies that were trying to improve the computer industry. Companies such as Xilinx and Altera (now part of Intel) represent a large portion of the market. There are other manufacturers that make them, but those were the main ones back in the 1980s and 1990s. Today, they can be found in products inside your house as well as those that you will use outside.

What Are FPGAs Used In?

If you are driving your vehicle down the road, there are going to be FPGAs inside of the vehicle to help it function. They can also be used in industrial applications, as well as millions of different consumer products.
The telecommunications industry would not be able to function without them, and they are also essential for digital networking. They can be found in many industries which would not be able to provide products or services without field programmable gate arrays.

What Industries Use These Boards?

Industries that are using these boards include the aerospace and defense industry: if you see a rocket heading up into space, it is going to have these inside. The audio industry also uses them, for portable electronic devices, signal processing systems, and the high-tech speech recognition programs that people use all over the world. As mentioned before, the automotive industry uses them. They are also essential in medical tools and instruments: if you get an ultrasound, MRI, CT scan, or even an X-ray, there will be FPGAs inside the machines. Other examples include large telescopes, security systems, and the digital imaging processors in your computer. Without field programmable gate arrays, many of these products and industries would not exist.

Other Uses For Field Programmable Gate Arrays

There are many other industries that use this type of electronic device. There are supercomputers that are only able to calculate at high speeds because of them. Data mining systems, industrial imaging, and ASIC prototyping require them. They also appear in many other places, such as wireless communications like broadband, and radios use them around the country. When you look at a digital display, or run a broadcasting system, FPGAs allow them to function.

How To Find The Best Ones Currently Available

The best ones tend to be from the two companies that were already mentioned. That's because they are the oldest and have gone through some very important technological advancements. The size of these devices is shrinking, which means they are made by machines more than ever before.
This also means that they are more complex and capable of performing more functions. If you don't know how to program one, you can always find a business that will be able to help you. Once you can do this, you can probably create any type of device that you want. There are businesses that are specifically set up to take your ideas and make them a reality, and many of these use FPGA technology. Now that you know a little more about field programmable gate arrays and their common uses, you may have a few ideas about how you could use them for projects of your own. There will always be professionals who can take your ideas, provide you with prototypes, and help you create anything that you would like to use or sell. At the very least, you now know what some of the more popular devices in the world are using; modern technology today is, at least in part, built on these unique devices. If you would like to learn more about the common uses of FPGA technology, or speak to a professional who can help you create something, all of this can be found on the web or in your local phone directory.
The Evolution of Cognitive Psychology over Time

Add to cart. Essay #: 061255. Total text length is 5,869 characters (approximately 4.0 pages).

Excerpts from the Paper

The beginning: The Evolution of Cognitive Psychology over Time. This is a brief paper that explores the evolution of cognitive psychology over time. The paper shall define the term 'cognitive' and shall explain the interdisciplinary perspective of cognitive psychology. The paper shall also describe the emergence of cognitive psychology as a discipline and shall assess the impact of the decline of behaviourism on the discipline of cognitive psychology. In the end, what we find is that cognitive psychology is characterized by a long pre-history before it burst onto the scene around the middle of the twentieth century. At the same time, cognitive psychology appears to have grown out of the shortcomings of behaviourism insofar as the latter could...

The end: .....hology sprang into action. What we can apparently say is that cognitive psychology has a long history that began with philosophy and then proceeded through psycho-physics towards structuralism and then towards behaviourism. From there, we arrived at cognitive psychology, and its success and growth to prominence owes much to the inability of behaviourism to answer certain basic questions about the human brain and about the process of learning.

Works Cited
Robinson-Riegler, G., & Robinson-Riegler, B. (2008). Cognitive Psychology: Applying the Science of the Mind. New York: Pearson.
Singh, A. (1991). The Comprehensive History of Psychology. India: Motilal Banarsidass.
Watson, J. (2008). Behaviorism. USA: Read Books.
Nitrates and Nitrites in Drinking Water

Nitrogen can take different forms in nature and is important for life in both plants and animals. The most common form of nitrogen found in well water is nitrate. Wells with high levels of nitrates are more likely to be privately owned or shallow and affected by human activity.

Health Concerns: Are nitrates or nitrites harmful to my health?

The first concern is the risk of "blue baby syndrome," also called methemoglobinemia:
• Poisoning can occur when babies drink formula made with nitrate- or nitrite-contaminated tap water.
• The baby's blood is less able to carry oxygen due to the poisoning.
• Affected babies develop a blue-gray color and need emergency medical help immediately.
• Babies under six months of age are most at risk.
The second concern is the potential formation of chemicals called nitrosamines in the digestive tract. Nitrosamines are being studied for long-term links to cancer. No standards have been set for them yet.

Source: How do nitrates and nitrites get into my water?

Nitrate contamination of water usually comes from fertilized agricultural fields, septic system failures, or compost piles that are too close to wells. Learn how to take care of your septic system.

Testing: How do I know if nitrates or nitrites are in my water?

You cannot see, smell or taste nitrates or nitrites. Testing is the only way to know if nitrates or nitrites are in your drinking water. The Health Department recommends testing your private water source for nitrates every five years. The maximum contaminant level for nitrates in water is 10.0 mg/L, and for nitrites it is 1.0 mg/L. The Health Department recommends taking action to remove or lower nitrates if levels are above 5.0 mg/L. Order a test kit for nitrates.

Treatment Options: Can I remove or lower the levels of nitrates in my water?

Nitrate levels can be lowered in drinking water with treatment.
Re-test for nitrates after any treatment system is installed to make sure levels are below the drinking water standard.

Anion Exchange

Anion exchange is a treatment like water softening, but it uses a different media that exchanges the nitrates for chloride. It is installed as a whole-house system (point-of-entry, or POE).

Reverse Osmosis

This system uses a synthetic membrane that allows water to go through but leaves nitrates behind. The membrane is continually rinsed. This system is typically installed under a kitchen sink (point-of-use), but can also be installed as a POE system. Install a system with a National Sanitation Foundation (NSF/ANSI) Standard 58 certification. Search for an NSF/ANSI-certified reverse osmosis treatment system.

Vermont Wastewater and Potable Water Revolving Loan Fund
The NeighborWorks Alliance of Vermont
Single Family Housing Repair Loans and Grants
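The testing and action thresholds quoted above can be expressed as a simple check. This is only an illustrative sketch using the values stated in the text (10.0 mg/L nitrate MCL, 1.0 mg/L nitrite MCL, 5.0 mg/L action level); the function name and messages are my own, not from any health department tool:

```python
# Thresholds quoted in the text above (mg/L).
NITRATE_MCL = 10.0          # maximum contaminant level for nitrates
NITRITE_MCL = 1.0           # maximum contaminant level for nitrites
NITRATE_ACTION_LEVEL = 5.0  # level at which action is recommended

def assess_water_test(nitrate_mg_l, nitrite_mg_l):
    """Return a coarse reading of a nitrate/nitrite water test result."""
    if nitrate_mg_l > NITRATE_MCL or nitrite_mg_l > NITRITE_MCL:
        return "exceeds MCL - treat the water before use"
    if nitrate_mg_l > NITRATE_ACTION_LEVEL:
        return "below MCL but above action level - consider treatment"
    return "within recommended limits - re-test in five years"

print(assess_water_test(3.2, 0.1))   # within recommended limits
print(assess_water_test(7.5, 0.2))   # above the 5.0 mg/L action level
print(assess_water_test(12.0, 0.2))  # exceeds the nitrate MCL
```

Note that a high nitrite reading alone is enough to exceed the MCL, matching the separate 1.0 mg/L nitrite standard in the text.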
Longman Corpus Network: The Longman Written American Corpus

What is the Longman Written American Corpus?

The Longman Written American Corpus is a dynamic corpus of 100 million words comprising running text from newspapers, journals, magazines, best-selling novels, technical and scientific writing, and coffee-table books. The composition of the corpus is constantly being refined and new material added. The design of the Longman Written American Corpus is based on the general design principles of the Longman Lancaster English Language Corpus and the written component of the British National Corpus. Like other corpora in the Longman Corpus Network, its words can be concordanced, wordlists created, and statistical features analysed, allowing lexicographers to compare and contrast usage in British and American English.
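Concordancing, mentioned at the end of the description above, can be illustrated with a minimal keyword-in-context (KWIC) routine. The sample text and function are my own illustrative stand-ins; the Longman corpus itself is not freely downloadable:

```python
def concordance(text, keyword, width=3):
    """List each occurrence of keyword with `width` words of context."""
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower().strip('.,') == keyword.lower():
            left = ' '.join(words[max(0, i - width):i])
            right = ' '.join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}")
    return lines

sample = "The corpus is a dynamic corpus of running text from newspapers."
for line in concordance(sample, "corpus"):
    print(line)
```

A real concordancer would also handle tokenization, lemmas and sorting of the context columns, but the core operation is exactly this windowed lookup.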
Water management: Where we will solve this challenge

How can we share knowledge, build capacity and better invest in digital agriculture in Zajecar? Farmers in the Zajecar municipality face challenges in knowledge sharing, capacity building and investment in innovative technologies, all of which are part of a transformation towards sustainable food and agricultural production systems. Read more

How much can water contribute to improving the livability and beauty of places? Proposals, reflections and suitable solutions for the urban context of Cagliari. Read more

How can we prepare Brno for climate change? Read more

How can we improve the fluvial environment together through a process of participatory democracy? Monastier di Treviso. For four years now, the communities of the Meolo, Vallio and Musestre rivers have run a voluntary process of participatory democracy to improve the many environmental components that link the rivers to the territory. Read more

How can we sustainably use groundwater through public artesian fountains? Sustainable use of groundwater through public artesian fountains as an alternative water supply for the city, and the human factor of environmental impact due to inadequate management and use of this resource. Read more

How can the city centre of Brno become car-free? Read more

How can we develop urban adaptation to climate change projects in water-sensitive areas? Developing urban adaptation to climate change project ideas is important to enhance socio-economic and environmental values in water-sensitive areas. Read more

How can we improve water management in Puebla City? Due to the growing city, water is getting polluted and more scarce. Help and offer solutions to improve the water situation in Puebla City. Read more

How do we make water management more sustainable?
The World Economic Forum Global Risk Report identified failure of climate change mitigation and adaptation and the water crises - droughts and floods - as the risks with the largest expected global impact over the coming decades (WEF, 2016). Flood events are occurring more frequently, causing major damage in urban areas, and the frequency and intensity of rain events will increase in the future. Besides flood risk, water shortage is an increasing concern. A recent global study shows that 1 in 4 cities is already water-stressed, and climate change and urbanisation will increase the risk of water shortage in peri-urban river basins (McDonald et al., 2014). Water is essential for life. These challenges call for a systemic approach and a transition in urban planning and urban water management. Water plays an important role in the liveability of cities. We have to rethink the way we deal with water in our cities to create green, resilient and circular cities, so-called Water Smart Cities, where collaboration between businesses, public authorities, researchers and citizens plays a unique part in ensuring rapid transition. Many cities deal with increasing risks of water shortage, floods and heat waves.
Background history

Animation of a schematic Newcomen steam engine (steam is shown pink and water blue). The atmospheric engine invented by Thomas Newcomen in 1712, often referred to simply as a Newcomen engine, was the first practical device to harness the power of steam to produce mechanical work.[1] Newcomen engines were used throughout Britain and Europe, principally to pump water out of mines, starting in the early 18th century. James Watt's later Watt steam engine was an improved version of the Newcomen engine. As a result, Watt is today better known than Newcomen in relation to the origin of the steam engine.

In industry, crushers are machines which use a metal surface to break or compress materials into small fractional chunks or denser masses. Throughout most of industrial history, the greater part of crushing and mining occurred under muscle power, as the application of force concentrated in the tip of the miner's pick or sledge-hammer-driven drill bit. Before explosives came into widespread use in bulk mining in the mid-nineteenth century, most initial ore crushing and sizing was done by hand with hammers at the mine, or by water-powered trip hammers in the small charcoal-fired smithies and iron works typical of the Renaissance through the early-to-middle industrial revolution. Only after explosives, and later early powerful steam shovels, produced large chunks of material (chunks originally reduced by hammering in the mine before being loaded into sacks for a trip to the surface), and after rails and mine railways began transporting bulk aggregations, did post-mine-face crushing become widely necessary. The earliest of these crushing operations were in the foundries, but as coal took hold the larger operations became the coal breakers that fueled industrial growth from the first decade of the 1800s to the replacement of breakers in the 1970s, and through the fuel needs of the present day.
The gradual coming of that era, and the displacement of cottage-industry-based economies, was itself accelerated first by the utility of wrought and cast iron as desired materials, giving impetus to larger operations; then, in the late sixteenth century, by the increasing scarcity of woodlands for the charcoal needed to make the newfangled window glass[2] that had become, along with the chimney, 'all the rage' among the growing and increasingly affluent middle class of the sixteenth and seventeenth centuries; and, as always, by the charcoal needed to smelt metals, especially to produce the ever larger amounts of brass and bronze,[3] pig iron, cast iron and wrought iron demanded by the new consumer classes. Other metallurgical developments, such as silver and gold mining, mirrored the practices and developments of the bulk material handling methods and technologies feeding the burgeoning appetite for more and more iron and glass, both of which were rare in personal possessions until the 1700s. Things only became worse when the English figured out how to cast the more economical iron cannon (1547), following on their feat of becoming the armorers of the European continent's powers by having been leading producers of brass and bronze guns,[3] and when Parliament, by various acts, gradually banned or restricted the further cutting of trees for charcoal in larger and larger regions of the United Kingdom.[2] In 1611, a consortium led by courtier Edward Zouch was granted a patent for the reverberatory furnace, a furnace using coal rather than precious national timber reserves,[4] which was immediately employed in glass making.
Sir Robert Mansell, an early politically connected and wealthy robber-baron figure, bought his way into the fledgling furnace company, wrested control of it, and by 1615 managed to have James I issue a proclamation forbidding the use of wood to produce glass,[4] giving his family's extensive coal holdings a monopoly on both the source and the means of production for nearly half a century. A century later, Abraham Darby relocated to Bristol, where he established a brass and bronze industry by importing Dutch workers and using them to acquire Dutch techniques. Both materials were considered superior to iron for cannon and machines, as they were better understood. But Darby would change the world in several key ways. Where the Dutch had failed in casting iron, one of Darby's apprentices, John Thomas, succeeded in 1707,[5] and, as Burke put it, "had given England the key to the Industrial Revolution".[5] At the time, mines and foundries were virtually all small enterprises except for the tin mines (driven by the price and utility of brass), and materials came out of the mines already hammered small by legions of miners who had to stuff their work into carry sacks for pack-animal slinging. Concurrently, mines needed draining, resulting in Savery's and Newcomen's early steam-driven pumping systems. The deeper the mines went, the larger the demand became for better pumps, the greater the demand for iron, the greater the need for coal, and the greater the demand for each. Seeing ahead clearly, Darby sold off his brass business interests and relocated to Coalbrookdale, with its plentiful coal mines, water power and nearby ore supplies. Over that decade his foundries developed iron casting technologies and began to supplant other metals in many applications.
He adopted coking of his fuel by copying brewers' practices.[5] In 1722 the pumping industry's need for larger cylinders met up with Darby's ability to melt sufficient quantities of pig iron to cast large, inexpensive iron cylinders instead of costly brass ones,[5] reducing the cost of cylinders by nine-tenths.[6] With gunpowder being increasingly applied to mining, rock chunks from a mining face became much larger, and blast-dependent mining itself became dependent upon an organized group, not just an individual swinging a pick. Economies of scale gradually infused industrial enterprises, while transport became a key bottleneck as the volume of moved materials continued to increase following demand. This spurred numerous canal projects and inspired the laying of first wooden, then iron-protected rails, using draft animals to pull loads in the emerging bulk-goods transportation economy. In the coal industry, which grew up hand in hand with coal's role as the preferred fuel for smelting ores, crushing and preparation (cleaning) was performed for over a hundred years in coal breakers: massive, noisy buildings full of conveyors, belt-powered trip-hammer crushing stages and giant metal grading/sorting grates. Like mine pumps, the internal conveyors and trip-hammers were contained within these 7- to 11-story buildings.

Industrial use

In operation, the raw material (of various sizes) is usually delivered to the primary crusher's hopper by dump trucks, excavators or wheeled front-end loaders. A feeder device such as an apron feeder, conveyor or vibrating grid controls the rate at which this material enters the crusher, and often contains a preliminary screening device which allows smaller material to bypass the crusher itself, thus improving efficiency. Primary crushing reduces the large pieces to a size which can be handled by the downstream machinery.
Types of crushers

Portable closed-circuit cone crushing plant.

The following table describes typical uses of commonly used crushers:

Type | Hardness | Abrasion limit | Moisture content | Reduction ratio | Main use
Jaw crushers | Soft to very hard | No limit | Dry to slightly wet, not sticky | 3/1 to 5/1 | Heavy mining, quarried materials, sand & gravel, recycling
Gyratory crushers | Soft to very hard | Abrasive | Dry to slightly wet, not sticky | 4/1 to 7/1 | Heavy mining, quarried materials
Compound crushers | Medium hard to very hard | Abrasive | Dry or wet, not sticky | 3/1 to 5/1 | Mining, building materials
Horizontal shaft impactors | Soft to medium hard | Slightly abrasive | Dry or wet, not sticky | 10/1 to 25/1 | Quarried materials, sand & gravel, recycling
Vertical shaft impactors (shoe and anvil) | Medium hard to very hard | Slightly abrasive | Dry or wet, not sticky | 6/1 to 8/1 | Sand & gravel, recycling
Vertical shaft impactors (autogenous) | Soft to very hard | No limit | Dry or wet, not sticky | 2/1 to 5/1 | Quarried materials, sand & gravel
Crusher buckets | Soft to very hard | No limit | Dry or wet and sticky | 3/1 to 5/1 | Heavy mining, quarried materials, sand & gravel, recycling

Jaw crusher

Operation of a Dodge-type jaw crusher.

Gyratory crusher

Ruffner Red Ore Mine gyratory crusher.

Cone crusher

Compound cone crusher

The compound cone crusher (VSC series cone crusher) can crush materials of over medium hardness. It is mainly used in mining, the chemical industry, road and bridge construction, and building. The VSC series offers four crushing cavities (coarse, medium, fine and superfine) to choose from. Its combination of crushing frequency and eccentricity gives materials a higher degree of comminution and a higher yield, and its enhanced laminating crushing effect on material particles produces a better cubic shape in the crushed product.
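The reduction ratios in the table can be read as a rough rule of thumb: product top size is approximately the feed top size divided by the reduction ratio. A minimal sketch of that arithmetic (the feed sizes and the dictionary below are illustrative; real product sizes depend on the crusher's closed-side setting and the circuit design):

```python
# Typical upper reduction ratios taken from the table above.
REDUCTION_RATIOS = {
    "jaw": 5,
    "gyratory": 7,
    "horizontal shaft impactor": 25,
}

def product_top_size(feed_mm, crusher):
    """Approximate product top size (mm) as feed size / reduction ratio."""
    return feed_mm / REDUCTION_RATIOS[crusher]

# A 500 mm feed through different primary crushers:
print(product_top_size(500, "jaw"))                        # 100.0 mm
print(round(product_top_size(500, "gyratory"), 1))         # 71.4 mm
print(product_top_size(500, "horizontal shaft impactor"))  # 20.0 mm
```

This is why high-ratio impactors can replace two stages of compression crushing for softer, less abrasive feeds, as the table's hardness and abrasion columns suggest.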
Symons cone crusher

The Symons cone crusher (spring cone crusher) can crush materials of above medium hardness, and is widely used in metallurgy, building, hydropower, transportation and the chemical industry. Used after a jaw crusher, it can serve as a secondary, tertiary or quaternary crushing stage. Generally speaking, the standard type of Symons cone crusher is applied to medium crushing, the medium type to fine crushing, and the short head type to superfine crushing. As a cast-steel technique is adopted, the machine has good rigidity and high strength.

Single cylinder hydraulic cone crusher

Multi-cylinder hydraulic cone crusher

The multi-cylinder hydraulic cone crusher is mainly composed of a main frame, eccentric shaft, crushing cone, mantle, bowl liner, adjusting device, dust ring, transmission device, bowl-shaped bearing, adjusting sleeve, hydraulic control system and hydraulic safety system. The electric motor drives the eccentric shaft in a periodic swinging movement about the shaft axis, so that the surface of the mantle alternately approaches and leaves the surface of the bowl liner, and the material is crushed by squeezing and grinding inside the crushing chamber. The safety cylinder ensures safe operation: the hydraulic system can lift the supporting sleeve and static cone and automatically clear blockages in the crushing chamber when the machine chokes. The maintenance rate is thus greatly reduced and production efficiency greatly improved, since blockages can be removed without disassembling the machine.

Impact crusher

Horizontal shaft impactor (HSI) / Hammermill

HSI crushers break rock by impacting it with hammers fixed upon the outer edge of a spinning rotor. HSI machines are sold in stationary, trailer-mounted and crawler-mounted configurations. HSIs are used in recycling, hard rock and soft materials.
In earlier years the practical use of HSI crushers was limited to soft, non-abrasive materials such as limestone, phosphate, gypsum and weathered shales; however, improvements in metallurgy have changed the application of these machines.

Vertical shaft impactor (VSI)

Scheme of a VSI crusher with air-cushion support.

VSI crushers use a different approach, involving a high-speed rotor with wear-resistant tips and a crushing chamber against which the rock is 'thrown'. VSI crushers utilize velocity rather than surface force as the predominant force to break rock. In its natural state, rock has a jagged and uneven surface. Applying surface force (pressure) results in unpredictable and typically non-cubical particles. Utilizing velocity rather than surface force allows the breaking force to be applied evenly both across the surface of the rock and through its mass. Rock, regardless of size, has natural fissures (faults) throughout its structure. As rock is 'thrown' by a VSI rotor against a solid anvil, it fractures and breaks along these fissures. Final particle size can be controlled by 1) the velocity at which the rock is thrown against the anvil and 2) the distance between the end of the rotor and the impact point on the anvil. The product resulting from VSI crushing is generally of a consistent cubical shape, such as that required by modern Superpave highway asphalt applications. This method also allows materials with much higher abrasiveness to be crushed than is possible with an HSI and most other crushing methods.

Mineral sizers

The three-stage breaking action: initially, the material is gripped by the leading faces of opposed rotor teeth. These subject the rock to multiple point loading, inducing stress into the material to exploit any natural weaknesses.
At the second stage, material is broken in tension by being subjected to three-point loading, applied between the front tooth faces on one rotor and the rear tooth faces on the other rotor. Any lumps of material that remain oversize are broken as the rotors chop through the fixed teeth of the breaker bar, thereby achieving a three-dimensionally controlled product size.

The rotating screen effect: the interlaced toothed rotor design allows free-flowing undersize material to pass through the continuously changing gaps generated by the relatively slow-moving shafts.

The deep scroll tooth pattern: the deep scroll conveys the larger material to one end of the machine and helps to spread the feed across the full length of the rotors. This feature can also be used to reject oversize material from the machine.[7]

Crusher bucket

A crusher bucket is an attachment for hydraulic excavators. It consists of a bucket with two crushing jaws inside, one fixed and the other moving back and forth relative to it, as in a jaw crusher. Crusher buckets are manufactured with a high-inertia power train, circular jaw movement and an anti-stagnation plate, which prevents large pieces from getting stuck in the bucket's mouth without entering the crushing jaws. The crushing jaws are also placed in a cross position; this position, together with the circular motion, gives these crusher buckets the ability to grind wet material. Crushing jaw movement in an Xcentric Crusher bucket (a patented technology).

For the most part, advances in crusher design have moved slowly. Jaw crushers have remained virtually unchanged for sixty years. More reliability and higher production have been added to basic cone crusher designs, which have also remained largely unchanged. Increases in rotating speed have provided the largest variation.
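The VSI section above notes that these machines break rock with velocity rather than pressure, and that rotor speed controls the final particle size. A back-of-the-envelope sketch of why speed matters so much: specific impact energy scales with the square of tip speed (E/m = v²/2). The rotor diameter and speeds below are assumed for illustration, not taken from the text:

```python
import math

def tip_speed_m_s(rotor_diameter_m, rpm):
    """Peripheral speed of the rotor tip in m/s."""
    return math.pi * rotor_diameter_m * rpm / 60.0

def specific_energy_j_per_kg(v):
    """Kinetic energy per kilogram of rock leaving the rotor at speed v."""
    return 0.5 * v ** 2

# An assumed 1 m rotor at a few illustrative speeds:
for rpm in (1000, 1500, 2000):
    v = tip_speed_m_s(1.0, rpm)
    e = specific_energy_j_per_kg(v)
    print(f"{rpm} rpm: v = {v:.1f} m/s, E = {e:.0f} J/kg")
```

Doubling the rotor speed quadruples the energy delivered per kilogram of rock, which is why speed, together with the rotor-to-anvil distance, is the main product-size control on a VSI.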
For instance, a 48-inch (120 cm) cone crusher manufactured in 1960 may be able to produce 170 tons/h of crushed rock, whereas the same size crusher manufactured today may produce 300 tons/h. These production improvements come from speed increases and better crushing chamber designs. See alsoEdit 1. ^ "Science Museum - Home - Atmospheric engine by Francis Thompson, 1791". Retrieved 2009-07-06. 2. ^ a b James, Burke (1978). "Chapter 6. Fuel to the Flame". Connections, UK ed. "Connections: Alternative History of Technology" (Time Warner International/Macmillan 1978) (ninth, pbk ed.). Little, Brown and Company (North America) / Macmillan, London. p. 304. ISBN 978-0-316-11681-7. by 1600, England was facing an acute timber crises, thanks largely to the increase in glass production 3. ^ a b Burke, James, "Connections", page 167 4. ^ a b Burke, James, "Connections", page 168 5. ^ a b c d Burke, James, "Connections", page 170 6. ^ Clark, Ronald W. (1985). "page 63". Works of Man: History of Invention and Engineering, From the Pyramids to the Space Shuttle (1st American Edition. 8"x10" Hard cover ed.). Viking Penguin,Inc., New York, NY, U.S.A., (1985). pp. 352 (indexed). ISBN 9780670804832. Within a few years, however, the cost was reduced by nine-tenths as it was found cast-iron cylinders could be produced with sufficient accuracy. 7. ^ The MMD Group of Companies."MMD Sizers". The MMD Group of Companies, 2005, p 3.
3 Things to Know About Battery Sulfation

An automobile relies on two separate yet equally important energy sources. First, gasoline provides the power needed to run your engine and move your wheels. Second, your car battery provides the electrical energy necessary to power everything from your headlights, to your radio, to your cell phone charger. Without a working battery, your car won't even start. Manufacturers have cleverly designed automotive batteries to recharge when you drive your car. Yet as time goes on, a battery still tends to develop problems that can affect its ability to provide the needed charge. One little-known yet all-too-common problem goes by the name of sulfation. This article takes a closer look at three key things to know about battery sulfation.

1. Sulfation Accounts for Many Early Battery Failures

If you have ever had a battery that died before the end of its expected lifespan, sulfation may have played a role. Sulfation ranks as a common cause of early failure for lead-acid car batteries. Even before it causes your battery to die altogether, sulfation affects your car in a number of potentially debilitating ways. Sulfation involves the formation of lead sulfate crystals on the surface of the battery's plates. At small levels, these crystals don't pose any serious threat. But as the deposits grow progressively larger, they effectively cut down on the amount of active material inside your battery. As a result, your battery's performance swiftly drops off. A battery suffering from sulfation exhibits a significant decrease in cranking power. You may notice that your battery struggles when attempting to start up your car. Sulfation also increases the chances of boil-over, which involves acid boiling and spilling out of the battery. Sulfation also decreases the effective run time of your battery between charges. Any of these effects can contribute to a car battery dying earlier than expected.

2. Sulfation Affects City Drivers More

Car owners should understand that sulfation happens naturally and affects all batteries to some degree. Small amounts of sulfate crystals form virtually every time your battery gets used. Yet batteries subjected to certain conditions stand a much greater chance of developing debilitating sulfation. One of the most common causes of sulfation involves batteries that remain perpetually undercharged. If your battery never charges all the way, it won't reach a state of full saturation. On a chemical level, such undercharging makes it much easier for sulfate crystals to form. A fully charged battery, by contrast, possesses enough charge to minimize sulfation. If you regularly use your car for highway travel, you don't have to worry about sulfation as much as a city driver. Those longer drives at higher speeds give your alternator plenty of time to bring your battery to a full charge. City drivers, on the other hand, run a much greater risk of sulfation, since shorter stop-and-go driving often fails to charge the battery all the way.

3. Sulfation Can Often Be Reversed

The chemical change that leads to sulfation can also work in reverse. When your battery runs, sulfates form. The process of charging your battery causes the sulfates to undergo a process known as gassing. Gassing causes the lead sulfate crystals to turn back into sulfuric acid and lead. As sulfation grows more advanced, often because of a chronically undercharged battery, reversing the process grows harder and harder for your alternator to accomplish on its own. Yet you can often restore proper performance using a battery desulfator. A battery desulfator eliminates even the toughest sulfate deposits by exposing your battery to bursts of high voltage. Battery desulfators do their job so well that they can even bring dead batteries back to life.
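The undercharging effect described above can be illustrated with a deliberately toy simulation: each drive deposits a little sulfate, and each charge reverses a fraction of it in proportion to how fully the battery recharges. The deposit and reversal rates are invented for illustration; only the qualitative trend (chronic undercharging accumulates sulfate) reflects the text:

```python
def residual_sulfate(trips, charge_fraction, deposit=1.0, reversal=0.95):
    """Toy model: sulfate remaining after `trips` discharge/charge cycles.

    charge_fraction is how fully the battery recharges each trip
    (1.0 = full highway-style recharge, lower = city stop-and-go).
    """
    sulfate = 0.0
    for _ in range(trips):
        sulfate += deposit                           # crystals form in use
        sulfate *= 1 - reversal * charge_fraction    # charging dissolves some
    return sulfate

highway = residual_sulfate(200, charge_fraction=1.0)  # full recharge each trip
city = residual_sulfate(200, charge_fraction=0.4)     # chronic undercharge
print(f"highway driver: {highway:.2f}, city driver: {city:.2f}")
```

In this toy model the fully recharged battery settles at a small steady-state sulfate level, while the undercharged one accumulates many times more, mirroring the city-versus-highway contrast in point 2.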
For more information about how to prevent sulfation and keep your car battery in top shape, contact the auto experts at Evans Tire & Service Centers.
Is religion a positive reality in your life? If not, have you lost anything by forfeiting this dimension of your humanity? This book compares the theology of Tillich with the psychology of Jung, arguing that they were both concerned with the recovery of a valid religious sense for contemporary culture. Paul Tillich, Carl Jung and the Recovery of Religion explores in detail the diminution of the human spirit through the loss of its contact with its native religious depths, a problem on which both men spent much of their working lives and energies. Both Tillich and Jung work with a naturalism that grounds all religion in processes native to the human being. Tillich does this in his efforts to recover the point at which divinity and humanity coincide and from which they differentiate. Jung does this by identifying the archetypal unconscious as the source of all religions, now working toward a religious sentiment of more universal sympathy. This book identifies the dependence of both on German mysticism as a common ancestry and concludes with a reflection on how their joint perspective might affect religious education and the relation of religion to science and technology. Throughout the book, John Dourley traces the roots of both men's ideas in mediaeval theology and Christian mysticism, making it ideal reading for analysts and academics in the fields of Jungian and religious studies.
Czech Republic

Share this!

The Czech Republic (Czech: Česká republika, pronounced [ˈtʃɛskaː ˈrɛpublɪka]; short form Česko [ˈtʃɛsko]) is a landlocked country in Central Europe. The country is bordered by Poland to the northeast, Slovakia to the east, Austria to the south, and Germany to the west and northwest. The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament has two chambers: the Chamber of Deputies and the Senate. It is a member of the European Union, NATO, the OECD, the OSCE, the Council of Europe and the Visegrád Group. The Czech state, known as Bohemia and later as the Bohemian Crown, was formed in the late 9th century. The country reached its greatest territorial extent during the 13th and 14th centuries, under the rule of the Přemyslid and Luxembourg dynasties. Following the Battle of Mohács in 1526, the Kingdom of Bohemia was integrated into the Habsburg monarchy as one of its three principal parts, alongside Austria and Hungary. Bohemia later became the industrial powerhouse of the monarchy and the core of the Republic of Czechoslovakia, which was formed in 1918 following the collapse of the Austro-Hungarian empire after World War I. After the Munich Agreement (signed by Nazi Germany, France, Britain and Italy), the Polish annexation of Zaolzie and the German occupation of Czechoslovakia, and the consequent disillusion with the Western response and gratitude for the liberation of the major portion of Czechoslovakia by the Red Army, the Communist Party of Czechoslovakia won a plurality (38%) in the 1946 elections. The Czech Republic made economic reforms such as fast privatizations. Annual gross domestic product (GDP) growth stood at around 6% until the outbreak of the recent global economic crisis. The country is the first former member of Comecon to achieve the status of a developed country according to the World Bank.
In addition, The Czech Republic has the highest human development in Central and Eastern Europe, ranking as a "Very High Human Development" nation. It is also ranked as the most democratic and healthy (by infant mortality) country in the region and as the third most peaceful country in Europe.
Stereophonic & Monaural Audiotester

What is Stereophonic Sound? Over the past few days we introduced the notion that stereo, and the intricacies that make up stereo, may not be widely known to those enjoying her esteemed capabilities. So, in this dramatic conclusion, The Prudent Groove offers, without sarcastic interruption, the cliffhanging outcome to, What is STEREOPHONIC SOUND?

Presented by RCA Victor

The two channels of sound picked up by the needle are then unscrambled by the stereo cartridge. The cartridge directs them into separate amplifier circuits, where they are magnified and fed in turn into two separate loudspeakers. The two speakers finally translate the musical impulses into intelligible sound which you hear in your living-room stereophonically. The net of it is an overlapping and blending which gives music a more natural, more dimensional sound. For the first time, your ears will be able to distinguish where each instrument and voice comes from: left, right or center. In short, enveloped in solid sound, you will hear music in truer perspective. Stereophonic sound is the latest step in an improvement process that began about 80 years ago. In listening to it, you will enjoy the highest achievement yet in the art of recording. "Go Stereo"

Yesterday we inaugurated the intricate fascinations of Stereophonic Sound. Today, we pick up where we left off and continue with RCA Victor's detailed explanation of the technical differences between recording a monaural record and recording a stereophonic record. So, without further ado, I present part 2 (of 3) of RCA Victor's What is Stereophonic Sound?

Presented by RCA Victor

Let's compare hearing to seeing for a moment. You see images on your left with your left eye, images on the right with your right eye.
Yet, because your brain can do two jobs at once, you get a total unified picture in its true perspective. Stereo sound is simply the attempt to give you music as it is heard by both ears. Essentially, what happens is that two microphones, left and right, pick up what goes on in the orchestra at the recording session. These two microphones feed the musical impulses to two soundtracks on tape. The two soundtracks are then pressed into the grooves on a stereo record.

The sound from a record partly depends upon how the needle moves or vibrates. For example, when Edison designed his phonograph to play cylindrical records, he made the needle vibrate up and down. This is called the "hill and dale" system, or vertical cutting. On a conventional, monaural record, however, the needle moves from side to side, or laterally. The lateral movement has been used ever since the flat record replaced Edison's cylinder. What about the stereo record? Each groove on a stereo record has two soundtracks cut into it, and they are cut into it both laterally and vertically. In order to pick up the two soundtracks, a stereo needle capable of moving complexly has been developed; it vibrates both laterally and up and down. Simultaneously, the lateral movement picks up one channel of recorded sound, the vertical movement the other.

Think you can speak confidently about the intricate details of stereophonic sound? Think you've licked the volatile, short-lived, simultaneous ear experience? Over the next three days, The Prudent Groove will leisurely lift the contents of one coveted RCA Victor insert explaining, in intimate detail, exactly, What is STEREOPHONIC SOUND? The following is presented, without esteemed interruption, by The Prudent Groove. Part 2 will follow tomorrow. I'll be completely honest and admit that I learned more than I thought I needed while transcribing this informative insert. Maybe RCA Victor was onto something.
Presented by RCA Victor

Stereophonic sound on records is finally here. It will be widely discussed, widely written about, and, perhaps, widely misunderstood. It cannot help but be; it is a complex achievement as well as an extraordinary one. We offer the following primer on the subject with the hope that it will both help you in understanding how and why stereo works and enhance the hours of listening pleasure stereo will offer in your home. Before stereo recording techniques were developed, the impulses of music were picked up by only one microphone. These impulses were then fed to one tape and from there to the conventional, monaural record, which you heard in your living-room through one loudspeaker. The conventional record offered brilliant sound and exciting sound, but, of necessity, it also offered only one-dimensional sound.

Now, the simple and obvious fact remains that we all have two ears, and we are used to hearing things dimensionally. Generally speaking, your left ear has a tendency to hear what goes on in the left side of a room, your right ear, what goes on in the right side of the room. Your brain then does two jobs. It combines both the impression received by the left ear and that received by the right ear into one total impression which we call music. At the same time, it retains the spatial or dimensional impression, music to the left and music to the right.
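The monaural and stereo signal paths the insert describes can be sketched numerically. This is a toy illustration under the article's own simplification (one channel riding the lateral needle motion, the other the vertical); it is not real audio-engineering code, and the sample values are invented:

```python
# Toy sketch of the mono vs. stereo signal flow described in the insert.
# Assumption: samples are plain floats in [-1.0, 1.0]; this illustrates
# the signal paths only, not actual record-cutting mathematics.

def mono_mix(left, right):
    """One microphone / one channel: both sources collapse into one signal."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def stereo_groove(left, right):
    """Per the insert: one channel rides the lateral motion of the needle,
    the other the vertical motion, within the same groove."""
    return [{"lateral": l, "vertical": r} for l, r in zip(left, right)]

left = [0.5, 0.2, -0.1]   # e.g. violins on the left of the orchestra
right = [0.1, 0.4, 0.3]   # e.g. brass on the right

print(mono_mix(left, right))        # a single combined channel
print(stereo_groove(left, right))   # both channels preserved separately
```

The point of the sketch: the mono mix is irreversible (left and right can no longer be separated), while the stereo groove keeps both, which is exactly the "two jobs" the brain performs with two ears.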
Adaptive Keyboarding FAQs

What grade range is appropriate for Adaptive Keyboarding? The Great Keyboarding Adventure is ideal for grades 3-5 and The Urban Keyboarding Explorer best suits grades 6-8 and older. Teachers should choose one or the other and not assign both versions of Adaptive Keyboarding to the same class.

What should teachers do when first assigning Adaptive Keyboarding? We recommend explaining the importance of distributed practice in learning a skill like keyboarding. It's important for students to know that if they practice consistently and are patient, they will make progress. Providing a short demonstration of the program is also helpful, along with discussing the student dashboard, their routine, and setting WPM and accuracy goals for the end of the year. Our recommendations can be found later in this FAQ and on the Resources page in the educator dashboard. Teachers should also reinforce keyboarding ergonomics, such as correct posture and hand placement. It's also important to remember to assign Guided Practice assignments to ensure that students apply what they learn immediately after their lessons.

What are Guided Practices? Guided Practices (GP) follow the EasyTech Keyboarding Lessons and need to be assigned individually by the educator. For example, if a student completes the 'Keyboarding: Home Row' lesson, the student can then be assigned the Guided Practice 'Home Row F and J.' When students click to launch a GP, they will be taken into a Keyboarding module and will complete this one assignment. Teachers can assign scaffolded Guided Practice curriculum, focusing on a specific keyboarding skill set (such as home row, upper row, lower row, etc.), to ensure students are proficient with one skill area before progressing to the next.

What can students expect from the Formative Assessment? Students will be asked to type a passage that covers the entire keyboard and is 76 words in length.
The time spent on the Formative Assessment may vary depending on the student's typing proficiency. The program uses Formative Assessment results as a starting point to diagnose the areas in which the student needs the most practice; Adaptive Keyboarding will assign typing exercises tailored to the individual student's needs. Students must type the entire passage or complete the amount of time set by the teacher in the Adaptive Keyboarding class settings before they can move on to other Adaptive Keyboarding exercises. If a student stops or closes the program without finishing the Formative Assessment, their score will not record and they will have to restart the assessment the next time they use the program. Students can expect to complete the Formative Assessment at the beginning of each level; students advance to a new level after logging 60 minutes of active typing practice.

How can educators help their students succeed in the Formative Assessment? The Formative Assessment has students type a passage that covers the entire keyboard and is 76 words long. The default setting in Adaptive Keyboarding is that there is no time limit for the Formative Assessment, meaning students can type in the Formative Assessment for as long as they need until they've typed the entire passage. As it may take longer for some students to complete the whole Formative Assessment, teachers may prefer to set a time limit on the Formative Assessment and can do so in their Teacher Dashboard under 'Settings.' This means that a student will actively type for that set amount of time, the Formative Assessment will end, and the program will assign typing challenges based on what the student completed. If the student does not type the entire passage in that time frame, the student will not be penalized nor will the experience be affected.
Students must type the entire passage or complete the specified amount of time set by the teacher in the Adaptive Keyboarding class settings before they can move on to another Adaptive Keyboarding exercise. If a student stops or closes the program without finishing, their score will not log and they will have to restart the assessment the next time they use the program.

Formative Assessment Time Limit Recommendations by Grade Level:
• Grades 3-5: It is recommended that students spend 1-3 minutes on the Great Keyboarding Adventure Formative Assessments. The Great Keyboarding Adventure assessment contains 76 words.
• Grades 6-8: It is recommended that students spend 3-5 minutes on the Urban Keyboarding Explorer Formative Assessments. The Urban Keyboarding Explorer assessment contains 76 words.

It is recommended that students type as long as they can without frustration. If students are struggling, please consider reducing the time limit on the assessment. The more students practice, the more they will improve. If your students are struggling to complete the assessment, assign the EasyTech Finger Placement Lessons and Guided Practices to build confidence in their typing ability.

What types of exercises does Adaptive Keyboarding prescribe? There are three types of exercises students will complete in the program:
• Muscle Memory: These exercises are based on the repetition of different iterations of the prescribed target keys.
• Word Challenge: The program searches a database of word banks for words containing a high density of target letters, including sight words, digital literacy vocabulary, commonly misspelled words, and vocabulary commonly used in standardized assessments, as well as other word libraries. The libraries are grade specific to reinforce learning in other areas.
• Zone Challenge: These exercises target the finger that needs strengthening.

What are Story Challenges?
In addition to exercises, students will be prescribed Story Challenges to choose from. Unlike the exercises, which students need to continue until they reach the end, Story Challenges are timed so that students type as much as they can before the clock runs out. The stories themselves are pulled from classic literature and grade-level curriculum, but their difficulty is based on the past performance of the student. For example, students with high WPM and accuracy will receive Story Challenges with more capital letters and punctuation marks. Even students who type very fast will not reach the end of a Story Challenge. Students using the Great Keyboarding Adventure will be stopped after 3 minutes of typing and those using Urban Keyboarding Explorer will be stopped after 5 minutes.

Is there a list of how many lessons there are in Adaptive Keyboarding? Since there is an unlimited number of typing exercises available in Adaptive Keyboarding, we don't have a comprehensive list that will capture them all. Your students will have plenty of material to practice with to strengthen their typing accuracy and speed!

How long does each level take students to complete? Each level takes 60 minutes to complete and is based on practice time in the keyboard exercises. This will typically take students two weeks if they practice for the recommended 10-15 minutes 2-3 times each week.

Is there a way to change which level students are on in the practice? Since the levels are based on practice time, teachers and students are not able to change which level they are on. Once the user completes 60 minutes of active practice, s/he will progress to the next level.

Can a student pause an exercise in Adaptive Keyboarding? If a user stops typing for 30 seconds, the exercise will automatically pause. A modal will then appear on the screen that will instruct the user to hit the 'Enter' or 'Return' key or to click the button on the screen to continue the exercise.
How do students complete Adaptive Keyboarding? Adaptive Keyboarding is different from other lessons and exercises in that it's not meant to be "completed" in a conventional sense. Students are meant to continually practice in the program, even after they have reached their goals.

I want my students to go into Adaptive Keyboarding for weekly practice; how will students find it? There are two things educators can do. We recommend that teachers duplicate the enrollment of their main class to create a separate class just for Adaptive Keyboarding. This helps students know where to go every day without having to potentially sift through their assignment list. The second option is to make Adaptive Keyboarding the first assignment students work on in the class. As students complete other curriculum items, Adaptive Keyboarding will remain at the very top of their assignment list, helping them find it easily.

How do students progress if my class is set to the 'Forced' assignment sequence? If a class is set to 'Forced,' that typically means a student is required to complete one assignment at a time and can only move on once that lesson is complete. Since Adaptive Keyboarding is meant for continual practice and cannot be "completed," it is an exception to the rule. Once students log their first score in Adaptive Keyboarding that meets or exceeds the class's minimum passing score, they will be able to move on to the next lesson, and Adaptive Keyboarding will remain active in their list of assignments.

I want to set realistic goals for my students; what are recommended typing goals? The Goals by Grade (below) should serve as a suggested guide to assess your students' keyboarding performance. These are based on education standards for keyboarding curriculum as evaluated through a variety of district and state expectations. Accuracy and words-per-minute goals can be adjusted depending on the unique needs of your student or class.
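As a rough companion to the goal-setting discussion above, the conventional typing metrics can be computed as follows. This sketch assumes the standard "5 characters = 1 word" convention; the exact formulas Adaptive Keyboarding uses are not documented here, so the function names and numbers are illustrative only:

```python
# Conventional typing metrics (assumption: the common 5-characters-per-word
# convention, not necessarily what Adaptive Keyboarding computes internally).

def gross_wpm(chars_typed, minutes):
    """Words per minute, counting every 5 characters as one word."""
    return (chars_typed / 5) / minutes

def accuracy_pct(correct_chars, chars_typed):
    """Percentage of characters typed correctly."""
    return 100 * correct_chars / chars_typed

# A hypothetical grade 6-8 student finishing the 76-word (~380-character)
# assessment in 4 minutes, with 19 mistyped characters:
print(round(gross_wpm(380, 4)))   # 19 WPM
print(accuracy_pct(361, 380))     # 95.0
```

Working backwards from the same formula, a 3-5 minute time limit on a 76-word passage corresponds to a target of roughly 15-25 WPM, which can help when picking the time-limit setting for a class.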
How do I change Adaptive Keyboarding settings for my class? To change class settings, teachers should open their Teacher Dashboard and click on the 'Settings' tab in the top right corner of the screen. Here teachers will be able to change the class's Target Goals, how much Game Time students earn based on how much typing practice they've completed, the Grade Level for the typing exercises, and the Assessment Time Limit. Teachers can also change these settings for individual students by clicking on the 'Student' tab. Any changes made under this tab will only be applied to the selected student. Unless a student has been given individualized settings, the settings applied to the class will auto-populate in the student tab.

How can I view the student experience in Adaptive Keyboarding? Educators can click on 'Student Mode' when in the Dashboard to experience Adaptive Keyboarding as a student. Doing this will give teachers a firsthand look at the Formative Assessment, Challenges, DinoTyper, and the Student Dashboard. Educators can click on the 'Return to Teacher Mode' button at the bottom of their screen to exit the student view.

How often are new scores recorded in my gradebook? Scores will appear in educator gradebooks as percentages based on the accuracy of the last 5 practices the student completed (please note that the average over the last 5 exercises is weighted, as some practices are longer than others). Scores update automatically in the teacher's and student's dashboards.

Where do I access Adaptive Keyboarding reports and what information is pulled? There are two places you can access Adaptive Keyboarding reports at the student and class level: through the 'Adaptive Keyboarding Report' link in your class reports tab and from the 'View' button on the 'Curriculum Item Details' page.
Your Adaptive Keyboarding Educator Interface will launch and you can select the ‘Report’ tab to pull individualized student or class reports that document the student’s recent words per minute (WPM), recent accuracy rate, time spent using Adaptive Keyboarding, current prescriptions and more. These reports can be exported and used outside of the app. The individual student report is a PDF of the student’s keyboarding data for the time period selected and the class report is a spreadsheet and includes all the data about students in the class. Coordinators can pull Adaptive Keyboarding Raw Data reports from their coordinator homepage by selecting the Adaptive Keyboarding link under the ‘Keyboarding Tools’ header. This will launch the Adaptive Keyboarding Administrator Interface. Administrators can view a snapshot of their district’s keyboarding metrics and, from the ‘Report’ tab, pull a spreadsheet which contains each student’s recent and initial words per minute (WPM), recent and initial accuracy rate, best streak, total minutes, and total exercises launched. More information about the Adaptive Keyboarding reports available to coordinators can be found here. More information about the Adaptive Keyboarding Reports available to teachers can be found here. Why am I seeing ‘Anonymous School’ on graphs and reports? If your students access Adaptive Keyboarding outside of the platform using a third-party learning management system such as Google Classroom, Schoology, or Canvas, you will see ‘Anonymous School’ on graphs and reports instead of your school’s name. These outside systems do not designate students by school; therefore, the school is conveyed as ‘Anonymous’ and all student scores are grouped together regardless of campus. Will students’ scores stick with their accounts? Yes! Students’ statistics will be retained year after year and from class to class. 
This means that if two teachers have both assigned Adaptive Keyboarding to the same student, then that student's progress will be reflected in both classes. He or she will not start over at the beginning. Likewise, students who used Adaptive Keyboarding last school year will begin where they left off when they start the next school year. Reports will be available based on the date range that the teacher sets. Since the Formative Assessment occurs at the beginning of each level and resets the prescription (and problem keys are reassessed throughout the level, adapting to the student's needs), reports will pull the most recent data from the student's account.

Will Adaptive Keyboarding work on mobile devices? Adaptive Keyboarding is compatible with mobile devices that are connected to an external keyboard. Please note that using Adaptive Keyboarding on a mobile device without an external keyboard is not recommended. If using a mobile device, Adaptive Keyboarding should be launched through a mobile browser on that device (specifically mobile Safari or mobile Chrome on an iPad or a Nexus 10) and the user should turn the tablet to the 'landscape' position; the mobile app does not support Adaptive Keyboarding.

Does this require Flash Player to run? No. Flash Player is no longer used to develop the curriculum; Adaptive Keyboarding is written in HTML5.

Why are students getting an error message when they start Adaptive Keyboarding? An error message in Adaptive Keyboarding can be caused by your device's clock being inaccurate. This can affect how our servers pass information. Verifying that your computer's clock is accurate against international time can often resolve this issue.

Related Videos: Adaptive Keyboarding Webinar; Adaptive Keyboarding Walk-Through
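The gradebook score described in this FAQ is a weighted average of the last 5 practices, with longer practices weighing more. A hypothetical sketch of such a weighting (the program's actual weighting scheme is not published, so weighting by character count here is an assumption):

```python
# Hypothetical length-weighted average of the last 5 practice scores.
# Assumption: each practice is weighted by how many characters it
# contained; the program's real weighting scheme is not documented here.

def gradebook_score(practices):
    """practices: list of (accuracy_percent, chars_typed) tuples,
    oldest first; only the last 5 are considered."""
    recent = practices[-5:]
    total_chars = sum(chars for _, chars in recent)
    return sum(acc * chars for acc, chars in recent) / total_chars

# A short, sloppy warm-up counts for less than a long, accurate practice:
print(gradebook_score([(90, 100), (100, 300)]))  # 97.5
```

This mirrors the FAQ's note that the "last 5 exercises" average is weighted rather than a plain mean, which here would have given 95 instead of 97.5.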
Jonathan Woetzel discusses the role of artificial intelligence (AI) in driving China's economic growth and its growing influence in our lives.

Learning a language is an expensive proposition. Wang Li, a Chinese entrepreneur, developed an app called Liulishuo that allows users to speak directly into their phones and receive immediate feedback on their pronunciation. The app leverages artificial intelligence (AI) technologies, such as reinforcement learning, to constantly learn and improve its processes. In China and other countries, factories are rushing to build systems to replace thousands of humans with less fallible machines. "Techno-disruption" of the economy is increasingly studied by consultants, institutions and private businesses that are trying to understand the role of technology in changing manufacturing and workforces. In order to better understand what AI means for China, US-China Today spoke with Jonathan Woetzel, a senior partner at McKinsey in Shanghai, who focuses on understanding the role technology plays in economic growth.

Since data is so important to most machine learning and AI models, how do you view the role of data sharing in terms of helping Chinese companies succeed within the field?

Chinese companies have the advantage of having Chinese customers, and there are a lot of them. So data is plentiful, with about 650 million mobile internet users now and 700 million-plus [who have internet access]. It's just a huge number of people who have an enormous ability to generate data. So it's two or three times whatever's in Europe or the United States. That's a big advantage for Chinese companies in terms of being able to look at patterns and understand a behavior, all of which is fairly unregulated. Data privacy rights and protocols have not really been established for the most part, so companies have pretty free and open access to all sorts of consumer data.
The government role in this is relatively passive, and government typically doesn't regulate services until there's a problem. For example, Alibaba had been running an online bank for about 11 years before the government decided it was appropriate to license it. And even when there are licenses, for example for insurance providers, the government issues one national license as opposed to the U.S., where it'd be 50 state-by-state licenses. So it's much easier to share data in China than it is in the U.S. in that particular industry. China does of course need to put in more legal protection for privacy, but at this point it's not stopping the development of AI in any way.

How do you view research papers from secondary or tertiary Chinese universities? Have you seen AI innovation diffuse out to a lot of the second- and third-tier cities?

First of all, there's a very clear difference in quality across Chinese universities. The quality of Chinese publications isn't as high as those from the U.S. or the U.K., even though there are many more of them. China has, I think, 2,000 degree-granting institutions, but most of those 2,000 are repurposed technical institutions. The depth and breadth of the teaching is not really comparable to a U.S. college or something. I'm sure that there's a lot more [innovation], but we still would question to some extent the influence. But that said, at some point, given that the vast majority of the scientific talent on this planet is Chinese, I've no doubt that they will catch up.

Have you seen Chinese research start to diffuse down into manufacturing and other sectors, or are you still mostly seeing it have an impact for internet companies like Alibaba, Tencent, etc.?

[I think there is an equal amount] of research done in the corporate commercial sphere as in the academic.
And within the corporate commercial sphere, in terms of which industries are digitized, ICT is at the top of the list, which would include the internet companies, but would also include electronics manufacturers like Lenovo, or telecom equipment manufacturers like Huawei or ZTE. They would be early adopters [of AI]. They think about their Manufacturing 4.0 strategies, which are essentially AI-enabled strategies. And those are about substituting capital for labor to achieve economies of scale, better yields, lower energy costs and so forth. So, I think everyone gets to that point and incorporates AI and machine learning into their operations.

How do you view the large Chinese investment into Western tech companies? Are you seeing any of the technologies picked up in Silicon Valley coming back and being applied and enhanced within the Chinese tech sector?

You could say, "Is Didi an example of Uber, or Zhaopin an example of LinkedIn?" And to some extent, of course they are. But I don't think you can really patent an idea. By the same token, I think you should start thinking about stuff that comes the other way as well [from China to the U.S.]. QR codes are an example of Chinese innovation that I think should be accepted in the U.S., but for regulatory reasons probably won't be. Chinese companies and the Chinese government are interested in adopting from the West. Typically, Chinese adoption happens based on commercialized products. So they're not as interested in taking something that's in the blueprint stage and trying to figure out how to make it work. This is more likely to be the way we see products in the valley get to China: they're first commercialized and deployed, mostly in the U.S., then they are deployed in China.

How does a statement or directive issued by Xi Jinping, for example, diffuse into the economy? What about on a local level?
Obviously, everybody does listen fairly carefully to the words of the leadership, mostly because they view it as good for their political future. The way that China works is that government spending is limited like every other government in the world: it's typically re-allocating a budget every year. Everybody has an IT budget, so sometimes the IT budget is spent on hardware, or sometimes on software. That's true at all levels of government. But the government generally tries to decentralize, so all funding and decisions are typically made at a city, or even a district level within a city. That's when it comes to capital expenditure (CAPEX) by the government. The government's role is to enable financing to be provided at commercial rates, at relatively low costs. That's appropriate, given the fact that China has more money than it knows what to do with. So the government makes sure that the financing is available, but it still requires an entrepreneur and a business structure to create the business. So the government doesn't typically own the startup, in other words. That said, there might be a fund that it controls and invests in, but it won't own it outright. Another aspect is actually national infrastructure, like broadband infrastructure, and that's funded by national companies like China Mobile or State Grid. And again, they have more money than they know what to do with. So China's one big advantage is having actual functioning infrastructure, which is funded directly through these national companies. So that's gonna be a large and increasingly important differentiator for China going forward.
Basal Metabolic Rate

Basal Metabolic Rate is an estimate of how much energy (in calories) the body burns when at rest. For example, the basal metabolic rate of Sally, a 20-year-old woman who is 5'6" and weighs 150 pounds, is about 1520 kcal per day. If Sally stays in bed all day, her body will burn 1520 kcal just to keep her breathing and keep her heart beating. But if Sally gets up and moves around, she will use more energy. Our basal metabolic rate decreases as we age.
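The figure given for Sally is consistent with the original Harris-Benedict equation for women. Note this is an assumption on my part: the entry does not name the equation it uses, and Harris-Benedict is simply a standard formula that reproduces the number:

```python
# Original Harris-Benedict BMR equation for women (an assumed formula;
# the flashcard does not say which equation produced its figure).

def bmr_female(weight_lb, height_in, age_years):
    w_kg = weight_lb * 0.453592   # pounds  -> kilograms
    h_cm = height_in * 2.54       # inches  -> centimetres
    return 655.1 + 9.563 * w_kg + 1.850 * h_cm - 4.676 * age_years

# Sally: 20 years old, 5'6" (66 inches), 150 pounds
print(round(bmr_female(150, 66, 20)))  # ~1522 kcal/day, i.e. about 1520
```

Plugging in Sally's numbers gives roughly 1522 kcal/day, matching the "about 1520 kcal" in the text; the negative age term also reflects the closing point that BMR decreases as we age.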
Relative Frequencies Of Drugs Implicated

Advances in drug development have allowed the replacement of many potentially toxic drugs with "safer" alternatives. For example, oxyphenisatin has been withdrawn as a laxative in most countries, alpha-methyldopa is now rarely used as an antihypertensive agent, and perhexilene has been replaced by alternative, safer agents. As might be expected, this has led to a change in the pattern of implicated drugs causing hepatotoxicity over the last few decades. In the 1960s chlorpromazine was most commonly associated with hepatotoxicity (Cook and Sherlock, 1965) and in the 1970s halothane was responsible for 25% of hepatotoxic drug reactions reported (Dossing and Andreasen, 1982). Even though this has led to a significant reduction in use in the United Kingdom, halothane continues to account for significant numbers of hepatotoxic adverse reactions in Europe and New Zealand (Friis and Andreasen, 1992; Pillans, 1996). Similarly, liver injury secondary to antitubercular drugs such as isoniazid continues to be reported worldwide (Acharya et al., 1996; Ostapowicz et al., 1999; Lucena et al., 2001). As the relatively "high-risk" agents have been replaced, relatively rare reactions to commonly prescribed "low-risk" agents have become the most important cause of hepatotoxicity. In the last few years, non-steroidal anti-inflammatory drugs (NSAIDs) such as diclofenac and sulindac, antimicrobials such as co-amoxiclav, flucloxacillin and erythromycin, and H2 antagonists have become important causes of hepatotoxicity (Pillans, 1996; Lucena et al., 2001). In addition, hepatotoxicity due to substances that were previously thought to have little toxicity, such as "Ecstasy" (recreational amphetamine) and herbal remedies, is being increasingly recognised (Larrey, 1997; Andreu et al., 1998). A brief list of the drugs which are important causes of hepatotoxicity and the pattern of the liver injury are shown in Table 37.1.

Table 37.1. Drugs causing adverse hepatic reactions.

Acute hepatocellular and mixed pattern of liver injury (or acute hepatitis)
• NSAIDs: diclofenac, ibuprofen, naproxen, nimesulide, piroxicam, sulindac
• Anaesthetics: enflurane, halothane, isoflurane
• Antimicrobials: ketoconazole, ofloxacin, sulphonamides, sulphones, terbinafine, tetracyclines; antimycobacterials such as isoniazid, pyrazinamide, rifampicin; anti-HIV agents such as didanosine, indinavir, zidovudine
• Neuropsychotropics: tricyclics (most), fluoxetine, paroxetine, pemoline, sertraline, tacrine, riluzole; illegal compounds such as cocaine and Ecstasy
• Antiepileptics: carbamazepine, phenytoin, valproate
• Cardiovascular drugs: bezafibrate, captopril, diltiazem, enalapril, lisinopril, lovastatin, simvastatin, ticlopidine
• Antineoplastic and immunomodulatory agents: cyclophosphamide, cis-platinum, doxorubicin, granulocyte colony stimulating factor, interleukin (IL)-2, IL-12, tamoxifen
• Others: etretinate, glipizide, herbal remedies, ranitidine

Acute cholestatic pattern of liver injury and cholestatic hepatitis
• Hormonal preparations: androgens, oral contraceptives, tamoxifen
• Antimicrobials: clindamycin, co-amoxiclav, co-trimoxazole, erythromycin, flucloxacillin, troleandomycin
• Analgesic/anti-inflammatory drugs: gold salts, propoxyphene, sulindac
• Neuropsychiatric drugs: carbamazepine, chlorpromazine, tricyclic antidepressants
• Antineoplastic and immunomodulatory agents: asparaginase, azathioprine, cyclosporin
• Cardiovascular drugs: ajmaline, captopril, propafenone, ticlopidine
• Others: allopurinol, chlorpropamide

Chronic hepatitis and/or cirrhosis
• Aspirin, diclofenac, halothane, herbal medicine (germander), isoniazid, methotrexate, methyldopa, nitrofurantoin, papaverine, vitamin A

Chronic cholestasis and ductopenia
• Ajmaline, carbamazepine, chlorpromazine, co-amoxiclav, co-trimoxazole, erythromycin, flucloxacillin, methyltestosterone, phenytoin

Granulomatous hepatitis
• Allopurinol, carbamazepine, cephalexin, dapsone, diltiazem, gold salts, hydralazine, isoniazid, methyldopa, nitrofurantoin, penicillin, penicillamine, phenytoin, procainamide, quinidine, sulphonamides, sulphonylureas

Macro- and microvesicular steatosis
• Amiodarone, asparaginase, buprenorphine, corticosteroids, flutamide, female sex hormones, methotrexate, perhexiline, salicylate, tacrine, tetracycline, valproate, zidovudine

Hepatic vascular lesions
• Hepatic vein thrombosis/veno-occlusive disease: azathioprine, dacarbazine, combination chemotherapy (carmustine, cytarabine, mitomycin, thioguanine, urethane), oral contraceptives
• Sinusoidal dilation/peliosis: anabolic steroids, azathioprine, hydroxyurea, oral contraceptives
• Perisinusoidal fibrosis: azathioprine, methotrexate, vitamin A
• Androgens, oral contraceptives

For more comprehensive lists see Farrell (1994), Pillans (1996), Desmet (1997), Erlinger (1997), Larrey (2000), Krahenbuhl (2001), and Lucena et al. (2001).
Linux vs. Mac vs. Windows (essay, 2,805 words, February 2014)

The operating systems Linux®, Macintosh® (Mac), and Microsoft® Windows® are the main software every computer system needs to run properly, along with its hardware. These operating systems (OSs) differ in several ways, but they also have some similarities. Linux, Mac, and Windows all use memory management, process management, file management, and security management to operate the computer system correctly. The first area to compare and contrast between the three OSs is memory management: the process of allocating computer memory to different programs so that a system operates effectively.

The user could be running a service or daemon that will continue running when the user is no longer logged into the system. Generally, users cannot modify processes that are being run by other users; the root (system administrator) user or the process owner would have to modify any such process. However, some users can be granted special permissions to modify the processes of others, if the system administrator chooses to give them that permission.

Mac Process Management

Because Mac is a BSD-style operating system, it is also a multi-user operating system. However, since Mac is more of a desktop OS, it is not common to see many users accessing the system at the same time. In spite of this, process management in Mac is very similar to Linux because they are both UNIX-based operating systems. One key difference is that on Mac most people modify and control processes from within the GUI, while most users on Linux inspect and manage processes with command-line tools such as ps.
Windows Process Management

In general, Windows is not a multi-user operating system (unless it is a Windows Terminal Server), meaning that it was not designed to handle more than one user at a time, whereas Linux and Mac are both multi-user operating systems and are capable of handling multiple users at a time. Windows can, however, run multiple processes as different users concurrently. With all three operating systems, users cannot terminate or modify the process priority for other users' processes.
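The Unix-side point above can be sketched with Python's standard os module (Linux and macOS only; os.nice is not available on Windows). This is a minimal illustration, not part of the original essay: an unprivileged process may lower its own scheduling priority (raise its "niceness") but cannot raise it back without root privileges.

```python
import os

# Every running program is a process with a unique process ID (PID).
pid = os.getpid()

# On Unix-like systems, each process carries a "niceness" from
# -20 (highest priority) to 19 (lowest). os.nice(0) reads the
# current value without changing it.
before = os.nice(0)

# An unprivileged user may only raise niceness (lower priority);
# lowering it again requires root, which mirrors the essay's point
# that only the owner or administrator may freely modify a process.
after = os.nice(5)

print(f"pid={pid} niceness {before} -> {after}")
```

Running the block a second time in the same process would push the niceness up by another 5, capped at 19; it can never go back down without elevated privileges.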
Nuclear Energy: Nuclear Power Plants (essay, 1,257 words, April 2016)

For the past 50 years, the United States has been using nuclear energy as one of its main non-renewable energy sources. Nuclear energy comes from nuclear power plants, which efficiently generate large quantities of energy and have low greenhouse gas emissions compared to traditional coal power plants. Currently, there are 61 nuclear power plants operating in the U.S., and using nuclear power plants as a main energy source has always been a controversial issue within U.S. society. While nuclear power plants bring people convenience, they bring more disadvantages instead. Nuclear electric power generation uses thermal energy created by nuclear fission, which is similar to thermal electricity generation in coal power plants. Compared to coal power plants, nuclear power plants have extremely low greenhouse gas emissions; such emissions are a main driver of the increasing climate change today. The public and the government both want to decrease our greenhouse gas emissions, so nuclear energy would be the best option in that sense. But replacing all coal power plants is unrealistic, so more nuclear power plants have arisen to address this issue. Secondly, scientists and researchers have already invested time and money in the nuclear generation technology needed to build power plants. With this knowledge of nuclear technology, we are able to generate power more efficiently as methods of nuclear power plant development improve. A main advantage of nuclear…
A long time ago, in a world far, far away… the world ran on pretty lax cables. Unable to keep up with the high demand born of better technology, companies traded copper cables for more advanced ones. Thus, the fiber optic cable was born! Fiber optic cables have grown by leaps and bounds since Dr. Charles Kao began his work in the 1960s. Since his pioneering work, fiber optic cables have become thinner, faster, and more versatile than ever.

All Cables Are (Not) Created Equal

Fiber optic cables are shaped by their intended use. That is to say, what works in one building will not—and should not—work in another space. Engineers have worked hard to identify the needs of specific industries and have tailored cables to meet those needs. This specificity has given fiber optic cables significantly more advantages over copper cables. We have identified a handful of these advantages to give you the tools to make the best choice for your business.

Faster Speeds

Fiber optic cables have one distinct advantage over copper cables: they are significantly faster! By using light to transmit data, fiber optic cables can move signals at a speed only about 31% slower than the speed of light in a vacuum. That's still fast as heck!

Increased Bandwidth

Fiber optic cables were designed to carry more data, faster. Doing that requires greater bandwidth. Copper cables were initially designed for voice transmissions and didn't require much bandwidth.

Ability to Travel Longer Distances

Faster speeds mean longer distances! Depending on the network and the cable, fiber optic cables can carry data as far as 25 miles.

Thinner and More Durable

Thin fibers are significantly lighter than copper. They're much more durable, too! A lighter cable that's sturdier than copper? Now, that's hard to pass up!

Advantages to Fiber Optic Cables in the Workplace

There's a right cable for every project and organization.
Finding the perfect fit can be a difficult, time-consuming endeavor without the right expertise. Skip the hassle and call in your professional guide to fiber optic cables for schools, organizations, and buildings in the Norfolk area. A brief phone call to our cable experts will get you prompt answers to your questions about premises and campus wiring products. Contact us today!
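As a rough sanity check on the figures quoted above (a back-of-the-envelope sketch using only the article's own numbers, not vendor specifications): light travelling 31% slower than light in a vacuum still crosses a full 25-mile run in a fraction of a millisecond.

```python
# Latency over a 25-mile fiber run, using the article's figures:
# fiber carries light ~31% slower than light in a vacuum.
C_VACUUM_KM_PER_S = 299_792      # speed of light in a vacuum, km/s
FIBER_SLOWDOWN = 0.31            # fraction slower, per the article

fiber_speed = C_VACUUM_KM_PER_S * (1 - FIBER_SLOWDOWN)  # ~207,000 km/s
distance_km = 25 * 1.609344      # 25 miles converted to kilometres

one_way_delay_ms = distance_km / fiber_speed * 1_000
print(f"one-way delay over 25 miles: {one_way_delay_ms:.2f} ms")  # ~0.19 ms
```

At that scale the glass itself contributes well under a millisecond; in practice, most of the latency on a campus network typically comes from the switching and routing equipment at either end.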
Medical Radiation Exposure Of The US Population Greatly Increased Since The Early 1980s: NCRP Report

A recent National Council on Radiation Protection and Measurements (NCRP) report stated that the US population is now exposed to seven times more radiation each year from medical imaging exams than in 1980. In 2006, medical exposure constituted nearly half of the total radiation exposure of the US population from all sources. The increase was primarily a result of the growth in the use of medical imaging procedures, explained Kenneth R. Kase, senior vice president of NCRP and chairman of the scientific committee that produced the report. "The increase was due mostly to the higher utilization of computed tomography (CT) and nuclear medicine. These two imaging modalities alone contributed 36% of the total radiation exposure and 75% of the medical radiation exposure of the U.S. population." The number of CT scans and nuclear medicine procedures performed in the US during 2006 was estimated to be 67 million and 18 million, respectively. NCRP Report No. 160, Ionizing Radiation Exposure of the Population of the United States, provides a complete review of all radiation exposures for 2006, including background sources such as radionuclides occurring naturally within the human body. Other small contributors of exposure to the US population included consumer products and activities, industrial and research uses, and occupational tasks. NCRP is working with some of its partners, such as the American College of Radiology (ACR) and the World Health Organization, to address radiation exposure resulting from the significant growth in medical imaging and to ensure that referrals for procedures like CT and nuclear medicine are based on objective, medically relevant criteria (e.g., ACR appropriateness criteria). This year marks the 80th anniversary of NCRP's founding and the 45th anniversary of its charter from the U.S. Congress under Public Law 88-376.
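The quoted percentages are internally consistent, which two lines of arithmetic confirm: if CT and nuclear medicine account for 36% of total exposure and 75% of medical exposure, then medical imaging must make up 36/75 = 48% of the total, matching the report's "nearly half".

```python
# Cross-check of the NCRP figures quoted above.
ct_nm_share_of_total = 0.36     # CT + nuclear medicine, share of ALL exposure
ct_nm_share_of_medical = 0.75   # CT + nuclear medicine, share of MEDICAL exposure

# If both shares hold, medical imaging's share of the total follows:
medical_share_of_total = ct_nm_share_of_total / ct_nm_share_of_medical

print(f"implied medical share of total exposure: {medical_share_of_total:.0%}")
# -> 48%, consistent with "nearly half"
```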
Obesity / Overweight

What is obesity?

Obesity is a chronic disease associated with having an excess amount of body fat (via genetic and/or environmental factors) that is difficult to control, even when dieting. It is classified as having a Body Mass Index (BMI) of 30 or greater. BMI is commonly used as a screening tool and should not be treated as a diagnosis of your health or body fatness. Being overweight is defined as having a BMI between 25 and 30.

How is BMI calculated?

What are the health risks associated with obesity?

What is metabolically healthy obesity?

Schedule an appointment in Fairbanks, Alaska

Dr. Nick Sarrimanolis and staff provide weight management services to patients in and around Fairbanks, Alaska. You can also get prescriptions for effective weight loss medications.* If you are overweight or obese and want to shed pounds and improve your overall health, request an appointment today.* We also provide non-invasive laser treatments that heat and damage fat cells.* To learn more about this procedure, SculpSure, click here. To request an appointment online, use the form on our website. You can also give us a call at (907) 451-1174.

*Individual results may vary; not a guarantee.
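The page poses "How is BMI calculated?" without answering. The standard formula is weight in kilograms divided by the square of height in metres; the 25 and 30 cut-offs below come from the text above. The weight and height values are a hypothetical example, and this is an illustrative sketch, not medical advice.

```python
# BMI = weight (kg) / height (m) squared, with the page's thresholds:
# 25-30 is overweight, 30 or greater is classified as obese.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "below overweight range"

value = bmi(95.0, 1.75)   # hypothetical patient: 95 kg, 1.75 m
print(f"BMI {value:.1f} -> {category(value)}")   # BMI 31.0 -> obese
```

Note that, as the page itself says, BMI is a screening tool: the same function applied to a muscular athlete can return "obese" without any excess body fat.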
Islamic Empire

Arabia in the seventh century AD was a harsh place to live, with no established state and no rule of law outside the governance of the Byzantine and Sassanid empires to the north. It was home to a tribal society, full of internecine conflict, with a polytheistic religion followed in the settled areas and with Mecca serving as a center of one of these pagan cults. Despite its obvious later importance, the history of Mecca as an important early center may have been played up somewhat in order to increase its significance, as some scholars think that it was a relatively minor settlement prior to the advent of the Islamic empire. Once Muhammad began his military campaign, Islam spread swiftly to cover the western half of Arabia and the very east of Arabia (the eastern half of the modern United Arab Emirates and Oman, plus Bahrain). From there, after his lifetime, it spread further to encompass huge areas of the world thanks to military campaigns and the winning of voluntary converts.

The Hijrah (Islamic Historical) Era, AD 622 - 632

Muhammad is believed to have been born in Mecca around 570, a member of one of the prominent tribes there, but not a member of the ruling elite itself. The exact location of his birth is unknown and no marker or memorial exists, primarily so that the attention of the faithful is not drawn away from the worship of God. Muhammad was an orphan by the age of six. Taken in by other members of his clan, he became a successful, married trader, reaching the upper echelons of society. According to tradition, he found this lifestyle unsatisfactory and, at the age of forty, he underwent a dramatic revelation that changed his worldview. He began preaching this revelation in Mecca and, despite opposition by the ruling Quraysh and later suggestions that he was operating purely on a political basis, he won converts.
His first wife, Khadijah, a trader who was older than Muhammad, could be claimed as the first Muslim, as she believed his revelations even before he did. Failing to make headway with his ideas in Mecca, Muhammad fled the city with his converts, heading for the oasis settlement of Yathrib (later known as Medina) and narrowly avoiding an assassination attempt in the process. The band that he took with him, and the converts he made at Medina, went beyond kinship or tribal allegiances and was instead based on ideology, something that was entirely new in Arabia. The year was AD 622, and the event was the Hijrah (or Hijra), the 'cutting off from the past'. A new age had begun in Arabia. The Hijrah began on 16 July 622; Muhammad died on 7 June 632 in Medina.

The first stage of the conflict begins when Muhammad decides to attack a trade caravan belonging to the Quraysh Meccans, who are very powerful and are determined to destroy these new heretics, as they see them. They know of Muhammad's plan, reroute the caravan, and send a force of about 900 men in its place. This outnumbers the Muslims, but when the two forces meet at Badr, it is the smaller side that wins. The victory is an important justification of Muhammad's new ideology.

Some of the pagan tribes and the Jewish tribes that have long been based at Medina have grown resentful of Muhammad's growing power and his determination to impose his Constitution of Medina upon them. They are also concerned with Muslim attempts to destroy Meccan trade, which forms a major source of income for many tribes, perhaps especially the Jewish ones. Now the Banu Qaynuqa, one of the three main Jewish tribes at Medina, are banished from Medina, allegedly for conspiring with Muhammad's enemies at Mecca. The fact that they are banished rather than executed suggests that Muhammad still hopes for a reconciliation. Soon afterward, Mecca sends a much greater force to avenge the defeat of 624. The result of the Battle of Uhud is a draw.
The Meccans return with an army of 10,000 warriors to face Muhammad's 3,000. There is no question of giving the vast Meccan force battle, so Muhammad retreats into Medina to withstand a siege, known as the Battle of the Trench after the well-dug defensive work in front of Medina. The siege collapses within a couple of months due to a lack of supplies and equipment, but just after the Meccan forces leave, one of the remaining Jewish tribes is accused of holding negotiations with them. Muhammad, now the powerful if modest ruler of Medina, declines to be involved in what happens next. The Jewish tribe is besieged in their southern Medina fort for twenty-five days and, when they surrender, the men are massacred and their women and children sold into slavery. The event is not greatly shocking to the people of Arabia at the time (and has been alleged to have been embellished by the surviving descendants of the tribes), but it lays the seeds for later Jewish-Arabic conflict and hatred.

An important moment is marked when Muhammad wins unstated but unambiguous recognition from the Quraysh that he and they are equals. He announces that he is going on Hajj, a pilgrimage to Mecca, which must be undertaken without weapons. He and his followers are stopped by Quraysh cavalry about thirteen kilometers (eight miles) from Mecca. There, through negotiation with the Quraysh, Muhammad wins acknowledgment that he can return the following year, provided he gives up raiding Meccan trade caravans and drops his title (these terms form the so-called Treaty of Hudaibiya). He views this apparent climb-down as a worthy price to pay for peace today and the chance of making fresh converts and alliances against the Quraysh tomorrow. Following the treaty, he attacks the Jewish Khaybar oasis in the Battle of Khaybar, possibly because the Banu Nadir are there, busy inciting hostilities against him.
Muhammad leads an expeditionary force to the island of Bahrain, where he fights no battles and meets no enemies. Nevertheless, the people of the island are won as converts. In the same year, the Quraysh break the Treaty of Hudaibiya by attacking one of Muhammad's tribal allies. Muhammad is able to quickly put together a huge army that marches on Mecca. The Quraysh, suddenly heavily outnumbered, are in no position to do anything but surrender, their power broken. Muhammad forgives them, declaring an amnesty for all but ten individuals (some of whom are also later pardoned). Most of the inhabitants of Mecca convert to Islam voluntarily, without it being imposed, and the pagan idols in and around the Kaaba are destroyed. With this peaceful 'conquest' the Arab tribes become followers in droves. Muhammad returns to Medina and, within a year, he is master of all of Arabia.

Rightly Guided Caliphs / Rashidun Caliphate, AD 632 - 661

The Rightly Guided Caliphs were Muhammad's companions, or 'sahaba', although the concept was only established by the later Abbasids. The Islamic caliphate was created based on the idea that the caliph was the direct successor to Muhammad's political authority, and each caliph was chosen either by his predecessor before death or by a council. Upon the death of Muhammad, it was Abu Bakr who calmed his distraught converts. Soon afterward, a gathering at Medina of the most important figures in early Islam selected Abu Bakr, Muhammad's close companion, as his successor. The city itself was selected as the growing empire's first capital. Another of the companions was Amr Ibn Al-Aas, the military commander who was responsible for the conquest of Egypt. Abu Bakr assumed the title Khalifah, 'successor' to the Prophet. His accession triggers the Ridda Wars, or Wars of Apostasy, when several Arabic tribes, including Christian Arabs in Jordan and other Arabs in Arabia, Oman, and Yemen, refuse to fully observe strict Muslim practices.
Abu Bakr's campaigning defeats all of them, establishing Islamic rule over all of Arabia, including tribes such as the Kedarites. Following this, he sends armies towards Byzantine Syria and Sassanid Iraq.

Umar ibn al-Khattab / Umar I the Great (killed by a slave)

It is under the leadership of Umar that Islam begins its rapid expansion outside Arabia. Eastern Roman Emperor Heraclius is defeated, and Palestine and Phoenicia are conquered in 636 and 637 respectively. Mesopotamia is conquered from the Persians in 637, and Jerusalem falls in 638. Roman Syria, Egypt, and Libya are taken in 638-640, and the Persians themselves are defeated in 642. Following Umar's murder, a council of electors nominates Uthman as his successor.

Uthman ibn Affan (of the Umayyad clan)

Expansion continues under Uthman. The Georgian kingdom of Iberia is taken in 645, inroads are made in Tunisia from 647, and Persia is fully overrun by 651, along with Khorasan, where an Islamic emirate is formed to govern this rather wild region. Former Kushanshah territory in what later becomes Afghanistan is taken in 652, but attempted invasions of the kingdom of Dongola and the island of Sicily are repulsed in the same year. However, Uthman's style of leadership is perceived by some as being too much like that of a king, and he is murdered. Ali takes command, although he is not fully accepted by the governors of Egypt. The growing empire begins to threaten Armenia. Aided by the Byzantines, Armenia defends itself, but the Arab campaign continues northwards into the Caucasus under General Salman. He concentrates on the towns and settlements of the western coast of the Caspian Sea and on defeating the Khazars. A description of this campaign is based on a manuscript by Ahmed-bin-Azami, which mentions that '...Salman reached the Khazar town of Burger...
He continued and finally reached Bilkhar, which was not a Khazar possession, and camped with his army near that town, on rich meadows intersected by a large river'. This is why several historians connect the town with the proto-Bulgarians. The Arab missionary Ahmed ibn-Fadlan also confirms this connection, as he mentions that during his trip to the Volga Bulgars in 922 he sees a group of 5,000 Barandzhars (Jalandhar) who had migrated a long time ago to Volga Bulgaria. He also encounters a group of people who may tentatively be identified with the Venedi.

Ali ibn Abi Talib (son-in-law and cousin of Muhammad; assassinated)

Ali is held to be the second historical follower of Islam. Some Muslims see him as one of several possible leaders while others believe him to be divine. The Sunni/Shia split within Islam is created by the dispute over his rule: Sunni Muslims count Abu Bakr as the first legitimate caliph, while the Shi'a count Ali as the first truly legitimate caliph. For two decades around these years, the First Islamic Civil War rages in Arabia, and Ali is assassinated in 661. Hasan is appointed as his successor.

Hasan ibn Ali (son; forced to resign)

Hasan, regarded as a righteous ruler by Sunni Muslims, is recognized by only half the Islamic empire. He is challenged and ultimately defeated by Mu'awiya, the Umayyad governor of Syria.

Umayyad / Omayyad Caliphate, AD 661 - 749

The governor of Islamic Syria, Mu'awiya, was one of the main challengers against Hasan ibn Ali during the First Islamic Civil War. He claimed descent from an ancestor who was common to both him and the Prophet Muhammad, although their clans within the encompassing Quraish tribe were different. After he had overcome Ali and the other claimants, he founded the Umayyad dynasty, named after his great-grandfather, Umayya ibn Abd Shams, and made the position of caliph a hereditary one. The capital was established at Damascus just over a decade after the dynasty was founded.
The rival Hashemite clan of the Quraish tribe was granted the emirate of Mecca in the tenth century.

Abbasid Caliphate, AD 750 - 1258

The Abbasids were the second of the two great Sunni dynasties that ruled the Islamic empire. The Abbasid caliphs officially based their claim to the caliphate on their descent from Abbas ibn Abd al-Muttalib (AD 566-652), one of the youngest (non-ruling) uncles of the prophet Muhammad, by virtue of which descent they regarded themselves as the rightful heirs of the prophet, as opposed to the Umayyads. The latter were descended from Umayya and were a separate clan from Muhammad's within the Quraish tribe. Following the overthrow and massacre of the Umayyads, the Abbasids never managed to assert their authority in Islamic Iberia, but they did install loyal governors in Egypt and Syria. They also put themselves forward as representatives of the Hashemites, the clan which had previously lost out in the rivalry with the Umayyads for the caliphate. The capital of the Abbasid caliphate was in Baghdad, and the equality of all Muslims was established at the same time as they took control. Despite its bright beginning, the dynasty slowly became eclipsed by the rise to power of the Turkish army that it had itself created, the Mamelukes.

Different trajectories

To begin to understand the rich history of Islam, let's start with the historical context and events that led to Islam's spread. For example, Islam initially spread through the military conquests of Arab Muslims, which happened over a very short period of time soon after the beginning of Islam. However, only a small fraction of the people who came under Arab Muslim control immediately adopted Islam. It wasn't until centuries later, at the end of the eleventh century, that Muslims made up the majority of subjects of the Islamic empires. The spread of Islam through merchants, missionaries, and pilgrims was very different in nature.
These kinds of exchanges affected native populations slowly and led to more conversion to Islam. As Islamic ideas traveled along various trade and pilgrimage routes, they mingled with local cultures and transformed into new versions and interpretations of the religion. Another important thing to note is that not all military expansion was Arab and Muslim. Early on in Islamic history, under the Rashidun caliphate—the reign of the first four caliphs, or successors, from 632 to 661 CE—and the Umayyad caliphate, Arab Muslim forces expanded quickly. With the Abbasids, more non-Arabs and non-Muslims were involved in the government administration. Later on, as the Abbasid caliphate declined, there were many fragmented political entities, some of which were led by non-Arab Muslims. These entities continued to evolve in their own ways, adopting and putting forth different interpretations of Islam as they sought to consolidate their power in different regions.

The first Arab Muslim empire

Within roughly two decades, Arab Muslim forces created a massive empire spanning three continents. The Arab Muslim rulers were not purely motivated by religion, nor can their success be attributed to the power of Islam alone, though religion certainly played a part. Non-Muslim subjects under Arab Muslim rule were not especially opposed to their new rulers. A long period of instability and dissatisfaction had left them ambivalent toward their previous rulers. Like all other empires, the first Arab Muslim empires were built within the context of the political realities of their neighboring societies. During the Rashidun caliphs, Arab Muslim forces expanded outward beyond the Arabian peninsula and into the territories of the neighboring Byzantine and Sasanian Empires. These empires were significantly weakened after a period of fighting with one another and other peripheral factions like the Turks, economic turmoil, disease, and environmental problems.
The Arab Muslim conquerors were primed to take advantage of this; they were familiar with Byzantine and Sasanian military tactics, having served in both armies. With the Byzantine and Sasanian Empires on the decline and strategically disadvantaged, Arab Muslim armies were able to quickly take over vast territories that once belonged to the Byzantines and Sasanians and even to conquer beyond those territories to the east and west. Most conquests happened during the reign of the second caliph, Umar, who held power from 634 to 644. The Rashidun caliphate constructed a massive empire out of many swift military victories. They expanded for both religious and political reasons, which was common at the time. One political advantage the Rashidun caliphate held was their ability to maintain stability and unity among the Arab tribes. Distinct, feuding Arab tribes united into a cohesive political force, partially through the promise of military conquest. However, this unity was tentative and ultimately gave way to major divergences that disrupted state and religious institutions in the coming centuries.

A new political structure

The Rashidun can be credited with military expansion, but did Islam truly spread through their conquests? Significant conversion and cultural exchange did not occur during their short rule, nor were complex political institutions developed. It was not until the Umayyad Dynasty—from 661 to 750—that Islamic and Arabic culture began to truly spread. The Abbasid Dynasty—from 750 to 1258—intensified and solidified these cultural changes. Before the Umayyads, Islamic rule was non-centralized. The military was organized under the caliphate, a political structure led by a Muslim steward known as a caliph, who was regarded as the religious and political successor to the prophet Muhammad. The early caliphate had a strong army and built garrison towns, but it did not build sophisticated administrations.
The caliphate mostly kept existing governments and cultures intact, administering through governors and financial officers in order to collect taxes. The Rashidun caliphate was also not dynastic, meaning that political leadership was not transferred through lineage. During this period, it seems the Arab tribes retained their communal clan-based systems of choosing leaders. However, to sustain such a massive empire, more robust state structures were necessary, and the Umayyads began developing these structures, which were often influenced by the political structures of neighboring empires like the Byzantines and Sasanians. Under the Umayyads, a dynastic and centralized Islamic political state emerged. The Umayyads shifted the capital from Mecca to Syria and replaced tribal traditions with an imperial government controlled by a monarch. They replaced Greek, Persian, and Coptic with Arabic as the main administrative language and reinforced an Arab Islamic identity. Notably, an Arab hierarchy emerged, in which non-Arabs were accorded secondary status. The Umayyads also minted Islamic coins and developed a more sophisticated bureaucracy, in which governors named viziers oversaw smaller political units. The Umayyads did not actively encourage conversion, and most subjects remained non-Muslim. Because non-Muslim subjects were required to pay a special tax, the Umayyads were able to subsidize their political expansion. The Umayyads did not come into power smoothly. The transition between the rule of the Rashidun and the first Umayyads was full of strife. Debates raged about the nature of Islamic leadership and religious authority. These conflicts evolved into major schisms between Sunni, Shia, and Ibadi Islam. Ultimately, there were many factions that regarded the Umayyads as corrupt and illegitimate, some of whom rallied around new leaders. These new leaders claimed legitimacy through shared lineage with the prophet Muhammad, through the prophet's uncle, Abbas.
They led a revolt against the Umayyads, bringing the Abbasid caliphate to power. The Abbasids were intent on differentiating themselves from their Umayyad predecessors, though they still had a lot in common. Abbasid leadership was also dynastic and centralized. However, they changed the social hierarchy by constructing a more inclusive government in a more cosmopolitan capital city, Baghdad. The distinction between Arab Muslims and non-Arab Muslims diminished, with Persian culture exerting a greater influence on the Abbasid court. Under the Abbasids, Islamic art and culture flourished; they are famous for inaugurating the Islamic golden age. Religious scholars, called ulema, developed more defined religious institutions, took on judicial duties, and developed systems of law. It was also during Abbasid rule that many people converted to Islam, for a multitude of reasons including sincere belief and avoiding the taxes levied on non-Muslims. As a result, Islamic culture spread over the Abbasids' vast territory. There are accounts of the trade connections between the Muslims and the Rus, apparently Vikings who made their way towards the Black Sea through Central Russia. On his way to Volga Bulgaria, Ibn Fadlan brought detailed reports of the Rus, claiming that some had converted to Islam. The last Muslim kingdom of Granada in the south was finally taken in 1492 by Queen Isabella of Castile and King Ferdinand of Aragon. In 1499, the remaining Muslim inhabitants were ordered to convert or leave. Poorer Muslims who could not afford to leave ended up converting to Catholic Christianity, concealing their Muslim practices from the Spanish Inquisition, until their presence was finally extinguished. The generally accepted nationalist discourse of current Balkan historiography defines all forms of Islamization as results of the Ottoman government's centrally organized policy of conversion, or dawah.
The truth is that Islamization in each Balkan country took place over the course of many centuries, and its nature and pace were determined not by the Ottoman government but by the specific conditions of each locality. Ottoman conquests were initially military and economic enterprises, and religious conversions were not their primary objective. True, the statements surrounding victories all celebrated the incorporation of the territory into Muslim domains, but the actual Ottoman focus was on taxation and making the realms productive, and a religious campaign would have disrupted that economic objective. The remaining Muslim converts elected to leave the "lands of unbelief" and moved to territory still under the Ottomans. Around this point in time, new European ideas of romantic nationalism started to seep into the Empire and provided the intellectual foundation for new nationalistic ideologies and the reinforcement of the self-image of many Christian groups as subjugated peoples. Some Muslims in the Balkans chose to leave, while many others were forcefully expelled to what was left of the Ottoman Empire. This demographic transition can be illustrated by the decrease in the number of mosques in Belgrade, from over 70 in 1750 (before Serbian independence in 1815) to only three in 1850.

Europe and the Islamic World, 1600–1800

At the beginning of this period, the European presence in the Islamic world was largely based on trade. Dutch, French, English, and Portuguese merchants first arrived in the late fifteenth century, attracted by the wealth that could be acquired by exporting luxury items to the European market, and encouraged by the Mughal and Safavid governments, which desired trade partners to stimulate the economy. Diplomatic ties later officially cemented these partnerships. The first British representatives arrived in Persia in 1622 and the French in 1638.
The Portuguese landed in India in 1498 and the French soon afterward, but the British, under the aegis of the East India Company, would prove to be the chief force in the subcontinent. Sir Thomas Roe brokered the first trade treaty in 1615. The Ottoman empire was initially more isolated, as it had a strong domestic trade network, but in the eighteenth century it began to receive European merchants and consuls as well as to send out its own. One mission from Turkey visited the court of Louis XV of France in the 1720s. Just as the Europeans were introduced to many new kinds of textiles, carpets, spices, and clothing, so too was the Islamic world enriched. European art circulating among court artists transformed painting under both the Mughals and the Safavids. By carefully copying the engravings in sixteenth-century illustrated Bibles presented by Jesuit missionaries, Indian artists learned techniques of modeling and spatial recession that they then applied to their own works. Illustrations in books of herbals affected the way flowers and plants were depicted. In Persia, oil paintings had a greater effect, the life-size portraits of Louis XIV sent to Isfahan eventually metamorphosing into Zand and Qajar state portraits. Although manuscripts such as the Bellini Album attest that European drawings were known in Turkey, it was exposure to the French Baroque that captured the local imagination. Soon after the return of travelers to Versailles, flamboyant architectural ornament began to appear on both royal residential buildings and mosques. By the end of the period, European colonial interests had upset this equitable cultural exchange. The British East India Company established an army to protect its commercial interests in India; its 1757 defeat of the Nawab of Bengal led to further armed conflicts and finally to the 1858 declaration of British sovereignty over the country.
The British also became involved in interdynastic conflicts in the Arabian Peninsula and established a military post in Muscat, Oman. Napoleon invaded Egypt in 1798, and though he was forced to withdraw from the area in 1801, the French would later occupy parts of North Africa. The Dutch became involved in lands further east, especially in the Indonesian archipelago, where islands controlled by different Muslim rulers were united as one colony. The fall of the Roman empire and the rise of Islam The transformation from the ancient world to the medieval is now recognized as something far more protracted than a single moment of collapse. "Late Antiquity" is the term scholars use for the centuries that witnessed its course. Roman power may have collapsed, but the various cultures of the Roman empire mutated and evolved. "We see in late antiquity," so Averil Cameron, one of its leading historians, has observed, "a mass of experimentation, new ways being tried and new adjustments made." Yet it is a curious feature of the transformation of the Roman world into something recognizably medieval that it bred extraordinary tales even as it impoverished the ability of contemporaries to keep a record of them. "The greatest, perhaps, and most awful scene, in the history of mankind": so Gibbon described his theme. He was hardly exaggerating: the decline and fall of the Roman empire was a convulsion so momentous that even today its influence on stories with an abiding popular purchase remains greater, perhaps, than that of any other episode in history. It can take an effort, though, to recognize this. In most of the narratives informed by the world of late antiquity, from world religions to recent science-fiction and fantasy novels, the context provided by the fall of Rome's empire has tended to be disguised or occluded. The answer was to be found on the front of the papyrus sheet, within the text of the receipt itself.
The "Magaritai", it appeared, were none other than the people known as "Saracens": nomads from Arabia, long dismissed by the Romans as "despised and insignificant". That these barbarians were now in a position to extort sheep from city councilors clearly suggested a dramatic reversal of fortunes. But it was also, so the receipt declared in the Saracens' own language, "the year twenty-two": 22 years since what? Some momentous occurrence, no doubt, of evidently great significance to the Saracens themselves. But what precisely, and whether it might have contributed to the arrival of the newcomers in Egypt, and how it was to be linked to that enigmatic title "Magaritai", PERF 558 does not say. We can now recognise the document as the marker of something seismic. The Magaritai were destined to implant themselves in the country far more enduringly than the Greeks or the Romans had ever done. Arabic, the language they had brought with them, and that appears as such a novelty on PERF 558, is nowadays so native to Egypt that the country has come to rank as the power-house of Arab culture. What was it that had brought the Arabs as conquerors to cities such as Herakleopolis, and far beyond? The ambition of Ibn Hisham was to provide an answer. The story he told was that of an Arab who had lived almost two centuries previously and been chosen by God as the seal of His prophets: Muhammad. Although Ibn Hisham was himself certainly drawing on earlier material, his is the oldest biography to have survived, in the form we have it, into the present day. The details it provided would become fundamental to the way that Muslims have interpreted their faith ever since. Even granting that the revelations of Muhammad did indeed descend from heaven, it is still pushing things to imagine that the theatre of its conquests was suddenly conjured, over the span of a single generation, into a set from The Arabian Nights.
That the Arab conquests were part of a much vaster and more protracted drama, the decline and fall of the Roman empire, has been too readily forgotten. The influence of the novel and its two sequels has been huge and can be seen in every subsequent sci-fi epic that portrays sprawling empires set among the stars – from Star Wars to Battlestar Galactica. Unlike most of his epigoni, however, Asimov drew direct sustenance from his historical model. The parabola of Asimov's narrative closely follows that of Gibbon. Time will prove him correct. Without ever quite intending it, he founds a new religion and launches a wave of conquest that ends up convulsing the galaxy. In the end, we know, there will be "the only legend, and nothing to stop the jihad". There is an irony in this, an echo not only of the spectacular growth of the historical caliphate but of how the traditions told about Muhammad evolved as well. Ibn Hisham's biography may have been the first to survive – but it was not the last. As the years went by, and ever more lives of the Prophet came to be written, so the details grew ever more miraculous. Gawping at the crumbling masonry of Roman towns, they saw in it "the work of giants". Gazing into the shadows beyond their halls, they imagined ylfe ond orcnéas and orþanc enta geweorc – "elves and orcs", and "the skillful work of giants". Most of these poems, though, like the kingdoms that were so often their themes, no longer exist. They are fragments or mere rumors of fragments. The wonder-haunted fantasies of post-Roman Europe have themselves become specters and phantasms. "Alas for the lost lore, the annals and old poets." How France Defeated the Islamic Empire—1,300 Years Ago When the ISIS terrorists murdered 130 victims in Paris, they did so in the name of history. ISIS—the Islamic State of Iraq and Syria—seeks to recreate the medieval Islamic caliphate that once stretched from Spain to Baghdad.
And so they attacked France, which they perceive as yet another Western obstacle to their grand ambitions. But before they took on France, perhaps they should have studied their history better. They would have learned that it was the French who stopped the Islamic empire from overrunning western Europe 1,300 years ago. In 732 CE, at the height of the Dark Ages after the fall of Rome, Islam seemed unstoppable. Boiling out of the Arabian desert just a century before, the Muslim armies had conquered North Africa, Spain, the Caucasus and the Middle East with astonishing speed in what must have seemed like a medieval blitzkrieg. The tough, bold and highly motivated desert warriors plundered the decaying corpses of the ancient empires: the Roman, the Byzantine and the Sassanid (Persian). And as the victorious armies of the Umayyad Caliphate surged out of the Iberian peninsula into southern and then central France, they must have savored the prospect of adding the Christian lands of western Europe to their domains. Had they accomplished that, the Islamic empire might have become the superpower of its day, the medieval equivalent in military and economic power of the modern United States. But they had not reckoned with the Franks, a Germanic-speaking people who took advantage of the decaying Roman Empire to settle in France and Belgium (and from whom, as you can guess, the name "France" derives). It was the Franks who stopped the Islamic empire's advance at the Battle of Tours, fought near the city of that name in the middle of modern-day France. As with many medieval battles, hard numbers and facts are scarce. It appears that Abdul Rahman al-Ghafiqi, governor of Muslim-occupied Spain, entered southern France with perhaps 80,000 soldiers to extend the domains of the Islamic empire and, perhaps more important in that era, to plunder the rich Gallic countryside (a practice that ISIS continues today).
The Muslim forces were composed of Moorish (Arab and Berber) light cavalry "who fought from horseback, depending on bravery and religious fervor to make up for their lack of armor or archery," writes historian Paul Davis in his book 100 Decisive Battles: From Ancient Times to the Present. "Instead, the Moors fought with scimitars and lances. Their standard method of fighting was to engage in mass cavalry charges, depending on numbers and courage to overwhelm any enemy; it was a tactic that had carried them thousands of miles and defeated dozens of opponents. Their weakness was that all they could do was attack; they had no training or even concept of defense." Facing the Umayyad army was a force of possibly 30,000 Franks led by Charles Martel, who confronted the invaders near Tours. The Frankish way of war was the antithesis of their opponents'. "The Franks were hardy soldiers that armed themselves as heavy infantry, wearing some armor and fighting mainly with swords and axes," Davis writes. Does this sound familiar? A heavily armed and armored Western army versus lightly armed but more mobile Arab troops? In some ways, Tours was a precursor to the fighting we have seen in Iraq and Afghanistan, and to the sort of tactics that ISIS has used successfully today. But this time, those mobile tactics didn't work. Islamic cavalry repeatedly charged the Frankish lines, but the Franks held firm against the lightly armed Moorish horsemen. More significantly, one Arab chronicler of the battle wrote that fear for their plundered loot induced the Muslim army to retreat: "But many of the Moslems were fearful for the safety of the spoil which they had stored in their tents, and a false cry arose in their ranks that some of the enemies were plundering the camp; whereupon several squadrons of the Moslem horsemen rode off to protect their tents." The Islamic army returned home, their dreams of glory and wealth unfulfilled.
But the consequences were far more momentous than lost plunder. "Had the Moslems been victorious in the battle near Tours, it is difficult to suppose what population in Western Europe could have organized to resist them," Davis writes. By the nineteenth century, the Western empires had carved up the Muslim world; had the Battle of Tours turned out differently, ISIS could be ruling the Western world. This is not to glory in Frankish victory; Christian Europe showed neither mercy nor morality when it terrorized, murdered and pillaged its way to Jerusalem during the Crusades 400 years later. However, the fact remains that the westward expansion of the Islamic empire had been halted. The caliphate had been defeated by the French. Something that ISIS would do well to remember. M I Ro
Kazakh hunters in Western Mongolia have trained golden eagles to retrieve game, helping to sustain a people living in a desolate region. This symbiotic relationship is a form of falconry that has existed for hundreds of years. Scarce resources and limited hunting opportunities make for an environment that forces these two carnivores to work in unison to make the most of every hunt. The golden eagle is blindfolded on the morning of the hunt and released from higher ground by a Kazakh hunter on horseback. When the prey (a fox) is secured in the eagle's talons, the Kazakh hunter races down the hillside and finishes the kill. The meat is shared by the hunter and the bird, and the hide is used for clothing. Watch this video clip from an episode of Human Planet on BBC and experience the thrill of an ancient hunt.
From October 27 to October 30, 2011, NAMTA held a fantastic conference on Psychogeometry, a newly translated and edited version of a work originally published in Spanish as Psicogeometría. An Italian version was edited by math professor Benedetto Scoppola and then translated into English. Kay Baker (an icon in the Montessori community who holds a doctorate in math education) edited the English version. She also worked with Benedetto Scoppola and graphic designer Miep van de Manakker to revise the illustrations and make sure they were properly integrated with the text. On Saturday, the editors talked about the book. Benedetto Scoppola's talk was mainly an overview of the book, but there was one nugget that's not in the book that I decided to record while it's still fresh in my mind. The Isoperimetrical Problem The subject was the "isoperimetrical problem". This issue came up in Benedetto's discussion of the figures on pages 182 (a 4x8 rectangle) and 183 (a 6x6 square) of Psychogeometry. Two figures are isoperimetrical if they have the same perimeter. For example, a 4x8 rectangle and a 6x6 square both have a perimeter of 24. The "problem" is that these two figures don't have the same area. Benedetto noted that the difference between their areas is a perfect square (36 - 32 = 4). Benedetto pointed out a related pattern on the multiplication board. If you find 36 on the board and then move diagonally up to the right, the next number you find is 35 (5x7), then 32 (4x8), then 27 (3x9), then 20 (2x10). These are all areas of rectangles that are isoperimetrical to the 6x6 square, and their areas all differ from 36 by a perfect square: 36-35=1, 36-32=4, 36-27=9, 36-20=16.
multiplication pattern on the diagonal Further observations Benedetto didn't mention this explicitly, but you can see this activity as a beautiful way to reinforce the formula (a+b)(a-b) = a^2 - b^2 with examples like 8x4 = 6x6 - 4, which can be written as (6+2)(6-2) = 6^2 - 2^2. Students can start with any perfect square on the board to find relations like this. So pretty! Another point of interest: From the multiplication board, select a rectangle that has the number 1 in the upper left corner. The tile in the lower right corner gives the number of tiles in the rectangle. For example, in the rectangle below, there are 32 tiles: multiplication of 4 x 8 Here's another example--a square with 36 tiles: multiplication pattern 6 x 6 For another activity relating area and counting to the multiplication table at the elementary level, see Counting and Multiplication (pdf or Microsoft Word).
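The diagonal pattern is just the identity (a+b)(a-b) = a^2 - b^2 in action, and it can be checked mechanically. A minimal sketch in plain Python, using the perimeter-24 family from the example (the loop and names are mine, not from the book):

```python
# Rectangles a x b with a + b = 12 all have perimeter 24,
# the same as the 6x6 square. Each area falls short of 36
# by a perfect square: 36 - a*b = (6 - a)^2.
square_area = 6 * 6
for a in range(2, 7):              # the 2x10, 3x9, 4x8, 5x7, 6x6 family
    b = 12 - a
    shortfall = square_area - a * b
    assert shortfall == (6 - a) ** 2
    print(f"{a}x{b}: area {a * b}, 36 - {a * b} = {shortfall}")
```

Starting from any square n x n and holding a + b = 2n reproduces the same pattern Benedetto traced on the board.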
How to Write a Thesis for a Diagnostic Essay Choose a topic, set a timer and start writing. After going through such a procedure several times, you will be able to gain confidence in finishing your task promptly and to a high standard. The idea behind writing a diagnostic essay is to present what you are capable of at a particular point on the learning curve. The emphasis is not on the particular set of knowledge that you have but on your skills and on whether you can effectively apply them during the next term. If you are anxious about the upcoming writing of your diagnostic essay, you can contact our essay service for comprehensive advice. We can answer all your questions on how to write a diagnostic essay properly in order to show your skills and talents to the fullest. What is a diagnostic essay? Remember the structure: the outline best suited to a diagnostic essay is the common five-paragraph structure. Write down the main thesis statement. Additionally, all students will typically have to write on the same topic, but sometimes there are two or three options to choose from. Since you have a limited amount of time to write your essay, you should have a comprehensive plan in your head as to what exactly you will be doing — to allocate just enough time to go through your topic, give it some thinking, write the essay itself, and go through it once again before submitting it. Remember that you should only proceed to writing your introduction when you have made sure that you understand the topic and already know what you will write in the main body of your diagnostic essay.
In other words, before writing your essay introduction, you need to know what exactly you are introducing. To start off, it is a good idea to paraphrase the topic that you were given to write about. Then, you have the three key points on which you have decided to focus in the main body of your essay, so they ought to be briefly introduced. For example, if you are to write about the hardest decisions that you had to make in your life, you must not focus on only one decision. You can suggest three alternatives and give each its scope. So, each paragraph of your main body will be dedicated to one of these alternatives, and you will briefly mention them in your introduction. Then, you end your introduction with a thesis statement. In our example, your thesis statement can tell the reader which factors you consider in labeling a certain decision the hardest one to make, and why. As we have mentioned, the main body of your diagnostic essay will usually consist of three paragraphs. In each of these paragraphs, you take one of the three key points that you have mentioned in your introduction and expand upon it. The first body paragraph should be the strongest one. Following our example with the topic of hard decisions, the first paragraph would talk about the hardest decision that you had to make, in your opinion. You apply the factors of consideration from your thesis statement to explain to your reader why exactly you consider this example of a decision to be the hardest. While writing, make sure that you present your ideas clearly. When you finish, reread the text several times to make the sentences free, or almost free, from errors. And last but not least: the essay has to meet the required minimum length. Diagnostic Essay Outline Like any other essay, a diagnostic essay consists of the same parts: introduction, body, and conclusion. Each of them has peculiarities that you have to remember in order not to make mistakes.
The body should have three paragraphs, with one strong argument corresponding to each paragraph. When you start a new paragraph, state the argument in the first sentence, then develop it and explain how it is connected to the topic and thesis statement. The last sentence of the conclusion is a kind of appeal to the reader to find out more about the topic. The third body paragraph could be dedicated to the fact that knowing another language opens up new career paths in the future. The conclusion of this essay would rephrase the thesis statement and contain a summary of the arguments presented above. The thesis statement could contain the thought that the main reason behind the change is pressure. In that case, the first body paragraph would explain how family members and friends apply pressure to change you to their liking. And the third body paragraph would explain how society in general pressures people to change who they are. Write a diagnostic essay well by dividing your time effectively and using good prewriting techniques. Write a clear thesis statement, logical body paragraphs and a clear conclusion that echoes the thesis. Pre-writing Techniques When you receive a prompt for a diagnostic essay, consider the amount of time you've been given to write it. Writing the diagnostic essay thesis is similar to writing any essay thesis. The thesis statement is a clear, concise and authoritative sentence that provides the structure, organization and topic of the essay.
The main aim of a diagnostic essay is to identify the strengths and weaknesses of a student's writing early in the course of a semester, in order to give instructors insight into the type of curriculum, feedback, exercises, and other course-related work that they will need to give to the students, both holistically and individually. A diagnostic essay is best written by dividing the paper into three sections: introduction, body, and conclusion. A diagnostic essay is time-bound, thus the author must set aside some time to go through the question and plan how to write the essay effectively. A captivating introduction and a clear thesis are key aspects of a diagnostic essay. WRITING DIAGNOSTIC ESSAY CONCLUSION. Any good essay should end with a powerful conclusion. However, it gets especially tricky here, because your task is time-bound and you have very limited time to end your essay on a powerful note. This is why it is critical to make your diagnostic essay conclusion brief. The diagnostic essay is a teaching tool used by many educators to give them an idea of what skills students already have coming into a new class. By evaluating the class's diagnostic essays, the instructor can help students work on the skills they most need to improve.
About Guatemala Thought to be named after the many trees that blanket the volcanoes, mountains and lowlands, this tiny country situated in Central America has a population of only 14,000,000. The major portion of the population is settled around the current capital, Guatemala City, and La Antigua (literally 'the old' capital). Smaller towns and villages are scattered throughout the 22 regions. Bordered by Mexico, El Salvador, Honduras, and Belize, Guatemala also has coastal regions on both the Caribbean and the Pacific Ocean. The topography is mainly mountainous but also extends to black volcanic sandy beaches, mangrove forests, the rainforests of Tikal, lakes, rivers, wetlands, and towering volcanoes. Tikal National Park was named the first mixed UNESCO World Heritage Site. Also a World Heritage Site, the cobbled streets and colonial facades of the city of La Antigua attract international visitors, particularly during the annual Lenten festivities. Celebrations involve processions of giant floats bearing biblical characters, carried by locals through streets lined with alfombras (literally 'carpets', made of flower petals, fruit and vegetables by local homeowners and businesses). To be involved in the procesiones and Easter re-enactments is a great honour for Guatemalans, and preparations take place many months in advance. The Catholic faith is the dominant religion throughout the country, though indigenous beliefs are still practised (often interwoven and adapted in the smaller communities, where it is not unheard of for traditional Catholic church buildings to house idols and symbols of Maya ritual). Guatemala's journey to the present winds a treacherous path beginning with unearthed artefacts that date back as far as 12,000 BC. Pre-Columbian arrowheads and evidence of maize production suggest the foundations of a hunter-gatherer society which evolved to become a part of the mighty Maya heritage.
Evidence of the Maya survives long after the collapse of the civilisation in 900AD. The colourful dress of the remaining indigenous population is common on the streets of modern-day Guatemala, in stark contrast to the introduction of modern strip malls and office blocks. Apart from the officially spoken Spanish, 22 Maya languages have been identified in Guatemala, including K'iche' (spoken by 9% of the population), Kaqchikel, Mam and Q'eqchi'. The geography of Guatemala lends itself easily to natural disasters, and earthquakes, mudslides, volcanic eruptions, and hurricanes have all been sources of major disruption and loss of life. In the Colonial period of Spanish leadership, Guatemala was both an Audiencia and a Captaincy General until its independence was declared on September 15, 1821. The politics of Guatemala have proved over the years to be as tumultuous and damaging as natural forces, beginning with the dictatorship of Manuel Estrada Cabrera (helped to power by the United Fruit Company). The Peace Accord was signed in 1996 and ended the most recent period of armed conflict. In 1871 Guatemala's Liberal Revolution was led by Justo Rufino Barrios, who attempted to modernise the country, improve trade, and introduce new crops and manufacturing facilities. It was at this point that coffee was introduced as a crop. Today, many coffee plantations continue to produce quality coffee beans that are exported worldwide to buyers such as the famous Starbucks chain. Poverty and violence are now the focus for change in Guatemala. In 2005 the World Health Organisation published figures suggesting that 21.5% of the population were living on less than $1 a day, and in 2008 an estimated 40 murders a week were reported in Guatemala City. Domestic violence and crimes against women are endemic and often go unreported.
Guatemala has the third lowest rate of contraceptive use in the Americas; 49% of children under 5 suffer from chronic malnutrition, and this figure rises to 68% amongst the indigenous population. It is estimated that 30% of pregnant women have nutritional deficiencies. Raising awareness of the social issues in Guatemala today is crucial in the effort to increase standards of living throughout the population. In 1992, the Nobel Peace Prize was awarded to Rigoberta Menchú for her work raising international awareness of the government-sponsored genocide against the indigenous population during the Guatemalan Civil War (1960–96). Many charities work within the country to continue to promote issues of health, hygiene and well-being. A country of contrasts and diversity, from its ancient roots in the Maya ruins of Tikal, through the earthquake-ravaged colonial facades, to the modern-day McDonald's and hypermarkets. Everywhere in Guatemala there is evidence of hard times, but for every image of destruction and decay there is another of hope and rebirth, whether the repeatedly rebuilt churches of Antigua or the giant cross and message of biblical hope mounted atop a city slum. An inner strength shines through in the warmth and welcoming smiles of the people and the beauty of the environment, and underlines the uniqueness that is Guatemala.
Common Causes of Minnesota Auto Accidents Last year, 455 people died and over 33,000 were injured in motor vehicle accidents on Minnesota highways. Approximately 1.2 million people die each year worldwide in traffic crashes, and the World Health Organization predicts a 65% increase in automobile fatalities by 2020. When we look at the most common causes of auto accidents, we see that the average motorist's primary hazards are the drivers themselves. Distracted driving Even talking on a cell phone with a hands-free device can be distracting for the driver. Texting or checking email while driving has become a dangerous habit, and as of August 2008 it is illegal to use any wireless communication device to read, compose, or send an electronic message while driving in Minnesota. Other distractions such as changing the radio, eating, shaving, applying make-up, reading newspapers or maps, passenger distractions, looking at scenery, and rubbernecking are common diversions that can pull the driver's attention away from the road. Driving under the influence It is estimated that over 11,000 people died as a result of drunk-driving incidents in 2008, and that one person dies every 30 minutes from an alcohol-related automobile collision. In 2003, impaired drivers accounted for 30% of weekday and 53% of weekend auto fatalities nationwide. Aggressive driving Tailgating, frequent and unsafe lane changes, excessive honking and not allowing others to merge are dangerous driving habits that contribute to auto collisions. Aggressive driving is more prevalent in urban areas, where traffic congestion and the proximity to other aggressive drivers tend to contribute to this risky behavior. Running red lights, racing, and speeding are common causes of traffic collisions. Not only does speeding reduce the time a driver has to react to a traffic situation, it also significantly increases the force of impact in a car crash.
The energy released in an impact more than doubles when a collision occurs at 60 mph rather than 40 mph, according to the Insurance Institute for Highway Safety. Driver fatigue Auto accidents due to drivers suffering from fatigue usually occur in the early hours of the morning and in the middle of the afternoon. Fatigue can often affect the skills of the driver before the motorist even feels any symptoms. Those who work erratic shifts are very likely to suffer driver fatigue and fall asleep at the wheel because of their irregular sleep patterns. You can protect yourself while you are driving by following the rules of the road and being a responsible and alert motorist. Doing so would give you the opportunity to avoid a potentially perilous driving situation - if you had the chance to see it coming. If you were the unfortunate victim of an auto accident due to the negligence of another driver, contact the skilled attorneys at Hazelton Law Firm today, and we will review your case for free.
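The "more than doubles" figure follows from kinetic energy growing with the square of speed: (60/40)^2 = 2.25. A quick sketch in Python confirms it (the car mass is an illustrative assumption, not an IIHS figure; the ratio is independent of mass anyway):

```python
# Kinetic energy E = 1/2 * m * v^2 grows with the square of speed,
# so a crash at 60 mph releases (60/40)^2 = 2.25 times the energy
# of a crash at 40 mph, whatever the vehicle's mass.
def kinetic_energy_joules(mass_kg, speed_mph):
    speed_ms = speed_mph * 0.44704      # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * speed_ms ** 2

car_kg = 1500                            # assumed typical car mass
e40 = kinetic_energy_joules(car_kg, 40)
e60 = kinetic_energy_joules(car_kg, 60)
print(f"energy ratio: {e60 / e40:.2f}")  # energy ratio: 2.25
```

The mass cancels in the ratio, which is why a 50% increase in speed always means a 125% increase in crash energy.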
Production Of New Varieties Besides the gratification that always accompanies the growing of plants, there is in plant breeding the promise that the progeny will in some way be better than the parent, and there is the certainty that when a stable variety of undoubted merit has been produced it can be sold to an enterprising seedsman for general distribution. In this way the amateur may become a public benefactor, reap the just reward of his labors and keep his memory green! The production of new varieties of plants is a much simpler process than is commonly supposed. It consists far more in selecting and propagating the best specimens than in any so-called "breeding." With the majority of the herbs this is the most likely direction in which to seek success. Suppose we have sown a packet of parsley seed and we have five thousand seedlings. Among these a lot will be so weak that we will naturally pass them by when we are choosing plantlets to put in our garden beds. Here is the first and simplest kind of selection. By this means, and by not having space for a great number of plants in the garden, we probably get rid of 80 per cent of the seedlings--almost surely the least desirable ones. Suppose we have transplanted 1,000 seedlings where they are to grow and produce leaves for sale or home use. Among these, provided the seed has been good and true, at least 90 per cent will be about alike in appearance, productivity and otherwise. The remaining plants may show variations so striking as to attract attention. Some may be tall and scraggly, some may be small and puny; others may be light green, still others dark green; and so on. But there may be one or two plants that stand out conspicuously as the best of the whole lot. These are the ones to mark with a stake so they will not be molested when the crop is being gathered and so they will attain their fullest development. These best plants, and only these, should then be chosen as the seed bearers. 
No others should be allowed even to produce flowers. When the seed has ripened, that from each plant should be kept separate during the curing process described elsewhere. And when spring comes again, each lot of seed should be sown by itself. When the seedlings are transplanted, they should be kept apart and labeled No. 1, No. 2, No. 3, etc., so the progeny of each parent plant can be known and its history traced. The process of selecting the seedlings the second year is the same as in the first: the best are given preference when being transplanted. In the beds, variations even more pronounced than in the first year may be expected. The effort with the seedlings derived from each parent plant should be to find the plants that most closely resemble their own parents, and to manage these just as the parents were managed. No other should be allowed to flower.
Roman and Anglo-Saxon Newcastle's castle was built on the ruins of a Roman Fort called Pons Aelius. ‘Pons’ is the Latin word for ‘bridge’ while ‘Aelius’ comes from the family name of the Emperor Hadrian, so the name means something like ‘Hadrian’s Bridge’. It was named after the Roman bridge across the Tyne which it guarded, which stood where the Swing Bridge is today. It was built in around 122AD, about the same time as Hadrian’s Wall, which it formed part of. It was built of timber, and was rebuilt in stone in around 211AD. The soldiers who garrisoned it originally were members of a tribe called the Cugerni from Germany. This regiment was later replaced by a regiment of the Cornovi – a British tribe from near Manchester. This was the only regiment of British soldiers stationed along Hadrian’s Wall. The fort was abandoned in around 400AD when Roman rule in Britain ended. The Anglo-Saxons who came after them built on the ruins. Today, the only things which can be seen of the Roman fort are the lines of cobbles on the ground around the Keep which mark where the foundations of the buildings used to be. We don’t know very much about Newcastle after the Romans left. The Anglo-Saxon historian Bede writes about a settlement 12 miles from the sea called ‘Ad Murum’ meaning ‘On the Wall’ which some people have suggested was on the site of modern Newcastle. Later writers say that Newcastle was known as ‘Monkchester’ in Anglo-Saxon times, although there is no evidence of a monastery here. What we do know is that there was an Anglo-Saxon cemetery on the site of the old Roman fort, underneath the Castle Keep and the railway viaduct. Over 600 graves were excavated between the 1970s and 1990s. There are also the remains of the tower of a small church under one of the viaduct arches, but no evidence of the surrounding settlement. The church and cemetery were still in use when the Normans invaded in 1066.
What exactly is cryptography? Cryptography is the technique of transforming readable data into unreadable data. We deal with it every day of our lives. Many important fields of science use cryptography, but every one of us has been using it for years without realizing it. You could write and study endlessly on the subject, so this is only a brief peek at the places where it is used. Now let us see where cryptography shows up. What does this actually mean in practice? Think of ordinary people. We all have secrets, lots of them, and some are so special that we would rather die than tell anyone about them. Another very simple example comes from family life. A family may be viewed as a little community of two to ten members, varying from nation to nation and depending on what you call a "family." Suppose you go somewhere with your loved ones. You want to ask your dad when you are going to your cabana, which stands in a lovely spot, and you do not want others to find out you are going there. You simply ask your old man, "When could we go there?" And that is it: you have just used cryptography. Why? Because others who heard what you said do not understand what you are talking about.
The role of cryptography in our lives
This technique is so important that we could not do many things without it. Why so? Let me explain. I will now take up a few of the main areas where cryptography is used. We live in a modern world. We send e-mails, whether for business, to friends, to companies, or to famous people whose address we have. It does not matter; we send e-mails all the time. People send around 210 billion e-mails daily!
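The idea of "readable data transformed into unreadable data" can be made concrete with the oldest toy example there is. This is a Caesar cipher sketch, offered purely as an illustration; it is in no way secure, any more than the cabana question above is:

```python
# Toy illustration of "readable -> unreadable": a Caesar cipher.
# NOT secure -- it only demonstrates the basic shape of encryption:
# a transformation that a reader without the key cannot undo.

def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, leaving other characters alone."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

secret = caesar("When could we go there?", 3)
print(secret)              # Zkhq frxog zh jr wkhuh?
print(caesar(secret, -3))  # When could we go there? -- decrypting is shifting back
```

The same shift with the opposite sign undoes the encryption, which is why both parties must share the "key" (here, the number 3).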
When you send an e-mail, it must pass through the Internet - a giant network made up of many computers, most of which are unguarded and attackable. Many people want to steal information from others, sometimes just for fun, but the risk becomes serious when something valuable is at stake. Just think for a minute about how large the Internet is. The top three countries by number of Internet users are: 1. China (253,000,000 users) 2. USA (220,141,969 users) 3. Japan (94,000,000 users). That is a lot! There are approximately 6.72 billion people on Earth, and the top three countries alone account for 0.567 billion Internet users - around 8.43%. Now picture what is out there. How do e-mails get protected while they are being sent? All connections between routers, and the routers themselves, need to be secured, and that is done through data encryption. There are normally two approaches to this security. The first is to use PGP (Pretty Good Privacy). That is both the name of a computer program and of the protocol itself. But what is the PGP protocol, in fact? It is a way to secure e-mails, a standard in cryptographically protected e-mail. Essentially it is combined with MIME security. Before encrypting with PGP, the message body and headers need to be in MIME (Multipurpose Internet Mail Extensions) canonical format. The content type "multipart/encrypted" denotes PGP-encrypted information and must include the protocol parameter. A multipart/encrypted message consists of two parts. The first part is a MIME body with the "application/pgp-encrypted" content type; it carries the control information, and its body must contain the following line: Version: 1. Complete information for decrypting is contained in the PGP packaged format. The second part is also a MIME body, with a simpler structure: it contains the encrypted data itself and is tagged with an "application/octet-stream" content type. The second approach is a tricky one.
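The two-part PGP/MIME layout described here (standardized in RFC 3156) can be sketched with Python's standard email package. The ciphertext below is a placeholder, not real PGP output; the point is only the container structure:

```python
# Sketch of the PGP/MIME message layout (RFC 3156) using Python's
# standard email package. The ciphertext is a placeholder; a real
# message would carry actual PGP-encrypted data.
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication

outer = MIMEMultipart('encrypted', protocol='application/pgp-encrypted')

# Part 1: the control information -- its body is the literal line "Version: 1".
control = MIMEApplication(b"Version: 1\n", 'pgp-encrypted')
outer.attach(control)

# Part 2: the encrypted payload, tagged application/octet-stream.
ciphertext = b"-----BEGIN PGP MESSAGE-----\n...placeholder...\n-----END PGP MESSAGE-----\n"
payload = MIMEApplication(ciphertext, 'octet-stream')
outer.attach(payload)

print(outer['Content-Type'])
# multipart/encrypted; protocol="application/pgp-encrypted"
```

Note how the `protocol` parameter on the outer part tells the mail client how to interpret the two inner bodies.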
The sender runs a secure website, the receiver has a username and password, and the receiver can see the message after logging in to the site. ISPs can also encrypt communication between servers using TLS (Transport Layer Security) and SASL (Simple Authentication and Security Layer). E-mail servers use this kind of protection between each other: these servers need their communication shielded so that no stray server gets a copy of any e-mail passing through them. TLS is also used in a variety of organizations, and it can be used with POP3, IMAP, and ACAP. When HTTP is protected by TLS, it provides more security than plain HTTP. Many current client and server products support TLS, but many still provide no support. Let us look at TLS/SSL in a little more detail. TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are nearly the same thing; in fact TLS is the successor of SSL, and there are only minor differences between them. They are used for instant messaging, e-mail, browsing, and web faxing. Two of these are used by everyone - e-mail and browsing the Internet, things you do practically every day. TLS plays a significant role online, particularly in communications privacy and endpoint authentication. HTTP, FTP, SMTP, NNTP, and XMPP are all protocols that can carry TLS protection. TLS can add security to any protocol that uses a reliable connection (such as TCP, the Transmission Control Protocol). TLS is most commonly used with HTTP to form HTTPS. We should also mention that TLS use in SMTP has been growing recently. In the case of VPNs, TLS can be used to tunnel an entire network stack; VPNs will be discussed in detail later. Let us now consider HTTP (Hypertext Transfer Protocol) and FTP (File Transfer Protocol). There are approximately 63 billion sites around the globe, and approximately 1 trillion unique URLs! Nearly all of them receive plenty of visitors daily.
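The "TLS on top of TCP" layering can be seen directly in Python's standard ssl module. This is only a sketch: the context enforces certificate verification, and `fetch_head` shows how a plain socket gets upgraded to TLS, but it is defined rather than called here, since running it requires network access:

```python
# Sketch of TLS over TCP with Python's standard ssl module.
import socket
import ssl

context = ssl.create_default_context()           # CA-verified, hostname-checked
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

def fetch_head(host: str) -> bytes:
    """Open a TCP connection, upgrade it to TLS, send one HTTP request."""
    with socket.create_connection((host, 443), timeout=10) as raw_sock:
        # wrap_socket performs the TLS handshake over the existing TCP socket
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(f"HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            return tls_sock.recv(256)
```

The application protocol (HTTP here) is unchanged; TLS simply slots in between it and TCP, which is exactly how HTTPS relates to HTTP.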
Imagine how important servers are, and how important their security is. What would happen if an average hacker could break into any server? Catastrophe! He would then break into another and another and another. Information could be stolen every single minute; the Internet would have no safe zone. You would be afraid to send e-mails, or to post anything to a site or forum. It is difficult to grasp what would happen without security, most of which is provided by cryptography. Many of us also use FTP (File Transfer Protocol) to transfer information between two computers. It works much as if you had opened Windows Explorer to look at files and folders; the difference is that over an FTP connection you can also download files, not merely view or browse them. There are many FTP servers and clients available online. These programs can ease your work: you can organize your downloads if you use the client side, or control what others can download if you run the server side. FTP even lets you use usernames and passwords for protection. Sounds like a simple way to transfer files to and from your friends and loved ones, doesn't it? All of that sounds fine, but even so, FTP is exposed! How? By its design, FTP allows other users on the same network to sniff the traffic as it passes, including files, usernames, and passwords. There is no built-in security or data encryption. A popular remedy for this security problem is to use either SFTP or FTPS. Be careful - it is confusing: SFTP and FTPS are two quite differently functioning file transfer protocols. SFTP is the SSH (Secure Shell) File Transfer Protocol.
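The cleartext weakness of plain FTP, and the FTPS remedy, can both be sketched with Python's standard ftplib. The host and credentials below are hypothetical placeholders, and the functions are only defined, not called, since they need a live server:

```python
# Sketch of plain FTP vs FTPS with Python's standard ftplib.
# Host and credentials are hypothetical placeholders.
from ftplib import FTP, FTP_TLS

def list_directory(host: str, user: str, password: str) -> list:
    """Plain FTP: the login and all traffic cross the network in cleartext."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        return ftp.nlst()

def list_directory_ftps(host: str, user: str, password: str) -> list:
    """FTPS (FTP over TLS): same protocol, but wrapped in encryption."""
    with FTP_TLS(host) as ftps:
        ftps.login(user=user, passwd=password)
        ftps.prot_p()  # switch the data channel to encrypted mode too
        return ftps.nlst()
```

The two functions are almost identical, which is the point: FTPS changes the transport, not the file-transfer commands.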
SSH uses public-key cryptography, which works like this: you have a text to encrypt, and there is a public key along with a private key. The text gets encrypted using the public key, but only someone who knows the private key can decrypt it. Because of this design - the use of public-key cryptography - SSH is primarily used to log in to a machine and run commands, but it can also transfer files (through SFTP or SCP) and supports tunneling and port forwarding. FTPS is often called FTP/SSL. FTPS uses SSL/TLS beneath normal FTP to encrypt the control and data channels. A VPN (Virtual Private Network) is like a virtual computer network. Why so? Consider the Web. How does it work? It consists of many computers and servers connected to each other. And how do those connections exist and work? They exist physically; the machines are linked by wires. The user typically has an ISP (Internet Service Provider) through which he or she obtains access to the World Wide Web. Now, what is the difference between an ordinary Internet connection and a Virtual Private Network connection? A VPN uses virtual circuits or open connections to tie the network together. All fine, but a VPN needs security to be effective and usable, and it has a distinctive security model. Let me consider VPN security issues. Authentication is required before a VPN connection is established. If you are a known and trusted user, you may have access to resources unavailable to other users. More interesting is that servers may also be required to authenticate themselves to join the Virtual Private Network. An unusual mechanism: users are accustomed to being required to authenticate themselves to a website or server, but a server also needs authentication? Yes, it does! There are many different authentication mechanisms used in VPNs. Some of these mechanisms are built into firewalls, access gateways, and other devices.
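The public-key idea just described - anyone can encrypt with the public key, only the private-key holder can decrypt - is easy to demonstrate with a toy RSA example. The primes here are deliberately tiny and offer no security whatsoever; real RSA keys use primes hundreds of digits long:

```python
# Toy RSA with tiny primes, to make the public/private-key idea concrete.
# Deliberately insecure: real RSA uses ~2048-bit moduli.

p, q = 61, 53
n = p * q                 # 3233, the modulus, part of both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (coprime with phi)
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    return pow(m, e, n)   # anyone can do this with the public key (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)   # requires the private key d

message = 65
cipher = encrypt(message)
print(cipher)             # 2790
print(decrypt(cipher))    # 65
```

Knowing (e, n) does not help an eavesdropper decrypt, because recovering d requires factoring n, which is infeasible at real key sizes.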
A VPN authentication mechanism uses passwords, biometrics, or cryptographic systems, which may be combined with other authentication mechanisms. Secure VPNs are built to supply the privacy their users need. The essence of this lies in cryptographic tunneling protocols: a secure Virtual Private Network ensures message integrity, confidentiality, and sender authentication. We can see how significant cryptography is in our lives. That was fairly technical information on the use of cryptography, but let us consider a few other, less technical examples as well. Abbreviations. You may be clever and sensible, but you are lost if someone uses an abbreviation and you do not know where it comes from or what it means. Suppose you are on vacation and hear someone say, "I got that cool stuff from an excellent FTP server." You do not know what this is about if you are not familiar with the File Transfer Protocol, what it means, and where it is used. Think of the old days - the 19th century and the start of the 20th century. People had no mobile phones, no Internet, and no way to send e-mail. When they needed to say something to someone far away, and they did not want to use the telephone, what could they do other than visit that person? They used Morse code. It is familiar to us, but many people only know what it is, not how to read or produce Morse code themselves. There were two common ways to generate Morse code. One generally worked only over short distances: you picked up an object and struck another object to make a sound, and that sound carried the Morse code. The other option worked over long distances as well. Suppose it was night, and a ship was sailing on the sea or the ocean, fighting an enormous thunderstorm. Back then, people had many wooden boats, which could not stand up to a huge storm's power.
So if there were people on the shore, one or two kilometers from the ship's position, they might have used a torch to guide the ship safely to the coast. The strong point of torch-based Morse signaling was that it worked even during the day; most commonly it was used to request assistance if someone was in trouble in daylight. Many times there were people with small boats who got themselves far from the shore and did not know how to get back. It was terrifying, and they could not compose themselves enough to work out where to go. So they waited until a ship came close enough to be in sight, and then they used the torch, and, if they were lucky, they were rescued. Today we use telephones and mobile phones to communicate. Phones transmit electrical signals over a complex telephone network. This technology enables virtually anyone to communicate with virtually anyone else. The one difficulty is that phones can easily be eavesdropped. Eavesdroppers need just three things to do the job: a pick-up device, a transmission link, and a listening post. Anyone who has these components can be an eavesdropper. The pick-up device is most typically a microphone or a video camera. These devices record audio or video images that are later converted into electrical signals. Some listening devices can also store information digitally and then send it to a listening post. The transmission link may be a cable or a radio transmission. A listening post allows signals to be monitored, recorded, or retransmitted; it can be as close as the next room or several blocks away. An eavesdropper simply needs to place a bug in your phone, and everything is ready. Do not be complacent: installing a bug is a matter of seconds. This whole approach is based on installing devices. Landlines can be tapped anywhere between your phone and the telephone company's office.
In any case, the installer of a telephone tap needs physical access to the telephone cables, and there are many ways to get such access. A second approach is called bugging, which involves installing no device and requires no access to the victim's phone. You can guard yourself against eavesdropping with telephone-encrypting devices. Mobile phones are used by nearly every other person on Earth. A mobile phone has all the functionality of a simple telephone but adds services such as SMS, MMS, e-mail, Internet access, gaming, and Bluetooth. Mobile phones automatically connect to GSM towers or satellites, depending on which is available and stronger at the time. Mobile phone signals can be picked up just as a backyard satellite dish pulls in television signals. To guard yourself against eavesdropping, you can get mobile phone encrypting devices; luckily, encrypting devices exist for both landline telephones and mobile phones. Many children like to invent new things and investigate everything around them! You probably know of children encrypting their messages or diaries by devising a custom ABC. That is simple to do: you pick a distinct symbol for every letter of the alphabet, and only you and those who are meant to read your messages know which symbol corresponds to which letter. We have now seen many different places where cryptography is used, today and in the past. As an ordinary person, you can find cryptography everywhere around you! It is remarkable how far science has come, and it keeps going, gathering new knowledge daily. E-mails and the Internet are used by more and more people every day; we simply cannot imagine our lives without them. And all of this work gets secured by cryptography.
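Both Morse code and a child's "custom ABC" are substitution schemes: each letter maps to a fixed symbol that only initiated readers know how to reverse. A minimal sketch of such a mapping, using the standard international Morse table:

```python
# A substitution scheme: each letter maps to a fixed symbol.
# Morse code is the classic example; a child's "custom ABC" works the same way.
MORSE = {
    'a': '.-',   'b': '-...', 'c': '-.-.', 'd': '-..',  'e': '.',
    'f': '..-.', 'g': '--.',  'h': '....', 'i': '..',   'j': '.---',
    'k': '-.-',  'l': '.-..', 'm': '--',   'n': '-.',   'o': '---',
    'p': '.--.', 'q': '--.-', 'r': '.-.',  's': '...',  't': '-',
    'u': '..-',  'v': '...-', 'w': '.--',  'x': '-..-', 'y': '-.--',
    'z': '--..',
}

def to_morse(text: str) -> str:
    """Encode letters as Morse: spaces between letters, ' / ' between words."""
    words = text.lower().split()
    return ' / '.join(' '.join(MORSE[ch] for ch in word if ch in MORSE)
                      for word in words)

print(to_morse("SOS"))      # ... --- ...
print(to_morse("help us"))  # .... . .-.. .--. / ..- ...
```

Strictly speaking this is encoding rather than encryption (the table is public), but it shows the shared mechanics: sender and receiver agree on a symbol table in advance.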
Revenue is the income a firm obtains from the sales of its goods or services. Three terms must be understood:
• Total revenue (TR) - all the revenue earned by the business. Total revenue = price x quantity demanded.
• Average revenue (AR) - total revenue divided by the number sold.
• Marginal revenue (MR) - the increase in total revenue as the result of one more sale. This is not necessarily the same as the price; it is only the same as the price if the price remains constant.
Revenue curves vary depending on whether price is constant at all levels of output (as in the case of a firm which is a price-taker), or falls as output increases (as in the case of a firm which is a price-setter). Look at Figures 1 and 2 below to see the difference this makes to the shape of the average/marginal revenue and total revenue curves:
Figure 1 Revenue curves - constant price (price-taker)
Figure 2 Revenue curves - falling price (price-setter)
You have to be able to calculate revenue, in any form, from data, then draw and interpret curves. Time for an example, and for you to do some work again!
Output (units):         0    1    2    3    4    5    6    7    8    9   10
Total revenue ($000):   0  100  180  240  280  300  300  280  240  180  100
Plot this with output on the horizontal axis and revenue on the vertical axis. Look at it and then we will do some more calculations. Total revenue rose at first, reached a maximum, and then declined. From the total revenue data above, now calculate the figures for marginal revenue and average revenue. Once you have had a go, click on the answer link below to check your calculations.
Answer - revenue calculations
Now, plot the marginal and average revenue curves from this data as well. Examine it. What does it tell you? Observation of the graph shows:
• Both AR and MR fall as output increases.
• AR and MR start at the same point on the Y-axis, at the same level of revenue.
• MR can and does become negative.
• Using the first graph as well, when MR is zero, TR is at its maximum. Output is 5.5 units.
Within economics you will meet:
• Normal profit - the level of profit which is just sufficient to keep the firm in its present use. Normal profit is assumed to be an element of the ATC curve.
• Supernormal profit (or abnormal profit) - any profit made in excess of normal profit.
The definitions of supernormal and normal profit mean that profit on a diagram drawn by an economist shows supernormal profit only. Normal profit is included as an element of the ATC curve and arises where ATC = AR. Examine the following diagrams (we'll look at how to build these diagrams in more detail later on):
Figure 3 Firm in perfect competition - supernormal profit
Figure 4 Monopoly - supernormal profit
This has to be compared with the accountant's definition of profit.
Accounting profit: the difference between revenue from sales and the costs incurred in making these sales, regardless of any credits given or taken. Accountants deal in facts; they do not get involved with concepts such as normal profit. Governments tax accounting profit, not normal profit.
Why do firms try to make a profit? Profit has many uses:
• It is the return to the entrepreneur.
• It is a source of funds for development.
• It is a motivator.
Profit is a driving force within business. It is an incentive for investors to invest. It lies behind all cost-reduction exercises, as the aim of cost reduction is profit maximisation.
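The revenue arithmetic from the total revenue table above can be worked through directly: AR = TR / Q, and MR is the change in TR from one more unit sold.

```python
# AR and MR from the total revenue table above.
# AR = TR / Q; MR = change in TR from selling one more unit.
output = list(range(11))                                      # 0..10 units
tr = [0, 100, 180, 240, 280, 300, 300, 280, 240, 180, 100]    # $000

ar = [None] + [tr[q] / q for q in output[1:]]                 # undefined at Q = 0
mr = [None] + [tr[q] - tr[q - 1] for q in output[1:]]

for q in output[1:]:
    print(f"Q={q:2d}  TR={tr[q]:3d}  AR={ar[q]:5.1f}  MR={mr[q]:4d}")
# AR falls steadily (100, 90, 80, ...); MR falls twice as fast, reaches 0
# where TR peaks at 300, and is negative beyond that.
```

This reproduces the graph's observations numerically: both AR and MR fall as output rises, MR turns negative after the sixth unit, and TR is maximised where MR crosses zero.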
Distance Core Shamanism (for People and Animals)
Distance Core Shamanic Practices
“Shamanism is the oldest spiritual practice known to humankind.” -- Colleen Deatsman, The Hollow Bone: A Field Guide to Shamanism
“Core shamanism consists of the universal or near-universal principles and practices of shamanism not bound to any specific cultural group or perspective.” -- Anthropologist Michael Harner, Founder of the Foundation for Shamanic Studies
Over tens of thousands of years (many anthropologists have dated the practice back over 100,000 years), our ancient ancestors developed a spiritual-based system for personal empowerment, problem-solving, and spiritual healing. This system is now referred to as “Shamanism” or, currently, “Shamanic Practices.” Shamanic practices have been used in every culture across the world. It is a form of spirituality that is based upon living in balance (it’s all energy) with all other forms of life on this planet -- the people, the plants, the animals, and the Earth itself. Shamanic practices have survived for millennia simply because they have proven to be successful in enabling people to achieve physical, mental, emotional, and spiritual energy balance -- to not only continue to exist in an often harsh environment, but to THRIVE within that environment. Shamanic practitioners always have been -- and still are -- the “healers” within their tribes and communities. They were the “seers” and the “sages” whom the people turned to for balance and harmony in relationships, for healing remedies, and for knowledge about critical decisions that needed to be made on a daily basis (such as, “where shall we hunt to find food?” and “where shall we set up our summer camps?”). Traditionally, the health of individuals and the community as a whole was the responsibility of the shamans. If the shamans did not do their job well, the people starved or suffered needlessly.
Shamanic practices have survived the ages (and the comings-and-goings of specific civilizations, nation-groups, and countries) because the practices are experiential, based upon “divine” revelation, and can adapt and respond to the unique needs of the situation at hand. Although Shamanism is not a religion, it is the “ancestor” to all forms of human religion. Core shamanism (the basic shamanic practices, as practiced around the world) embodies the most wide-spread and time-tested practical system of spirituality and mind-body-soul healing known to humankind. It encompasses a timeless wisdom of indigenous cultures shared around the planet, and passed down through thousands of years -- to us. The word “Shaman” (pronounced shah-maan) comes from the language of the Evenki people, a Tungusic tribe in Siberia, and it means “one who sees in the dark.” It is a word that was adopted widely by anthropologists to refer to people who enter an altered state of consciousness (similar to meditation or deep prayer) at will; people who “journey” into non-ordinary reality in order to acquire knowledge and power (in the form of energy) to help others. Shamanic “seeing” is a whole-body sensing and knowing, whereby the Shaman receives visual images, insights, flashes, and direct knowledge from this “non-ordinary” reality. Shamans, basically, develop the ability to enter a deep state of consciousness (that is, outside of what is experienced in the regular “waking” state of time and space) with discipline and purpose in order to directly seek spiritual/divine guidance, gain insight or energetic power, and/or to determine the core of physical, mental, emotional, and/or spiritual imbalances. This non-ordinary reality that Shamanic Practitioners learn to access exists just beyond our everyday perceptions.
[Keeping in mind the scientific demonstration that the Universe is pure movement, with no distinction between light (patterns of energy) and matter (patterns of energy), and is pure vibration loaded with the potential to manifest into an infinite array of patterns, the Quantum Inseparability Principle would attest that what we call “non-ordinary” reality is simply our reality accessed by using all of our innate senses. As Einstein pointed out, energy is equivalent to matter (E = mc²) -- so “solid” objects that you can see, feel, and touch are matter AND energy, which you cannot see, feel, or touch.] Many people say that they only believe what they can touch or see; yet, science has shown that 99.99999+ percent of an atom is open space! That is, everything we touch or see is verifiably…nothing! Matter exists as both a “wave” and a “particle” -- it can both exist and be non-existent (“wink” in and out of existence). So what is perceived as “real” is actually a disturbance in space -- a measurable “wrinkle” in the energy of space. It only has a “perceived” solidness. And it is only this perceived solidness that differentiates matter from space (which is unseen by our basic five senses). The study of quantum mechanics has demonstrated that the vast space between the particles is filled with energy -- far more than is needed to produce a particle. That is, the space between the particles of the atom is filled with POWER (or potential). It has more power than the particles it produces because its potential to produce is always greater than what is actually produced. All that to say -- it’s all energy. And what is “unseen” by our basic senses is just as real and just as powerful (if not more powerful) than what is seen by our basic senses. What is “unseen” is “seen” by the Shamanic Practitioner when in a deep state of consciousness referred to as “non-ordinary” reality. Everyone can, with training and intent, access this non-ordinary reality.
(The Australian aborigines call the non-ordinary reality the “Dreamtime.” It is referred to as the “Other World” in Celtic traditions, or as the “Spirit Worlds” by Native Americans and other indigenous peoples.) Modern quantum physics has proven that there are, in fact, many worlds, dimensions, and “realms” that overlap and interact with one another. Shamanic Practitioners merely serve as a “bridge” between the different dimensions. (Just like we know that light is made up of a broad spectrum of frequencies, some of which we can see -- and some of which we can’t -- “ordinary” reality merely has the appearance of having a solid, physical form, while “non-ordinary” reality does not.) Shamanic Practitioners, including Kristina, see the soul of the world (the “energy” of life and consciousness) in every rock, stream, bird, animal, and cloud. This belief holds that all life has a soul; a soul created from, filled with, and sustained by an intelligent Spirit-guided life-force energy -- the “Spirit Way” is the way of energy -- and it’s all energy! Shamanic Practitioners believe that this intelligent Spirit-guided life-force energy is omnipotent, formless, and omnipresent. This Spirit-guided life-force energy moves within every living thing -- connecting every living thing. And this Spirit (whether it is called God, or Great Spirit, or Creator -- or any other name) is just as real and just as tangible as flesh -- even if we can’t see it with our physical eyes, taste it with our tongues, or hold it in our hands! This all-pervasive, divine life-force energy has “assistant” energies (including angels, “power” animals, or any other compassionate helping spirit). These assistant energies interact with all of us on a daily basis -- from birth to death. These “assistant” energies take on specific “energetic signatures” (energy forms) and appearances that Shamanic Practitioners can see, feel, hear, experience, and interact with in non-ordinary reality.
These “assistant” energies are called “helping spirits” by Shamanic Practitioners. However, they have been referred to as angels, guardian spirits, spirit teachers, spirit helpers, spirit guides, spirit allies, ascended masters, power animals, elemental spirits (including faeries, elves, leprechauns, gnomes, etc.), or Archetypes -- depending upon the culture or religious orientation of the ones “perceiving” them. Whatever name these helping spirits are given, or in whatever form they allow themselves to be perceived by humans -- they are pure energy! Shamanic Practitioners know that all worlds (ordinary and non-ordinary reality) host an infinite number and variety of helping spirits. They believe that each person is born with at least one helping spirit that protects them and shares their “power” (energy) with them. The job of all helping spirits is to guide, protect, teach, and link people with a higher energetic vibration so they can experience enhanced energy, vitality, health, harmony, and wisdom of action. Working with these helping spirits, Shamanic Practitioners work to effect spiritual healing and harmony in our ordinary reality -- and they do this to ensure the survival and well-being of the people they serve: their clients, their students, their family, their friends, their elders, their community, and, ultimately, all of humanity and all of the Earth. Throughout the world, and throughout the ages, people (entire cultures) have believed in these helping spirits -- invisible to the naked eye in this reality because they vibrate at a much higher frequency and are far less physically dense (called “subtle” or “etheric” energy in quantum physics). Helping spirits are, in essence, just higher-vibrational, spiritual energy forms -- existing just outside our ordinary, everyday, five-sense perception.
Basically, Shamanic Practitioners work with compassionate helping spirits in order to restore balance, “wholeness,” harmony, and power to people, animals, plants, places, and the entire planet. This restoration, which takes place on a spiritual level, enhances the physical, mental, emotional, and spiritual well-being of all life. (Shamanic Practitioners see all physical, mental, and emotional imbalances as manifestations of spiritual imbalances. Unlike western medicine specialists, Shamanic Practitioners do not see their clients as separate parts working separately or as disconnected symptoms; they see their clients as WHOLE people; people connected and interconnected to the entire web of life.) Throughout history, Shamanic Practitioners have provided a wide range of services and served a wide range of roles. Some of the services provided by traditional indigenous Shamans (including Native American Medicine Men and Medicine Women) included:
• divining information, wisdom, and knowledge from both ordinary and non-ordinary reality;
• acting as intermediaries between ordinary and non-ordinary reality in order to restore (personal and tribal) physical, mental, emotional, and spiritual balance and well-being;
• preparing the people for hunting, gathering, and agricultural activities;
• seeing the future; explaining/interpreting signs and omens;
• officiating/leading ceremonies (for rites of passage, such as birth, marriage, and coming-of-age);
• retrieving lost soul parts (soul essence) and extracting “negative” intrusions;
• communicating with the dead and helping the dying transition to the next level;
• using plants, plant energies, and plant spirits for healing purposes;
• communicating with nature spirits and weather elements;
• singing songs to invoke, connect with, and/or honor helping spirits, or for healing;
• diagnosing illnesses; teaching apprentices;
• setting bones, pulling teeth, and treating wounds; interpreting dreams;
• delivering babies and conducting soul-crossings for the deceased;
• counseling, advising, and mediating for individuals, couples, families, and tribes;
and many, many other practices!
Basically, at one time, the way of the Shaman was practiced only by hunters and gatherers, primarily in order to find food and other resources for their tribe and for energy/spiritual healing. They accomplished this by achieving the expanded state of awareness in which they connected to “spiritual” help. In these traditional cultures, there were often only a few gifted people in the community who were able to perform the shamanic survival duties for sustenance, support, guidance, and healing on behalf of others. Today, in the twenty-first century, Shamanic Practitioners work more for individuals than for tribes; more for the entire community of humankind than for just one culture. They often practice “combination” shamanism -- a combination of specific indigenous techniques and “core” shamanic techniques (techniques based upon common values, philosophies, and practices found at the core of all traditional shamanic practices). Today, more and more Shamanic Practitioners are realizing (actualizing) that the path of direct revelation brings humanity into a relationship with the spiritual world -- based upon the belief that everything that exists is alive and has a spirit (and a voice), and that there is an energy field that connects all life. This awareness of interconnection is shared by tribal shamans at one end of the human continuum and by quantum physicists and Zen Buddhists at the other. Shamanic practices offer a direct path to spiritual energy healing and to physical, mental, emotional, and spiritual balance in life. The “dis-eases” of modern life that shamanic practices have been proven to help with include: anxiety, stress, fear/phobias, addictions, chronic pain, dealing with loss/grief, personal empowerment, trauma, fatigue, and recovering from accidents.
Shamanic practices do not supplant Western medicine; rather, they serve as a less invasive complement with demonstrable success. These practices are considered a form of Complementary or Alternative Medicine (CAM), and, as such, they are recognized by the National Institutes of Health as “conducive to health and wellness.”

Typically, Shamanic Practitioners use special, sacred “tools” to accomplish the meditative “journey” state that is required in order to enter “non-ordinary” reality. These tools are, primarily, the rattle and the drum. Research has demonstrated how the steady beat of the drum affects the brain, inducing a deep “theta” wave state. In comparison, when we sleep (and are dreaming) our brain typically “fires” nerve impulses in the “delta” wave state (at 1 to 3 cycles per second, or Hertz/Hz). When we awaken, the brain typically moves rapidly toward the “alpha” brain wave state (at 8 to 13 Hz). This is a resting state in which we are awake and aware -- but not doing anything in particular. When we start moving about and getting down to work, we move into a “beta” wave state (at 13 to 20 Hz). Beta brain waves are high-frequency waves in which we function for most of our “waking” reality. During a typical day, the left hemisphere of our cerebrum (the higher brain or “logical” brain) functions primarily in beta waves. The right brain (our “emotional” and “intuitive” brain) remains in alpha. As we move through the day, we shift back and forth between these two halves of our brain, and shift from beta (working, concentrating, problem-solving) to alpha (daydreaming, reflecting, creating). The theta brain wave state is an in-between zone, in which the brain typically fires nerve impulses between 4 and 7 Hz. This is a deep, reflective, dreamlike state -- a state that has been recorded in Zen Masters, people in transcendental meditation, and Shamanic Practitioners during their “journeys” to non-ordinary reality.
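The brain-wave bands just described (delta, theta, alpha, beta) amount to a simple frequency classification. A minimal sketch, using only the cutoff values given in the text (the function name is illustrative, not from any EEG library):

```python
# Classify a brain-wave frequency (in Hz) into the bands described above.
# Boundaries follow the text: delta 1-3 Hz, theta 4-7 Hz,
# alpha 8-13 Hz, beta 13-20 Hz.

def brain_wave_band(hz):
    """Return the name of the brain-wave band for a frequency in Hz."""
    if hz < 4:
        return "delta"   # deep, dreaming sleep
    elif hz < 8:
        return "theta"   # deep, reflective, dreamlike "journey" state
    elif hz <= 13:
        return "alpha"   # awake and aware, but resting
    else:
        return "beta"    # active, working concentration

print(brain_wave_band(5))  # → theta (the 4-7 Hz drumbeat target zone)
```

This is why a drum beaten at 4 to 7 beats per second is said to entrain the theta state: the driving rhythm falls squarely inside that band.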
When Shamanic Practitioners listen to the monotonous, rhythmic beating of a drum or the shake of a rattle -- at 4 to 7 beats or shakes per second -- both halves of the cerebrum synchronize, slow down, and start firing at 4 to 7 Hz. This allows the practitioner to enter the theta state. The drum, in particular, has been scientifically shown to produce changes in the human central nervous system (affecting the electrical activity in the brain). Native Americans have a simpler explanation for why the drumbeat works so effectively in reaching a state that connects people to the spiritual -- it has the same “pulse” as Mother Earth. (Science even points in this direction -- the Earth’s background base frequency, called the “Schumann Resonance,” has been measured at 7.8 cycles per second, which falls within the theta brain-wave range.)

Distance Core Shamanic Services

As a Core Shamanic Practitioner, Kristina offers these distance healing services:

• Divination (through the use of objects of nature, or through journey);
• Dreamwork (interpretation and wisdom-insight from journey);
• Power Animal Retrieval (connecting/reconnecting you with your own Power Animal);
• Soul Retrieval (locating lost soul energy due to trauma, abuse, addiction, etc.);
• Extractions (releasing you from unwanted “negative” energies that no longer serve you);
• Psychopomp (sacred work with the dying to ensure peace and harmony; and journey work to ensure that deceased loved ones have safely passed to the light);
• Healing Words (connecting you with the power of words for healing and balance).

Have a question that you struggle with -- one you just can’t seem to find the answer to? Should you take the job? Should you quit the one you have? Should you marry this person? Is it the right time to have a child? Yes, you do already have the answers within you!
However, working with divine, compassionate healing spirits, Kristina can journey into non-ordinary reality and bring back information, wisdom, and/or knowledge that can confirm or, at least, solidify your own inner wisdom. Using the ancient, time-tested technique of reading natural objects (such as rocks), or by journeying into non-ordinary reality, Kristina can access a higher Source in order to help answer your questions. If you’re looking for someone else to make your decisions for you, this shamanic technique is not for you; however, if you want to seek clarity of purpose and thought -- to find and connect with your own inner wisdom and the Source of all guidance -- then this can be a fun, exciting, and deeply resonating experience!

Questions to Avoid

• “Yes” or “no” questions cannot be answered (such as, “Is she/he the right mate for me?”). This is because it is believed that these types of answers take away (or reduce) your power of choice -- YOU must always find, and be responsible for, your own “Yes, I will…” or “Yes, he is…” and your own “No, I won’t…” or “No, she isn’t…” decisions!
• “Will I…” questions cannot be answered (such as, “Will I win the lottery?”). This is because it is believed that these types of answers are “prophecy” (telling the future), which our helping spirits will not do -- because this then affects your forward motion in life (your ability and probability of making decisions/choices based upon this type of prophecy). Our future is not to be revealed as fact -- not until it is our present, and it is a fact!
• “Should I…?” questions cannot be answered (such as, “Should I look for a new job?”). Again, what you should and should not do in your own life is always your choice and your responsibility (which is a very, very good thing -- it is what gives you power and maturity in your being).
• Questions about time/timing cannot be answered (such as, “Am I going to meet the right person by the spring?”).
This is because our helping spirits “live” outside of this time/space realm -- they cannot answer when something will or will not happen.

Questions to Ask

• “How can I…?” questions can be answered (such as, “How can I maximize my chances of getting this job?”). In journey, Kristina can receive specific instructions, insights, and answers to these types of questions. (And what a rewarding journey experience!)
• “What are the consequences of…?” questions can be answered (such as, “What are the consequences of driving to see my friend in San Francisco?”). In journey, Kristina can receive specific information regarding the potential consequences of an action you are considering taking. (Keep in mind the potential aspect -- our future is never written in stone; every second and every action can lead to new futures and new possibilities!)
• “Why…?” questions can be answered (such as, “Why am I experiencing this illness?”). In journey, Kristina’s spirit helpers (and your own) will provide rich potential reasons for what you are currently experiencing in your physical, mental, emotional, and spiritual body.

Whether exploring divination through the answers provided by natural objects (a technique that Kristina can perform for you -- or one that she can teach you) or through journeywork, answers are often given in detailed, rich, metaphorical and archetypal language -- the language of your soul. Working with Kristina, you can become a detective, discovering a much deeper answer than any your “conscious” mind can provide. This is often a “team” effort -- the helping spirits may “show” Kristina an image of you flying a World War II airplane, but you get the opportunity to dig deep and discover what that means to you -- and how it answers your question in a way you could only otherwise dream of! Speaking of dreams -- has a “big dream” rocked your world, but you’re just not sure exactly what it means, or what it means for your life and your future?
In this unique spiritual environment, you will have the opportunity to work with Kristina in exploring the nature, meaning, and use of your dreams (and your archetypal/dream language). You will learn how to decode the symbols and archetypal metaphors of your dream life -- and see what Source is trying to communicate to you -- for your energetic health and well-being! Our dreams regularly provide the ways and means to improve and enhance our lives, and yet they go “unnoticed” by most of us, most of the time. This is a tragedy! Focusing on the message behind the imagery can make the difference between dis-ease and enhanced well-being; between depression and contentment; between walking down a road that will not serve your highest needs and diving into the source of pure joy and fulfillment! Kristina can journey to the source of your dreams -- and find the answers you need to move forward in a healthy, productive, and integrated way! Layer by magical layer, you can work with Kristina, as a team, to find the truth within the dream; the truth that resonates to the core of your being -- ensuring that your dreams manifest in a bountiful and positive manner.

Power Animal Retrieval

Throughout history, animals have played a vital role in providing material and spiritual support and insight for human life. Our ancestors depended upon animals for much of their material life. Animals provided food, clothing, ornamentation, tools, weapons, medicine, shelter, transportation, and companionship. They served as “harbingers” of seasonal change, danger, and fluctuations in the weather. Compared to humans, animals collectively possess greater strengths and powers. They can run faster; swim better; see, hear, and smell more acutely; climb better; hunt more successfully; and (of course) fly. Our ancestors looked to animals (and to animals in spirit or “archetypal” form, who had even greater powers than individual animals) as teachers, companions, and guides throughout life.
This connection to animals is built into the core of our DNA. Many indigenous cultures believe that we all have at least one “Power Animal” (or “Totem” animal) that travels with us from birth. This Power Animal offers a unique and specific energetic essence, such as courage (perhaps Bear?) or wisdom (perhaps Owl?) or joy (perhaps Dolphin?) or patience (perhaps Turtle?) or love (perhaps Dove?) or freedom (perhaps Eagle?) or acceptance or grace or focus or…. When we were children, we were in direct contact with our Power Animals; however, as we grew older (especially in western cultures) we were encouraged to disconnect from this source of wealth and knowledge -- as well as from anything and everything that could not be “sensed” by our basic five senses. In that process, we lost a very critical “power” -- a power designed to enhance our existence in this realm. Working with Kristina (and using an ancient, time-tested technique), you can reconnect with your Power Animal -- and, by doing so, with the specific energetic essence that Power Animal holds for you. When you reconnect to this lost power, you reconnect to a core element/energy of your soul! Once in relationship with a human, each Power Animal serves as a teacher, revealing the power (or “medicine”) that it carries for you. And once you have reconnected, you can stay connected to this power and energy. Whenever you need to, or want to, you can fly with Eagle, swim with Dolphin, stand tall and courageous with Bear, run wild and free across the mountain meadows with Wolf! [Keeping in mind, of course, that one Power Animal does not have more “power” than another. The spirit of the Mouse has just as much power as the spirit of the Tiger -- but each offers you different power essences. Do you need to realize the power of “moving” through the small details (like a Mouse) or do you need to realize the power of “courage” (like a Grizzly Bear) to achieve your larger dreams?] What is your Power Animal?
And why are you waiting?

Soul Retrieval

As we grow and move through our lives, we are bombarded by “traumas” that exceed the ability of our soul to comprehend. Maybe we were abused as children (physically, sexually, mentally, or emotionally); traumatized by a car accident; brutalized by combat; or enslaved by addictions to alcohol or drugs -- whatever the reason, we lost, in that moment, an essential piece of our soul. In modern language, this is often referred to as Post-traumatic Stress Disorder (PTSD). PTSD is often characterized by: 1) re-experiencing the events (through flashbacks forcing you to relive the trauma over and over again, bad dreams, or anxious/frightening thoughts); 2) avoidance symptoms (staying away from places, events, or objects that are reminders of the experience; feeling emotionally numb; feeling strong guilt, depression, or worry; losing interest in activities that were enjoyable in the past; and/or having trouble remembering the event); or 3) hyperarousal symptoms (being easily startled; feeling tense or “on edge”; having difficulty sleeping; and/or having angry/emotional outbursts). In traditional language, it was simply referred to as Soul Loss. If you are experiencing Soul Loss, you might hear yourself saying such things as:

• I felt like a part of me died that day.
• I feel like I lost my innocence when that happened.
• It was as if I lost my freedom (my faith, my hope, my will to live, etc.) that day.

Or, more correctly, “I feel like I lost a piece of my soul when that happened.” Well, on an energetic level, you did exactly that. You did, in fact, lose a piece of your soul. Your soul is who you really are. Contrary to popular belief, you are not your thoughts, emotions, opinions, ideas, or what you do for a living -- you are so much more! You are energy -- pure, high-vibrational energy. Your soul is pure spiritual essence that flows through you and around you. It is the life-force energy that sustains you.
If you have become “disconnected” from, or “lost,” some of this energy (part of your soul), then you are operating (or attempting to operate) at less than full capacity. And, if this is the case, you are probably experiencing a myriad of symptoms of that energy loss. This “Soul Loss” is a spiritual, emotional, and psychological “dis-ease” that occurs when part of your soul “splits off” due to trauma (abuse, neglect, addiction, abandonment, accident, surgery, the breakup of a relationship, even witnessing a traumatic event). The trauma doesn’t even have to be considered “severe” in order to result in soul loss (pain, as always, is relative). Soul Loss can result in symptoms such as depression, anxiety, fatigue, emotional “numbness,” a weakened immune system, frequent or chronic illness, and even a feeling of “emptiness” inside that no amount of activity can fill. Soul Loss need not be permanent. Across the globe and throughout time, Shamans and Shamanic Practitioners have journeyed to the Source and brought back the soul “energy” that was lost. The simple, yet very powerful, act of reconnecting (“retrieving”) this soul energy can bring you back up to full capacity (harmony, balance, happiness, peace, joy…). It is this energy that can, and will, make a significant difference in your life -- by enhancing it! You can regain the soul’s natural state of being -- which is that of open expression, giving/receiving abundance, empathy, intuition, and substantive action that makes a difference in this world! Working with Kristina in this sacred, spiritual practice, you can regain what was lost. You can reconnect with your lost innocence, courage, drive, youth, vitality, happiness, spirit, freedom, faith -- you can have your wholeness restored. Without having to re-live the trauma (this is not talk therapy!), you can experience what tens of thousands of people have experienced -- you can have back what was taken from you -- the wholeness of your very soul.
Extraction of Unwanted Energy

Many believe that there is no “negative” energy, per se; that all energy is simply energy. What makes it “positive” or “negative” is whether or not it serves our physical, mental, emotional, and/or spiritual well-being. Energy may not be “negative,” but it can be unwanted! Energy can attach itself to our core being without our awareness. Have you ever heard yourself say:

• I felt like I got kicked in the gut when I walked in that house!
• I feel like the spirit of _____ (someone who was unkind or abusive to you) just won’t leave me alone!
• I couldn’t take enough baths to get the creepiness of that guy (or gal) off of me!

Yes, it can -- and does -- happen on a regular basis. As it does, we tend to collect, and carry around with us, energy that does not belong to us. This “unwanted” energy can cause a great deal of harm -- from just “weighing” us down to actually making us physically ill (or disempowered). Working with Kristina, you can release what does not belong to you -- and be free to open up the space in your own physical, mental, emotional, and/or spiritual being that uplifts and energizes and heals you!

Psychopomp (Work with Death and Dying)

Traditionally called “psychopomp” -- this is sacred work with the dying that is designed to ensure peace and harmony during the transition; and journey work to ensure that deceased loved ones have safely passed to the light. In our western cultural experience, death has been banished to the depths of fear and terror. Throughout the ages, many cultures celebrated death and dying; not seeing it as an “end” -- a journey into nothingness -- but as a “transition” to a higher dimension of spirit. Kristina works with people who are dying in a compassionate and divinely guided manner; helping ease fear and doubt and increase faith and hope in what lies ahead -- for all of us!
She also works with families who are concerned that deceased loved ones have not passed into the light. Trained by the Foundation for Shamanic Studies in psychopomp, Kristina has guided souls to the other side -- and finds it a joyful and abundantly rewarding endeavor! Just as energy can neither be created nor destroyed, we live on forever. It is essential that we live up to our own code of honor and truth and dignity -- not to the code of others! It is Kristina’s belief, after visiting the other side numerous times, that what we believe is what we get; and the more we prepare for a positive, uplifting, and “ascending” experience in death, the more likely that is exactly what we will experience. We can fear and tremble at the thought of death and dying, or we can align our spirit with our own code of truth -- and approach the light with only tears of joy! It is, as always, our choice.

Healing Words

In the beginning, there was the word. There is great power in words -- and whether they are healing or harmful rests solely in the intent with which they are expressed! For centuries, people have been given “words” (mantras, songs, poems, stories) that have the power to heal their physical, mental, emotional, and/or spiritual being. Working with Kristina, you can be given “healing words” that do just that -- heal! In journey, Kristina seeks direct revelation from Source, and returns with words just for you. Whether you are seeking healing and power for a specific issue (say, when your back is causing you pain, when you have to stand up and make a speech, or when you have to nail that interview for that job), or whether you just want healing words that you can speak/sing whenever you need a “power boost,” this is an experience you will treasure -- and use -- forever!
Mary Shelley

Mary Shelley (1797 – 1851) was an English novelist, short story writer, dramatist, essayist, biographer, and travel writer, known for her Gothic novel Frankenstein; or, The Modern Prometheus (1818). She also edited and promoted the works of her husband, the Romantic poet and philosopher Percy Bysshe Shelley.

Similar Author(s)

Walter Scott
Sir Walter Scott (1771 – 1832) was a Scottish historical novelist, playwright, and poet, popular throughout much of...

Bram Stoker
Abraham "Bram" Stoker (1847 – 1912) was an Irish novelist and short story writer, best known for his Gothic novel Dracula.
Achim H Schwermann, Tomy dos Santos Rolo, Michael S Caterino, Gunter Bechly, Heiko Schmied, Tilo Baumbach, Thomas van de Kamp - University of Bonn, Germany; Karlsruhe Institute of Technology, Germany; Clemson University, United States; State Museum of Natural History Stuttgart, Germany

External and internal morphological characters of extant and fossil organisms are crucial to establishing their systematic position, ecological role, and evolutionary trends. (…) We found well-preserved three-dimensional anatomy in mineralized arthropods from Paleogene fissure fillings and demonstrate the value of these fossils by utilizing the digitally reconstructed anatomical structures of a hister beetle. The new anatomical data facilitate a refinement of the species diagnosis and allowed us to reject a previous hypothesis of close phylogenetic relationship to an extant congeneric species. Our findings suggest that mineralized fossils, even those of macroscopically poor preservation, constitute a rich yet largely unexploited source of anatomical data for fossil arthropods.

How Amira-Avizo Software is used

3D reconstruction followed the protocol described by Ruthensteiner and Heß (2008) and van de Kamp et al. (2014), using Amira (versions 5.5, 6) and Avizo (version 8.1) for segmentation of the tomographic volumes.
Best Animal Eye

What is the best animal eye? Engineers at the University of Illinois have been researching that question. They have now built the world’s best camera by copying that animal. Their new camera could help military drones see camouflaged or shadowed targets. Their discovery also will allow surgeons to perform many kinds of operations more accurately. They have learned all this from the animal which possesses the best eye known to science. The best animal eye belongs to a small creature known as the mantis shrimp. Here are some of the ways the mantis shrimp’s eyes are superior to all others: The eye of a mantis shrimp has a dozen different kinds of light receptor cells, so it can sense properties of light invisible to other animals. Human eyes have only three types of light receptor cells. The mantis shrimp eye can sense polarized light, which has waves that undulate in one plane. Light reflecting off of a surface is always at least partially polarized. This ability allows the mantis shrimp to see objects that would otherwise be invisible because they blend into the background. A mantis shrimp’s eyes are constructed so that each pixel has a rhabdom, a rodlike structure made of light receptors. The rhabdoms have threadlike structures called microvilli alternately stacked at right angles. That means the shrimp has cells in the two hemispheres of the eye which are tilted 45 degrees to each other, allowing their eyes to detect four polarization directions. The eye of the mantis shrimp can detect an extensive range of light intensities, from light to dark, known as the dynamic range. This means that they can see clearly even when there is a very bright area next to a very dark area. The mantis shrimp is the only animal that can sense a full spectrum of colors and can see the polarization of each color. That means that when there is a complicated background, the animal can still get a clear image.
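The four polarization directions mentioned above are exactly what engineered polarization cameras measure; the standard reduction combines the four intensities into Stokes parameters to recover how strongly, and at what angle, light is polarized. A minimal sketch using those textbook formulas (the function and variable names are illustrative, not from any camera SDK):

```python
import math

# From intensities measured through polarizers at 0, 45, 90 and 135 degrees,
# recover the linear Stokes parameters, the degree of linear polarization
# (DoLP) and the angle of polarization (AoP).

def linear_polarization(i0, i45, i90, i135):
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                       # horizontal vs vertical preference
    s2 = i45 - i135                     # diagonal preference
    dolp = math.hypot(s1, s2) / s0      # 0 = unpolarized, 1 = fully polarized
    aop = 0.5 * math.atan2(s2, s1)      # polarization angle in radians
    return dolp, aop

# Fully horizontally polarized light: all of it passes the 0-degree
# polarizer, none passes the 90-degree one, and half passes each diagonal.
dolp, aop = linear_polarization(1.0, 0.5, 0.0, 0.5)
print(round(dolp, 3), round(aop, 3))  # → 1.0 0.0
```

Computing this per pixel is what lets a polarization camera (or a mantis shrimp) reveal objects that blend into the background in ordinary intensity images.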
Electrical and computer engineer Victor Gruev and his research team have already made a camera based on the best animal eye. It has a dynamic range about 10,000 times higher than today’s commercial cameras. Gruev and the team are working on a commercial version of their camera. Produced in bulk quantities, the improved sensors would cost only $10 each. There seems to be little doubt that this will be the camera of the future, and science has learned how to make it by studying the best animal eye of one of God’s smallest creatures. –John N. Clayton © 2019 Data from Scientific American, February 2019, Page 12.

Why Are Plants Green?

Why are plants green? The answer to this is some pretty basic physics. The colors of light that we receive from the Sun have different energies. Red is the lowest of these energies, followed by yellow, green, and blue. The sunlight with the highest energy that actually reaches the surface of the Earth is green. Blue light, which is more energetic, is refracted away by Earth’s atmosphere and scattered as it interacts with molecules in our upper atmosphere. That’s why the sky is blue. When you look at an object, the color you see is the color reflected by that object. A red ball is red because it reflects red and absorbs all other colors. A green leaf is green because it reflects green and absorbs all other colors. If the highest energy of light reaching the surface of the Earth is green, and if the leaf reflects green, what does this do for the leaf? The answer should be pretty obvious -- it keeps the leaf from absorbing too much energy and getting cooked. In the fall of the year, when the leaves lose their chlorophyll A, which gives them the green color, what happens?
The leaf gets cooked, falls off the tree, and we have to scrape it off the yard. If a planet had a different atmosphere so that a different energy of light reached its surface, its plants would have to be a different color. To quote Kermit the Frog, “It’s not easy being green.” Why are plants green? They are green because green is essential to life on Earth. This explanation is greatly oversimplified. Obviously, not all plants have green leaves. Some plants live under a canopy of other trees and have to use a different system. The design of life on Earth is incredible, and the green trees and grass around us testify to the wisdom of God in making a place for life to exist. We have a children’s book titled Why Is the Sky Blue, Why Is the Grass Green. You can read it and all of our children’s books on our Grandpa John’s Science Club site. –John N. Clayton © 2018

Color Vision Gender Differences

There are many differences between men and women, but did you realize that there are color vision gender differences? Light is electromagnetic radiation that stimulates our eyes. We can see only specific frequencies of the electromagnetic spectrum. Frequencies below the range of visible light are called infrared. We can sometimes feel infrared radiation as heat, but we can’t see it, although some animals can. Frequencies higher than visible light are ultraviolet, which we can’t see, but it affects our skin and can cause sunburn. Within the visible spectrum of light that humans can see, different bands of frequencies affect our eyes differently. Most of us have receptors in our eyes for the wavelengths which we call red, green, and blue. When light stimulates those receptors, they send a signal to our brain, which combines the signals to allow us to see many variations in colors. People with colorblindness (mostly men) have one of those color receptor categories missing.
The missing color may be either red or green. Why are men colorblind more often than women? The genes that encode the red and green receptors are located on the X chromosome. Men have one X and one Y chromosome. Women have two X chromosomes. That means that if a man has a defective X chromosome, he is out of luck. A woman would need to have two defective X chromosomes to be colorblind. It’s interesting that the chromosome pair that creates the sex differences also explains the color vision gender differences. God said, “It is not good for man to be alone,” and He took something out of the man to create a woman. Then He put them together to complete each other. In many ways, men and women really do need each other to be complete. –Roland Earnst © 2018
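The X-linked inheritance just described has a simple arithmetic consequence: if a fraction q of X chromosomes carries the defective gene, a man (one X) is affected with probability q, while a woman (two X chromosomes) needs two defective copies, probability q squared. A sketch with an illustrative allele frequency (the 8% figure is a commonly cited ballpark for red-green colorblindness, not a claim from the article above):

```python
# Expected prevalence of an X-linked recessive trait such as red-green
# colorblindness. q is the fraction of X chromosomes carrying the
# defective gene; 0.08 is an illustrative ballpark, not an exact figure.

def xlinked_prevalence(q):
    male = q        # one X chromosome: a single defective copy suffices
    female = q * q  # two X chromosomes: both copies must be defective
    return male, female

male, female = xlinked_prevalence(0.08)
print(f"men: {male:.1%}, women: {female:.2%}")  # → men: 8.0%, women: 0.64%
```

The squaring is why male colorblindness is roughly an order of magnitude more common than female colorblindness at realistic allele frequencies.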
First of all, I'm not a signal processing expert. Actually, I'm trying to benefit from some concepts of signal processing theory in my computer science research. My problem is that I have to find out the frequencies of a binary signal at run-time/in real-time. I was thinking about an adaptive approach to computing the DFT, but I'm not sure whether this will give me the expected outcome. What I'm doing is:

1. Recording the zero-one series in a buffer that represents a time window (T).
2. Analysing the recorded buffer to find out the frequencies (Fs).
3. Feeding Fs from step 2 to a decision-making algorithm.
4. Repeating steps 1 and 2 to adapt the decision of step 3.

Where should I start, and what topics do I have to read before going further?

• The FFT algorithm is an adaptive approach to computing the DFT. It determines the frequency bin in log2(N) iterations, where N is the length of the sequence. – Dan Boschen Dec 30 '18 at 11:27
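The buffered-window approach in steps 1 and 2 can be sketched with a plain discrete Fourier transform over the recorded zero-one buffer. A minimal pure-Python version for clarity (in practice you would use an FFT routine, and the sample rate and test signal below are illustrative):

```python
import cmath

def dominant_frequencies(buffer, sample_rate, top=3):
    """Return the strongest (frequency_hz, magnitude) pairs in a 0/1 buffer.

    Plain O(N^2) DFT for clarity; an FFT gives the same result in
    O(N log N). Only bins up to N/2 are kept (the upper half mirrors
    them for real-valued signals), and the DC bin -- the average level
    of all those ones -- is removed up front so it doesn't dominate.
    """
    n = len(buffer)
    mean = sum(buffer) / n
    x = [s - mean for s in buffer]  # remove DC offset
    mags = []
    for k in range(1, n // 2):
        coeff = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        mags.append((k * sample_rate / n, abs(coeff)))
    mags.sort(key=lambda fm: fm[1], reverse=True)
    return mags[:top]

# A square wave toggling every 4 samples, sampled at 100 Hz,
# has a period of 8 samples and thus a fundamental at 12.5 Hz.
buf = [1, 1, 1, 1, 0, 0, 0, 0] * 16
print(dominant_frequencies(buf, sample_rate=100)[0][0])  # → 12.5
```

For the repeated-window loop in step 4, a sliding DFT or streaming FFT (recomputing only the bins affected by the newest samples) is the usual next topic to read about, along with windowing and spectral leakage.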
Nutrigenomics - The science behind nutrition and gene expression

One of the most exciting areas of Alltech's research is nutrigenomics, a new approach to molecular nutrition. "Nutrigenomics, or the effect of diet on health, is one of the most exciting areas in science today," said Dr Karl Dawson, Alltech's Director of Worldwide Research. "Feeding the gene is the way forward." Last year Alltech opened the first research centre of its type in the world, dedicated to the study of the effect of animal nutrition on gene expression. Nutrigenomics will more accurately allow competition horses of all disciplines to reach their full genetic potential. It also has huge implications for the breeding industry.

The principle behind this research (I hope you all vaguely remember genetics from school!) is that everything we eat has an influence on our DNA and the expression of certain genes. For example, a herb or vitamin, when ingested, gives a certain response in the animal. This response is the effect of certain components influencing the expression of certain genes. The trouble is, it is a bit hit and miss and, until now, uncontrolled. The substances we feed switch on (and off) both desirable and undesirable genes, and until now it was impossible to know which ones. The Gene Chip system is the key technology in nutrigenomics. Alltech has developed a way of evaluating the activity of thousands of genes in a single experiment.

What can we learn?
1 - A better understanding of the dietary effects and hidden effects of nutrition, e.g. 10% of genes are controlled by selenium!
2 - Developing new feeding strategies and better supplements.

The future for this technology is very exciting, and it promises to revolutionize animal nutrition in the not so distant future. Currently Alltech is developing cheaper alternatives to expensive ingredients that switch on and off exactly the same genes. In other words, they have the same effect on the animal.
One example of this is the development of a safe, more economical natural alternative to feeding high levels of Vitamin E, called EconomasE®. Alltech's research has already led to the development of some outstanding products, such as the safest, most bio-available form of organic selenium available, and Mycosorb®, a patented mycotoxin binder that has given us the ability to protect our horses from the all too common effects of mycotoxin exposure. Further research breakthroughs promise to revolutionize the animal feed industry in the next 5-10 years, so watch this space!
Pediatric Enteroviral Infections Updated: Jun 16, 2017 Enteroviruses, a group of single-stranded, positive-sense RNA viruses, are commonly encountered infections, especially in infants and children. They are responsible for a myriad of clinical syndromes, including hand-foot-and-mouth (HFM) disease, herpangina, myocarditis, aseptic meningitis, and pleurodynia. Patients with enterovirus infections may present with symptoms as benign as an uncomplicated summer cold or as threatening as encephalitis, myocarditis, or neonatal sepsis. Enteroviral infections annually result in a large number of physician and emergency department visits. In 1998, Pichichero et al performed a prospective study and found that nonpolio enteroviral infections resulted in direct medical costs ranging from $69 to $771 per case. [1] In addition, patients with nonpolio enteroviral infections missed a minimum of 1 day of school or camp; some missed as many as 3 days. The significant economic and medical impacts of enteroviral infections occur mostly during the peak months of summer and fall. In tropical climates, enteroviral outbreaks occur year-round. Enteroviruses belong to the Picornaviridae (small RNA viruses) family. The enteroviral group includes coxsackievirus, echovirus, and poliovirus. Enteroviruses are divided into 2 distinct classes: polioviruses (types 1, 2, and 3) and nonpolioviruses (coxsackieviruses, enteroviruses, echoviruses, and unclassified enteroviruses). The nonpolio group consists of 23 coxsackievirus A, 6 coxsackievirus B, 28 echovirus, and 5 unclassified enterovirus serotypes. More recently, a related genus of viruses, Parechovirus, has been described; two enterovirus species (echovirus types 22 and 23) were reassigned as parechoviruses. [2] To date, more than a dozen parechovirus strains have been described; however, not all sequences have been published.
The clinical appearance of Parechovirus infection can be similar to enteroviral infections, but tests for Parechovirus are mostly confined to research laboratories. The US Centers for Disease Control and Prevention (CDC) reported a 2014 outbreak of enterovirus 68 (also called enterovirus D68) that began in at least six US states from mid-August to mid-September, including Colorado, Illinois, Iowa, Kansas, Kentucky, and Missouri. [3] This outbreak has since spread coast to coast, with 175 cases in 27 states. The additional reported states include Alabama, California, Connecticut, Georgia, Indiana, Louisiana, Michigan, Minnesota, Montana, Nebraska, New Jersey, New York, Oklahoma, Pennsylvania, Virginia, and Washington. [4] The total number of confirmed cases is higher because this figure does not include cases confirmed by individual state laboratories. Enterovirus 68 was first identified in 1962 in California but had not been commonly reported in the United States before the 2014 outbreak. The CDC found that, among the enterovirus 68 cases in Missouri and Illinois, children with asthma seemed to have a higher risk for severe respiratory illness. [3] Enterovirus 71 has gained notoriety in recent years for causing a rapidly fatal rhombencephalitis in association with epidemics of HFM disease in East Asian countries. It appears to be a particularly aggressive neurotropic serotype of enterovirus. Coxsackievirus A6 was recently described as a somewhat distinct clinical entity of "atypical hand-foot-and-mouth disease," as the skin lesions described are vesiculobullous rather than the typical flat ulcers seen in HFM disease and may be more extensive, often involving areas of preexisting eczema. Each virus obtains its antigenicity from the capsid proteins that surround the RNA core. According to the CDC, 65 human serotypes of enteroviruses have been identified; however, a small number cause most outbreaks.
Since the implementation of polio vaccines, wild-type polio has been eradicated in the Western Hemisphere. The most common route of human transmission is fecal-oral. Although respiratory and oral-to-oral routes are possible, they are more likely to occur in crowded living conditions. Enteroviruses are quite resilient. They remain viable at room temperature for several days and can survive the acidic pH of the human GI tract. The incubation period is usually 3-10 days. The enterovirus enters the human host through the GI or respiratory tract. Receptors on the cell surfaces of the GI tract bind the virus, and initial replication begins in the local lymphatic tissue of the GI tract. The virus seeds into the bloodstream, causing a minor viremia on the third day of infection. The virus then invades organ systems, causing a second viremic episode on days 3-7. This second viremic episode is consistent with the biphasic prodromal illness. The infection can progress to CNS involvement during the major viremic phase or at a later time. Antibody production in response to enteroviral infections occurs within the first 7-10 days. Coxsackievirus notoriously replicates in the pharynx (herpangina), the skin (HFM disease), the myocardium (myocarditis), and the meninges (aseptic meningitis). It can also involve the adrenal glands, pancreas, liver, pleura, and lung. Echovirus can replicate in the liver (hepatic necrosis), the myocardium, the skin (viral exanthems), the meninges (aseptic meningitis), the lungs, and the adrenal glands. After exposure, poliovirus replicates in the oropharynx and GI tissue. Following this replication, polio advances, invading the motor neurons of the anterior horn cells of the spinal cord. It can progress to other CNS regions, including the motor cortex, cerebellum, thalamus, hypothalamus, midbrain, and medulla, causing death of neurons and paralysis. Neuropathy occurs due to direct cellular destruction.
Antibody production occurs in the lymphatic system of the GI tract, prior to invasion of the CNS tissue. Infants retain transplacental immunity for the first 4-6 months of life. The enteroviruses are capable of directing almost all cellular protein translation to viral genes through the modification of host cell translation factors (messenger RNA [mRNA] cap-binding proteins) and using internal ribosome entry sites (IRES) to bypass the crippled host machinery. As such, they are highly damaging to the cells they infect.
United States
Nonpolio enteroviral infections cause an estimated 10-15 million symptomatic infections per year in the United States. Many are treated as potential episodes of sepsis, and antibiotics and acyclovir are administered to treat possible bacterial or herpetic infection. In 1952, an epidemic of polio occurred in the United States, causing 3,000 deaths and 57,879 cases. The vaccine has virtually eliminated wild-type polio in the United States. In 1994, the World Health Organization (WHO) declared the eradication of wild polio in the Western Hemisphere. Approximately 6 cases of vaccine-associated paralytic polio (VAPP) occur yearly, leading to the recommendation of the inactivated vaccine, because the risk of natural disease is so rare in the United States. VAPP is linked to the concomitant administration of live (oral) polio vaccine with intramuscular injections (perhaps allowing the virus better access to myocytes and neuronal axons) and occurs in 1 per 2-4 million vaccinations (paralytic polio occurs in 1 in 200 wild-type infections). In 1979, an outbreak occurred in numerous Amish communities throughout the United States. A smaller outbreak occurred in 2005 in an Amish community in Minnesota. Genetic sequencing of the virus surprisingly revealed that it was only 2.3% different from the Sabin vaccine strain and was likely acquired from subclinical circulating infections from overseas. Nonpolio enteroviral infections are quite prevalent worldwide.
The exact numbers are unavailable. [5, 6, 7] Poliomyelitis still occurs in many developing countries as a result of poor health care and an inability to access vaccines. [8] The CDC reported 6227 cases in 1998. [9] This significant drop from the previous decade, in which 35,251 cases were reported, is due to aggressive vaccination programs. Worldwide eradication is hoped for in the near future. Recently, setbacks have been noted in Nigeria, where suspicion about the motivations of the vaccination program led to a refusal to vaccinate children. One outbreak in 2003 spread across 15 other African countries and even reached as far as Indonesia, resulting in the paralysis of over 200 children. A more recent outbreak in 2006 affected mostly adults who were missed by vaccination campaigns. As of June 2006, 7 people had died and 27 people had been paralyzed. Nigeria had about half of the reported polio infections in the first 3 months of 2009. With war and civil unrest, new cases have been seen in Somalia and Syria, as reported below. Somalia in particular has been hard hit, as militant activity and specific attacks against Médecins Sans Frontières staff, who had been coordinating large parts of the vaccination efforts, resulted in the organization pulling out of the country entirely in late 2013.
Worldwide polio cases from 2013 are reported as follows: [10]
• Nigeria (endemic) – 53 (801 cases in 2008)
• India (now eradicated) – 0 (559 cases in 2008)
• Pakistan (endemic) – 93 (118 cases in 2008)
• Afghanistan (endemic) – 14 (31 cases in 2008)
• Ethiopia (outbreak) – 9 (3 cases in 2008)
• Kenya (outbreak) – 14 (0 cases in 2008)
• Somalia (outbreak) – 190 (0 in 2008)
• Syria (outbreak) – 23 (0 in 2008); 3 more cases confirmed in June 2017 [11]
• Cameroon (outbreak) – 4 (unknown in 2008, but the last previous case was in 2009)
• Democratic Republic of Congo – 4 cases confirmed in June 2017 [12]
Much of the success of the WHO polio eradication campaign has been through aggressive vaccination and grass-roots support from religious, tribal, and social leaders. A monovalent oral polio vaccine (mOPV) is increasingly used in areas with a single circulating strain because it appears to be more effective at inducing protective immunity. However, vaccine-associated paralysis is more likely with the live-attenuated oral polio vaccine (OPV). To fully eradicate paralytic polio, the WHO is working towards a global transition to the inactivated polio vaccine where possible. Some genetic evidence suggests that if the poliovirus is eradicated, genetic recombination between other enteroviruses may result in a phenotypically similar virus. However, this appears to be of academic interest only at this time. The overall mortality rate for nonpolio viruses is extremely low. The patients at greatest risk are those with neonatal sepsis. Occasionally, enteroviruses cause global encephalitis, which has a good prognosis; however, a few fatalities have been reported. Enterovirus 71 has been linked with rhombencephalitis (inflammation of the brain stem) in outbreaks of hand-foot-and-mouth disease in the Eastern Hemisphere (Taiwan, Japan, Malaysia, and Australia). Fatality rates from these outbreaks have been as high as 14%.
Myoclonus is a poor prognostic indicator, as are lethargy, persistent fever, and a peak temperature higher than 38.5°C. [13] Most cases of myocarditis and pericarditis are self-limited, but a potentially significant mortality rate is associated with myocarditis. Older patients can develop a dilated cardiomyopathy following myocarditis. The overall mortality rate for paralytic polio is 2-10%. For those who survive, a 6-month period is generally needed to determine how much muscle function will return. Enteroviruses have a worldwide distribution and are not race-specific infections. Males and females are infected at equal rates, but males are more likely to be symptomatic. People of all ages, from young children to elderly adults, are at risk of manifesting symptoms of enteroviral infection. Children have a higher rate of infection because of exposure, hygiene, and immunity status. The infection course tends to be benign in older children and more serious in neonates. Unlike most cases of nonpolio enteroviral infections, acute hemorrhagic conjunctivitis occurs most frequently in adults aged 20-50 years.
Cognitive bias
In its simplest form, a cognitive bias is "wishful thinking" or a "mental shortcut". It is when we make a decision or judgment that deviates from norm or rationality. Personally, I view them as traps to be avoided, to make sure that I arrive at the rational or best conclusion. In marketing, however, a company can see them as tools to make sure customers buy a more profitable product. I am probably stretching the term here, but this is what behavioral economics is about: making money. Which umbrella term we decide to cover everything under is beside the point. The creative use of cognitive biases to achieve your own personal goals could be seen as both good and bad. Some would argue that it is manipulation; others would call it social skills. I'll leave it up to you to make a decision, keeping in mind that you might be under the influence of one or more cognitive biases when doing so.
Cognitive biases
Why am I writing this? Why rewrite everything when it is already available on Wikipedia? To better understand it myself and actually learn them. Some might call it "studying". I find that I learn best when I not only read about a topic, but engage with it somehow. In the case of cognitive biases, I am going to try to learn them the same way I studied in university. It must have done something right for me, I think.
Critical Thinking
A spreadsheet is an interactive computer application for the organization, analysis, and storage of data in tabular form. Spreadsheets were developed as computerized simulations of paper accounting worksheets. The program operates on data entered in the cells of a table. Each cell may contain either numeric or text data, or the results of formulas that automatically calculate and display a value based on the contents of other cells. A spreadsheet may also refer to one such electronic document. Spreadsheet users can adjust any stored value and observe the effects on calculated values. This makes the spreadsheet useful for "what-if" analysis, since many cases can be rapidly investigated without manual recalculation. Modern spreadsheet software can have multiple interacting sheets and can display data either as text and numerals or in graphical form. Besides performing basic arithmetic and mathematical functions, modern spreadsheets provide built-in functions for common financial and statistical operations. Such calculations as net present value or standard deviation can be applied to tabular data with a pre-programmed function in a formula. Spreadsheet programs also provide conditional expressions, functions to convert between text and numbers, and functions that operate on strings of text. Spreadsheets have replaced paper-based systems throughout the business world. Although they were first developed for accounting or bookkeeping tasks, they are now used extensively in any context where tabular lists are built, sorted, and shared. LANPAR, available in 1969, was the first electronic spreadsheet on mainframe and time-sharing computers. LANPAR was an acronym: LANguage for Programming Arrays at Random. VisiCalc was the first electronic spreadsheet on a microcomputer, and it helped turn the Apple II computer into a popular and widely used system.
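The cell-and-formula model described above (each cell holding either a value or a formula computed from other cells, enabling rapid "what-if" analysis) can be sketched in a few lines of Python. This is a purely illustrative toy model that recomputes on demand; real spreadsheet engines track dependencies and recalculate incrementally.

```python
# Minimal spreadsheet-style cell model: each cell is either a constant
# or a formula (a function of the sheet), recomputed on demand.
def evaluate(sheet, cell):
    value = sheet[cell]
    return value(sheet) if callable(value) else value

sheet = {
    "A1": 100,                                              # price
    "A2": 0.2,                                              # tax rate
    "A3": lambda s: evaluate(s, "A1") * evaluate(s, "A2"),  # tax   = A1 * A2
    "A4": lambda s: evaluate(s, "A1") + evaluate(s, "A3"),  # total = A1 + A3
}

print(evaluate(sheet, "A4"))   # 120.0

# "What-if" analysis: change one input and observe the recalculated result
sheet["A2"] = 0.25
print(evaluate(sheet, "A4"))   # 125.0
```

The key design point this illustrates is that formulas are stored, not their results, so changing any input cell automatically changes every value derived from it, which is exactly what made the "what-if" workflow described above so useful.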
Lotus 1-2-3 was the leading spreadsheet when DOS was the dominant operating system. Excel now has the largest market share on the Windows and Macintosh platforms. A spreadsheet program is a standard feature of an office productivity suite; since the advent of web apps, office suites now also exist in web app form. Web-based spreadsheets are a relatively new category.
Paper spreadsheets
The word "spreadsheet" came from "spread" in its sense of a newspaper or magazine item (text or graphics) that covers two facing pages, extending across the center fold and treating the two pages as one large one. The compound word "spread-sheet" came to mean the format used to present book-keeping ledgers, with columns for categories of expenditures across the top, invoices listed down the left margin, and the amount of each payment in the cell where its row and column intersect. These were, traditionally, a "spread" across facing pages of a bound ledger (a book for keeping accounting records) or on oversized sheets of paper (termed "analysis paper") ruled into rows and columns in that format and approximately twice as wide as ordinary paper.
Batch spreadsheet report generator
In 1962 this concept of the spreadsheet, called BCL for Business Computer Language, was implemented on an IBM 1130, and in 1963 it was ported to an IBM 7040 by R. Brian Walsh at Marquette University, Wisconsin. This program was written in Fortran. Primitive timesharing was available on those machines. In 1968 BCL was ported by Walsh to the IBM 360/67 timesharing machine at Washington State University.
LANPAR spreadsheet compiler
A key invention in the development of electronic spreadsheets was made by Rene K. Pardo and Remy Landau, who filed in 1970 U.S.
Patent 4,398,249 on a spreadsheet automatic natural order calculation algorithm. The LANPAR system was implemented on GE400 and Honeywell 6000 online timesharing systems, enabling users to program remotely via computer terminals and modems.
Autoplan/Autotab spreadsheet programming language
In 1968, three former employees from the General Electric computer company headquartered in Phoenix, Arizona set out to start their own software development house.
IBM Financial Planning and Control System
The IBM Financial Planning and Control System was developed in 1976 by Brian Ingham at IBM Canada. It was implemented by IBM in at least 30 countries. It ran on an IBM mainframe and was among the first applications for financial planning developed with APL that completely hid the programming language from the end user.
APLDOT modeling language
An example of an early "industrial weight" spreadsheet was APLDOT, developed in 1976 at the United States Railway Association on an IBM mainframe. The application was used successfully for many years in developing such applications as financial and costing models for the US Congress and for Conrail.
VisiCalc
Because of Dan Bricklin and Bob Frankston's implementation of VisiCalc on the Apple II in 1979 and the IBM PC in 1981, the spreadsheet concept became widely known in the late 1970s and early 1980s. VisiCalc was the first spreadsheet that combined all the essential features of modern spreadsheet applications.
SuperCalc
SuperCalc was a spreadsheet application developed by Sorcim in 1980, and originally bundled (along with WordStar) as part of the CP/M software package included with the Osborne 1 portable computer.
It quickly became the de facto standard spreadsheet for CP/M and was ported to MS-DOS in 1982.
Microsoft Excel
Microsoft released the first version of Excel for the Macintosh on September 30, 1985, and then ported it to Windows, with the first version being numbered 2.05 and released in November 1987.
Web-based spreadsheets
With the advent of advanced web technologies such as Ajax circa 2005, a new generation of online spreadsheets has emerged. Equipped with a rich Internet application user experience, the best web-based online spreadsheets have many of the features seen in desktop spreadsheet applications.
Other products
A number of companies have attempted to break into the spreadsheet market with programs based on very different paradigms. Lotus introduced what is likely the most successful example, Lotus Improv, which saw some commercial success, notably in the financial world, where its powerful data mining capabilities remain well respected to this day.
Answer The Following Questions:
Water as FUEL
# – 1975, Frederic WENTWORTH, USP 3,862,819 http://www.freepatentsonline.com/3862819.pdf and in 1990, USP 4,952,340, http://www.freepatentsonline.com/4952340.pdf
# – Water Car – Kramer Version – Bubblers, By Thomas C. Kramer – August 2003
The starting point of making your car run on water is a 'Bubbler'. A bubbler is a simple device that just bubbles air through water to create water vapor that is fed into your engine. This can be done by either blowing air through water or sucking the air through water. Sucking works better because this creates a low pressure over the surface of the water, making it easier for the water to vaporize. Low-pressure sucked air can be done using a small pump, or much more easily by taking a vacuum feed off your engine's intake manifold. The intake manifold is the easiest, but pumped systems are used where water vapor and fuel gases are blown into the carburetor, as in a supercharger system. Both systems will be discussed below with different types of electrolysis units. The purpose of a bubbler is to add water vapor to your fuel mix. That is, to make your car a 'steam engine'. This causes your fuel to burn slower and cooler, and the water vapor, when heated in the combustion process, expands, thus giving you more power with less fuel. A bubbler system on any car should thus give you 10%-15% better gas mileage and a cleaner engine. Hydrogen and oxygen gas tend to burn too fast and too hot, so when running a water car you may have to add a water vapor 'bubbler' to your system to add additional water vapor to your fuel intake. This will cool the process and retard or slow down the oxygen-hydrogen burn. Some bubbler designs heat or even boil the water, using placement over the exhaust manifold or by running your air intake into the bubbler from the exhaust pipe! Remember that exhaust gases are only water vapor in a water car. And you will get a positive pressure push too. You can even coil tubing around your exhaust pipe.
A bubbler can also be attached to your gas feed line, bubbling the gas (hydrogen/oxygen/LNG/propane) through a water bath. This is a good idea as a backfire arrestor, and even a small cup-sized unit like those found on air compressors could do the trick. You can even do this with a gasoline feed, provided you vaporize the gas first. This can be done by first running your gas through a vaporizer. A vaporizer is just a radiator-type coil where your gas is looped back and forth through some tubing a number of times, like the radiator at the back of your refrigerator. This vaporizes the liquid gas before it reaches the carburetor. You can make one out of copper or stainless steel tubing by coiling or looping the tubing and placing this after your fuel pump. The gas vapor can then be run through a bubbler to cool the gas and add water vapor before running this into your LNG adapter in front of your carburetor. Large bubblers made specifically for generating quantities of water vapor are made from cheap water filters. These are run in reverse, with the water IN being the vapor OUT and the water OUT becoming the air IN. The air IN will require a pipe and an air stone to the bottom of the bubbler, and usually a control valve to control the amount of air being sucked through the unit. Several bubblers can also be linked together to provide more water vapor. You will require a water-IN hole to be made in the old water filter so that you can top it up with water occasionally.
Water can be sucked in through your air-IN hole, but that can bugger up your air stones, so it might be wiser just to drill a hole in the side of the filter or filter cap and insert a tank adapter with a screw plug or rubber stopper in the end. Tank adapters are just small-diameter pipes that are threaded all the way and have 2 nuts and rubber washers that seal the connection through the wall of the filter. If you only want a small hole, use a short piece of the metal tubing used in lamps. If you come in from the side, use an elbow before your cap so that it will be easier to fill. For hand filling, use a squeeze bottle like the ones used for topping up battery water, but don't waste your money on distilled battery water, as ordinary tap water will do just fine in a bubbler. Fancy bubblers may have water level sensors that activate solenoid valves that are connected to your main water tank feed line, branching off after the pump and filter. Who wants the extra cost? The main idea of a bubbler is to create 'bubbles' of water vapor. The low-pressure vacuum from your intake manifold will cause water to vaporize quicker, but this requires more water surface area in contact with air, thus the more bubbles the better. Aquarium air stones generally work fine, but these may cause too much resistance and can clog up with dusty or dirty air intake (use a sponge filter). You can also use several other techniques for bubble making, such as just drilling small holes in the end of your intake tube, using a number of spacer plates with holes in them up the length of your intake tube, placing stainless steel scouring pads (loosened up a bit) inside your bubbler, and so on. The idea is to reduce airflow resistance but create bubbles of the smallest size possible. It is thus easier to make big bubbles and then break them up a bit than to try to squeeze out tiny bubbles.
The bubbler may need a control valve for better control. The vacuum on your intake manifold is enough to suck your water tank dry in a few minutes if you are not careful. You may also want to insert some plastic filter material at the top of the bubblers to prevent sloshing around when you drive and sucking water directly into the engine. This can be a loose sponge or the plastic filter-mat material used in aquariums. Since internal combustion engines work fundamentally on temperature variations, with the bigger variation the better, you might consider cooling the water vapor before it enters the engine. This is done again with a radiator, looping tubing, or tubing with fins on it (used in floor heating) before the engine intake. Finally, if your engine is still running hot, even on lean fuel and with a bubbler, you might consider a bigger radiator or an additional radiator fan. This all depends on the climate that you live in.
# – Fog Injection by Kevin Satterfield, for fuel saving, http://oupower.com/index.php?dir=_Other_Peoples_Projects/kevinsatterfield/Fog Injection
This is an attempt…. A 1400 psi electric pressure washer pump and motor from Walmart for $84. The plan here is to put a 12V 1/2 HP motor in place of the 120 AC motor. I like to use what I have lay'n around most of the time, and I do have a 40 lb thrust trolling motor I've salvaged. Next step is to find a gear puller. Auto Zone wasn't able to supply one small enough; next stop Napa and/or Carquest. EUREKA…. LOL. It's put'n out more than my nozzle is rated for (1000 psi), but this was only a first test to see how it does on 24 volts at full charge. And as you can see, the trolling motor is do'n its job. 12 volts just wouldn't put out ANYTHING with this motor… there may be a better motor to use, but till I can find it this one's gonna have to do.
Here was a test to see how much wet fog would get past the trip to the throttle body, and as you can see, not much… lol. The major wetness is come'n from a hose leak and spray'n everywhere, but here's a decent start on this project and it's been a fun one.. thanks Bob!!! Looking forward to complete'n this one. 12 volts are gonna be just fine with a 12/24 volt trolling boat motor. 900 psi on 12V with that motor is use'n 8 amps. After testing the fog thru the air filter house'n and into the throttle body, I must say this is a really cool project. Herman mentioned run'n fog for a couple years in his 302 before adding his hydrogen generator. Gotta build a custom air box with this on my truck, so I'm off to the shed…. stay tuned……
# – RSR Turbo Water Injection. http://www.rbracing-rsr.com/waterinjection.html
The blue rail of this BMW K1200RS/GT/LT Turbo Plenum is our port water injection system with four 40cc/min nozzles. Water injection is the best way to make power when you are dealing with forced induction systems. In this case we are running a turbo on an 11.5:1 high-compression bike and doubling the horsepower with reliability. There is really nothing new about this… We've been doing it for over 20 years, but people were doing it way, way before we ever came on the scene. Sir Harry Ricardo and the high-speed aero engine: incredibly complex, turbo-supercharged, radial and V-12 engines that had to take off with a full bomb load in conditions that were never ideal. You might think you put the pedal to the metal, but for the WWII fighter and bomber crews death was always part of the equation… It wasn't a game, and if you couldn't squeeze out the last bit of horsepower you were dead. It was in this environment that water-alcohol injection technology was refined. Wright Cyclone engine pictured above. The pioneering research was done by people like Sir Harry Ricardo, the researchers at NACA in Langley Field, and all the aero engine designers of the period entering the Second World War.
For an interesting read you can download a PDF from NACA http://www.rbracing-rsr.com/downloads/naca_H2O.pdf dated 15 August 1942. It still applies today. A second NACA PDF http://www.rbracing-rsr.com/downloads/NACA_H2O_2.pdf also validates the use of water injection. If you want your high output turbo or supercharged motor to avoid engine destroying pre-ignition and detonation you are going to have to employ water injection and the way it has to be done was set in stone a long, long, time ago. Pictured above is a schematic of a WWII aero engine water/methanol regulator…designed over 60 years ago! Monster Marine Diesels Engines bigger than your house and they use water injection to reduce emissions! Diesel engines obviously can derive some benefit from cooling the combustion process. This is really big industrial stuff so don’t think about adapting anything. The comany is Wartsila http://www.wartsila.com/ . … Sir Harry Ricardo proved once and for all that you can richen things up to a point but, beyond this, detonation is going to rear it’s head no matter how much fuel you throw at it and, in fact, the extra fuel may increase the tendency to detonate! Going rich beyond the well-defined 12.5:1 boost maximum power air fuel ratio is going to cost you power. Studies in the early part of the Second World War proved conclusively that as you add water you can lean out your overly rich mixtures as you raise your boost pressures. As Sir Harry Ricardo stated…”By the introduction of water…the fuel/air ratio could be reduced once again; in fact, with water injection, no appreciable advantage was found from the use of an over-rich fuel/air mixture”. I Don’t Want To Learn…Just Give Me The Answers! 1. Maximum Torque occurs at a 13.2:1 Air Fuel Ratio. 2. Transitional Fueling and Maximum Boost Air Fuel Ratios are about 12.5:1. 3. Water Injection is most efficient with a 50/50 water alcohol mixture. 4. 
Methanol, as an additive, is not a practical choice as it is prone to pre-ignition, is not safe to handle and is not readily available. 5. Denatured (ethanol) alcohol, typically 95%, is cheap and is available in paint, hardware, and Home Depot type stores in gallon containers for about $10.00. Isopropyl alcohol can be used but it is often 30% or more water by content. 6. Water Injection allows ignition timing to be more aggressive or closer to stock. In other words boost does not automatically mean retard your timing. 7. Excessive amounts of ignition retard will cause a loss of power and overheating. 8. Water to Fuel ratios should be based on weight and not volume. 9 . Water weighs 8.33 lb per gallon. 10. Alcohol weighs 6.63 lb per gallon. 11. Air weighs .080645 lb per cubic foot. It takes about 150 cubic feet of air per 100 horsepower. It takes about 12 lb of air per 100 horsepower. 12. Water or Water / Alcohol to Fuel Ratios are between 12.5% to 25%. This means Air to Fluid Ratios are between 11.1:1 and 10.0:1 with water injection. 13. Maximum water delivery should be in higher load low to mid rpm ranges tapering somewhat at peak rpms where load is less. 14. Atomization of the water mixture is directly related to it effectiveness. Finer droplets cool the inlet charge better and with less mass they navigate the inlet plenum easier for more equal water distribution. 15. Don’t flow water through an intercooler. 16. Atomized water, just like fuel , does not like to make turns thus making accurate distribution something to think about. This is why port fuel injection is the norm. Water is a fluid just like your fuel. Multiple nozzes, equally spaced in the plenum, although it complicates things, is a superior design. 17. The introduction of water will allow higher boost pressures to be run without detonation. Higher pressures will increase torque. It’s always about torque. 18. 
Racing high-octane gasoline should be used for all forms of competition and for higher-than-normal boost levels. Water injection as well as charge cooling should be used with racing gas. 91/92-octane pump gas simply will not cut it.
19. Fuel injectors operate in the 1-millisecond range and are not capable of long-term use with H2O, as they will corrode or rust shut in a very short period of time. Unless a solenoid can open as fast as a fuel injector, it should not be used to "pulse" water injection events.
20. Varying voltage to water injection pumps, or similar schemes, is a recipe for disaster. You have to eliminate the variables, not increase them.
21. Fuel injection pumps cannot be used for water injection. Water is conductive; gasoline is not. Water will corrode an EFI pump shut in a very short period of time.
22. Water injection has a cooling effect on the engine head, valves, and cylinder. Exhaust gas temperatures (EGT) are largely unaffected at the recommended water/fuel ratios.
23. The cooling of potential hot spots in the combustion chamber defeats pre-ignition, the most destructive form of uncontrolled or unplanned combustion.
24. Higher static compression ratios will require a higher percentage of water or water/alcohol.
25. No, water does not burn. We are not combusting the hydrogen in the H2O.
26. EGTs will peak at around a 13.2:1 air/fuel ratio (a fuel/air ratio of .075).
27. Ferrari suspended water in their fuel during their 1980s Formula 1 period. We don't recommend that you try this… although acetone will mix with water.

Ferrari: engine with water injection.

Harley-Davidson, Suzuki, BMW, Hondas, Kawasakis, Triumphs, etc.: all our race bikes have run water injection since the early 1980s. The thermal loads, even with intercooling and racing gasoline, are too high, and without water injection you simply can't run mile after mile on the long course at Bonneville without some additional form of charge cooling.
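The rules of thumb in the list above are enough for a quick sanity check of a planned system. A minimal sketch in Python (the constants are the ones quoted in the list; the function name and the 20% default are mine):

```python
# Constants quoted in the list above.
AIR_LB_PER_MIN_PER_100HP = 12.0   # about 12 lb of air per minute per 100 hp
MAX_BOOST_AFR = 12.5              # maximum-boost air/fuel ratio, by mass
WATER_LB_PER_GAL = 8.33
ALCOHOL_LB_PER_GAL = 6.63

def mass_flows(horsepower, water_to_fuel=0.20):
    """Estimate per-minute mass flows at maximum boost for a given horsepower.

    water_to_fuel is the coolant-to-fuel mass ratio; the list recommends
    12.5% to 25% (0.125 to 0.25).
    """
    air = AIR_LB_PER_MIN_PER_100HP * horsepower / 100.0  # lb of air per minute
    fuel = air / MAX_BOOST_AFR                           # lb of fuel per minute
    coolant = fuel * water_to_fuel                       # lb of coolant per minute
    air_to_liquid = air / (fuel + coolant)               # air-to-liquid mass ratio
    return air, fuel, coolant, air_to_liquid
```

At the two ends of the recommended range the air-to-liquid ratio comes out to 12.5/1.125 = 11.1:1 and 12.5/1.25 = 10.0:1, which is exactly item 12 above.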
Running 22 psi of boost between the two-mile and four-mile markers day after day, and no engine failures. Pictured above is our part number 06-1020 Billet Water Injection Regulator, which is designed to regulate water or water/alcohol mixtures without any corrosion problems. Referenced to manifold pressure, it is available with both 8mm banjo and AN fittings. Banjo fittings allow 360-degree positioning of the inlet and bypass lines. Regulator pressure is screwdriver adjustable.

Atomizing Nozzles

All nozzles are individually filtered, so there is no need for an additional filter in the system. Whether you employ a common rail with multiple nozzles or individual nozzles, the fact that each nozzle is filtered prevents the inadvertent clogging of one nozzle, as opposed to having one filter in the system and no filters on the nozzles themselves. RB Racing offers the following atomizing nozzles, rated in cc per minute at 40 psi: 40cc, 80cc, 120cc, 160cc, 200cc, 400cc, and 600cc. Whichever nozzle and pump combination you choose, you should bench test the system before you install it in your vehicle. Running multiple nozzles can lessen the flow, and how you design your system, with all the routing of lines and fittings, will affect things. Whenever we engineer a new water injection system for one of our turbo kits, we bench test the delivery using either digital flowmeters or simple burettes (100ml up to 2000ml). Water is not hazardous, so all you need is a 12 VDC power source and something to measure the flow. If you don't measure it, you will never know. Since we reference water pressure to manifold pressure, the results are valid at the tested pressure. Using our Water Injection Calculator you can make adjustments to the water pressure regulator to bring things in line. For you creative types, you can use our nozzles both upstream and downstream, as well as for squirting water on your intercooler. It can get as complicated as you wish. Remember: it's all a heat equation.
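Before reaching for the burette, you can estimate what the bench test should show from the nozzle ratings. A short sketch, assuming the standard square-root orifice relationship between pressure and flow (that scaling law is my assumption, not an RB Racing figure, and plumbing losses are exactly why they tell you to measure):

```python
import math

RATED_PRESSURE_PSI = 40.0  # RSR nozzles are rated in cc/min at 40 psi

def expected_flow_cc_min(nozzle_ratings, pressure_psi=RATED_PRESSURE_PSI):
    """Expected total delivery of a set of nozzles at a given regulator pressure.

    nozzle_ratings: rated flows in cc/min at 40 psi, e.g. [200, 200].
    Assumes flow scales with the square root of pressure (ideal orifice
    behavior); real lines and fittings will cost you some of this.
    """
    return sum(nozzle_ratings) * math.sqrt(pressure_psi / RATED_PRESSURE_PSI)
```

Two 200 cc/min nozzles at their rated 40 psi should deliver about 400 cc/min; if the flowmeter shows much less, the routing of lines and fittings is restricting the system.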
Heat equals power… but too much heat equals destruction.

RSR Water Injection Pumps

BMW K1200RS reservoir with Model 38 pump

By keying the water pressure to manifold pressure, you effectively build a curve of water delivery that delivers proportionally more water in higher-load situations and tapers off somewhat at peak power and rpm, where less water is needed. Titanium seats are corrosion resistant. Anodization and double-diaphragm nitrile components are alcohol compatible. Various inlet/outlet combinations (AN, O-ring, banjo, etc.) are available.

…

A 12 VDC bubble-tight, normally closed solenoid can be placed between the water pump and the water injection nozzle(s). Multiple solenoids are used in a staged system, or in cases where the nozzles are placed downstream of the throttle blade where high vacuum is present. This ensures there is no possibility of water flowing to the nozzles unless boost is present. The solenoid is normally closed and opens at the user-defined onset of water injection, triggered by a pressure-activated switch. Single or multiple solenoids can be employed for staged systems, each with a different pressure trip point controlled by a pressure-activated switch. 1/8″ NPT standard; 1/4″ NPT optional.

Pressure-Activated Switches: User adjustable to trigger specific water injection events. There is usually just one of these, but multiples can be used for staged injection and for other uses such as spraying the intercooler's exterior. These are usually referenced to plenum or boost pressure, not below the throttle where both vacuum and boost are present. Adjustable from .5 to 150 psi. 1/8″ NPT.

– RSR Water Injection Calculator

You have to match your water delivery to your expected horsepower needs. If you are going to run staged nozzles with multiple solenoids, then you can use the calculator to total these against your peak boost horsepower.
You can also enter a lower horsepower figure where your primaries are activated; by zeroing out the secondary nozzle entries, you will be able to see the coolant-to-fuel and air-to-liquid mass ratios from the primary nozzles only.

– Horsepower: peak boost horsepower
– Primary Nozzles: rated in cc per minute
– Number of Primary Nozzles
– Secondary Nozzles: rated in cc per minute
– Number of Secondary Nozzles
– Water Regulator Pressure: initially set at 40 psi, which is the pressure at which RSR nozzles are rated in cc per minute
– Percent Water: 0 to 100; typically 50
– Percent Alcohol: 0 to 100; typically 50

## TECTANE octane alternative; H2O Injector for Gasoline Conventional Engines, http://www.tectane.com/index.php

# Technologies: AQUAHOL

AQUAHOL is an automotive fuel process in which precisely measured amounts of H2O are injected into an alcohol-powered combustion engine. AQUAGAS is a very similar fuel process, but refers to an engine running on gasoline or gasohol. Both the AQUAHOL and AQUAGAS fuel processes are made possible by the same device, the low-cost, high-efficiency TECTANE H2O Injector (which can be added to most cars and trucks as an after-market installation by a certified mechanic, or installed as OEM equipment prior to the sale of new cars).

Clean vs. Polluting Fuels: The United Nations Environmental Fact Sheet #25 states that the burning of fossil fuels is the highest contributor of CO2 emissions. Ethanol fuels contain 35% oxygen and are much cleaner burning than gasoline fuels. Furthermore, ethanol is a renewable resource.

Licensed chemist Domenico Chiovitti at OTI Petroleum Labs demonstrating the comparison of emissions from burning 1. gasoline; 2. alcohol; 3. alcohol and water mixed together.

H2O Injector

The revolutionary H2O Injector is an inexpensive add-on device (for cars with combustion engines) which reduces emissions and increases mileage, and can even allow regular cars to run on 75-octane gasoline or ethanol fuels.
The H2O Injector can be installed on many conventional engines, and uses a special TECTANE formula to protect alcohol-sensitive engine components. The complete system costs $500 USD fully installed (averaging 2-3 hours to install on most cars). AQUAHOL is the separate injection of 80% ethanol and 20% water (H2O). Since water contains 89% oxygen (by molecular weight) and ethanol contains 35% oxygen, complete, effective combustion occurs. AQUAHOL is classified by the U.S. Department of Agriculture as being 800% more efficient than gasohol, which is a blend of 90% gasoline and 10% ethanol (currently mandated by U.S. law and also adopted worldwide). The added oxygen provides for better combustion, and therefore fewer emissions (such as carbon monoxide and NOx). It also continuously "steam-cleans" the engine, increasing its lifespan. Tests conducted by the S.A.E., the E.P.A. (California), the Hollywood (Florida) Police Department, Pepsi-Cola, Marriott Hotels, CarQuest and others have all reported savings on mileage:

# E.P.A. – 25%
# S.A.E. – 40%
# Marriott – 40%
# Police – 40%

The following links to test papers and testimonials are provided as certification of the chemical, mechanical and operational viability of TECTANE's H2O injection system… see original page ==>

Why AQUA? Chemistry 101: water (H2O) is composed of 2 parts hydrogen (a powerful fuel) and 1 part oxygen (which is required for burning any fuel). Water molecules can be split into these separate gaseous elements by using electricity (which is cost-prohibitive), or through the combination of very high pressure and temperature. In fact, the temperature and pressure required to split water into its two elements already exist inside the combustion chamber of every car when you step on the gas. This is an amazing, often overlooked fact. So: what if you could introduce a small amount of water into the combustion chamber of your car's engine every time you stepped on the gas pedal?
This is precisely what the TECTANE H2O Injector does! The water is instantly vaporized into steam, and then further split into hydrogen and oxygen. This extra oxygen helps to more completely burn the regular fuel that you are using, and the extra hydrogen that is added to the mix provides even more fuel energy. The result? An immediate boost in horsepower, plus better mileage and more complete combustion (and therefore fewer harmful emissions). In fact, you can even use lower-octane gasoline (such as 75-octane), because the on-demand added oxygen from the water gives you all the octane you need.

# History of H2O Injection and AQUAHOL:

U.S. Air Force and H2O Injection: TECTANE CEO Nino De Santis with the Mustang P-51 fighter plane and original H2O injection system. "The P-47D was very much like the P-47C except that H2O injection was made standard for more prolonged combat power, permitting 2300 HP at 27,000 feet, and a top speed of 433 MPH at 30,000 feet. With this capability it was the IDEAL FIGHTER." — Pilot's Manual, Foreword, P-47 Thunderbolt

Race Cars and H2O Injection: Professional race cars (which use alcohol as an extremely efficient, powerful, safe, clean-burning engine fuel) adopted the H2O injection system after the fighter planes, for both added engine power and safe, clean combustion.

Diesel and H2O Injection: The H2O injection system has been proven to work effectively on diesel engines. Pictured is TECTANE's clean-exhaust diesel test on a garbage truck in Florida.

Consumer Cars and H2O Injection: TECTANE's H2O-injection Rolls-Royce from the Alternative Fuel Car Race in California, 1993. TECTANE won the Best Ecological Patent Award in 1991.

Leisure Cars and H2O Injection: In 1999, TECTANE jointly developed a beach buggy called the Humming Bird. The vehicle sported a Peugeot 206 engine outfitted with the TECTANE H2O Injector, which provided added horsepower, better mileage and a longer engine life.
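One number in the TECTANE material quoted earlier is easy to verify: water really is about 89% oxygen by weight, since that is simply the oxygen mass fraction of H2O. A quick check (standard atomic masses; the variable names are mine; note this verifies only the composition figure, not the combustion claims built on it):

```python
# Standard atomic masses in g/mol.
H = 1.008
O = 15.999

water = 2 * H + O                  # molar mass of H2O, about 18.015 g/mol
oxygen_fraction = O / water        # mass fraction of oxygen, about 0.888
hydrogen_fraction = 2 * H / water  # mass fraction of hydrogen, about 0.112

print(round(oxygen_fraction * 100, 1))  # about 88.8
```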
# Corporate Information

EXECUTIVE SUMMARY: TECTANE signed its first joint venture merger with a Chinese (Hong Kong) public company on May 18, 2006. Several other mergers are presently being negotiated in Canada, India, the Philippines and other countries. Stay tuned for more information on upcoming joint ventures and partnerships.

Business Plan Outline: TECTANE Corporation USA holds trade secrets to technologies that address the climate change crisis, global warming, deforestation and alternatives to nuclear plants for generating electricity. The technologies consist of:

A) TECTANE octane alternative; H2O Injector for gasoline conventional engines. This system produces hydrogen and oxygen from common water, in conventional engines, using the existing heat and pressure of combustion. The TECTANE H2O Injector delivers water through a specifically designed system that costs under $500 USD and can be installed within 2 to 3 hours. The major advantage of the system is its ability to burn cheap "straight-cut" gasoline, without expensive and poisonous chemicals such as lead or M.T.B.E. (S.A.E. Tests #215-216) (USDA Feature Bulletin #771-80). The system can save up to 25% in fuel costs, while reducing emissions significantly. The system can also operate on alcohol fuels or Aquahol, TECTANE's blend of 80% alcohol and 20% water (E.P.A. Test, California).

B) TECTANE Diesel Ceramic H2O Injector. The diesel system increases efficiency while reducing emissions. Tests conducted by the E.P.A. on N.Y. buses indicated emission reductions ranging from 20-30%, with mileage increases from 10-13%. The kit costs only $500 USD including installation.

Business Strategy: Over $50 million of research and development capital has been invested by the U.S. and Canadian governments in the above technologies within the past 20 years. TECTANE has revised these technologies and merged them together to qualify for new patents.
The advent of the United Nations and World Bank's Global Environmental Facility (G.E.F.), under the mandate for environmental protection, has created a new opportunity for these technologies, with huge grant funds for developing countries. TECTANE has organized a program to transfer these technologies to developing countries, with funds from the G.E.F. program. TECTANE becomes a partner with the countries and the World Bank. Contracts with these countries will be assigned to a public company, thereby creating capital and high stock value.

Ethanol (alcohol) has been positioned under National Energy Security in America, while the current Alcohol Energy Security Act provides subsidies for the production of fuel ethanol…. the huge Cascades/Boralex Corporation produces electricity from wood chips for N.Y. State. TECTANE has negotiated a potential 10-year supply agreement for Sweet Sorghum fibers, with ethanol co-generated from their surplus heat. Cascades is a large pulp and paper corporation, and they have shown that the fibers can be used in the pulp and paper process. Cascades also owns another division which manufactures low-cost fiber housing. This division could also use the Sweet Sorghum fibers for their pre-fabricated homes.

Since ethanol fuel is now in a stock market bull rush, TECTANE is capturing the opportunity worldwide with its revolutionary technologies and business plan. Fuel station franchises will also be offered, once the ethanol supply from TECTANE's production partnerships is available. Meanwhile, TECTANE will be offered on the futures markets and stock markets worldwide.

C) TECTANE emission recycling system. This system, now under development, adapts the above technologies and recycles the wasted fuel to combustion, while filtering the emissions. The H2O is created at combustion without the need for a surplus H2O tank.

D) TECTANE Ethanol (alcohol) production program.
Hybrid Sweet Sorghum stalks are separated with a splitting technology to capture the sucrose juice for fermentation and distillation to ethanol fuel, while the fiber becomes a valuable by-product for lumber alternatives. The stalk also has a flowering grain-top that can be used for animal feed or human consumption. A yield of 400-500 gallons per acre has been achieved, compared to the conventional 300-350 gallons per acre from corn.

E) TECTANE separated fiber for pulp/paper and lumber alternatives, including low-cost housing or the generation of electricity in wood-burning plants.

F) TECTANE Fiber Housing System. The process utilizes the fiber for the pre-fabrication of a patented low-cost housing formula that requires no lumber from trees and is water-, fire- and termite-proof. The housing structure is also resistant to earthquakes.

G) TECTANE will also be introducing at least one new car with the H2O Injector system incorporated as original equipment.

## SAAB MOTORS, Water Injection

From the French website http://www.econologie.com/articles.php?lng=fr&pg=2338 : a lot of information about water injection and many other fuel saver systems, but in French.

# – Water injection has also been used in aircraft since WWII, for power boosting of over 45%!

# – AquaTune Water Injection, http://aquatune.com . Would you believe this?

– From 25% and up increase in fuel economy!
– Up to 30% increase in horsepower, and a higher torque range!
– Gets rid of harmful carbon deposits!
– Pass previously failed DEQ checkouts!
– Now available for all motorcycles, including 2-cycle engines. Also for boats and all racing applications.
– We have a 30-day money-back guarantee.
– Soon to be available for all diesel applications.

This is a finely tuned water injection system unlike any other, giving atomization of water and air before they go into the engine, where the mixture is further atomized; there it collides with the fuel molecules, making a highly explosive mixture and expanding the fuel.
As this mixture goes into the combustion chamber a marvelous thing happens: a chain reaction. All carbon immediately starts being removed inside the engine, back to the state it was in when it was new. This happens because hot carbon and water don't mix, somewhat like nitro colliding with glycerine, with small carbon deposits going by valve seats and helping to break off other deposits… giving a clean, sweet machine with a much higher compression ratio and a more complete burning of fuels, making in essence a half steam engine, half gas engine. This makes for a sweet tune, efficient combustion and 15% to 30% more horsepower.

It's been well known for many years, and still is today, that the steam engine is the most powerful of all engines; steam is still used on aircraft carriers for catapulting jet fighters off the deck. Water injection was also used on the P-38 fighter planes during WWII, and at their high altitudes it gave the engine a whopping 68% more horsepower. Therefore, at 2,500 feet above sea level you could expect 38% more horsepower; of course, that depends on the condition of the engine and how well it has been tuned.

…

Secondly, AquaTune is like no other water injection system in that it is actually a fuel cell hydrogen processor. It produces hydrogen-rich bubbles before they are introduced into the engine draft. After this it becomes even more highly enriched hydrogen as it collides with hot metals and combustion, and it drops the engine combustion temperature. Now the engine timing can be advanced up to 7 degrees over factory specifications, if so desired.

…water injection and an ultra-sonic barometric pressure chamber giving off ultra-sonic frequencies which in turn split the hydrogen from the oxygen, creating hydrogen-rich bubbles and hydrogen gases… This pressure chamber adjusts to any altitude and is not affected by heat… To further resist corrosion we use nickel-plated nipples going into and out of the unit.
It has stainless steel water jetting and internal micron filtration down to 6 microns, assuring that it will never clog. Note: most people are reporting 35% or even better fuel mileage… your foot is much higher on the gas pedal to achieve your best miles per hour. We find that most vehicles use two (2) quarts of water to 15 gallons of fuel, depending on driving conditions. At times, we also add a 1/1 ratio of 70%-by-volume isopropyl alcohol for added horsepower. This same mixture should be added to the water source in cold climates to keep it from freezing. The isopropyl alcohol can be purchased at any drugstore or pharmacy. We highly recommend the use of distilled water to avoid possible mineral buildup…

We have a 30-day money-back guarantee and we guarantee the unit with a five-year warranty on parts and workmanship. (NOTE: Warranty void if unit has been tampered with.) We are the AquaTune Company, since 1972, and our best advertising comes from you! Turning water into a fuel, now and in the future!

1. How does this system work? This system consists of a barometric echo chamber which will allow for high-altitude driving and will also automatically adjust to sea-level driving. The system will adjust to engine loads down to 2.9 inches of vacuum. This echo chamber gives off high sonic frequencies when vacuum goes through the chamber. Incoming air velocity collides with this tuning fork at 1,200 feet per second, which in turn vibrates the tuning fork violently. This results in the cracking of the water, which makes this system more of a fuel cell than a water injection system…

…

One of the beauties of this system is that it does not require any pumps or cumbersome wires or modules. On the AquaTunePlus units, only two wires need to be hooked up…

4. …The whole system weighs less than two pounds and can fit in the palm of your hand, that is, without the water reservoir. The reservoir holds 2 quarts of water.

5.
… On average, the contents of this reservoir will last for approximately 14 to 16 gallons of fuel… During cold weather, at any temperature below 50 degrees, it is necessary to add isopropyl alcohol, 70% by volume, to act as an anti-freeze. As an added boost for racing vehicles, a 50/50 mixture of water and methanol can be used, or a 50/50 mixture of isopropyl alcohol and water.

6. What happens if I run out of water? If the water runs out there is no harm to the engine. The only thing that happens is that the fuel economy and the horsepower revert back to what they were before the installation of this system.

8. …The system is easy to install and takes approximately half an hour to an hour, depending on the person's expertise… Each processor is pre-set at the factory for both water and air injection and is bench-tested to be free of defects. Any tampering with the unit will automatically void the guarantees.

9. Will this system void my new car warranty? No. See Title 15, Chapter 50, Sec. 2301-2312 of the Magnuson-Moss Warranty Act.

10. Will putting water into my engine rust the internal parts? No. For one thing, heat evaporates the water; also note that engines produce "sweat" naturally, and the by-product of engine combustion is water. This is the reason you see moisture escaping through the tail-pipe, particularly in the morning.

11. …Water injection systems have been around since the 1930s. However, they have come and gone in popularity. Ours has remained and is considered the best ever made, because this is water injection at its highest level.

12. …We have found that the best spark plugs to use are standard copper-core plugs such as Autolite. Double platinum plugs burn too cool. Standard plugs burn at a higher rate, which this system needs. And, at the same time, you are saving money, because the copper plugs are less expensive. In the newer vehicles iridium spark plugs are also acceptable.
13. …With this system it's like adding 108 octane, and you will receive the best performance by using ONLY the lowest-octane grade gasoline possible.

# A comment about 'AquaTune' from Bob Boyce, found on OUPOWER.com; http://oupower.com/phpBB2/viewtopic.php?t=310&postdays=0&postorder=asc&start=15

Bob Boyce, Regular Poster, Posted: Sun Oct 09, 2005 10:36 pm: About $400 + shipping for a glorified repackaged ultrasonic fogger? He must be stoned on something good. Bob _________________ 2H+O+Spark=BOOM!

– and an answer from eco, Regular Poster, Posted: Mon Oct 10, 2005 10:19 pm: Water injection only works for turbo or supercharged cars; it doesn't work for N/A cars. AquaTune is not real water injection. The best solution is water/methanol injection. A lot of water injection systems have bad designs. Homemade water injection is good, but you must test a lot of different injector sizes and adjust the pump pressure. The best results come if you are running over 15 psi of boost. Running water injection below 5-6 psi is dangerous because you can flood the engine.

# The AquaTune System: installation is a breeze! http://aquatune.com/aquatuneplus.html

The new AquaTunePlus System: $699.50 plus $19.80 Priority Shipping, insured (shipping charges are for US delivery only; please contact AquaTune for shipping outside the US).

This is a '90 Corvette 350 IROC engine with twin venturi injector nozzles and a low-profile distribution block. Note the reactor feeding the distribution block. '06 Chevy Vortec with 4.8L engine and the AquaTunePlus kit. Typical installation of the processor mounted on the cowling of an '04 Buick Rainier with the 4.2L engine. Note the injection line going to the injection nozzle and the overall simplicity of this installation. Typical install on all dual-throttle-blade Dodge applications and Jeep, '93-'00. This is considered to be a very "clean" installation, done by one of our customers on an '05 Toyota Corolla S, 1.8L.
Please note the use of the brackets for mounting both the distribution block and the processor. This assists in getting as close as possible to the point of entry for a better injection. Also note the position of the venturi nozzles in the manifold runners. Injection action in the main hose! Injectors view.

Due to the next generation of AquaTune systems, with and without the downstream generators, a price increase has become necessary. The new systems are more efficient and, consequently, more costly to produce.

The AquaTunePlus System

AquaTunePlus far exceeds all other water injection systems on the market today. Here is why: in pressurized water injection systems, the release of energy from pressurized water during the combustion cycle is largely wasted, and consequently almost all the energy is lost through the exhaust cycle. The H2O molecular structure in pressurized water is tightly bonded together. Even in a fine mist concentration, a 20% release of energy would be considered good during the combustion cycle. Pressurized water is also difficult to dissipate once inside the intake manifold or collected on the intake valve seats. This residue would also accumulate on the turbine blades, whether the vehicle is turbocharged or supercharged. This has been known to give water injection a bad name.

This is not the case with AquaTunePlus. AquaTunePlus pre-explodes the water, in other words "cracking" the water, which gives a complete foundation for releasing the energy before and during the combustion cycle. When the "cracked" water collides with the fossil fuel, it rapidly expands the fuel, giving it a higher BTU rate and maintaining a lower combustion temperature, while giving a higher compression ratio and still lowering the oil temperature by approximately 25 degrees. Here is how the processor is able to explode and crack the water: water is brought through very precise jetting and then through a tuning fork.
Incoming air traveling through intruded turbines hits the tuning fork at 1,200 feet per second; the water injection then travels down through a spiral chamber to the echo cracking chamber, where it collides with a ram spiral air injection. Namely, two tornadoes colliding together at the same time: one, the spiral velocity chamber, and two, the ram induction chamber. Hence, violently "cracking" the water. The injection is then brought into a precious-metal-grid hydrogen generator. The generator is powered by a 12-volt battery; it is powered up only when the engine is running and shuts off automatically when the engine is shut off. Therefore, you never need to worry about the battery being drained or unwanted hydrogen being produced. The generator creates large concentrations of hydrogen on demand, making the injection a very high-potency injection. From here it goes from the generator to the venturi injector nozzle. The incoming air passing the specialized venturi injector nozzle creates a vacuum zone which powers up the processor. This injection is quite explosive in the combustion chamber.

Four years of research and development make AquaTunePlus the best system on the market today. It makes other water injection systems and hydrogen-producing systems obsolete. For all intents and purposes, this is the only system that has the right to be called a fuel cell with hydrogen on demand. We are turning water into fuel, both today and in the future.

Here is what you can expect from our system: up to 30% more hp; oil temperatures down by 25 degrees; higher air charge density; no more pre-detonation, even on turbocharged and supercharged vehicles; removal of all carbon build-up in the combustion chamber, valve seats and EGR passageways, and clean-up of sticking rings; cleaner oil and prolonged oil viscosity life; drastically prolonged engine life; and a 25%, or more, increase in fuel economy.
You can order the AquaTunePlus Water Injection System direct from AquaTune.com by the following methods: 1. By mail – 2. By phone – 3. By Internet.

AquaTune Customer Testimonials (large number of recent testimonials, 2006): http://aquatune.com/testimonials.html

# – AQUAMIST, the final frontier… Aquamist is ERL's latest generation of water-injection equipment. http://www.aquamist.co.uk/index.html

Water, with its high latent heat content, is extremely effective for controlling not only the onset of detonation but also the production of oxides of nitrogen in modern lean-burn engines.

SYSTEM 2d – Latest addition for 2003: a full 3-D water injection system without programming! It keeps track of the fuel flow and injects water at a fixed water/fuel ratio; change the jet for different ratios.

Introduction: For those who want a 3-D water injection map but do not want the time-consuming task of mapping, this system is perfect. The heart of the system is a newly developed controller that reads the PWM signal from the engine's fuel injectors (including peak-and-hold types) and converts it to drive the High Speed Valve to deliver water; it draws less than 10 mA from the pulsed line. This enables the water injection to follow a fixed water/fuel ratio. A 3-30 psi adjustable pressure switch (normally closed) sets the cut-in point relative to the manifold pressure. It just cannot be simpler for the user. The system provides a pre-pressurised water line up to 8 bar, and the flow rate is metered by our High Speed Valve (HSV). This inline valve is made of high-grade stainless steel. As usual, except for the water tank, the system is supplied with everything needed to be fitted with ease.

The Fuel Injection Amplifier v2 (FiA2) … Concealed inside this tiny exterior it holds a very important diagnostic circuit: it will detect a BLOCKED water jet!!!
… It reads the pulsed signal from the centre pin of the Aquamist pump, compares it to the water valve pulses, and flags an error signal to yet another 1.5A output drive to trigger a relay or a boost-limiting solenoid valve, bringing the boost pressure to a safe level should a blocked jet be detected while water is being injected (the preset manifold pressure is reached).

MF2 injector driver: This small controller will go beyond what any EPROM chip can ever dream of: it can control four high-impedance injectors to flow enough fuel for up to an extra 200 bhp!

Full details about water injection, in French, at http://www.econologie.com/articles.php?lng=fr&pg=2338

# Industrial Use of Water Injection, for Power Plants, by Power Generation Technology (PGT), 2006: http://peswiki.com/index.php/Directory:PowerPlus_by_Power_Generation_Technology and http://pepei.pennnet.com/articles/article_display.cfm?article_id=256018

How it Works: PowerPlus injects minute jets of water into the turbines through a patented system, which cools the unit. By keeping the unit cool, emissions are radically reduced, alongside a 25 per cent drop in turbine fuel consumption. Turbines then become more efficient and can generate an additional 25 per cent of power. PGT says that a power station operating eight 200 MW gas turbines fitted with PowerPlus units can save up to £150m ($280m) in fuel every year, as well as generate extra revenue of up to £80m through increased megawatt production.

Inventor: Tony Archer. PowerPlus is the brainchild of Power Generation Technology Ltd's Chief Executive Tony Archer. Archer is a mechanical engineer with almost 30 years' experience in the power and petrochemical industries. Power Generation Technology Ltd is backed by a group of investors, including Sheikh Alli Said Al Harthy, who is a member of the Royal Family of Oman.
Stage of development: five global organizations have signalled their intention to install this new technology, which could lead to potential contracts worth between £12.5m and £200m. The Chinese government has expressed an interest in the device for its 10,000 power stations, a contract that could be worth £45bn to PGT. PGT has production and maintenance agreements with a number of key UK companies, including NEL and the specialist machine manufacturer Fairless. Initially, around 50 Tees Valley-based engineering specialists, manufacturers and equipment suppliers will be involved in the manufacture and maintenance of PowerPlus units. In addition, Power Generation Technology Ltd is creating an initial 15 new jobs and investing in a planned new head-office building in Billingham. The company has also established offices and workshops in Oman and China.
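The fixed water/fuel-ratio metering described for the Aquamist System 2d above is easy to sketch numerically. This is a minimal illustration of the idea, not the vendor's algorithm; the function name, flow-rate constants, and target ratio below are all invented for the example:

```python
def water_pulse_ms(fuel_pulse_ms,
                   water_fuel_ratio=0.15,   # hypothetical target: 15% water by volume
                   fuel_cc_per_ms=0.35,     # hypothetical injector flow rate
                   water_cc_per_ms=0.25):   # hypothetical water-valve flow rate
    """Convert a fuel-injector pulse width into a water-valve open time that
    delivers a fixed water/fuel volume ratio, the core of a duty-cycle-tracking
    water-injection controller."""
    fuel_cc = fuel_pulse_ms * fuel_cc_per_ms   # fuel delivered during this pulse
    water_cc = fuel_cc * water_fuel_ratio      # water needed to hold the ratio
    return water_cc / water_cc_per_ms          # valve open time in milliseconds
```

Because the water pulse is a fixed multiple of the fuel pulse, doubling the fuel delivered doubles the water delivered: the water/fuel ratio stays constant across the whole load range without any 3-D mapping.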
The ACT Goes Digital Vice President, Career & College Readiness, ACT Big news in standardized tests: the ACT will be offering a digital version of its exam starting in 2015. Why this change, and what does this mean for future test takers? Read on to find out—directly from an ACT insider. It’s no secret that the world has gone digital. Smart phones are virtually standard equipment these days. Tablets are everywhere. Photos and video, once shared, can go viral in a matter of hours. Yet within this technological world, high school students still sit down, #2 pencils in hand, to fill in the bubbles on test answer sheets, aiming to show their readiness for success in college. There is nothing wrong with that, necessarily. The paper-and-pencil format has worked well for a long time. But the advantages brought by ever-improving technology will someday change that scenario. That someday is coming soon. We recently announced plans to begin offering a digital version of the ACT test starting in 2015. Initially, the computer-based version will be available only to schools that administer the ACT to all students on a school day as part of their state- or district-wide assessment programs. By moving to a digital format, we are trying to make use of technology that students already use and understand and provide testing options that match how students learn. We aren’t changing the curriculum-based content of the test, just adding another option for its administration. The digital ACT will cover the same subject areas, measure the same academic skills, and be scored on the same 1 to 36 scale as the paper-and-pencil version. It will still be administered under standard conditions and in a controlled environment, and colleges will still accept the ACT scores earned. The main difference will be that students will use a tablet, laptop, or desktop computer to record their answers. 
Some of you may be thinking: “It’s about time.” Rest assured, computer-based administration of the ACT has been in the planning stages for quite a while. But it’s very important that we move thoughtfully and responsibly in its development to align the best possible experience for students with the same level of testing quality people have come to expect from ACT. That’s why we haven’t rushed into this new arena. We are making this move because computer-based testing offers significant benefits. It can provide a better user experience for students, who use touch screens and keyboards more often than pencils. It offers greater flexibility of questions, so the test can more effectively evolve over time to better reflect national and international standards. And it allows for potentially faster reporting of score results. Two common concerns about digital testing are (1) whether it will be easier for students to cheat, and (2) how technical problems will be prevented. Cheating is a factor that must be considered any time students sit down to take a test. Digital testing raises different—but not necessarily greater—concerns than paper-and-pencil testing. ACT is constantly working to improve its test security procedures and will continue to do so. Significant steps are being taken to protect the integrity of our test scores in the digital world, so students who do things the right way are not at a disadvantage to those who attempt to game the system. Similarly, no testing format is immune from potential problems. Paper answer sheets can get damaged or lost in shipping. Power outages, utility breakdowns, and severe weather can cause test center cancellations, regardless of test format. There is always the chance that something can go wrong. With each new challenge, we learn, make adjustments, and improve.
Finally, it is important to note that paper-and-pencil testing is not going away any time soon; it will remain an option for ACT test takers as long as there is a demand for it. Yes, the world has gone digital, and educational assessments are following suit. We at ACT look forward to the exciting changes as we advance to the future.
Optimization Archives Optimization is the process the DBMS uses to choose the most efficient execution plan for the query you have written. Since SQL is a declarative language, you tell SQL what to do, but not how to do it. This means many of the mechanics, the ins and outs, are left to the DBMS: it gets to choose whether to use an index or just scan a table. You can see what choices the DBMS is making by viewing its query plan. The article Query Plans in SQL is a great place to start.
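To make this concrete, here is a minimal sketch using SQLite from Python (the table, index, and column names are invented for illustration). `EXPLAIN QUERY PLAN` asks the engine to describe the plan it chose without actually running the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Ask the DBMS how it *would* execute the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("someone@example.com",),
).fetchall()

for row in plan:
    print(row)  # the detail column names the index the optimizer picked
```

With the index in place, SQLite reports a SEARCH using `idx_users_email`; drop the index and the same declarative query becomes a full-table SCAN. The query text never changes, only the plan the DBMS chose.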
Dementia is an illness affecting the brain, leading to memory loss and declining mental and cognitive abilities. Losing oneself is extremely traumatic, and spotting the signs of dementia early can help us to slow its progression and relieve stress on ourselves and our loved ones. For instance, Uncle N, who has dementia, wakes up at four in the morning and insists on leaving the house. He finds it hard to explain why he wants to leave. This may sound frustrating, but learning to spot the early signs of dementia will make it easier to understand him and respond with love and compassion. Here are six early signs of dementia that you can keep a lookout for. Sign 1: Short-term memory loss Notable and rapid memory loss in dementia patients is caused by physical changes in the brain, with cells dying at a faster rate than normal. Distinguish typical ageing from dementia by watching for frequent pausing when searching for words or a striking decline in memory for recent events. Uncle N may have difficulty remembering recent conversations but can remember what happened when he was younger. By chatting with him, you might even learn a thing or two about his childhood! Sign 2: Misplacing things often We all misplace things, but Uncle N may leave belongings in unusual places (like the fridge) and is unable to retrace his steps. In such situations, reassure him and help him solve the problem, for example by telling him that you will help search for the lost item. Sign 3: Losing their way easily Sense of direction and spatial orientation usually worsen during the early onset of dementia. Uncle N may often forget how he arrived at a location or where he currently is. Try finding the reason behind his behaviour, such as his past profession or habits. Sign 4: Trouble making plans or decisions Uncle N may experience difficulty in concentrating and may take longer to complete things than before.
This change may gradually affect his lifestyle as he finds it harder to make plans or decisions. You know your loved one best, so respond to them accordingly. For example, if Uncle N has a stubborn character, forcing him to make a decision won’t work. Practise patience and guide him through the process instead. Sign 5: Changes in mood It may feel as if a switch is present in Uncle N’s mind. Each time it clicks, his attention either switches to something else or he abruptly forgets what he is doing in that moment and becomes upset. Confusion, anxiety, anger and fear are common emotions a dementia patient may experience. Take baby steps and slowly learn to recognise the situations that may upset them. Sign 6: Withdrawal The confusion and fear that Uncle N is experiencing may result in withdrawal, such as a refusal to go out as he used to. While you may be tempted to force him to get out of the house, consider the underlying causes of this change. The best solution may be to seek professional medical help. Sometimes, however, a little white lie works wonders (like enticing Uncle N with his favourite food) when getting him to go out or to see the doctor. It’s widely documented that treating dementia patients with the respect they deserve helps most. Feeling understood is crucial to everybody’s well-being, and an improved understanding of a dementia patient’s actions will improve your attitudes and responses towards them. Sources: WebMD; Alzheimer’s Association
How to Play Oops!, A Simple Review Game for Any Standard Oops! is such an easy game and so great for your budget! Instead of spending money on fancy games that aren't aligned to your standards, this is a simple game you can teach your students at the start of the year that they can play every day. It works for partners or small groups, AND it can review literally any content. I love it for centers, because it can go on for any length of time. Plus, the "Oops!" part of the game - where students have to return all their cards - is random and lets your strugglers feel like they can win even when they're playing against more advanced students. So here's how it works: Students shuffle the deck of cards, and each student takes a turn drawing a card and answering the question on the back. If a student answers correctly, they keep the card. If they answer incorrectly, they return the card to the bottom of the stack. Play continues until time is up, and the player with the most cards at the end wins. Easy, right?! Mixed into the game are some cards without questions, too. There had to be a catch! The “Oops!” card requires the student to return the “Oops!” card and any other cards they have to the bottom of the pile. This is how we keep the game interesting! There are three other cards as well. The first asks students to take one card from every other player. The second tells students that they lose a turn, but don’t have to return their cards to the pile. The third allows students to draw two more cards. All of these cards are returned to the bottom of the pile after each turn. I am still creating Common Core aligned cards for various topics over at my TeachersPayTeachers store. If you need a set that I haven't created yet, leave a comment below so I can get started! Eventually I'll be selling the cards in bundles by grade level/subject.
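For the curious, the core loop above is easy to simulate. This is just a sketch of the basic rules (question cards plus the Oops! card; the three bonus cards are left out), with made-up deck sizes and a made-up answer-probability parameter:

```python
import random

def play_oops(num_questions=20, num_oops=4, turns=60, skill=0.7, seed=1):
    """Simulate a two-player round of Oops! Question cards are kept on a
    correct answer (probability `skill`); a wrong answer sends the card to
    the bottom of the pile, and an Oops! card costs the player their whole
    hand plus the Oops! card itself."""
    rng = random.Random(seed)
    deck = ["Q"] * num_questions + ["OOPS"] * num_oops
    rng.shuffle(deck)
    hands = [[], []]
    for turn in range(turns):
        if not deck:          # everyone is holding cards; nothing left to draw
            break
        player = turn % 2
        card = deck.pop(0)    # draw from the top of the pile
        if card == "OOPS":
            deck.extend([card] + hands[player])  # whole hand goes to the bottom
            hands[player] = []
        elif rng.random() < skill:
            hands[player].append(card)           # correct: keep the card
        else:
            deck.append(card)                    # wrong: back to the bottom
    return hands, deck

hands, deck = play_oops()
print([len(h) for h in hands])
```

Because the Oops! cards keep cycling back into the pile, no lead is ever safe, which is exactly what lets a struggling student stay in the game against a stronger player.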
Here are links to the sets I've already created: Addition Facts, Subtraction Facts, Multiplication Facts, Division Facts. Here's to engaged students and more instructional time!
Web Results Hornet stings are more painful to humans than typical wasp stings because hornet venom contains a large amount (5%) of acetylcholine. [5][6] Individual hornets can sting repeatedly; unlike honey bees, hornets and wasps do not die after stinging, because their stingers are only very finely barbed (visible only under high magnification). As fall turns to winter, you see fewer and fewer wasps in your backyard. What happens to wasps in winter? Most wasps do not survive a winter, sometimes because of the cold temperatures and sometimes because of the lack of food. All wasps do their part to help the queen wasp survive to lay eggs. While wasp and hornet species exhibit subtle differences in nesting habits, most have similar life cycles: workers and males die in the fall or winter, and only the mated queens survive. New queens hibernate over the winter and start new nests each spring; they do not come back to their old nests. When do hornet nests die? After the new queen hornets have left the nest, no other eggs are laid by the original queen. When the first frosts arrive and the temperature drops, the nest starts to die off. Some nests survive longer than others if they are located in warm loft spaces, but usually by Christmas the nests are dead. In temperate zones, most hornets die before winter; many die from a combination of cold and old age, since their life span ranges from a few months to 7 years, though some do survive. When will a wasp nest die off? Once the wasp nest has produced new queens and these have spun their silk caps ready to pupate, the nest is essentially on a countdown to death. The timing of new queen production varies from year to year, but it is synchronised, so all the nests do it at the same time. When Do Wasps Die Off, and All You Need To Know About Nests.
As the summer draws to a close, a wasp colony will produce some new males and queens; these fly away from the nest and mate, and the queens find somewhere to hibernate. Common myths about wasp dangers: wasps do not typically swarm. Q. I found a wasp in an upstairs bedroom yesterday, flying around indoors in the middle of winter! It was mostly black with some yellow stripes and long legs. I thought it was a one-time thing, but today I found another one in a bathroom window. What’s going on? Do they have a nest inside? In winter, wasps use your walls and your attic crawl spaces to hide from the cold. So, if you intend to go up into your attic, or do some renovations in the wall, be aware that this hazard is much more prevalent in winter, and take measures to protect yourself. Most wasps die in the winter due to starvation, not the cold, as was previously thought. Some can survive if food can be found outside the nest. In the fall, most worker wasps die. (The workers are sterile females; it is the males that mate with the new queens before dying.) The queens then look for a warm place to stay through the winter.
April 11, 2019 Why are Seizures One of the IDD Fatal Four? A seizure is an event in which a person’s brain experiences a surge or “storm” of electrical activity, which interrupts the brain’s typical functioning. About 1 in 10 people will experience a seizure at some point in their life, and many of those seizures do not recur. However, some people experience multiple seizures and are diagnosed with a seizure disorder. Seizure disorder, also called epilepsy, is a developmental disability. As with autism and cerebral palsy, many individuals with epilepsy have typical IQs and live highly independent lives. However, there are many individuals with IDD who have epilepsy in addition to another disability. As many as 35 percent of individuals with cerebral palsy and 75 percent of individuals who have experienced brain injuries may also have a seizure disorder. Seizures can be dangerous in the moment they occur, because the individual experiencing them often loses control of their body, loses consciousness, or both. This can result in a variety of injuries, accidents, and dangerous situations: imagine what would happen if a person suddenly lost consciousness while swimming, driving, or crossing a busy street. The accumulated effects of seizures over a lifespan can also be deadly. Individuals whose seizure disorder is not well controlled may experience severe complications, including death. Risk Factors for Seizures Individuals who have a known seizure disorder may have seizures at any time for no apparent reason. However, some circumstances might provoke a seizure even in a person who does not have epilepsy. These include: • Stroke • Brain injury • Dementia • Brain infections • Liver or kidney failure • Severe high blood pressure • High fever (typically in children) • Drug use or toxic substances For individuals who do have epilepsy, anything that causes stress to their brain or body systems could increase the risk of a seizure.
Further, some individuals have specific “triggers” for seizures. Classic examples include flashing lights or certain sounds. Complications from Seizures The most common immediate dangers of seizures are not from the seizures themselves. Seizures can result in falls, drowning, or other injuries, and a person who vomits during a seizure can choke on or aspirate their vomit. However, there may be other serious complications due to a seizure or ongoing seizure disorder: • Emotional distress. Individuals coping with epilepsy are at a greater risk of depression and other psychological disorders due to the fear and uncertainty that seizures often cause. • Cognitive decline. Individuals who experience many seizures over a long period of time may experience gradual memory loss and other cognitive decline. • Status epilepticus. This occurs when a seizure or series of seizures continues indefinitely without the person recovering in between. Very long seizures can be dangerous and require medication to resolve. • Sudden unexpected death in epilepsy. SUDEP occurs in one-tenth of a percent of people with epilepsy, and it is not fully understood. SUDEP typically occurs during or immediately after a seizure. In general, seizures that last longer than 5 minutes are considered a medical emergency. A series of seizures is also considered an emergency. These situations can result in permanent injury or death. Note that aspiration, another of the Fatal Four conditions, is a possible complication from seizures. Signs of Seizures There are many different types of seizures. Seizures don’t always involve full-body convulsions; in fact, seizures that impact only part of the brain or body are more common. 
Here are some examples of possible signs of a seizure: • Tremors or “shaking” • Loss of control over parts of the body • Unusual eye movements • Drooling • Vomiting • Sensory abnormalities • Appearing “absent” or staring • A scream or cry • Incontinence • Loss of consciousness • Disorientation or confusion • Exhaustion • Headache Seizures are very individualized, and there are many more signs than can be listed here. People who experience repeat seizures typically develop a pattern of seizure that is typical for them. However, seizures can still occur unpredictably even for people with an established pattern. Responding to Seizures Since there are so many types and causes for seizures, there is no one-size-fits-all response to a seizure. In general, you should consider these actions: • Monitor for environmental risks. Be alert to the potential for falls, for colliding with furniture or other items, for walking into traffic, or any other hazards in the environment. Do what you can to prevent injury. • Prevent choking or aspiration. Some individuals may vomit while seizing, which puts them at risk of aspiration or choking. If possible, help the person turn onto their side or in a recovery position. Don’t put anything into the mouth of a person who is seizing. • Do not restrain them. Restraining or holding a person down can cause injury, and in their disoriented state they may struggle. • Prepare to report. Pay attention to the details of the seizure, including what was happening before, if you noticed any initial warning signs, how the person behaved while seizing, the time it started, and how long it lasts. You will document this later, and if the person needs medical support you will share it with the emergency responders. • Support the aftermath. People who are coming out of seizures may not act like themselves. They may be disoriented, frightened, tired, or weak. 
Avoid offering food or beverage, or asking the person to do anything physically taxing, in the immediate aftermath of a seizure. Stay with them until you are sure they are fully awake. • Get help if you need it. Contact emergency medical services (call 911) if indicated on the person’s plan, or if you have any concerns about the course of their seizure. Anyone who is not breathing, who is pregnant, who sustains a significant injury due to a fall or other hazard, or who has never had a seizure before should receive immediate medical assistance. Many individuals who experience repeated seizures will have a personalized seizure plan or protocol written by their doctor. They may require specific intervention, such as using a medication to stop their seizure or calling 911 if certain conditions are met. Be sure to follow this plan carefully and notify the person’s medical team of any changes or concerns. 10 Ways DSPs Can Prevent Seizures or Related Injuries Seizures aren’t always preventable, but direct support professionals (DSPs) can play an important role in helping the people they support to reduce their risk of seizures or related injuries. 1) Provide medication support Anticonvulsant medications can substantially lower a person’s risk of having a seizure. Help the individuals you serve take their medications on time and as prescribed by their doctor. 2) Avoid known seizure triggers This can vary tremendously from person to person. Some individuals have seizures triggered by specific songs, by flashing lights, or by monthly hormonal fluctuations. Some individuals have no known triggers. Drugs and alcohol are common triggers. 3) Know their warning signs Some individuals have specific warning signs prior to a seizure. They may feel dizzy, lose sensation in part of their body, or have other unique indicators that a seizure is imminent. Recognizing these signs can create an opportunity to lie down, move away from hazards, or call for help if needed. 
4) Recommend showers It takes very little water to drown, so a seizure while in the bath can be fatal. Encourage individuals at risk for seizures to take showers instead, and consider the use of a shower chair to reduce the risk of slipping and falling if a seizure occurs. 5) Beware the heat Intense heat can increase dehydration, another of the Fatal Four conditions. Dehydration is a risk factor for seizures. 6) Support sleep hygiene Not getting enough sleep can increase a person’s risk of seizures. 7) Treat fevers Illnesses, particularly high fevers, can sometimes trigger seizures, particularly in individuals with a known seizure disorder. 8) Help manage stress High levels of stress can trigger seizures in some individuals. Stress can also trigger other risk factors, such as dehydration due to forgetting to drink fluids or not getting enough sleep. 9) Recognize situational hazards Although individuals with seizure disorders can often participate in a wide array of typical activities, be aware of which activities pose special risk for the individual you support. Stairs, for example, can be dangerous for someone who typically falls when they seize and who has no prior warning of an oncoming seizure. 10) Document all seizures Even if a seizure seems minor, it is important to keep a record of it. The individual’s medical team can learn valuable information about treatment needs by knowing facts such as the time, duration, and features of a seizure. Documenting all seizures or suspected seizures can help to identify patterns of possible triggers or warning signs. DSPs and other caregivers need to know how seizures and the rest of the Fatal Four – dehydration, constipation, and aspiration – interact and potentially cause other serious health problems. The only way to keep the Fatal Four from claiming more lives is education and prevention. 
Additional Posts About The Fatal Four: Dehydration Signs and Risk Factors; How Constipation Impacts Health; Aspiration’s Dangers and Key Interventions. By Katy Kunst.
It is very important to note that probabilistic, or conditioning, influence is in no way equivalent to the notion of a causal influence. While two variables that are conditionally influenced by a third variable are necessarily correlated, they are not at all necessarily causally influenced. Shachter [49,50] has defined an influence diagram to be a single connected network that is comprised of an acyclic directed graph, together with associated node sets, functional dependencies, and information flows. There are three types of nodes: decision nodes, value nodes, and chance nodes. There are two types of arcs: informational arcs and conditioning arcs. An influence diagram is said to be well-formed, or fully specified, or fully formed, if the following conditions hold. 1. There are no cycles. In other words, an influence diagram is a directed acyclic graph: a set of nodes, or variables, and a set of edges, or branches, that connect the nodes in a directed sense, with no nontrivial paths that begin and end at the same node. 2. There is one, and only one, value node. There is a value function that is defined over the parents of this single value node. A set of nodes P_i are called parents of node n_i if and only if there is an edge e_ji for each node n_j that is an element of P_i. In a similar manner, a set of nodes C_i are called children of node n_i if and only if there is an edge e_ij for each node n_j that is an element of C_i. A barren node is a node with no children. A border node is a node with no parents. There is much rather specialized graph-theory terminology, and we shall not go this far afield in our discussions here. 3. Each node in the digraph is defined in terms of mutually exclusive and collectively exhaustive states. 4. A joint probability density function is defined over all of the states of the uncertainty nodes. 5.
There is at least one path that connects all of the decision nodes both to each other and to the single value node. 6. There is a function defined over the parents of each deterministic node. Even though there exists a unique joint probability density function for a specific well-formed influence diagram, there may be a number of physical influence-diagram realizations for any given joint density function. Three steps enable identification of an appropriate probability distribution from a well-formed influence diagram. 1. Barren Node Removal. All decision nodes and chance nodes may be eliminated from the diagram if they do not have successors. If there is nothing that follows a node, that node can have no effect on the outcome value; such nodes are irrelevant and superfluous and can be eliminated. Formally, this says that a node a, which is barren with respect to nodes b and c, can be eliminated from an influence diagram without changing the values these nodes take on, such as p(b|c,&). Figure 4.23(a) illustrates this concept. [Figure 4.23: additional node manipulations: (a) barren node removal; (b) deterministic node propagation; (c) arc (edge) reversal from a probabilistic node.] It is important to note that a barren node is not frivolous, but only irrelevant with respect to a particular set of nodes for which it is barren. This suggests that p(x, y, z|&) does not depend upon any conditioning upon w in Figure 4.22(a). 2. Deterministic Node Propagation. If a well-formed influence diagram contains an arc from a deterministic node a to node b, which may be a chance node or a deterministic node, it is possible to rearrange or transform the influence diagram into one in which there is no edge from node a to node b. The new influence diagram will be one in which node b inherits all conditional predecessors of node a. Furthermore, if node b was deterministic before the transformation, it will remain a deterministic node.
Figure 4.23(b) illustrates this concept. 3. Arc Reversal. In a well-specified influence diagram in which there is a single directed path, or arc, from probabilistic node a to node b, the diagram may be transformed into one in which there is an arc from node b to node a and where the new nodes a and b inherit the conditional predecessors of each node. If node b was initially deterministic, it becomes probabilistic. If it was initially probabilistic, it remains probabilistic. Figure 4.23(c) illustrates this concept. These simple reductions can be grouped together into a series of transformations that potentially resolve influence problems. This leads to three additional steps. 4. Removal of a Deterministic Node with a Value Node as the Only Possible Successor. A given deterministic node may be removed from the network. The deterministic node is propagated into each of its successors until it has none; it is then barren and can be eliminated from the diagram. Figure 4.23(b) has actually illustrated this: node y, a deterministic node, is barren after the manipulation leading to Figure 4.23(b). 5. Removal of a Decision Node with a Value Node as the Only Possible Successor. A given decision node may be removed from the network. Any conditional predecessors of the value node that are not observable at the time of decision are removed first; when the decision node is a conditional predecessor of the value node, it may then be removed. These conditional predecessors are typically the successors of the decision node in question. No new conditional predecessors are inherited by the value node as a result of this operation, and the operation ends when all predecessors of the value node have been removed. Decision nodes are removed through the maximization of expected value, or subjective expected utility, a subject considered in our next chapter. 6. Removal of a Probabilistic Node with a Value Node as the Only Possible Successor.
A probabilistic or chance node that has only a value node as a successor can be removed. In some cases, it will be necessary to reverse a conditioning arc between the node and other successors such that the value node inherits the conditional predecessors of the node that is removed. Figure 4.23(c) illustrates this concept: node y can be removed as a barren node after the manipulations leading to Figure 4.23(c). These last three steps follow from the first three. A relatively complete discussion of these steps is contained in Shachter [49,50] and in Call and Miller [51]. Included in these efforts is a discussion of the transformations needed to solve inference and decision problems, of sufficient information to perform conditional or unconditional inference, and of the associated information requirements for decision making, including calculation of the value of (perfect) information. The Call and Miller paper also discusses influence diagrams. One of the questions that naturally arises is whether an influence-diagram-type representation or a decision-tree-type representation is "better." The answer is, of course, quite problem and perspective dependent. Figures 4.24 and 4.25 provide some comparisons of alternative representations of decision situations in terms of influence diagrams and decision trees. It is relatively easy to construct a situation for which one representational framework is the best. In any case, both decision trees and influence diagrams are equivalent to spreadsheet-like matrix representations. One particularly impressive demonstration of the potential superiority of the influence-diagram representation is in situations where probabilistic independence exists. Figure 4.26 illustrates this sort of situation in decision-tree format and influence-diagram format. What is displayed here is a case where p(c_2 | c_1, &) = p(c_2 | &).
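As a concrete illustration of step 1, barren node removal can be sketched in a few lines of code. This is only a sketch of that one reduction, not Shachter's full evaluation algorithm: the diagram is represented simply as a mapping from each node to its set of parents, and the node names below are invented.

```python
def remove_barren_nodes(parents, value_node):
    """Iteratively delete every non-value node that has no children
    (no successors), since such nodes cannot affect the outcome value."""
    parents = {n: set(ps) for n, ps in parents.items()}  # defensive copy
    while True:
        # a node is barren if it appears in nobody's parent set
        all_parents = set().union(*parents.values()) if parents else set()
        barren = [n for n in parents if n != value_node and n not in all_parents]
        if not barren:
            return parents
        for n in barren:
            del parents[n]
            for ps in parents.values():
                ps.discard(n)

# A small diagram: chance node c feeds x and w; d and x feed the value node v.
diagram = {
    "v": {"d", "x"},   # the single value node
    "x": {"c"},
    "w": {"c"},        # w has no children: barren
    "c": set(),
    "d": set(),
}
reduced = remove_barren_nodes(diagram, value_node="v")
print(sorted(reduced))  # → ['c', 'd', 'v', 'x']
```

Note that the removal is iterated: deleting one barren node can make its parents barren in turn, exactly as in step 4 of the text, where a deterministic node becomes barren after propagation and can then be eliminated.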
This independence is clearly illustrated in the influence diagram but not at all evident in the decision-tree structure unless we actually examine the probabilities shown in the tree. [Figure 4.24: simple influence diagram and associated decision tree. Figure 4.25: three-decision influence diagram and associated decision tree. Figure 4.26: decision tree and equivalent influence diagram illustrating conditional independence of two chance nodes.] This effective representation of probabilistic independence also makes it easier to enforce the distinction between probabilistic influences and value influences. A relatively good illustration of this is provided by the influence diagram of Figure 4.27. There are three decision nodes in this figure: detection (DE), diagnosis (DI), and correction (CO). The directly unobservable functioning of the system is influenced by some chance mechanism, C, which is also a border node. The detection decision outcome is influenced by the chance mechanism and some additional random mechanism, C_DE. Depending on the detection result, we enter a diagnostic decision phase, DI. Again, there are chance mechanisms involved that influence the actual diagnostic outcome. Finally, the correction decision phase (CO) produces an outcome that depends on the failure state of the system and the chance result of the corrective effort. This influence diagram is a relatively illuminating and straightforward representation of the decision situation.
The decision tree may not be comparably illuminating and straightforward. This difficulty usually increases for more complex decisions and provides some potential and real advantages to the influence diagram approach. Other efforts [52] have introduced the concept of super value nodes that enable the representation of separable value functions in influence diagrams. Additional extensions to influence diagram efforts [53] remove restrictions 2, 3, 4, and 6 from the definition of a well-formed influence diagram. This enables the expansion of the influence diagram concept to decision processes in which value aspects are critical. In particular, it enables the determination of value-driven clusters and decision-driven clusters of elements.

You probably already know that many large-scale systems in the public and private sectors involve human concerns and a variety of belief and value considerations. Population pressures, urban growth and decay, pollution propagation and mitigation, energy generation and conservation, health care, and many more issues are amenable to analysis using system dynamics models. A major task in these issues is getting the "appropriate numbers" for policymaking purposes, and this requires reasonably accurate predictions and forecasts. It is easy to use the words "reasonably accurate," but in many contemporary areas of concern it is very difficult to make forecasts that are in fact "reasonably accurate."

Population Models

Predictions and forecasts depend to a considerable extent upon population growth and decay. All public and industrial services depend, in a basic way, upon population levels. Population models are therefore of interest in themselves and useful as portions of larger models. Population models are based upon the birth-death equations.
These may take either a deterministic or a probabilistic form. We are much more interested in the deterministic equations here and will obtain the deterministic equations from the stochastic equations. Then a discussion of the system dynamics methodology [55-57] and some example modeling efforts will follow.

We want to predict a population n(t), which may represent, for example, the number of people, the number of companies in a given industry, or the number of housing units in a subdivision or perhaps an entire city or region. It would appear hopeless to attempt to determine precisely the number of people in a city at a given time because births, deaths, inmigrations, and outmigrations occur at random times. Thus we will attempt to predict instead the expected or average number. To do this we will use the standard definition of the expected value

n̄(t) = E[n(t)] = Σn n pn(t)

where pn(t) is the probability that there are exactly n people (or companies or houses) in the population at time t. We will soon use this relation to develop the deterministic birth-death differential equations. Before doing this, let us consider a heuristic derivation of the birth-death equations, which will provide insight as we proceed with a more rigorous derivation.

Figure 4.27 Influence diagram for fault detection, diagnosis, and correction.

We let Δ(t) be the average death rate per unit person in the population at time t. While the basic birth-death equation seems intuitively reasonable, there are some questions:

• Should the birthrate and deathrate depend only upon the population at that time?
• Should these rates be independent of the actual sequence and timing of births and deaths that have occurred in the past?

The answers to both of these questions are fundamentally yes if the probability is associated with a Markov process. A process is Markov if the probability of an event occurring at time t, conditioned upon events occurring earlier at time t − Δt, is equal to the probability of the event occurring at time t conditioned only upon the events occurring at the most recent time t − Δt. By using the Markov assumption, it is possible to rigorously derive the basic birth-death equation.

We will let β(t) represent the average birth rate per unit person in the population at time t. The products β(t)n̄(t) and Δ(t)n̄(t) will then be the total average birthrate and deathrate. The average rate of population growth is the difference between the total average birthrate and deathrate. Thus we have

dn̄(t)/dt = [β(t) − Δ(t)] n̄(t)    (4.4.2)

This is the basic birth-death equation. This first-order differential equation lacks a spatial element. For symbolic convenience we will now drop the average or mean symbol and simply use x as the population state variable. Integrating the birth-death equation gives

ln[x(t + dt)/x(t)] = ∫ from t to t+dt of [β(τ) − Δ(τ)] dτ

If we assume that the birthrates and deathrates are constant over a given time interval dt, we may remove the β(t) − Δ(t) term from the integral sign and then integrate both sides of the foregoing to obtain

x(t + dt) = x(t) exp{[β(t) − Δ(t)] dt}    (1)

which is an approximate difference equation corresponding to the birth-death differential equation that we may propagate to determine the population for given birthrates and deathrates.

Example 4.4.1 We consider development of a simple population model for the United States for the period 1920 through 2000, in which inmigration and outmigration factors are negligible compared with birthrates and deathrates. The data in Table 4.4.1 represent factual data.

TABLE 4.4.1 Forecast from a Simple First-Order Model of the U.S. Population (population × 10⁸). Columns: Year (1920-1975), Birthrate, Deathrate, True Population, Forecast Population.

We wish to consider different classes or subsets of population, such as housing, people, or industry. We will use the symbol xi to represent the ith state variable used to denote a specific category such as housing. We will use a superscript, as in xik, to represent the region in which the state variable is located; xik(t) represents the number of units of category i in zone k at time t. The terms mEi(t) and miE(t) represent migration to and from an external zone that is outside of the basic n zones under consideration; mji(t) represents the migration rate from zone j to zone i, and mij(t) represents the migration rate from zone i to zone j.
The linear birth-death equation, a form of the common mathematical physics flow equation, representing the way in which a certain category of population in a certain zone changes in time is

dxik(t)/dt = [βik(t) − Δik(t)] xik(t) + Σ from j=1, j≠i to n of [mjik(t) − mijk(t)] + [mEik(t) − miEk(t)]    (4.4.3)

In this equation the terms βik(t) and Δik(t) represent the birthrate and deathrate for category i in zone k at time t. The various terms on the right-hand side of Eq. (4.4.3) may be very complex linear or nonlinear functions of the various state variables.

We shall assume a completely closed system, so that there is no inmigration or outmigration. Thus the migration terms in Eq. (4.4.3), including the external terms mEi and miE, are assumed to be zero. With birthrates and deathrates constant over the interval dt, the population then propagates according to

x(t + dt) = x(t) exp{[β(t) − Δ(t)] dt}

as given in Eq. (1). This basic birth-death equation is the simplest type of dynamic equation we might postulate for the modeling of processes in a dynamic system, that is, any system that evolves over time, and it is very useful as you will see later in this section. We propose to use Eq. (1), together with the data of Table 4.4.1, to forecast the population.
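Equation (1) can be propagated numerically. The following is a minimal Python sketch using the 1920 rates quoted in Example 4.4.1 (birthrate 0.0237, deathrate 0.0130, population in units of 10⁸); the function itself is an illustrative helper, not code from the text.

```python
import math

def propagate(x0, beta, delta, dt, steps):
    """Iterate x(t + dt) = x(t) * exp[(beta - delta) * dt], the birth-death
    propagation equation with rates held constant over each interval dt.
    Returns the full trajectory, starting with the initial population x0."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x *= math.exp((beta - delta) * dt)
        trajectory.append(x)
    return trajectory

# One 50-year step at the 1920 rates from the text (population in 10^8):
traj = propagate(x0=1.057, beta=0.0237, delta=0.0130, dt=50, steps=1)
# traj[-1] ≈ 1.805, reproducing the 1970 underprediction discussed below
```

Shorter steps with updated rates at each step correspond to the 5-year corrected projections used in computing Table 4.4.1.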
The recovery rate, RR(t), is the number of people recovering from the epidemic per day. It is the ratio of the infected population to the average duration, 1/b, of the disease in days. Thus we have, for a constant b,

RR(t) = b x2(t)

The loss of immunity rate is the ratio of the immune population to the period, 1/c, of immunity. The population will be aggregated into three categories: x1, the susceptible population; x2, the infected population; and x3, the immune population. We assume that there are no exogenous or external influences upon the model.

Returning to Example 4.4.1, because we have little confidence in the ability of this model to project the present known population using data far in the past, we must be equally cautious in projecting populations far into the future, say 30 years from now. Using Eq. (1) with 1920 data to project the 1970 population yields 1.805 × 10⁸. For the year 2000, we could also postulate future birthrates only slightly higher than they are at present and determine the resulting population, based upon the various assumed birthrates and deathrates. Urban services based upon an erroneous projection would undoubtedly be pretty poor. A principal difficulty with Eq. (4.4.2) is that the population modeled by this equation will approach either infinity or zero over time if the birthrates and deathrates are truly constant. A simple way to model the finite environment is to determine a maximum population and then introduce a saturation term such that the difference between the birthrate and the deathrate approaches zero as this saturation is reached. Alternatively, we could seek a more complex model that would determine the birthrates and deathrates as functions of other important variables such as pollution, natural resources available for use, and other considerations that affect the sustainability of a region or perhaps the entire planet.
It is reasonable to assume that the infection rate increases as the number of infected people increases. Thus we assume that

IR(t) = a x1(t) x2(t)

where a is, for simplicity, assumed constant. Similarly, the loss of immunity rate is

LR(t) = c x3(t)

We assume, for simplicity, that there are no births and deaths and that the total population is constant. In this model the natural progression is from susceptible to infected to immune to susceptible, and so the number of susceptible people increases as immunity is lost. The only task in completing the model is to determine the flow terms, or rate variables. From Figure 4.28 we see that these are IR(t), the infection rate or number of people infected per day; RR(t), the recovery rate or number of people recovering from the epidemic per day; and LR(t), the loss of immunity rate or number of recovered people losing immunity per day. Figure 4.28 depicts a convenient block diagram representation that we may use to initiate our construction of a model for the propagation of an epidemic. Each category of population is a state or level variable influenced by an input or an output. It will be convenient to use a box to represent the level variables.

Figure 4.28 Partial systems dynamics description for the epidemic propagation model.

As for the population projection example, the 1970 projection made from the 1920 data is in error by some 22,700,000 people. If we use data from 1940, the error is even worse than it was in the previous calculation: in 1940 we had just come out of a great depression, and birthrates were very low. Using more recent data, we obtain a population of 1.164 × 10⁸ using 1960 data and 2.630 × 10⁸ thirty years later.
Can we project the population at year 2000 from the true population in 1970? Alternatively, could we use the data from year 1920 to project the population in 1970? If we use Eq. (1) with dt = 50 years and the data for 1920, we would be assuming that the birthrate remained constant at 0.0237 and the deathrate constant at 0.0130 from 1920 to 1970. Actually, the deathrate has been reduced considerably, and the birthrate for many of the intervening years was greater than 0.0237. Because the model is based only upon known and available present data, we must naturally have considerable reservations concerning the ability of this simple model to project future populations. For this simple model we could also postulate various future birthrates for the next 30 years and then determine a population. In the computations leading to Table 4.4.1, 5-year predictions of population are made using Eq. (1), and the actual forecast population is replaced by the corrected value when making the next 5-year projection. The model projection is then quite accurate, but only 5-year projections are being made.

Example 4.4.2 In this example we will develop a model for propagation of an epidemic resulting from a contagious disease such as influenza. In this particular model we assume that there are no births or deaths. An epidemic will occur when the infection rate is greater than the recovery rate. An epidemic can be prevented by decreasing a, which can be accomplished by decreasing the effect of contact between infected people and susceptible people, either by quarantine or by some measure such as an enzyme filter. The propagation of the epidemic may be modeled by the differential equations and parameters determined below.

Information concerning the level is used to control the rate variable. We define a rate variable as the time derivative of a level or state variable and determine rate variables as functions of level variables. We choose a derivative variable to control a flow into the state or level variable that integrates or accumulates this level. In difference equation notation, the symbol M.K would be used to represent M(kT).

Figure 4.30 Major facets of system dynamics modeling.

When the interest rate is N per year and the interest is compounded q times per year, the equation for the growth of money is

M({k + 1}T) = M(kT) + TNM(kT)
In order to develop system concepts sufficiently complete that they can be analyzed by a system dynamics model, we must establish the boundary within which interactions and impacts take place. In order for an epidemic to occur, the infected population x2(t) must initially increase in time, such that the derivative dx2/dt must initially be positive. This will occur when x1(t0) > b/a. Figure 4.29 illustrates two typical responses from this model. Figure 4.29(b) shows the results of a simulation in which the epidemic fails to grow. An epidemic can also be prevented by increasing b, which can be done by decreasing the duration of the disease.

Figure 4.29 Possible simulation results for the epidemic model: number infected versus days.

In this simple model, the fact that there are no births, deaths, and migrations allows us to write x1(t) + x2(t) + x3(t) = constant and to use this relation to reduce the order of the differential equations by one. In general there will not be a constant population, and so this reduction in order does not occur.

System Dynamics

One approach to the development of population models is due to Jay W. Forrester and his coworkers, who use the phrase system dynamics to describe their modeling methodology. In system dynamics it is assumed that four hierarchical structure levels can be recognized; these involve goals or objectives, observed conditions, and policy action based on the discrepancy between them.

Example 4.4.3 Let us consider what is doubtlessly the simplest positive feedback problem, that of accumulation of money in a savings account. Consider a single initial investment of money M0 in the account.
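The money growth equation M({k + 1}T) = M(kT) + T·N·M(kT) can be propagated directly. The following is a minimal Python sketch; the principal, rate, and compounding values are illustrative assumptions, not from the text.

```python
import math

def compound(m0, n_rate, q, years):
    """Discrete money growth M((k+1)T) = M(kT) + T*N*M(kT), with T = 1/q,
    i.e., interest at annual rate n_rate compounded q times per year."""
    t_step = 1.0 / q
    m = m0
    for _ in range(int(round(q * years))):
        m += t_step * n_rate * m     # rate variable MR = N * M
    return m

# Illustrative values: $100 at 5% per year for one year.
m_monthly = compound(100.0, 0.05, q=12, years=1)     # ≈ 105.12
m_daily = compound(100.0, 0.05, q=365, years=1)      # ≈ 105.13
m_continuous = 100.0 * math.exp(0.05 * 1)            # ≈ 105.13
```

As the text notes below, when the compounding becomes frequent (daily or weekly) the discrete solution is essentially the same as the continuous-time solution M(t) = M0·exp(Nt).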
In the particular simulation language known as Dynamo, in which many system dynamics models have been coded, these relations take a standard form. The final differential equations for our epidemic model are easily seen to be

dx1(t)/dt = LR(t) − IR(t) = c x3(t) − a x1(t) x2(t)
dx2(t)/dt = IR(t) − RR(t) = a x1(t) x2(t) − b x2(t)
dx3(t)/dt = RR(t) − LR(t) = b x2(t) − c x3(t)

These differential equations may be programmed on a computer with appropriate initial conditions x1(t0), x2(t0), and x3(t0) and parameters a, b, and c. We could, if we wanted, show flow lines in Figure 4.28 in order to illustrate how the various rate variables are determined in terms of the various level variables.

In Forrester's system dynamics terminology, we would identify two variables in the savings-account example. The first is a level variable, which would be money, represented by the symbol M. The other variable would be a rate variable, the money rate or interest rate, which would be written as MR.JK. This symbol is used to indicate that the expression written is for the money rate and that this rate is assumed to be constant over the time interval from J to K. With the exception of format input and output statements, the complete Dynamo simulation language representation of this problem is the rate variable equation

MR.JK = (N)(M.J)

together with a level variable equation. Here T = 1/q and M(0) = M0. In the continuous limit the money grows according to

dM(t)/dt = NM(t)

If the interest is compounded frequently, say daily or weekly, then the solutions obtained from the discrete equations will be essentially the same as that obtained from the continuous-time equation. All three of these mathematical models may be represented by the symbolic or block diagram representation of Figure 4.31, which depicts a single level or state variable (money) and a single rate variable (money rate). The source of the flow of money exerts no influence on the system in this example. Thus the flow is shown as coming from an infinite source that cannot be exhausted.
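Returning to the epidemic model, its three differential equations can be integrated with a simple Euler scheme. This is a hedged Python sketch; the parameter values and step size are illustrative assumptions, not from the text.

```python
def simulate_epidemic(x1, x2, x3, a, b, c, dt, steps):
    """Euler integration of the epidemic model:
       dx1/dt = LR - IR = c*x3 - a*x1*x2   (susceptible)
       dx2/dt = IR - RR = a*x1*x2 - b*x2   (infected)
       dx3/dt = RR - LR = b*x2 - c*x3      (immune)
    Returns the trajectory as a list of (x1, x2, x3) tuples."""
    traj = [(x1, x2, x3)]
    for _ in range(steps):
        ir = a * x1 * x2          # infection rate
        rr = b * x2               # recovery rate
        lr = c * x3               # loss-of-immunity rate
        x1 += dt * (lr - ir)
        x2 += dt * (ir - rr)
        x3 += dt * (rr - lr)
        traj.append((x1, x2, x3))
    return traj

# The epidemic grows initially only when x1(0) > b/a (here b/a = 100),
# consistent with the threshold condition derived in the text.
traj = simulate_epidemic(x1=500.0, x2=10.0, x3=0.0,
                         a=0.001, b=0.1, c=0.05, dt=0.1, steps=500)
```

Because the three rates cancel in pairs, the Euler update conserves the total population x1 + x2 + x3 exactly, mirroring the constant-population assumption of the model.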
Any amount needed by the model will come from this infinite source. Lines that indicate information transfer should be distinguished from those lines that represent the flow of the content of a level or state variable. In this example we accomplish this by using a double line to indicate money flow. Any other convenient method of distinguishing information flow from money flow could, of course, be used. The level variable equation is

M.K = M.J + (DT)(MR.JK)

where DT denotes the sampling time, or interest compounding time for this example. The Dynamo simulation language will not be developed in this text, but symbols from it will occasionally be used because they are common in the system dynamics literature. Dynamo information, including software, is available from Pugh-Roberts Associates Inc., 41 William Linskey Way, Cambridge, MA 02142, telephone 617-864-8880. System dynamics software, known as STELLA, for Windows and Macintosh operating systems is available from High Performance Systems, Inc., 45 Lyme Road, Suite 300, Hanover, NH 03755, telephone 603-643-9636. If we let the samples become dense, that is, let T → 0, k → ∞, kT → t, then Eq. (1) becomes the continuous-time differential equation dM(t)/dt = NM(t).

Figure 4.31 System dynamics diagram of interest accumulation, with an infinite environment and source for money.

Example 4.4.4 In this example, let us turn our attention to a water pollution problem and an associated system dynamics model. Ordinarily, rivers and lakes can decompose significant amounts of untreated waste products without upsetting the living processes that occur within and around them. However, pollution arises when there is unlimited and uncontrolled dumping of untreated waste in the water. The decomposition processes may, under these conditions, use up considerable oxygen that is dissolved in the water.
If the amount of oxygen in the water gets too low, the water is incapable of further decomposition, and the water is unfit for fish, drinking, or recreational activities. Rivers and lakes replenish their oxygen level by drawing it from the air. The more turbulent the surface of the water, the faster the rivers and lakes take in fresh oxygen. Part of the solution to this problem of environmental pollution depends upon our ability to develop a predictive model of the effect of proposed solutions to the problem, so that decision and policy analysis studies may be conducted. We will bypass the earlier, formulation steps of systems engineering and concentrate upon the systems analysis and modeling portion here. Specifically, we will develop a simplified model to predict how the oxygen content of a river or stream changes as pollutants are dumped into it.

First, we will develop a model for the concentration of waste products in a particular body of water. We will assume that the physical situation is adequately described by the fact that the rate of change of waste in the water is proportional to the rate at which waste is dumped into the water and to the product of the amount of waste in the water and the waste chemical composition coefficient of the waste. We define the level variable symbol

W(t) = volume of waste products in water at time t (gallons)

and the rate variable symbols

WAR(t) = waste addition rate (gallons per day) at time t (this is an input or exogenous variable)
WDR(t) = waste decomposition rate (gallons per day) at time t

and the auxiliary coefficient
WCC = waste chemical coefficient (per day)

We may then determine the difference equations corresponding to the verbal statement of the waste product concentration equation as

W(t + dt) = W(t) + dt[WAR(t) − WDR(t)]
WDR(t) = [WCC]W(t)

This is a first-order difference equation that may be solved once we have specified the auxiliary constant WCC, the sample time dt, and the exogenous variable WAR. From studies of replacement of oxygen in the water, we find that the rate at which oxygen is replaced is proportional to the rate of use of oxygen by the wastes and to the difference between the amount of oxygen contained in the water and some maximum amount that the water can hold. Although oxygen is constantly being replenished in the water by the air, it is also being used up in the decomposition of waste products. The rate of use of oxygen by the waste products depends directly on the rate of decomposition of waste. To develop a model for oxygen content in the water we define the level variable

O(t) = oxygen content in the water at time t (cubic feet at a given pressure)

and the rate variables

ORR(t) = oxygen in water replacement rate
OUR(t) = oxygen usage rate

and the auxiliary constants

OM = maximum oxygen content in water
AT = adjustment time for oxygen replacement, or turbulence coefficient
c = oxygen demand coefficient, representing the oxygen requirement for the particular waste products in the water

From the aforementioned verbal description, the difference equation for the oxygen concentration is

O(t + dt) = O(t) + dt[ORR(t) − OUR(t)]

In state variable form we have

x1 = W(t) = waste products
x2 = O(t) = oxygen content
a11 = −WCC
a21 = −[c]WCC
a22 = −[AT]⁻¹
u1 = WAR (input variable)
u2 = OM[AT]⁻¹ (input constant)

Figure 4.32 illustrates the system dynamics diagram for this model, and Figure 4.33 represents a control systems-type block diagram which, while entirely equivalent to the system dynamics diagram, is not quite as useful for
the purpose of displaying the physically different flow quantities, level variables, and information pickoffs as the system dynamics diagram. The rate variables for the oxygen model are

ORR(t) = AT⁻¹[OM − O(t)]
OUR(t) = [c]WDR(t)

The source and sink for both oxygen and waste-product concentration are assumed to be infinite in this simple model, which has the differential equation or state variable representation

dx1/dt = a11 x1 + u1
dx2/dt = a21 x1 + a22 x2 + u2

Figure 4.32 Systems dynamics diagram for environmental pollution, with infinite sources for waste products and oxygen.

Figure 4.33 Feedback control system block diagram equivalent of the system dynamics diagram of Figure 4.32.

Much more complex studies of water pollution are currently available. Particularly interesting system dynamics studies are contained in references 58-61. In each of the examples considered thus far we have developed a solution by determining a closed boundary around the system and then identifying state or level variables and rate variables as the basic components of feedback loops. As the last example has indicated, rate variables are generally made up of information flows based on action taken as a result of a discrepancy between goals (either physical or social) and observed conditions. Thus we see the four levels of hierarchical structure, described in Figure 4.30, inherent in even simple system dynamics models. As we have seen in our examples thus far, system dynamics models take the form of closed-loop feedback control systems whose dynamic behavior results from their internal structure. An intermediate level must exist between any two rate variables. Rate variables can depend only upon level or state variables and auxiliary constants. Rate variables, which are often also policy variables or policy statements in a system, should be of algebraic form. Level and rate variables must alternate in a system dynamics model.
Thus rate variables are of the form

rij(t) = xj(t)[Kij+ Aij+ − Kij− Aij−]

such that the fundamental level-variable difference and differential equations are

xi(t + dt) = xi(t) + dt Σ from j=1 to n of xj(t)[Kij+ Aij+ − Kij− Aij−] + dt ui(t)    (4.4.4)

dxi(t)/dt = Σ from j=1 to n of xj(t)[Kij+ Aij+ − Kij− Aij−] + ui(t)

These feedback loops are basic structural elements in systems, and it is the interaction of the many diverse feedback loops that makes the behavior of large-scale systems so complex. All decision processes should be made within feedback loops, because action to implement decisions should depend upon the discrepancy between desired goals and observed conditions. In system dynamics methodology, level or state variables are changed only by rate variables. The present value of a level variable can, therefore, be computed without knowledge of the present or previous value of any other level variable. In difference equation terminology, this says that we agree to write all state or level equations in the form of Eq. (4.4.4), such that a level variable is a function only of the previous value of that level variable, the rate variables, and the exogenous inputs ui(t). Thus the present value of a level variable is determined by the past value of the level variable and all rate variables, which are assumed to be constant from t to t + dt in this difference equation formulation. Equation (4.4.4) is just a particular version of Eq. (4.4.2), the basic birth-death equation, and illustrates the two fundamental and distinct types of variables, level variables and rate variables, in a system dynamics model. Only state or level variables are necessary to completely describe a system. If all state or level variables are known at a given time, all other variables can be computed from them. In particular, we can compute rate variables from level variables. The exogenous inputs ui(t) are assumed to be known time functions and do not depend upon the state or rate variables.
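As a concrete instance of these alternating level and rate equations, the waste and oxygen difference equations of Example 4.4.4 can be propagated numerically. The following is a minimal Python sketch; the parameter values are illustrative assumptions, not from the text.

```python
def simulate_water(w0, o0, war, wcc, om, at, c, dt, steps):
    """Euler propagation of the waste/oxygen model of Example 4.4.4:
       W(t+dt) = W(t) + dt*[WAR - WDR],  WDR = WCC*W
       O(t+dt) = O(t) + dt*[ORR - OUR],  ORR = (OM - O)/AT,  OUR = c*WDR
    Returns the final (waste, oxygen) levels after the given number of steps."""
    w, o = w0, o0
    for _ in range(steps):
        wdr = wcc * w              # waste decomposition rate
        orr = (om - o) / at        # oxygen replacement rate (turbulence-driven)
        our = c * wdr              # oxygen usage rate
        w += dt * (war - wdr)
        o += dt * (orr - our)
    return w, o

# With a constant dumping rate WAR, the waste level settles at WAR/WCC and
# the oxygen level at OM - AT*c*WAR (here 20.0 gallons and 6.0 units).
w_ss, o_ss = simulate_water(w0=0.0, o0=10.0, war=10.0, wcc=0.5,
                            om=10.0, at=2.0, c=0.2, dt=0.05, steps=4000)
```

The steady-state values follow by setting both difference equations to zero, which is a quick consistency check on any simulation of this model.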
In system dynamics, we assume that rate variables cannot be measured instantaneously but, instead, can be measured only as an average over a period of time. Rate variables cannot directly control other rate variables; there must be an intermediate level variable between them. The level equation is then

xi(t + dt) = xi(t) + dt Σ from j=1 to n of xj(t)[Kij+ Aij+ − Kij− Aij−] + dt ui(t)

Here the xi, i = 1, 2, ..., n, represent the n level or state variables of the system, such as population, natural resources, or pollution, and ui(t) represents exogenous deterministic or random inputs, which could result, in part, from modeling error. The Kij+ and Kij− expressions represent cross impacts, or the nominal percentage rates of increase and decrease in variable xi due to level variable xj. These nominal values are defined in terms of chosen dates at which data are available. The Aij+ and Aij− terms are the incentives that modify the nominal influx rates according to existing conditions in the system being modeled. These incentives are of value 1 under normal conditions and are expressed as a product of a series of multipliers that quantify the effect of a single-parameter factor upon nominal rates, such as the effect of pollution controls or land-use practices upon the rate of oil field development. Thus these incentives can be written as products of the form

Aij+ = Π Mij+(x1, x2, ..., xn, u1, u2, ..., un)    (4.4.8)

where we note that the multipliers are functions of not only the state or level (vector) variables xᵀ = [x1, x2, ..., xn] but also the exogenous inputs uᵀ = [u1, u2, ..., un]. One of the major contributions of Forrester is contained in the multiplier functions which, when combined as in Eq. (4.4.8), form the incentives that are quantifications of the dynamic model hypotheses relating the various effects between states and exogenous variables. If a given multiplier has a value of 1, then conditions, for that multiplier, are precisely the nominal conditions.
Values less than or greater than 1 represent a tendency to decrease or increase the associated level variables.

We choose to investigate some problem elements leading to an impending gasoline shortage. Expansion to the truly large-scale problem that exists in the real world is a major task. The following assumptions lead to the models of Figures 4.34 and 4.35:

1. The consumption rate of gasoline (GCR) is determined primarily by the demand for gasoline (DFG). Consumption may, however, be limited by the supply of gasoline available for consumption. This is indicated by the variable CRSM.
2. The demand for gasoline also depends upon the price of gasoline (P).
3. The oil refining rate (ORR) depends mainly upon the domestic refining capacity (DRC) and the difference between the desired supply of gasoline (GSD) and the actual supply (GS). However, the supply of crude oil available for refining may also limit the refining rate. This is indicated by the variable RRSM.
4. The rate of change in the price of gasoline (PRC) is proportional to the difference between the demand for gasoline and the supply of gasoline (GS).
5. The price of gasoline (P) is limited by the maximum price (Pmax) allowed by the governmental price controls. The price is also affected by the tax per gallon rate (TPG) and the free-market price for gasoline (PFG).
6. The environmental controls (EMC) in place determine the type of gasoline produced. This in turn affects the mileage obtained and consequently influences the demand for gasoline.

Long-term level variables, those which cannot be quickly changed, are as follows: the number of producing fields (NPF), the domestic refining capacity (DRC), and the tanker port facilities (TPF). Because these level variables change very slowly with respect to the short-term level variables, it appears desirable to consider them as exogenous inputs for a system dynamics model consisting of the first three level variables, hierarchically structured on the basis of time levels of influence. A separate submodel may then be developed for the long-term level variables.

Variables in the models of Figures 4.34 and 4.35 whose names end in "D" are "desired" quantities, and a variable name ending in "R" indicates a rate (an exception is PRC),
and a variable ending in "T" indicates an adjustment time for the corresponding rate variable.

Example 4.4.5 To illustrate further the formulation of system dynamics models involving multiplier nonlinearities of the form of Eq. (4.4.8), let us consider a simplified version of an energy supply and demand example. We will develop short- and long-term demand-supply submodels; the simplified model presented here is only a suggestion of the type of approach to be used in system dynamics model development. The demand for gasoline (DFG) is primarily determined by the number of cars in use (NC). The short-term dynamic structure of the gasoline supply-and-demand problem appears to involve the following level variables: the crude oil supply (COS), the gasoline supply (GS), and the free market price for gasoline (FMP). Thus we propose a two-level hierarchical structure for the system dynamics model of the gasoline supply and demand problem. A suggested set of difference equations for this model is shown below. To complete the model development we must identify all unknown parameters in these equations and specify all needed functional relations.

Figure 4.34 Dynamic structure of short-term factors in gasoline supply.

The short-term model of Figure 4.34 is described by the auxiliary equations

DFG(k) = f1(NC)[EMC] f2[P(k)]
CSD(k) = f3[DFG(k)]
Refining Cap.35 .1.auxiliary NC = number of cars EMC = environmental control multiplier CSD = crude oil supply desired ORR(k)] NFD(k) = fs[DFG(k)] PFD(k) = RCD(k) = f7[DFG(k)] f6[DFG(k)] • Rate variables = [NFDTr 1 [NFD(k) . P = price TPG = tax per galIon • Variable definitions .rate )( Figure 4..GCR(k)] COS(k + 1) = COS(k) + ~T[DWPR(k)+ COIR(k) PFM(k + 1) = PFM(k) + ~T[PRC(k)] COS = crude oil supply GS = gasoIíne supply PFM = price .GS(k)][RRSM(k)] = [GCT] -l[DRG(k)][CRSM(k)] • Level variables + 1) = GS(k) + ~T[ORR(k) ..·. Long-term elements in the systems dynamics model for gasoline supply.COS(k)][IQ] = [ORTr1[DRC(k)][GSD(k) .free market The long-term mode! of Figure 4.NPF(k)] OFDR(k) = [OFDT]-lNFP(k) PCR(k) = [PCTr 1 [PFD(k) .l[ITPF(k)][CSD(k) .1 Dom.TPF(k)] NFDR(k) ..35 is described by the folIowing: • Auxiliary equations GS(k • Variable definitions . PFM(k)} DDWPR = domestic weIl production rate NPF = number of producing fields IQ = import quota COIR = crude oil importing rate TPF = tanker port facilities ORR = oil refining rate DRC = domes tic refining capacity RRSM = refining rate-supply multiplier [This variable allows for the fact that refining rate (ORR) may be limited by the supply of available crude oil (COS)] CRSM = consumption rate-supply multiplier [This variable alIows for the fact that consumption rate (GCR) may be limited by the supply of available gasoline (GS)] GCR = gasoline consumption rate + TPG • Variable definitions -level • Rate variables DWPR(k) COIR(k) ORR(k) GCR(k) = [SWPT] -l[NPF(k)][CSD(k) .··__···. GSD(k) = f4[DFG(k)] PRC(k) = [PRCTrl[DFG(k)-GS(k)] P(k) = min{Pmax . DRC 1 X .COS(k)] = [COIT] . 8). and these table functions locate.OFDR(k)] TFP(k + 1) = TFP(k) + L\T[PCR(k) . the program statements = [RCT] -l[RCD(k) . Time Delays and Averaging.:JICIVI url'VI'1'VIIL':> IVIVUCL':> POR(k) NRCR(k) OROR(k) = [POT] -lTPF(k) • Level variables + L\T[NFDR(k) . 
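The short-term submodel above can be exercised numerically once functional forms are chosen. In the sketch below, the forms of f1 through f4, the shapes of the supply multipliers RRSM and CRSM, and every parameter value are illustrative assumptions only; the text deliberately leaves them to be identified during model development.

```python
# Illustrative simulation of the short-term gasoline submodel.
# All functional forms and parameter values are assumptions.

DT = 0.1                          # time step
PRCT, ORT, GCT = 2.0, 1.0, 1.0    # adjustment times (assumed)
P_MAX, TPG = 1.4, 0.12            # price ceiling and tax per gallon (assumed)
NC, EMC, DRC = 100.0, 0.9, 1.2    # exogenous inputs held constant (assumed)

def clamp01(z):
    """Multiplier shape: 0 when starved of supply, 1 when supply is ample."""
    return max(0.0, min(1.0, z))

COS, GS, PFM = 50.0, 30.0, 1.0    # initial levels (assumed)
for k in range(600):
    # auxiliary equations
    P = min(P_MAX, PFM) + TPG
    DFG = 0.4 * NC * EMC / P      # assumed f1, f2: demand falls with price
    GSD = 1.2 * DFG               # assumed f4: desired gasoline supply
    CSD = 2.0 * DFG               # assumed f3: desired crude supply
    RRSM = clamp01(COS / CSD)     # refining limited by available crude
    CRSM = clamp01(GS / GSD)      # consumption limited by available gasoline
    # rate variables
    PRC = (DFG - GS) / PRCT
    ORR = DRC * max(0.0, GSD - GS) * RRSM / ORT
    GCR = DFG * CRSM / GCT
    # level equations (crude inflow held constant for simplicity)
    COS += DT * (10.0 - ORR)
    GS += DT * (ORR - GCR)
    PFM = max(0.2, PFM + DT * PRC)   # price floor keeps the sketch well-behaved
```

Running such a sketch shows the qualitative behavior the model is intended to expose: when crude or gasoline stocks fall, the multipliers throttle refining and consumption, and the free-market price adjusts toward the demand-supply balance.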
This brief study of a system dynamics model of the energy supply-and-demand problem has considered only one aspect of this complex problem. Clearly, for any realistic look at this complex large-scale issue, computer simulation is needed, and we see here that computer simulation will be an invaluable support to analysis of the impacts of alternative controls or policies.

Nonlinear Functions. The example just presented has introduced the need for nonlinear functions in system dynamics modeling and simulation. Nonlinear functions often occur in systems. One such nonlinearity was previously mentioned: the multiplicative nonlinearity of Eq. (4.4.8), which allows us to obtain products of state or level variables. Also needed is the ability to obtain a nonlinear function of a single state variable. Of course a program can easily be written on a computer to compute y = f(x) for a specified nonlinear function f. Many individuals and groups, however, will perceive nonlinear functions in the form of a table rather than as some analytical function. Thus it will often be better to specify the relationship between x and y by means of a table function. Fortunately, most simulation languages and computer routines do include table functions, and these table functions locate, generally by straight-line interpolation, those values that lie between points entered in the table. In the DYNAMO simulation language, for example, the program statements

Y.K = TABLE(TNAME, X.K, N1, N2, N3)    (4.4.9)
TNAME = Q1/Q2/.../QM

would define the auxiliary variable Y(K), or Y.K, in terms of the table of Figure 4.36. The name of the table is TNAME; X is the input variable for which the corresponding table entry is to be located; N1 is the value of X for the first table entry; N2 is the value of X for the last table entry; and N3 is the interval in X between table entries. The Qi are the numerical values of the table function Y at the various values of X.K.

Figure 4.36 Typical table function for system dynamics modeling and simulation.

Also, when we attempt to validate system models using system parameter identification methods, we must be careful not to overstate the range of the level or state variable over which the identification is accurate. To identify the parameter a in the equation y = ax^2 using data over the range 0 to 1 of the x variable, and then to assume that this equation is valid regardless of x, could lead to considerable difficulties.

Time Delays and Averaging. Often it will be desirable to incorporate averaging or smoothing operations as well as perception time delays in a model. For example, something may happen at time t1, but we may not be able to perceive that it has occurred until we determine this through measurement at time t1 + T.
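A small analog of the TABLE function just described is easy to write. The sketch below follows the text's naming (N1, N2, N3, and the Q values); the behavior at the table ends, holding the first and last entries for inputs outside [N1, N2], is an assumption made here to keep the function total.

```python
# Straight-line interpolation into a table of Q values defined over
# [n1, n2] at spacing n3, in the spirit of the DYNAMO TABLE function.

def table(tname, x, n1, n2, n3):
    """Interpolate the table 'tname' (a list of Q values) at input x."""
    if x <= n1:
        return tname[0]      # hold the first entry below the table range
    if x >= n2:
        return tname[-1]     # hold the last entry above the table range
    i = int((x - n1) // n3)             # index of the entry at or below x
    frac = (x - (n1 + i * n3)) / n3     # position between entries i and i+1
    return tname[i] + frac * (tname[i + 1] - tname[i])

# Example: a saturating multiplier defined at x = 0, 0.5, 1.0, 1.5, 2.0
TMULT = [0.0, 0.5, 0.8, 0.95, 1.0]
y = table(TMULT, 0.25, 0.0, 2.0, 0.5)   # halfway between 0.0 and 0.5
```

Here y comes out as 0.25, halfway between the first two table entries, which is exactly the straight-line interpolation described above.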
Often it is not a level variable that individuals observe but a perceived value of a level variable. It was also mentioned earlier that instantaneous values of rate variables cannot be measured; these smoothing and perception operations concern perceived values, not fundamental system level variables. Thus we do need a method of smoothing these variables, and, for a variety of reasons, we may well wish to smooth a level variable as well. We will now develop expressions for smoothing and perception operations. These will be in the form of difference equations.

Many operations may be used to smooth or average a variable x(t). For example, we may use (1/T) ∫ from t - T to t of x(τ) dτ as representative of the average of x(t) over the last T seconds. It turns out that the difference or differential equation necessary to implement this finite time average is difficult because it is of infinite order: physically, only the data from t to t - T, the data over the most recent T seconds, are retained, and the operator must remove chunks of data as t increases. Also, it is often true that the immediate past is more important in determining the smoothed value than the distant past, and recent data are given much significance in weighted averages. Thus it may well be preferable for you to use an exponentially weighted time average defined by

xT(t) = (1/T) ∫ from -∞ to t of e^[-(t-τ)/T] x(τ) dτ    (4.4.10)

for your smoothing operations. Here T represents the weighting factor and has the dimension of time. If we use Eq. (4.4.10) to obtain the expression for xT(t + dt) and make a simple Taylor series expansion, we obtain the approximate difference equation for a weighted time average operation:

xT(t + dt) = xT(t) + (dt/T)[x(t) - xT(t)]    (4.4.11)

which is equivalent to the differential equation

dxT(t)/dt = (1/T)[x(t) - xT(t)]    (4.4.12)

Thus it is reasonable to call T the smoothing time. The system dynamics diagram for the smoothing operation is illustrated in Figure 4.37. A dotted line is used for the flow into the level variable (SMOOTH) to indicate that this is a flow of information.

Figure 4.37 System dynamics model of smoothing or averaging operation.

Often we also have need to model a time delay function, whose output may be the level variable delayed in time. A pure information or material delay is a delay such that if the input is x(t), the output is

XD(t) = x(t - T)    (4.4.13)

where T is the amount, in time, of the delay. You may obtain the difference equation corresponding to this by writing the expression for XD(t + dt), expanding the result in a Taylor series, and dropping the higher-order terms. After a modest amount of manipulation, we obtain

XD(t + dt) = XD(t) + (dt/T)[x(t) - XD(t)]    (4.4.14)

This is the same expression as the difference equation for the weighted time averaging operator, and there are valid arguments to suggest that this weighted perception delay is much more typical of human information delay than a pure time delay. Thus any smoothing of rate or level variables and any perception delays will necessarily increase the order of the differential or difference equation that represents the system. The system dynamics equation for a single time delay is precisely that of Figure 4.37. If we have a long information or material delay, it may be advisable to use M delays, each delaying the material or information by an amount T/M; we may, for example, cascade three delays as shown in Figure 4.39. While the system dynamics representation of Figure 4.38 is correct for material delays, it is not physically appealing, because the material does not "flow" from input to output. A somewhat more satisfactory physical picture may be obtained by noticing that the rates associated with each difference equation of the form of Eq. (4.4.14) are just the difference between the output of the previous time delay segment and the time delay currently being computed. The system dynamics diagram of Figure 4.38 is entirely equivalent to that of Figure 4.39, which is more physically appealing for a physical time delay.

Figure 4.38 Three-stage information perception time delay model.
Figure 4.39 Three-stage material flow time delay.

For both smoothing and time delay operators, we should note that the initial condition for the difference equations is basically unspecified, because no smoothing or time delay has occurred at the time the simulation is started. It is necessary for you to obtain some perceived initial condition, generally the initial value of the smoothed output or the initial perceived delayed output, in order to start the simulation.
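The two operators developed above, the exponential smoothing difference equation of Eq. (4.4.11) and an M-stage cascaded delay, can be sketched directly. The step input, step size, and time constants below are arbitrary choices for illustration.

```python
# Exponential smoothing (first-order lag) and a three-stage cascaded
# delay, each stage delaying by T/M, driven by a unit step at t = 1.

DT = 0.1        # integration step
T = 5.0         # smoothing time / total delay time
M = 3           # number of cascaded delay stages

xT = 0.0                  # smoothed output, per Eq. (4.4.11)
stages = [0.0] * M        # outputs of the cascaded delay stages
for k in range(1000):
    x = 1.0 if k * DT >= 1.0 else 0.0     # unit step input at t = 1
    xT += (DT / T) * (x - xT)             # single first-order operator
    inp = x
    for i in range(M):                    # each stage is the same operator
        stages[i] += (DT * M / T) * (inp - stages[i])
        inp = stages[i]
    if k == 20:           # 1 s after the step: the cascade lags the smoother
        early = (xT, stages[-1])
```

Shortly after the step, the cascaded output is well below the single smoother's output, which is the S-shaped response that makes the cascaded form more physically appealing for material delays; both outputs eventually settle at the input value.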
Applications. Two of the most widely reported uses of system dynamics modeling methodology have been in the development of models for the city and the world. Perhaps no other single works in system dynamics have generated as much interest, controversy, and follow-up study as have these. Doubtless this is due to the value of the efforts themselves, the contemporary urgency of the problems they discuss, and considerable concerns with respect to some of the assumptions used to obtain the models. The initial seminal efforts concerning these models are due to Forrester and his colleagues. In these efforts at modeling an urban city and the world, Forrester organized the structures of an urban city area and the world into system models showing life-cycle dynamics of various forms of growth and decay, and conducted various experiments utilizing these models, as well as other models that represent various perceived limits to growth. Even though these models are about three decades old now, many of the issues they address are still contemporary ones. We will examine salient features of both of these models as we conclude our presentation of the system dynamics approach to systems analysis and modeling.

Example 4.6. The Urban Dynamics Model. The urban dynamics model considers one city only, and the city is assumed to be surrounded by a limitless environment that is capable of producing and receiving unlimited quantities of people, jobs, and homes. No spatial behavior is involved. Forrester's interest in modeling the city is a somewhat abstract one in that he does not fit the data and parameters for his city to any particular city. Effort is primarily directed at discovering the essential features of the city and expressing relationships between these features in mathematical terms as difference equations. The model enables studies of the impacting dynamics of population, industry, and housing within this mythical city. Each of these three fundamental sectors is divided into three subcategories.

The population sector consists of
• Managerial-professional (MP)
• Skilled labor (L)
• Underemployed (U)
Underemployed people are defined as unemployed and unemployable people, as well as those in unskilled jobs or marginal activity who might work during periods of great economic activity.

Each subclass of people has its own housing:
• Premium housing (PH)
• Worker housing (WH)
• Underemployed housing (UH)
Premium housing must be constructed and will eventually decline to worker housing. Worker housing may also be built and will eventually decline to underemployed housing. Underemployed housing may be built, but its principal source is declining worker housing. Underemployed housing declines after a long period and is demolished to make room for new land use. Within a given class of workers, each worker is the head of a household with an average family size that demands housing of a specified type. Numerous multiplier functions are used in relating various impacting events, and plots are produced for population, housing, and industry.

Failure to disaggregate population by age sectors could be cited as one disadvantage inherent in this nonspatial model, because there may be a difference in age distribution between various population categories (unemployed migrants and managers, for example). Aggregation of housing into only three types rigidly attached to a population subcategory, and the assumption that each dwelling occupies a fixed land area that is independent of the type of city, the population density of the city, or the time at which the housing is constructed, would seem not fully in accord with modern practice. To fully utilize a city model for projections of residential land use would seem to require disaggregation of both population and housing by other factors in addition to those cited in this work.
Industry in the urban dynamics model is subdivided into three categories:
• New enterprise (NE)
• Mature business (MB)
• Declining industry (DI)
The three categories of business employ heads of household from the three population categories in differing mixes; new enterprise requires more managers than declining industry. New enterprises are assumed to be outgrowths of existing industry or attracted from the limitless external environment. As new enterprises age, they become mature businesses and eventually become declining industry. Finally they are destroyed to create vacant land which may be reused. The city is restricted to a land area of specified size, and industry and housing must inevitably compete for this land when the city ages if the land becomes scarce.

Industry and people inmigrate to the city, and new homes are built, if the attractiveness of the city relative to its environment is great enough; outmigration occurs if the city becomes unattractive enough. Each population sector bases a decision to inmigrate or outmigrate on different criteria and values, and what may result in attracting inmigration for new enterprises and manager-professionals may well result in outmigration of the unemployed. There are numerous perception time delays reflecting the time between when conditions actually change and when the change is recognized.

The Forrester urban dynamics model is thus a twentieth-order, nonlinear, nonspatial model of an urban system with fixed land area. It is a dynamic model with nine fundamental level variables and a sampling period of one year: population is broken down into underemployed, labor, and managerial-professional; housing consists of underemployed, worker, and premium housing; and business is classified as new enterprise, mature business, or declining industry. The model simulates 250 or more years of urban development, and time constants in the model are such that steady-state behavior is reached in approximately 200 years. Equilibrium is reached after about 150 to 200 years using Forrester's original parameter settings and policies, and the variables do not change significantly thereafter. This condition of stagnation is what is believed to be typical of our urban areas today.

The original urban dynamics model allows the user to introduce any of 10 "urban improvement" programs into the system to determine their effect on the urban environment. The programs that Forrester reports in his 1969 book are only briefly reviewed here. The basic policies originally available for implementation are as follows:

1. Underemployed Job Program. This provides additional jobs for the underemployed, such as might occur under a public service job program.
2. Underemployed Training Program. This "upward mobility" program allows for the training of a certain percentage of the underemployed to qualify for jobs that would enable them to be classified as labor.
3. Labor Training Program. This program is much like the second one, and it allows members of the labor class to move up to managerial-professional status.
4. Tax-per-Capita Subsidy Program. This simulates a flow of tax dollars into the city from sources outside the model area, such as state or federal funds.
5. New Enterprise Construction Program. This allows the model user to evaluate the effects of new business development.
6. Declining Industry Demolition Program. This provides for the blanket removal of a certain amount of declining industry.
7. Slum Housing Demolition Program. This represents a "slum clearing" policy that does not explicitly take into account such things as relocation or redevelopment.
8. Low-Cost Housing Construction Program. This represents the building of low-cost housing with funds not from the city's own revenues, which are provided by the normal dynamics of the city, but presumably from some outside sponsor.
9. Worker Housing Construction Program. This provides for construction of middle-class housing.
10. Premium Housing Construction Program. This provides for construction of expensive housing for manager-professionals.

It turns out that the most effective of Forrester's programs are as follows:
1. Demolition of slum housing
2. Encouragement of new enterprise
3. Demolition of declining industry
"Effective" is used here to mean that the city is richer, in the sense that more tax revenue generators and fewer tax revenue users are present at equilibrium in the city.

There are a number of implicit values incorporated in the urban dynamics model. These are the values of the analyst; this is, for the most part, not entirely unreasonable, because there was no "client" for the study. There are a number of reviews of this effort and similar efforts in the literature, including references 62 and 63.
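The aging chain just described, new enterprise maturing into mature business and then declining industry before demolition, is the basic level-and-rate pattern of the urban model. The sketch below uses assumed average lifetimes and a constant construction rate, chosen only to show how such a chain settles into the equilibrium (stagnation) behavior discussed above.

```python
# Illustrative industry aging chain:
# new enterprise -> mature business -> declining industry -> demolition.
# All rate constants are assumptions; Forrester's model couples these
# flows to many attractiveness multipliers omitted here.

DT = 1.0                      # sampling period of one year, as in the model
NE, MB, DI = 100.0, 0.0, 0.0  # initial stocks (assumed)
construction = 8.0            # new-enterprise construction rate (assumed)

history = []
for year in range(250):       # the model simulates 250 or more years
    maturing = NE / 10.0      # assumed average 10 years as new enterprise
    declining = MB / 20.0     # assumed 20 years as mature business
    demolition = DI / 30.0    # assumed 30 years before demolition
    NE += DT * (construction - maturing)
    MB += DT * (maturing - declining)
    DI += DT * (declining - demolition)
    history.append((NE, MB, DI))
```

With these assumed lifetimes the chain approaches a steady state (NE near 80, MB near 160, DI near 240) well within the simulated 250 years, after which the variables change very little, a simple version of the stagnation equilibrium the urban model exhibits.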
The Forrester urban dynamics model is certainly a pioneering venture in the study of urban dynamics. It has, however, generated a great deal of criticism, because of its complexity and the many assumptions made. Forrester has been criticized for developing a model with built-in bias, for not explicitly stating a value system, and for making untested and possibly invalid assumptions. It appears from Forrester's writing that "expert opinion" rather than real data were used in constructing the many and varied nonlinear relations and multipliers used. Forrester's implicit values, reflected in his "preferred" urban revival program, include a favorable outlook toward Western concepts of progress and the upward mobility of people. He appears to strongly believe in the precept that "all progress is good." He favors economic self-support: balancing the city budget, decreasing per capita taxes, and increasing the attractiveness of the city for management and labor are all central policies in the Forrester urban model. Also, there is little concern for such issues as natural resource use and sustainability. While the model may well contain improper assumptions, they are there to be viewed and, when they are believed to be improper, changed. In evaluating the results, we must remember that optimizing the wealth of the city is a sort of natural objective if we are to use the model as developed with its given structure and parameters.

A number of questions arise concerning the assumptions of the model. Do increasing taxes always attract the underemployed group and drive away the managerial-professional class? Forrester assumes that taxes are equivalent to public expenditures and that most city taxes are nonprogressive, hitting lower-income groups relatively more than the managerial-professional group. Some would claim that increased public expenditures in the form of welfare, slum housing projects, and the like are attractive to the poor; some would claim that increased taxes result in better educational facilities and enhanced cultural activities, and that these are more attractive to the managerial-professional class than to the underemployed. Is it the middle-class and skilled workers that seem to be most violently opposed to new tax increases, or is it the wealthier classes? Could a tax that in effect introduces income redistribution cure the ills of the city? Would a tax on natural resource depletion alter the results obtained from use of the model? Do additional public expenditures for police, sanitation, and social services really help the poorer sections of the city more than the wealthier sections? Do economic crises, wars, better technology, and technological and social developments not reflect themselves in such actions and events as inmigration and outmigration? Should there not be an explicit role for the city government? Does the availability of underemployed housing serve as a lure for the poor? Does the lack of availability of jobs in other cities, and the hope of finding work in the city, play a leading role in migration to the city? Does the model cover the housing situation adequately? In many cases the model predicts an excess of housing for the underemployed; in real life there appears to be a shortage of this type of housing, as well as rapidly deteriorating housing. Where are the suburbs? How do racial problems affect the city? Forrester is not particularly concerned with things that are happening beyond his community. To accommodate such concerns, the dynamics of the city would have to be superimposed on the dynamics of the limitless environment, along with competition among the cities for industries and jobs, such that game-theoretic simulations would be needed.

Various methods to validate urban models have been proposed, and they can be used to validate the structure and parameters within the structure of Forrester's urban model. By changing the parameters within Forrester's urban dynamics structure, as well as changing the structure itself, quite different model response has been obtained; researchers have exercised the urban dynamics model for several cities and have succeeded, in many cases, in synthesizing programs that correlate well with Forrester's suggested strategies. Forrester's efforts, almost three decades ago, have led to a great deal of further effort toward understanding the dynamics of social systems and have generated a great deal of new activity in urban modeling. Thus we can only regard Forrester's pioneering efforts, which stimulated such a vast number of questions concerning the dynamics of social systems, as invaluable in adding an important new dimension to the solution of many contemporary problems through systems analysis.

Example 4.7. The World Dynamics Model. In 1971, Forrester published his text World Dynamics, in which the system dynamics methodology was applied to the behavior of the impacting forces of global dynamics. This work might be viewed as more of an effort to demonstrate the power of the system dynamics modeling approach than as an effort to provide implementable policy decisions. Forrester indicates that manifestations of stress in the world system are excessive population, rising pollution, and disparity in standards of living. He is concerned with whether these are causes or symptoms of world problems, and one of the major aims of the world dynamics model is to resolve this question.

Three premises appear to be basic to the philosophy used to characterize this world model:
1. Physical attributes of the global system all obey the birth-death population model equation and are characterized by exponential growth.
2. The capacity of the world ecosystem and world resources is such that exponential growth cannot be sustained forever.
3. There is a very long perception delay on the part of society for any problem involving population growth and ecosystem and natural resource limitations.
Any of these three characteristics, taken separately, would be quite serious. When they are combined, we have a complex system of interacting problems, commonly called a "mess."
If an accurate model of the global system could be produced, and if governments and individuals would believe the model and implement "solutions" demonstrated to be workable by the model, then major advances in policy determination would result. Forrester argues most persuasively that his model is better than any of the existing mental models, with their inherent fuzziness, incompleteness, and possible hidden intents, as contrasted with the common approaches of "muddling through" or "try it for awhile and see what happens." To develop a tentative model and draw various conclusions seems to be Forrester's intent.

The major state and level variables for the world dynamics model are
• Population
• Capital investment
• Natural resources
• Pollution
From these major sectors and their interactions come such other important variables as quality of life, material standard of living, and death rates from pollution. The four level variables comprised in the world system are shown in Figure 4.40, and these four fundamental level variables are connected by determining the seven rate variables of this figure in terms of the four level variables. There is one perception time delay, of 15 years, associated with the capital investment in agriculture fraction (CIAF). This perception delay adds a first-order differential equation to the four first-order level equations. Thus Forrester's world dynamics model is a fifth-order differential equation.

Figure 4.40 Four fundamental level variables in the world model.

The feedback structure is intuitively reasonable. More food, more material goods, and better health care each encourage a larger population. A rising population creates pressure to increase industrialization to maintain and increase the standard of living. Population growth, material standard of living increase, pollution increase, and natural resource depletion are all in a cycle: if everything else remains the same, population rises, more food must be grown, more land must be occupied, and eventually the earth becomes overburdened and pollution becomes excessive or natural resources are exhausted.

Simulations are available in Forrester [56] for the basic world model with a given set of initial conditions. In the baseline run, natural resources must monotonically decrease, because there is no physical way, in the model, that natural resources can be "created" through the use of technology, and there is not much hope at present of discovering infinite natural resources. This lack of a technology sector has been one of the major criticisms of the Forrester world model, and several models are now available with a technology sector. From these results, it appears that the "world quality of life" has been declining since 1948. Population, capital investment, and pollution all peak between 2030 and 2050 and decline thereafter; the material standard of living is inevitably reduced, and stagnation occurs.

Several possible policies that might pose a cure for the world's ills are proposed and evaluated:

1. In 1970, natural resource usage normal is reduced to 25% of its original value, so that resource consumption is reduced 75%. Reducing demand for natural resources allows further growth, because population and industrial growth only consume 25% as much natural resources as before. Population rises, and industrial growth causes pollution to rise more rapidly until there is a substantial crisis around 2050, with pollution 40 times its level in 1970. The quality of life rises immediately after introduction of the policy and then declines dramatically as the rising pollution causes a dramatic decrease in population; world population declines to 20% of its peak level.

2. In addition to the reduced natural resource usage of the previous policy, pollution rate normal is reduced to 10% of its value in 1970; no costs of the 90% pollution abatement have been allowed for. The effects of both natural resources and pollution are ameliorated. The quality of life rises even higher immediately after introduction of the policies. Now the population rises still further, more food must be grown, and more land must be occupied.

3. Natural resource usage is forced to zero in 1970, pollution rate normal is reduced as in the previous policy, and we now also suppress the effect in 1970 of crowding on the birthrates and deathrates. Population then rises greatly and stabilizes, becoming 10.8 billion rather than the 9.7 billion reached with "policy" 2. Even though pollution is low and natural resources are fixed at their 1970 level, quality of life due to crowding has decreased by 60% from its 1970 value, while quality of life due to pollution and food is essentially unchanged. This solution might be acceptable if the policy producing it were not unrealistic, because there is no physical way that natural resources can be "created" through the use of technology.
Industrialization may be more of a disturbing force to world ecology than population. There may be no way to increase the quality of life to its value in the immediate past.72]. pollution is not incréasing exponentially on a worldwide basis.) MUUl:DI\NU !:XI!:N5/0NS Z61 if the world is to survive. AIso. Lack of a technology sector to give weight to the human ability to solve environmental and resource problems is very unrealistic. 1. The resulting increase in quality of life and capital investment reduces sorne internal pressures limiting population. 8. but industrial society places a 20 to 40 times greater burden on pollution generation and natural resources consumption systems. The quality of life. This last result. Forrester presents the foHowingissues as being raised by the world dynamics model. Birth and population control programs may be inherently self-defeating.)Il:M UYNI\M/L. It is not possible at the present time to construct a model explicitly relating poHution and health. Forrester views this as another example of proposed solutions producing counterintuitive and counterintended results. The pollution crisis reappears. In 1970 the food ratio for the world is increased by 25%.)T. and hopefulIy a "better" model is a result. suggests that an acceptable global equilibrium is possible but requires policies that will be very difficult to implement from both a polítical and a social viewpoint. 5. and the net effect of the resultant growth in population is to bringthe quality of life back to its original baseline in about 20 years. Reduce capital generation rate 40%. The normal birthrate is lowered by 30% in 1970. however. 6. Third World nations may have no hope of reaching the standard of living of industrial society. Increased industrialization by higher capital investment generation is next attempted as a single policy. Highly industrialized society may not be able to sustain itself. Reduce normal food production rate 20%. Reduce normal birthrate 30%. 
As might be imagined, this study resulted in considerable praise and concern. Some specific conclusions drawn were as follows. Population limitation by food shortage or by crowding will create serious problems over the next 70 years. The population in Third World countries is quadruple that in industrial society; thus, assuming validity of the model, the world system would collapse before the Third World population could "catch up." Natural resource decline will create problems sometime after 2100 unless sufficient recycling and material substitution can occur prior to that time.

Several of the policy runs support these conclusions. When food production is increased, the instantaneous availability of more food causes the quality of life to rise, and the population actually rises; but as agriculture reaches a space and pollution limit and industrialization a natural resource and pollution limit, the quality of life will fall and will stabilize population. The quality of life from crowding is worse than before, and increased food production thus accomplishes virtually nothing. A technological innovation reduces normal pollution generation by 30% in 1970, yet results in even more pollution than if the technological innovation had not occurred. The normal birthrate is next lowered by 30% in 1970 as a policy. The result of the hybrid policy combining such interventions is to decrease population slightly below its 1970 level and to increase the quality of life, with a population near 8 billion rather than the 9.7 billion of "policy" 2.

Some of the many criticisms are important, and these can then be argued; for example, no real-world data have been used to validate the model. There is potentially much that industrial ecology [64-66] and design for environment [67] can hope to accomplish. System dynamics modeling has been used for a plethora of other application areas, ranging from software project dynamics [70] to organizational learning [71, 72]; they also include the limits-to-growth situations we discussed in our last two examples [73]. These models were implemented in the DYNAMO simulation language.
This allows the population to grow even further, and the quality of life again declines significantly; new international strife over the ecosystem could reduce the world standard of living to that of a century ago. In the combination policies, normal natural resource usage is again reduced to 25% of its 1970 value.

The DYNAMO simulation language is described in reference 74, and STELLA, a personal computer-based language, is discussed in reference 75.

We next consider a cross-impact-analysis-like approach originally postulated by Julius Kane [76, 77]. In this approach, you assume that, rather than estimating conditional probabilities for use in a cross-impact analysis, you estimate the impacts of state variables upon rate variables, in much the same way that cross-impact probabilities are specified. While it is suitable for use by an individual, it is particularly appropriate for use by a group.

The assumptions made in the development of the approach include the following:

1. All state variables are bounded. With scaling of state variables, each is bounded between 0 and 1.
2. Bounded growth and decay of state variables exhibit the familiar sigmoid shape.
3. A rate variable will increase or decrease depending upon whether the net impact of all state variables interacting with the rate variable is positive and enhancing or negative and inhibiting. For all state variables but one held constant, increases in the one state variable will produce an increase in impact on the system. When a state variable is near its bounds of 0 or 1, the influence of the impacts is reduced.

Each rate variable dx_i/dt is assumed to be computable from the set of equations (4.4.15). Inspection of this equation shows that for x_i = 1 the derivative dx_i/dt is 0, and it is also straightforward for us to show that the derivative vanishes in the limit as x_i becomes 0. Thus Assumption 1 is satisfied, and the state variables are indeed bounded between 0 and 1; Assumption 3 is satisfied by Eq. (4.4.15) as well. The model is then simulated on a computer; computer simulation of the resulting equations will impart an appreciation of the influence of system structure, and of parameters within the structure, on system behavior. Finally, the model is validated.
The x_i and the various model parameters are adjusted until it is felt that the various responses are appropriate. The methodology is referred to as Kane Simulation, or KSIM. KSIM is designed to incorporate a feeling for the linkages that interconnect the elements of a complex system: in KSIM, the cross-impact of various state variables upon the rate variables of a set of first-order differential equations is obtained. Rather than attempting to determine specific physical relations for all rate variables, the KSIM methodology provides insight into the implications of system structure by associating with the structure a set of interconnected first-order differential equations with initially unspecified parameters. The state variables are bounded, since variables of human and physical significance cannot increase indefinitely.

The KSIM process involves first the selection of a set of state variables, x_i, and then identification of a set of impact parameters. Complex interactions are described by a pair of interaction tables, or matrices, A and B. These elements and an appropriate structure can often be obtained from sound application of the requirements and issue formulation methods you studied in the previous chapter. If these parameters are specified, the model may be simulated; the impact parameters a_ij and b_ij are adjusted to enforce satisfaction of Assumption 2. Use of the KSIM approach results in a simple but powerful set of differential equations that can be used for forecasting and planning purposes, even though each is a rather complex nonlinear first-order equation.

4.4.3 Workshop Dynamic Models
In this subsection we develop a deterministic dynamic simulation methodology built around the continuous-time nonlinear differential equation

dx_i/dt = - [ Σ_{j=1}^{N} ( a_ij x_j + b_ij dx_j/dt ) ] x_i ln x_i        (4.4.15)

In this equation, we define

x_i = the ith state variable
N = the total number of state variables
x_j = the cross-impacting variables
a_ij = the long-term impact of x_j on x_i
b_ij = the short-term impact of x_j on x_i

The steps in the development of a KSIM model include the following:

1. We identify state variable elements, the variables for the model.
2. We specify the interactions between these variables. This results in identification of the set of parameters a_ij and b_ij in Eq. (4.4.15).
3. The results of the model simulation are analyzed.

The expression x_i ln x_i modulates the summed impacts in Eq. (4.4.15). Because it forces the derivative to go to 0 for x_i = 0 or x_i = 1, the state variable will be bounded between 0 and 1 and the impacts will be very small near these bounds; when a state variable is near its bounds, the influence of impacting variables is less than when the state variable is not at either extreme of the operating region. In the limit as x_i becomes 0 we also have x_i ln x_i = 0. Even though these initial efforts were first described more than two decades ago, the challenges posed relative to sustainable development needs are quite important at this time.

Example 4.8. To gain further insight into this rather unusual differential equation, let us consider the simplest case, where N = 1 and b = 0, such that we have the "simple" system differential equation

dx/dt = -a x^2 ln x        (1)

At first glance it might appear that it would be difficult to even simulate so simple a response as the first-order exponential decay model x(t) = x_0 exp[-at], because we have a rather complex nonlinear first-order equation. We proceed by developing a difference equation. Approximating the term x ln x by its value at time t, we obtain from the integral form, Eq. (2),

ln x(t + Δt) - ln x(t) = -a x(t) [ln x(t)] Δt        (3)
which is equivalent to

x(t + Δt) = [x(t)]^(1 - aΔt x(t))        (4)

This will be an especially accurate difference equation, correct as Δt approaches 0. Because x is less than 1, raising it to a power greater than 1 will not result in loss of numerical accuracy.

We note that the selection of the x ln x term in Eq. (4.4.15) is fairly arbitrary. It is but one equation among many that has the desired property dx/dt = 0 for x = 0 and x = 1. The equations

dx/dt = a x^2 (1 - x)    and    dx/dt = a x f(x), with f(x) = 0 for x = 0 and x = 1,

also have this property. Specifying the arbitrary function f(x) could allow us to specify the precise nature of the saturation that occurs for extreme values of x(t). Kane has presented arguments to show that the growth rates near birth (x = 0) are generally faster than growth rates near maturation (x = 1); the x(1 - x) rate modification expression would allow generation of a more symmetric sigmoid curve. In a similar way, we might seek to incorporate a term c_ij d^2 x_j / dt^2 into Eq. (4.4.15); however, this would convert the set of first-order equations into a set of second-order equations. In his initial works, Kane took the a_ij to represent the impact of x_j upon the rate variable dx_i/dt, and the b_ij to represent the impact of changes in x_j, that is, of dx_j/dt, upon the rate variable dx_i/dt.

Equations (3) and (5) may be combined as

x(t + Δt) = [x(t)]^q(t)        (6)

in which the exponent q(t) unifies the explicit and implicit updates. We note also that x ln x is essentially constant over a wide range of x. Thus, if we approximate the x ln x term as a constant, Eq. (1) becomes dx/dt = 0.3ax, and this has the solution

x(t + Δt) = x(t) e^(0.3aΔt)        (8)

which is an exact solution of the linearized equation. For positive a the derivative dx/dt is positive and the solution is increasing in time; for negative a the solution decreases in time. Solution of the difference equation (3) and of the linear differential equation dx/dt = 0.3ax for differing positive or negative values of a shows that the saturation effect does not occur and the solutions to the two equations are essentially the same while x remains within the region over which x ln x is nearly constant.
Obviously there will be processes for which a term other than the x ln x term will be more appropriate. However, x(1 - x) is not as constant for intermediate values of x as is x ln x, and actual sigmoid curves are often more like those generated from the x ln x term in the rate modification expression than they are like those from the x(1 - x) term. We see that the expression x_i ln x_i is convenient to force a birth-maturation-type sigmoid effect, and we will use this term in all of our following developments concerning KSIM.

As you can easily verify, -x ln x is indeed nearly constant and approximately equal to 0.3 for 0.15 < x < 0.65. Thus, if the state variable x remains within this region, the solution to the nonlinear first-order differential equation is essentially that of a first-order linear differential equation, for times sufficiently small that the solution values of x do not exceed the range over which -x ln x is nearly constant. Approximating -x(t) ln x(t) by 0.3 over the entire time range of integration yields the linear model noted earlier.

To develop the difference equation, we propose the integration

∫ from x(t) to x(t + Δt) of dx/x = - ∫ from t to t + Δt of a [x ln x] dt        (2)

which is equivalent to Eq. (1). Regarding the term x ln x as constant, specifically as x(t) ln x(t), yields Eq. (3). The accuracy of Eqs. (3) and (4) will suffer for positive a, and it is then better to use for the constant x(t) ln x(t) the value x(t) ln x(t + Δt); the resulting implicit relation may be solved iteratively for an approximate x(t). The combined exponent is

q(t) = [1 + 0.5Δt(|a| - a) x(t)] / [1 + 0.5Δt(|a| + a) x(t)]        (7)

When a is negative, we say that we have a negative or inhibiting impact of x on dx/dt through feedback (negative feedback). Finally, Kane reports that many groups have difficulty understanding and estimating the b_ij term, so there would appear to be little gained by including it in many applications.
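The behavior described above is easy to check numerically. The short Python sketch below is ours, not the text's: it iterates the explicit update of Eq. (4) for a single state variable, with step size, impact value, and initial condition chosen purely for illustration.

```python
import math

def ksim_step(x, a, dt):
    # Explicit KSIM update of Eq. (4): x(t + dt) = x(t)**(1 - a*dt*x(t)).
    return x ** (1.0 - a * dt * x)

# -x ln x is roughly constant (about 0.3) for 0.15 < x < 0.65.
plateau = [-x * math.log(x) for x in (0.15, 0.3, 0.5, 0.65)]

# Iterate dx/dt = -a x^2 ln x with an inhibiting impact (a < 0):
# x decays toward 0 but can never leave the interval (0, 1).
a, dt, x = -1.0, 0.05, 0.5
trajectory = [x]
for _ in range(200):
    x = ksim_step(x, a, dt)
    trajectory.append(x)
```

For positive a the exponent drops below 1, so the same iteration grows toward 1 instead, which is the enhancing (positive feedback) case.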
which is equivalent to

x(t + Δt) = [x(t)]^(1/(1 + aΔt x(t)))        (5)

When a is positive, we say that we have a positive or enhancing impact of x upon dx/dt (positive feedback). Rather than attempting to solve Eq. (4.4.15) as a continuous-time differential equation, it is more convenient to obtain a difference equation that can easily be processed on a digital computer. We may regard q_i(t) as 1 minus the sum of the impacts on x_i; however, this form of q_i(t) may lead to computational inaccuracy for the positive or enhancing impacts on x_i, for which the signs of the a_ij and b_ij are positive.

A group unfamiliar with quantitative techniques and differential equations may well wish to begin by assigning interaction impacts to a matrix, with numbers chosen to represent zero, low, moderate, or intense interaction of an enhancing or inhibiting nature. By so doing, information supplied by the group is made explicit.

Example 4.9. Let us consider a deterministic dynamic simulation based upon the cross-impacts driving potential growth and decay of passenger railroad service. We assume that the essential elements for the problem are as follows:

S = a qualitative variable called quality of service, which will include numerous factors: train comfort, adherence to schedules, courtesy of employees, frequency of travel, etc.
I = a qualitative variable called management innovation
T = a quantitative variable called traveled passenger miles per month
and structural information is enhanced with numerical information. As was the case in the previous example, you may rewrite Eq. (4.4.15) in terms of ln x_i, integrate from t to t + Δt, and then regard as constant all the functions of time on the right-hand side of the equation, such that you obtain

x_i(t + Δt) = [x_i(t)]^q_i(t)        (4.4.16)

where

q_i(t) = 1 - Δt Σ_{j=1}^{N} [ a_ij x_j(t) + b_ij dx_j(t)/dt ]        (4.4.17)

A valid approximate solution to the original differential equation (4.4.15) may be obtained by solving the difference equations (4.4.16) and (4.4.17). Appropriate initial conditions for Eq. (4.4.16) must be selected in order to start the computation. We should note that there is no fundamental reason why the impact variables a_ij and b_ij cannot be functions of time, but this added complexity would be very difficult to deal with.

The approaches described in our previous chapter may be used to formulate the effort. The suggested procedure for establishing a KSIM model is as follows:

1. Identify fundamental problem elements.
2. Determine the cross-impact relationships.
3. Determine appropriate scaling such that each state variable can reasonably be expected to vary between 0 and 1.
4. Determine appropriate initial conditions for each state variable.
5. Determine the time response by computer simulation of Eqs. (4.4.16) through (4.4.19).
6. Iterate through steps 2 to 4 until the group accepts the model response as appropriate.
7. Verify and validate the results of using the model.
8. Apply different proposed activities, or policies and policy interventions, to the model. You may accomplish this in a convenient manner by using step functions for the a_ij and b_ij terms to switch in different policies as a function of time.
9. Use the numerical results from the model as analysis inputs for the interpretation step of the particular systems engineering phase under consideration.

We recommend use of Eqs. (4.4.16) through (4.4.19) to conduct a KSIM exercise.
To avoid this potential problem, we use the same procedure that was used in the previous example: we rewrite the q_i(t) term as

q_i(t) = [1 + Δt (magnitude of sum of inhibiting impacts on x_i)] / [1 + Δt (magnitude of sum of enhancing impacts on x_i)]

or

q_i(t) = { 1 + (Δt/2) Σ_{j=1}^{N} [ |I_ij(t)| - I_ij(t) ] x_j(t) } / { 1 + (Δt/2) Σ_{j=1}^{N} [ |I_ij(t)| + I_ij(t) ] x_j(t) }        (4.4.18)

where

I_ij(t) = a_ij + (b_ij / x_j(t)) dx_j(t)/dt        (4.4.19)

is the total impact of state variable j on the ith rate variable. Because some of the impacts are enhancing and some are inhibiting, it is somewhat better from a numerical accuracy viewpoint to use Eqs. (4.4.18) and (4.4.19). In fact, we will often be able to set b_ij = 0 and obtain reasonable models from Eq. (4.4.15).

For the passenger railroad example, we determine the cross-impact relationships among quality of service (S), management innovation (I), and passenger volume (T). We shall assume b_ij = 0, and it would appear that the following cross-impact matrix is valid for the A matrix:

        S   I   T
  S     0   +   -
  I     0   +   +
  T     +   +   -

The parameter a23 is initially positive (+0.25) to indicate that increased passenger volume enhances management innovation. If you run several simulations with differing a_ij parameters and initial conditions, you should be able to test various hypotheses, including the following:

1. The American dream car hypothesis, which assumes that the public's love for the automobile drives it away from the railroad.
2. The battered management hypothesis, in which management is the scapegoat of government and other policy; here a23 is changed to a negative value (-0.40) to simulate imposition of policy to decrease management innovation as passenger volume increases.
3. The public is doomed hypothesis, in which management deliberately downgrades service to drive off passengers, so that the nature of the management innovation is designed to reduce the quality of service to passengers; the sign of a12 therefore becomes negative.
The following reasoning is used to construct this cross-impact matrix:

a11 = 0, because improvement of quality of service has little impact upon further improvement of quality of service
a12 = +, because increasing management innovation is enhancing to quality of service
a13 = -, because volume of passengers is inhibiting to quality of service
a21 = 0, because quality of service is essentially unrelated to management innovation
a22 = +, because management innovation will enhance further management innovation
a23 = + initially, because increased passenger volume enhances management innovation; under the battered management hypothesis, increased passenger volume instead results in government policy to impose restrictions that decrease management innovation, and the sign of a23 is therefore -
a31 = +, because quality of service enhances passenger volume
a32 = +, because increased management innovation will enhance passenger volume
a33 = -, because increased passenger volume will inhibit further increases in passenger volume

You may easily write the explicit differential equations for the KSIM model as

dS/dt = -(a12 I + a13 T) S ln S
dI/dt = -(a22 I + a23 T) I ln I
dT/dt = -(a31 S + a32 I + a33 T) T ln T

You could attempt to solve these differential equations directly, but a numerical algorithm, such as Runge-Kutta or Taylor series, will have to be used to obtain computable equations from the differential equations. Alternatively, you may use the discretization developed earlier and obtain the approximate difference equations

S(t + Δt) = [S(t)]^q1(t)
I(t + Δt) = [I(t)]^q2(t)
T(t + Δt) = [T(t)]^q3(t)

While these six equations may appear more complicated than the original differential equations, they are in a form suitable for direct iteration. You now have the task of choosing specific numerical quantities for the several values of a_ij and the initial conditions for S, I, and T.

Figures 4.41 and 4.42 illustrate simulations using this model for the specific parameters and initial conditions indicated in the figures; the initial conditions used in Figure 4.42 are those of Figure 4.41. Figure 4.42 represents the "battered management" hypothesis. To obtain the hypothesis in which improvements in service and innovation do not act to increase passenger volume, you set a31 = a32 = 0.
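These difference equations can be sketched in a few lines of Python, using the ratio form of q_i with all b_ij = 0, as the example assumes. Only the signs of the A matrix and the value a23 = +0.25 come from the example; the other magnitudes and the initial conditions below are hypothetical placeholders of ours.

```python
def ksim_update(state, A, dt=0.1):
    # One step of x_i(t+dt) = x_i(t)**q_i(t), with q_i from the ratio form
    # of Eq. (4.4.18) and all b_ij = 0, so that I_ij = a_ij.
    n = len(state)
    nxt = []
    for i in range(n):
        inhibit = sum((abs(A[i][j]) - A[i][j]) * state[j] for j in range(n))
        enhance = sum((abs(A[i][j]) + A[i][j]) * state[j] for j in range(n))
        q = (1.0 + 0.5 * dt * inhibit) / (1.0 + 0.5 * dt * enhance)
        nxt.append(state[i] ** q)
    return nxt

# state = [S, I, T]; signs follow the example's cross-impact matrix, and
# only a23 = +0.25 is given in the text -- other magnitudes are invented.
A_nominal = [[0.00, 0.30, -0.30],
             [0.00, 0.30,  0.25],
             [0.30, 0.30, -0.30]]
state = [0.5, 0.3, 0.4]          # hypothetical initial conditions
for _ in range(60):
    state = ksim_update(state, A_nominal)
```

Changing A_nominal[1][2] to -0.40 switches in the battered management policy, and a step function on that entry would switch it in at a chosen time, as suggested in the procedure.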
The q_j(t) terms may be obtained from Eq. (4.4.18); alternatively, you could use the discretization scheme discussed earlier. The curves in Figure 4.41 might be regarded as an idealized nominal set, in which a13 and a33 are set equal to 0. In this idealized case, increased passenger volume will not directly impact or affect either quality of service or further passenger volume, and quality of service (S), management innovation (I), and passenger volume (T) all increase under the nominal idealized condition. In Figure 4.42 the parameter a23 is -0.40, whereas it was +0.25 in Figure 4.41; this is the battered management hypothesis. The American dream car hypothesis, which assumes that the public's love for the automobile drives it away from the railroad, corresponds to the case in which improvements in service and innovation do not act to increase passenger volume. In the hypothesis in which management deliberately downgrades service, you may accomplish the downgrading by changing a12 from + to -. With simulations such as these you should be able to test various hypotheses, such as those discussed in our last chapter.
The results are dramatic in that quality of service (S) and management innovation (I) decrease significantly over time.

Figure 4.41 Idealized nominal conditions.

Figure 4.42 Battered management theory.

These simulations were chosen for their usefulness in explaining significant features of the approaches rather than because they were indicative of the complex nature of actual problems for which solution has been attempted using them. Joint probabilities for the occurrence of future events are determined in the case of cross-impact analysis. The first approach is based on population models; in using it, you look at the macrostructure of a system. There have been a number of extensions of the basic KSIM modeling approach. We mentioned some of the software available to support modeling efforts; one vendor is The MathWorks Inc., 24 Prime Park Way, Natick, MA 01760-1500, telephone 508-647-7000. This process can be enhanced using various formulation methods.

4.10 ECONOMIC MODELS AND ECONOMIC SYSTEMS ANALYSIS

The accurate prediction of the cost needed to accomplish activities associated with various life cycles in systems engineering efforts is a major need. Systems engineering programs may, and typically do, encounter large cost overruns, and cost overruns may lead to delivery delays and user dissatisfaction, and perhaps even to difficulties for the sponsor of or client for the effort. When costs are overestimated, there may be much reluctance on the part of the sponsoring organization or its potential customers to proceed.
Summary

In this section we have examined three approaches that are potentially useful for the analysis of dynamic systems. In the second approach, you attempt to model a system based on the precise microlevel elements, or "physics," of the system you are considering. The third approach is based on even greater simplifying assumptions, in that the form of the differential equation used to model the system is fixed. As we discussed, the KSIM approach is related to the cross-impact approach. The first step in construction of either a cross-impact or a KSIM model is specification of relevant events; the most valuable inputs to a cross-impact or KSIM model are identification of the relevant state variables, and these determine the structure of the system for either approach. Parameters are associated with this structure: either probabilistic parameters, as in the cross-impact methodology, or deterministic parameters, as in the KSIM methodology. Deterministic responses are determined for the KSIM model output, indicating levels of use, acceptability, and other pertinent factors. This approach is particularly well-suited to group modeling in workshops and, for this reason, is sometimes called a "workshop dynamic model." It is one of the few dynamic modeling approaches that lead to construction of a working model within a rather short time period. This fact suggests caution, in that much time and effort are often needed in order to model a system adequately. A relatively comprehensive summary of extensions to the approach may be found in reference 78.

There is a variety of general-purpose simulation software. Matlab, available from The MathWorks Inc., is one such general-purpose package; it contains a simulation application module that is called SimuLink.

To plan such efforts, it is necessary to know the effectiveness that is likely to result from given expenditures of effort. Estimates of cost, including effort and schedule, are equally necessary: when organizations underestimate the costs of activities to be undertaken, overruns follow, and this may lead to any of several possible embarrassments for the systems engineering organization. One approach to costing is to attempt to develop a work breakdown structure (WBS) of the activities to be accomplished at each of the life-cycle phases.
In this section we will examine a number of issues that involve costing and cost estimation; we will mention cost-benefit analysis again briefly in our next chapter. It is very important for you to note that a cost estimate is desired in order to predict or forecast the actual cost of developing a product or of delivering a service; prediction and forecasting is the need, and not after-the-fact accounting of costs. There are many variables that will affect cost. The research and development costs to deliver an emerging technology capable of being incorporated into an operational system are one such factor. Something about each of these factors must necessarily be known in order to obtain a cost estimate.

The more detailed and specific the definition of the product is, the more accurate we should expect to be relative to the estimated costs for development and deployment of the product. One guideline we might espouse is to delay cost estimation until as late in the product or service definition phase as possible, and perhaps even to postpone it until some development efforts and their costs have become known. We might, for example, organize the estimate around the phases of the life cycle. Figure 4.43 illustrates a hypothetical WBS for a software development life cycle. The unstated assumption in the development of Figure 4.43 is that we are going to undertake a grand design-type approach for system production.

Figure 4.43 Breakdown structure for a grand design waterfall software development life cycle.

Figure 4.44 shows a typical cost-benefit and cost-effectiveness assessment process.
The stability of the requirements for the product will be an issue, because high requirements volatility over time may lead to continual changes in the specifications that the product must satisfy throughout an entire systems acquisition life cycle. There are a number of operating environment factors, such as the machine configuration and operating system on which a software system must run, that also influence costs. The newness of the product or service to those responsible for its production will be another factor, as will the product or service scope. The integration and maintenance efforts that will be needed near the end of the product acquisition or fielding life cycle are surely factors that influence costs as well. A natural model for organizing the estimate is the set of life-cycle phases for production. Cost is also an important ingredient in risk management, and cost estimates serve purposes beyond budgeting, such as identifying potential system modification needs. If we wait until after deployment, we should have an error-free estimate of the costs incurred to produce a product or system, but such an accounting is of no use for prediction.

The broad goals of cost-benefit analysis (CBA) are to provide procedures for the estimation and evaluation of the costs and benefits associated with alternative courses of action. We may use cost-benefit analysis (CBA) or cost-effectiveness analysis (CEA) to help choose among potential new projects or to evaluate existing systems for various purposes. It is often not possible to obtain a completely economic evaluation of the benefits of proposed alternatives; in such cases, the word benefit is replaced by the term effectiveness, and a multiattribute effectiveness evaluation is used. Organizations that overestimate costs may decline to undertake systems engineering activities that could provide many beneficial results.
There are several steps in a cost-benefit analysis which correspond to steps in the systems engineering process. You must generally estimate the costs and effectiveness of a system very early in the life cycle, because estimates need to be known early in the definition phase in order to determine whether it is realistic to undertake detailed production, and this is not at all easy to do. The WBS may be organized around the three- or seven-phase development life cycle, or any of the life-cycle models described in Chapter 2; the work elements for each of these phases can be obtained from a description of the effort at each phase. The product size and complexity will obviously affect the costs to produce it. A realistic WBS would be much more detailed than the one shown here, and this figure and the associated discussion need some modification for other than the grand design-type approach, such as to enable us to consider iterative development or incremental development [32]. There is merit, then, in attempting to decompose an acquisition or production effort into a number of distinct components. We could attempt to base our estimates of cost upon the detailed activities represented by this WBS, but at the present time this approach is hardly feasible; or we could attempt to develop a macrolevel model of development cost that is influenced by a number of primary and secondary drivers.

Figure 4.44 Typical cost-benefit and cost-effectiveness assessment process.
Figure 4.44 represents the steps involved in a typical cost-effectiveness assessment. One of the efforts needed in a cost-effectiveness assessment is that of referring all costs to some particular point in time, such that it then becomes possible to compare different cost streams. Most economic models of costs and benefits use the basic concepts underpinning present value analysis. We will briefly examine this subject here; much more detail is available in any of the many excellent engineering economic and economic analysis texts that discuss this subject [79, 80], as well as in those that take a more general approach [81]. Then we will return to our discussions of work breakdown structure and cost-effectiveness analysis. First, let us review some of these basic algebraic concepts.

When we deposit an amount P_0 into a savings account at the beginning of year 0 and the annual constant interest rate is i, then we earn iP_0 in interest in one year:

F_1 = P_0 + iP_0 = P_0(1 + i)        (4.5.1)

Stated in present value terms, that is to say in terms of the present value of a future amount A_n received at year n, which we denote by P_{0,n}, this may be written as

P_{0,n} = A_n (1 + i)^(-n)

We simply add these amounts for all n under consideration and then obtain, for the present value of a series of investments,

P_0 = Σ_{n=0}^{N} P_{0,n} = Σ_{n=0}^{N} A_n (1 + i)^(-n)        (4.5.2)

where A_n is the amount of money invested at different points in time and the constant interest rate over the time interval is i. In some cases the annual amounts of a benefit or a cost are constant; relatively simple closed-form expressions for present worth then result, as shown in Figure 4.45, and many ways can be used to derive these expressions. Figure 4.45 illustrates three typical cash flow situations: it shows the equation you might use to calculate the future value of investing a payment for several time periods at a constant interest rate; it also shows the equation for calculating the present value of a series of future uniform amounts with a constant interest rate over the time interval; and it depicts how to calculate the future worth of several uniform amounts at the end of a number of time periods.

For example, suppose the purchase by a firm of a $60,000 machine that would generate annual returns of $24,000 for 4 years. The firm estimates that an annual percentage return of 18% is required to justify the investment, given its risks. The firm therefore calculates what amount would need to be invested at 18% to yield the four-year income flow of $24,000 per year. A fairly simple calculation indicates that the figure is $64,560; this is the amount that is, at the present time, able to generate the income stream produced from the machine. Because this amount is more than the $60,000 cost of the machine, the investment is therefore worthwhile from the perspective of this simple analysis.
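The arithmetic of the machine example can be verified directly from Eq. (4.5.2); the following few lines of Python are simply that check.

```python
# Present value of a $24,000 annual return for 4 years, discounted at 18%,
# per Eq. (4.5.2) with constant A_n and payments at the end of years 1..4.
i = 0.18
payments = [24_000] * 4
pv = sum(a / (1 + i) ** n for n, a in enumerate(payments, start=1))
# pv comes to about $64,562, matching the text's figure of roughly $64,560;
# since this exceeds the $60,000 machine cost, the purchase is worthwhile.
```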
At the end of N years we will have accumulated a grand total amount of

F_N = P_0(1 + i)^N    (4.5.3)

Of course, there is no need for the interest rate to remain constant from year to year, just as there is no need for the amount invested to remain constant. Where the interest rate varies from year to year, with rate i_k in effect from year k to year k + 1, we then have

P_{0,n} = A_n Π_{k=0}^{n-1} (1 + i_k)^{-1}    (4.5.4)

This expression represents the present worth of an amount A_n invested for n years.

Present Value Analysis. Present value analysis is a technique for evaluating the economic merit of a potential project. If the discount rate is constant over the period under consideration, relatively simple closed-form expressions result. One simple way to derive them is to note that the present value, at time 0, of an amount 1 invested annually at annual interest rate i beginning at year 1 is i^{-1}. The present value at time N of an amount 1 invested annually from time N + 1 is also i^{-1}, and the present value at time 0 of this amount is i^{-1}(1 + i)^{-N}. We subtract this from the expression i^{-1} and obtain the present worth. We find that the present discounted worth P_0 of a constant amount A invested at each of N periods from n = 1 to N is given by

P_0 = A[(1 + i)^N − 1] i^{-1} (1 + i)^{-N}

This follows in a very simple manner from the identity

a + a^2 + a^3 + ··· = a/(1 − a)

which converges for a < 1; here we let a = (1 + i)^{-1}. Then it is a simple matter for you to show that the result of investing an amount A in each of N years is given by the future monetary amount at year N:

F = A[(1 + i)^N − 1] i^{-1}

Figure 4.45 summarizes these relations: for a single payment, F = P(1 + i)^n; for uniform payments, P = A[(1 + i)^n − 1]/[i(1 + i)^n]; and for the compound amount of uniform payments, F = A[(1 + i)^n − 1]/i.

Interest need not be the only rate considered. Where r_n is the inflation, or appreciation, in year n and d_n is the depreciation in year n, and the investment is augmented by an amount A_{n+1} at the beginning of year n + 1, we obtain a difference equation for the worth P_{n+1}, presented below. We can solve this difference equation with arbitrary i_n, r_n, and d_n to yield the future worth at the Nth period of any given investment path. The calculation is conceptually simple but can become tedious; use of a spreadsheet is recommended.
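The single-payment and uniform-payment relations of Figure 4.45 reduce to a few lines of code (a sketch; the function names are ours):

```python
def single_payment_future_worth(principal, rate, periods):
    """F = P(1 + i)^n for a single payment left to compound."""
    return principal * (1 + rate) ** periods

def annuity_present_worth(payment, rate, periods):
    """P = A[(1 + i)^n - 1] / [i(1 + i)^n]: present worth of n equal end-of-period payments."""
    growth = (1 + rate) ** periods
    return payment * (growth - 1) / (rate * growth)

def annuity_future_worth(payment, rate, periods):
    """F = A[(1 + i)^n - 1] / i: compound amount of n equal end-of-period payments."""
    return payment * ((1 + rate) ** periods - 1) / rate
```

The three are mutually consistent: compounding the annuity's present worth forward n periods with the single-payment relation gives its future worth.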
Figure 4.45: Simple net present and future value calculations.

In the simplest case, the initial investment P_{0,0} is zero and the annual investment and the several rates are constant. The worth at year n + 1 of an investment at year n of P_n that is subject to interest i, inflation r, and depreciation d, augmented by an annual investment A_{n+1}, is given by the expression

P_{n+1} = P_n(1 + i)(1 − r)(1 − d) + A_{n+1}

Applying this relation repeatedly yields the future worth of any given investment path and annual investment over time. Relations such as these are especially useful for calculating the amount of annuity payments that an initial principal investment will purchase, or for calculating constant-amount mortgage payments. Note that in Eqs. (4.5.5) and (4.5.6) we have dropped the subscripts on P and F for convenience.

We must, however, remember that other factors, such as inflation and depreciation, need to be considered when evaluating the present worth of an alternative. For the most part these other factors can be considered just as if they were interest. Little error results from dropping the products of small terms, and we can then express the effective interest rate as

I ≈ i − r − d

Hence we use an effective interest rate that is simply the true interest rate less inflation and depreciation. Actual calculation using these relations is quite simple conceptually.
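The effective-rate approximation can be compared against the exact product form (a sketch; the function names are ours and the rates are illustrative):

```python
def effective_rate(i, r, d):
    """Exact effective rate from 1 + I = (1 + i)(1 - r)(1 - d)."""
    return (1 + i) * (1 - r) * (1 - d) - 1

def effective_rate_approx(i, r, d):
    """I ~ i - r - d: drops the products of small terms."""
    return i - r - d

# With i = 6%, r = 3%, d = 1%: exact is about 0.0179, approximation 0.0200;
# the smaller the rates, the closer the two agree.
```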
Expanding the product (1 + i)(1 − r)(1 − d) = 1 + I exactly gives the effective interest rate

I = i − r − d + rd − ir − id + ird    (4.5.5)

If the percentage rates are all quite small, little error results from using the approximation I ≈ i − r − d. We may then easily calculate the future worth of a constant annual payment A from the future worth equations, for example F = [(1 + I)^N − 1]A I^{-1}.

4.5.2 Economic Appraisal Methods for Benefits and Costs Over Time

Several methods can be used to determine the economic value of alternative projects or systems. One common method of project evaluation is called the internal rate of return (IRR). It is that interest rate that will result in economic benefits from the project being equal to economic costs, calculated over the remaining life of the project. This approach determines the interest rate implied by anticipated returns on an investment, and then it compares that rate with some predetermined standard of measure. The present cost, assuming that the interest rate is constant, is

PC_0 = Σ_{n=0}^{N} C_n(1 + i)^{-n}

Another method, the payback period calculation, does not consider the time value of money at all. It simply determines the time required from the start of a project until total revenues flowing from the project start to exceed the total cost of the project. As can be easily shown, the payback period is a rather naive criterion to use in evaluating alternatives: it tends to favor short-term projects that yield benefits quickly, as contrasted with projects that yield long-term benefits only slowly.

Assuming that the forecasts are done honestly, and that monetary returns received during the period that the investment is active can be reinvested at the IRR, the NPV and IRR approaches represent equally legitimate tools of analysis. The choice between them should depend on the particular circumstances of the firm and the investment it is considering, and on whether this reinvestment-at-the-IRR assumption is acceptable.

Figure 4.46: Illustration of potential difficulties with IRR as a criterion (net-present-value-versus-interest-rate curves for projects A and B, with internal rates of return IRR(A) and IRR(B)).
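The payback-period criterion amounts to finding the first period at which cumulative revenue covers cumulative cost; a minimal sketch (the cash flows are illustrative):

```python
def payback_period(cash_flows):
    """First period at which the running total of cash flows turns nonnegative.

    cash_flows[0] is the (negative) initial outlay; returns None if never repaid.
    Note that the criterion ignores the time value of money entirely.
    """
    total = 0.0
    for period, flow in enumerate(cash_flows):
        total += flow
        if total >= 0:
            return period
    return None

# The criterion favors quick returns: it ranks the first project ahead of
# the second even though the second's undiscounted total is far larger.
print(payback_period([-100, 60, 60, 5]))   # 2
print(payback_period([-100, 5, 5, 400]))   # 3
```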
NPV starts with an assumed interest rate, solves for the present value of the income stream, and compares this figure with the up-front cost. IRR, by contrast, starts with an estimated income flow and compares the interest rate that flow implies with a predetermined standard of comparison; the IRR is simply the inverse relation of net-present-value (NPV) analysis. If we assume that a project of duration N years has benefits B_n and costs C_n in year n, then the net present value is

NPV = PB_0 − PC_0 = Σ_{n=0}^{N} (B_n − C_n)(1 + i)^{-n}

To obtain the internal rate of return, we set the net present worth equal to zero and solve the resulting Nth-order algebraic equation for the IRR. The IRR may be calculated fairly easily. The assumption that all cash flows can be reinvested at the internal rate of return is inherent in the IRR equation, and monetary returns received during the period that the investment is active are assumed to be reinvested at the IRR. Thus the number that results from the IRR calculation may give an unfortunate impression of the actual return on investment.

We first consider the selection of a single project or alternative from several. An immediate problem that arises is that NPV and IRR may lead to conflicting results. Two projects, A and B, may have the NPV-versus-interest-rate curves shown in Figure 4.46, which cross at some rate I, the rate at which the NPVs of the two investments are the same. Here we see that if the actual interest rate i is less than I, the NPV criterion prefers one project while the simple IRR criterion prefers the other.

These equations lead naturally to a consideration of the benefit-cost ratio (BCR). The BCR criterion may result in a different ranking also; unless costs are constrained, there is no real reason to use a BCR criterion.
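Setting the net present worth to zero and solving for the IRR can be sketched with simple bisection rather than by solving the Nth-order polynomial directly (a sketch; the function names are ours, and a single sign change in the cash flows is assumed):

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[n] occurs at the end of year n (year 0 undiscounted)."""
    return sum(f / (1 + rate) ** n for n, f in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0):
    """Bisect for the rate at which NPV = 0."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # still profitable, so the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# For the machine example, the IRR of [-60000, 24000, 24000, 24000, 24000]
# is about 21.9%, above the firm's 18% standard of comparison.
```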
As described above, IRR starts with an estimated income flow, solves for the interest rate this flow implies, and compares that rate with a standard. The assumption that it is possible to invest cash flows from the project at the internal rate of return will often be impossible to fulfill. In Figure 4.46, the IRR of project A is IRR(A) and that of project B is IRR(B). Because IRR(B) is greater than IRR(A), we would prefer project B to project A by the simplest IRR criterion; yet if the actual interest rate is below the crossover rate, we prefer investment A over investment B if we use the NPW criterion. This simple example illustrates some of the potential difficulties with careless use of IRR as an investment criterion. It is important to contrast and compare these concepts to determine those most appropriate in particular circumstances.

One further set of measures follows from present-value analysis. The present worth of benefits, assuming a constant interest rate, is

PB_0 = Σ_{n=0}^{N} B_n(1 + i)^{-n}

The benefit-cost ratio (BCR) is the ratio of benefits to cost and is given by

BCR = PB_0 / PC_0 = [Σ_{n=0}^{N} B_n(1 + i)^{-n}] / [Σ_{n=0}^{N} C_n(1 + i)^{-n}]

The return on investment (ROI) is the ratio of the net present value to the net present costs and is therefore given by

ROI = (PB_0 − PC_0)/PC_0 = BCR − 1

Sometimes the ROI criterion is called the net benefit-cost ratio (NBCR).

Dominance. If cash flow x is at least as large as y at every period in the overall investment interval (0, N) and is strictly greater in at least one period, then investment x is preferred to investment y, or x ≻ y. A related comparison concerns two cash flow vectors that are identical except that an incremental cash flow obtained by investment x at some period n does not result until period n + 1 for investment y; here, too, x is preferred. In the discounting relations that follow, i_{j-1} denotes the interest rate that is valid in the time interval from period j − 1 to period j.
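The BCR and ROI relations reduce directly to code (a sketch; the function names are ours, and the benefit and cost streams are the machine example's):

```python
def present_worth(amounts, rate):
    """Sum of amounts[n] discounted from year n (year 0 undiscounted)."""
    return sum(a / (1 + rate) ** n for n, a in enumerate(amounts))

def bcr(benefits, costs, rate):
    """Benefit-cost ratio: BCR = PB0 / PC0."""
    return present_worth(benefits, rate) / present_worth(costs, rate)

def roi(benefits, costs, rate):
    """Return on investment: ROI = (PB0 - PC0) / PC0 = BCR - 1."""
    return bcr(benefits, costs, rate) - 1.0

# Machine example: $24,000 benefits in years 1-4, a $60,000 cost in year 0.
benefits = [0, 24_000, 24_000, 24_000, 24_000]
costs = [60_000, 0, 0, 0, 0]
# At 18%, BCR is slightly above 1 and ROI slightly above 0: marginally worthwhile.
```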
This continuity property is very reasonable in that it assures us that the preference criterion and associated preferences are not schizophrenic for small arbitrary changes in the returns of the investments.

There are several essential properties that any rational investment rule, or decision rule, should satisfy. The rule should select, from a set of mutually exclusive projects, the one that maximizes benefit or value as appropriately defined. All cash flows should be considered, and the cash flows should be discounted at the opportunity cost of capital — that is, the interest rate that can be obtained on an investment, or some specified discount rate. Also, we should be able to value one project independently of all others, if this is possible.

We require that the investment criterion be complete. By complete, we mean that on the basis of any two sequences of cash flows x and y, we can always say that x is preferred to y, which we write as x ≻ y, or that y is not preferred to x. We also require that the criterion be transitive: with three cash flow vectors x, y, and z, if x ≻ y and y ≻ z, then transitivity requires that x ≻ z.

The consistency-over-time property simply says that the preference order over investments is unchanged if we shift all investment returns being considered by exactly the same number of periods. That is, if x = [x_0, x_1, ..., x_{N-1}, 0]^T and y = [y_0, y_1, ..., y_{N-1}, 0]^T, and if x ≻ y, then we must have x' ≻ y', where x' = [0, x_0, x_1, ..., x_{N-1}]^T and y' = [0, y_0, y_1, ..., y_{N-1}]^T. In making this shift, we must be careful that we do not lose any investments. The consistency-over-time requirement is based on the interest or discount rate being constant throughout the investment time interval 0 to N.

The time-value-of-money property is equivalent to impatience: we prefer to have money now rather than this same amount of money at some time in the future.

Although this discussion and the five requirements, which could be posed as axioms, may appear quite abstract, this is not the case at all; it is just this sort of formulation that leads to normative or axiomatic theories. Surely, it would be difficult to argue that any of these requirements are unreasonable. Despite this argument, there are instances in practice where it is justifiable to use the IRR criterion.
There are five desirable properties that should be possessed by any reasonable preference ordering of cash flow vectors:

1. Dominance. Because more is always preferred to less, a cash flow that dominates another is preferred to it.
2. Time Value of Money. This property is equivalent to impatience: if two cash flow vectors are identical except that an increment obtained by x at period n does not arrive until period n + 1 for y, then x is preferred to y.
3. Consistency Over Time. The preference ordering is unchanged when all cash flows are shifted by the same number of periods.
4. Consistency at the Margin. We prefer cash flow x to y if and only if the differential cash flow x − y is preferred to a cash flow of 0. It is relatively easy to see the reasonableness of this criterion from the satiation-of-the-consumer property, or customer satiation.
5. Continuity. For sufficiently small ε, small changes in returns produce only small changes in preference.

If we accept these five requirements as reasonable, then it turns out that we can only accept the NPV criterion, or one that would yield the same preference ordering as the NPV criterion. It is possible to show that the only preferences that satisfy these requirements are those given by the net present value criterion in which interest or discount rates, which may vary from period to period, are positive. Based on these requirements and the conclusion that follows from them, we should value an investment x according to a net present value criterion, in which we obtain the present value of the investment component at the nth period, x_n, by use of the standard discounting relation

NPV(x_n) = x_n Π_{j=1}^{n} (1 + i_{j-1})^{-1}

where i_{j-1} is the interest rate that is valid in the time interval from period j − 1 to period j, and then sum to obtain NPV(x) = Σ_{n=0}^{N} NPV(x_n). Note also, returning to Figure 4.46, that if the actual interest rate is greater than the IRR for each investment, then the NPW of each investment is negative, and we would prefer not to invest at all.
Thus we discount each component by

NPV(x_n) = x_n Π_{j=1}^{n} (1 + i_{j-1})^{-1}

and then add together all of these present values over all of the interest-bearing periods to obtain

NPV(x) = Σ_{n=0}^{N} NPV(x_n)

The dominance property is equivalent to greed: more is always preferred to less. As for the IRR criterion, the major potential problem with it is that it assumes that interest payouts during the time of the project can be reinvested at the internal rate of return.

To estimate costs well, you need to know a good bit about the activities associated with production of a product or service. An organization is necessarily concerned with knowing the cost and time required to produce a product or service; this allows it to cope with competitive pressures through the development of new and improved products with simultaneous control of costs and schedule. In this section we comment on approaches for estimating, and minimizing, the effort, costs, and schedules necessary to produce a product or service. There are interactions across the various life-cycle phases, and process dynamics often need to be considered. A three-phase life cycle is hardly sufficient to describe a realistic systems engineering effort in sufficient detail for detailed costing; a finer breakdown, such as that illustrated in Figure 4.47, is needed. The use of prototyping, in any of several forms, may be more appropriate for a series of evolutionary or incremental builds than for a grand-design life cycle.

An expert opinion approach to costing would be based upon soliciting the holistic judgment of one or more individuals experientially familiar with the program under consideration. A parametric model approach would be comprised of identification of a structural model for the program cost; parameters within this structure would be adjusted to match the specific developments to be undertaken, and the resulting cost would be allocated to various activities and phases of development.
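The period-by-period discounting just described, with a possibly different rate in each interval, might be sketched as follows (the function name is ours):

```python
def npv_varying(cash_flows, rates):
    """NPV(x) = sum_n x_n * prod_{j=1..n} (1 + i_{j-1})^{-1}.

    cash_flows[n] is the flow at period n (cash_flows[0] is undiscounted);
    rates[j] is the rate in force from period j to period j + 1.
    """
    total = cash_flows[0]
    factor = 1.0
    for n in range(1, len(cash_flows)):
        factor /= 1 + rates[n - 1]
        total += cash_flows[n] * factor
    return total

# With a constant rate in every period this reduces to the ordinary NPV formula.
```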
4.5.3 Systematic Measurements of Effort and Schedule

At first glance, it might appear that efforts to minimize the cost, effort, and time required to produce a product or service are exclusively focused on the desire to become a low-cost producer of a potentially mediocre product. In reality, it is not at all necessary that this be the focus of efforts in this direction. Concerns with organizational profits naturally turn to the desire to maximize the benefit-cost ratio for a product. Alternatively, an organization might try to maximize the difference between benefits and costs; it should wish to maximize the benefit or effectiveness of a product or service for a given cost (and effort and time), or to minimize the cost of producing a product or service of a given fixed benefit or effectiveness.

The choice of a particular life cycle is a very important matter also. Some acquisition programs, or projects, or closely associated programs may well be better suited to a series of evolutionary or incremental builds than to a grand design, and in order to estimate and minimize effort you will generally need to consider more than three steps within each phase.

There are several approaches that we might use to estimate the costs of a systems engineering project, or of an organizational activity within an overall project or program:

1. Expert opinion, soliciting the holistic judgment of individuals experientially familiar with the program.
2. Analogy, identifying a similar past program and using its actual activity costs.
3. Bottom-up estimation, decomposing the work into tasks, the cost for each of which is then estimated.
4. Parametric modeling, adjusting a structural cost model to the specific development.
5. A top-down, or design-to-cost, approach, based upon beginning with a fixed cost for the program or set of activities under consideration.
6. A price-to-win approach, based not upon what it actually may cost to complete a set of activities, but upon a price that is believed to be the maximum price that will win a proposed competition for a contract.

These approaches are not mutually exclusive and may be used in combination with one another. Each is reasonable in particular circumstances, and each may be more appropriate for some phases of the life cycle than are other approaches.

Figure 4.47: Hypothetical distribution of effort across systems engineering life-cycle phases and steps within these phases.
An increase in the effort associated with definition may well lead to a lowering of the effort required for system development; an increase in development costs might be associated with a reduced effort required at subsequent maintenance, or with a reduction in the costs associated with deployment. Thus, an exclusive focus on cost and schedule reduction can lead to poor results if other efforts are not also included. These include attention to product and process quality and defects, and perhaps such other features as rapid production cycle time. These process-related approaches are closely related to approaches that maximize quality and minimize defects; these are sometimes, but not always, equivalent statements. An organization desirous of high product differentiation, as well as attention to these effectiveness issues, can then produce a superior product in a minimal amount of time that can be marketed at a lower price than might otherwise be the case.

To elaborate on the estimation approaches: a bottom-up approach would be based upon a detailed analysis of the activities or work involved and the subsequent decomposition of these into a number of tasks. We interpret these activities in terms of effort, and then in terms of the associated costs and schedule. Analogy is an approach in which we identify a program with similar activities that has occurred in the past and use, perhaps with appropriate perturbations, the actual activity costs for that program. A price-to-win approach to costing is based not upon what it actually may cost to complete a set of activities, but upon a price that is believed to be the maximum price that will win a proposed competition for a contract. The more analytically based approaches would generally be more useful for development than for definition.

We might imagine that we could project the effort required for each of the life-cycle phases associated with a specific systems engineering effort, perhaps as a fraction of total required effort within the steps associated with each of these phases. Several pragmatic difficulties emerge, however.
Considerations such as these will influence the systems engineering effort considerably.

To estimate the cost of a set of activities, you need to know a number of direct cost rates for the individuals involved and a variety of nondirect costs. These include:

• Direct labor rate
• Labor overhead rate, or burden
• General and administrative rate
• Inflation rate
• Profit

These are the direct costs associated with labor and the accounting of these. There are also a number of issues relating to support of the direct costs of labor, and differences between direct costs and indirect costs. It is possible, for example, to attempt to define indirect overhead in terms of the direct labor costs only; then you would need to consider an indirect general administration charge and apply it to the direct labor and materials costs. We will not deal with these issues in depth here and only point them out because they are present and do need to be considered in realistic situations; we provide only an overview of this very important area.

A total cost equation will contain each of these elements. The amount in the total cost equation that does not depend upon the number of products produced is the fixed cost, and the cost that does vary is called the variable cost. The usual way you would generally think of this is in terms of the sort of relationship given by the foregoing total cost relationship, as in Figure 4.48. If we can truly market 1000 products, at a production of 1000 units our profit is $200,000 on the basis of $1,000,000 in total costs and $1,200,000 in total revenue. As sales decline, we continue to make a profit until the sales volume drops to 545 units; below this level of production and sales, the profit is negative. We can obtain a greater profit by increasing the price of the product or by increasing the number of products sold.

Figure 4.48: Total revenue and total cost for a simple single-product firm.
The total revenue, TR, that we are going to obtain from sale of a product is just

TR = PQ

where P is the price of each product and Q is the number of products sold. The total profit is the difference between the total revenue and the total cost. Suppose here that the market price for the product is $1200 and that we have the total cost and total revenue relations depicted in Figure 4.48. At a production of 1000 units, with $1,000,000 in total costs and $1,200,000 in total revenue, the return on investment is

ROI = (TR − TC)/TC = 0.20

or 20%. We may be constrained to sell at a price that is determined by the market situation that results when there are a large number of other sellers who are selling the product at a given price. Often, there must be a consumer demand function for the product, which suggests that the demand for the product will decrease as the price increases. There are many texts that discuss microeconomic analysis [81], including supply-demand relations for issues such as this.

Costs may also be associated with materials, parts, and supplies, including subsystems incorporated into a product but which are procured elsewhere. There are a number of ways in which the fixed costs and the variable costs may be defined in terms of how the total costs change as a function of production levels, and there are a variety of other names used to denote these approaches. The results of considering various approaches to overhead illustrate that overhead may be determined in different ways, and that the concept of cost needs to be very carefully considered and explained in accordance with these ways. The answer to the question "What is the cost?" depends very much on the judgment and decision problem being considered and how that decision issue has been framed. These matters are discussed in detail in more advanced efforts concerning systems management.
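The single-product figures are consistent with a fixed cost near $240,000 and a variable cost near $760 per unit; those two numbers are our inference from the stated totals, not values given in the text. A sketch:

```python
PRICE = 1_200.0      # market price per unit (from the example)
FIXED = 240_000.0    # assumed fixed cost, inferred from the stated totals
VARIABLE = 760.0     # assumed variable cost per unit, inferred likewise

def total_revenue(q):
    return PRICE * q                    # TR = P * Q

def total_cost(q):
    return FIXED + VARIABLE * q         # TC = fixed cost + variable cost

def profit(q):
    return total_revenue(q) - total_cost(q)

# At 1000 units: TR = $1,200,000, TC = $1,000,000, profit = $200,000,
# and ROI = (TR - TC)/TC = 0.20.  The break-even volume is
# FIXED/(PRICE - VARIABLE), about 545 units, matching the text.
break_even = FIXED / (PRICE - VARIABLE)
```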
The total profit is then the difference between the total revenue and the production cost. If there is no special product differentiation associated with our product, then any competitive advantage [82] depends upon being a low-cost producer and selling at, or perhaps somewhat below, the market price.

There are a number of approaches that you may use to estimate effort or cost rates; the more analytic of these are generally more useful for development than they would be for definition. While this is easily stated, you will need to obtain curves such as these for realistic situations in efforts that involve technical direction and systems management. The fringe benefits associated with direct labor would be included in the indirect overhead, which is expressed relative to some cost allocation base, such as direct labor dollars or, in the case of a service, customers served. Among the many types of costs that can be defined are the following:

• Fixed costs and variable costs
• Direct costs and indirect costs
• Functional and nonfunctional costs
• Recurring and nonrecurring costs

The life-cycle phases may be used as the basis for a work breakdown structure (WBS) or a cost breakdown structure (CBS). Within a WBS, level 1 deals with overall system or program management. Level 2 is associated with the various projects and associated activities that must be completed to satisfy the program requirements; program budgets are generally prepared at the project levels. Level 3 represents those activities, functions, and subsystem components that are directly subordinate to the level 2 projects, and it is at this level that various detailed systems engineering efforts are described.

It is helpful to think of three different costs: could cost, should cost, and would cost. Could cost is the lowest reasonable cost estimate to bring all the essential functional features of a system to an operational condition; "could cost" is the cost that will result if no potential risks materialize and all nonfunctional value-adding costs are avoided.
While this is easily stated, it is very difficult to estimate each of these costs, and each estimate, if it is to be useful, must be made before a system has been produced. "Would cost" is the cost that will result if risks of functional operationalization materialize. Another approach to costs is determining the cost required to achieve functional worth — that is, to fulfill all functional requirements that have been established for the system. Some of these functions, while desirable, are not absolutely necessary for proper system functioning. So, while there is an answer to the cost question, it is perhaps better to say that there may be many answers, and each of them may be correct. The extent to which a given approach to costing has value depends entirely on the purpose for which the information obtained is to be used. There are also a number of approaches to cost as pricing strategies; these include full-cost pricing, investment pricing, and promotional pricing.

In general, we would expect that the total acquisition cost for a system would represent the initial investment cost to the customer for the system, along with the costs of the support items that are necessary for it to be initially deployed.

A work breakdown structure is mandated for proposals and contracts for work with the federal government; Military Standard STD 881A governs this, for depicting cost element structures. To initiate a WBS, the particular subsystem, or system, level for which a WBS is to be determined is selected, and the initial system concept is displayed as a number of component development issues. Three levels are defined in the DoD literature. Level 1 represents the entire program scope of work for the system to be delivered to the customer; program schedules are usually prepared at this level. We may also identify a fourth level for the various life-cycle phases associated with the overall acquisition effort. The MIL-STD 881A does not cover extended deployment efforts, but the WBS approach could easily be extended to cover these; the extension is simple from a conceptual perspective.

Figure 4.49: Work breakdown structure of subsystem elements in an aircraft system.
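A program's WBS cost is simply the rollup of its lowest-level elements. A minimal three-level sketch (element names and dollar figures are hypothetical, not taken from MIL-STD 881A):

```python
# Level 1: program; level 2: projects; level 3: subordinate activities ($K).
wbs = {
    "aircraft-program": {
        "air-vehicle": {"airframe": 5200, "propulsion": 3100, "avionics": 2400},
        "training": {"equipment": 400, "services": 250},
        "system-test": {"flight-test": 900, "mockups": 150},
    }
}

def rollup(node):
    """Total cost beneath a WBS node: sum the leaves recursively."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))   # 12400 ($K) for the whole program
```

Budgets prepared at the project (level 2) nodes roll up consistently to the program total, which is what makes a completed WBS useful for performance measurement.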
4.5.4 Work Breakdown Structure and Cost Breakdown Structure

As we have often noted, there are three fundamental phases in a systems engineering life cycle:

• System definition
• System development
• System deployment

In general, it is necessary to estimate the costs of bringing a system to operational readiness. This would be the aggregate cost of designing, developing, and otherwise producing the system — that is, the resulting cost of the first two phases of the systems life cycle. Quite obviously, if this cost estimate is to be useful, it must be made before the system has been produced, and it is not so easily measured. A major difficulty is that there are essential and primary functions that a system must fulfill, as well as ancillary and secondary functions that, while desirable, are not absolutely necessary for proper system functioning. After the functional worth of a system has been established in terms of operational effectiveness, three costs should be estimated:

1. Could cost — the lowest reasonable cost estimate to bring all the essential functional features of a system to an operational condition.
2. Should cost — the most likely cost to bring a system into a condition of operational readiness. The "should cost" estimate is the most likely cost that results from meeting all essential functional requirements in a timely manner.
3. Would cost — the highest cost estimate that might have to be paid for the operational system if significant difficulties and risks eventuate.

Each of these types of costs — minimum reasonable, expected, and maximum reasonable — should be estimated, because this provides a valuable estimate not only of the anticipated program costs but also of the amount of divergence from this cost that might possibly occur. There are many other cost notions as well, such as incremental and marginal costs.

Figure 4.49 illustrates a hypothetical structure for an aircraft systems engineering acquisition effort. Program authorization occurs at level 1.

To compare alternatives, we need clearly defined alternatives and perhaps some general knowledge of the impacts of each alternative. Then the costs, as well as the benefits, of each alternative are identified, and they are expressed in common economic units whenever this is possible.
In many cases a WBS is provided as a part of a request for proposal (RFP), and proposers are expected to provide this detailed costing information as a portion of their contract bidding information. A completed WBS displays the complete system acquisition effort and the costs for major activities associated with the effort in terms of schedule and costs; it provides for subsequent performance measurement and assists in the identification of risk issues. It is at the project levels that detailed WBS estimates are obtained. Details of WBS- and CBS-type approaches may be found in references 83-87.

The following major steps are generally carried out in a cost-benefit analysis:

1. Issue Formulation
   1.1 Problem definition. Some bounding of the issue is accomplished in terms of constraints and alterables, the time horizon for and scope of the study, and a list of affected individuals or groups.
   1.2 Value system design. Objectives to be achieved by projects are identified, and measures for different types of costs and benefits are specified. Identification of the costs (negative impacts) and benefits (positive impacts), or effectiveness measures, for each alternative is accomplished here.
   1.3 System synthesis. Alternative projects are generated and defined.

2. Issue Analysis. The costs and benefits are next quantified, if possible. Conversion factors are developed to express different types of costs or benefits in the same economic units; for example, one of the benefits of a proposed highway project might be reduced travel time between two cities. Discounting is used to compare costs and/or benefits that occur at different times, and present worth is the usual discount criterion of choice. Overall performance measures are computed and compared, along with the present value of costs or benefits of each alternative project. Alternatively, effectiveness and costs may be used if a cost-effectiveness analysis is desired. In addition to this quantitative analysis, an account is made of qualitative impacts, such as social, aesthetic, and environmental effects.

These efforts are accomplished using techniques specifically suited to issue formulation and issue analysis. The results of this formulation of the issue include a number of clearly defined alternatives and the set of multiattributed benefits or effectiveness values associated with each alternative project. Often, the terms benefit and effectiveness are used as if they were synonyms; effectiveness is a multiattribute term used when the consequences of project implementation are not reduced to dollar terms. Equity considerations regarding the distribution of costs and benefits across various societal groups may also be considered.
The cost-benefit analysis method is based on the principle that a proposed economic condition is superior to the present state if the total benefits of a proposed project exceed the total costs. In order to make nonmonetary benefits comparable to monetary costs, conversion factors are developed; determination of such conversion factors can be a sensitive issue, because the worth of various attributes can be dramatically different.

There are a number of related questions that, when answered, provide valuable input for estimating a WBS and costs. One DoD publication describes these in terms of organizational questions, accounting questions, planning and budgeting questions, analysis questions, and program and project revision questions. There are many components that comprise a WBS or CBS for a systems engineering life cycle, and a completed WBS then becomes a part of the statement of work (SOW) for the contractor selected for the program effort.

Among the results of a cost-benefit or cost-effectiveness analysis are the following:

1. A list of the costs and benefits for each project, over time, together with tables containing a detailed explanation of the costs and benefits.
2. Overall performance measures, such as the total costs and benefits, computed for each of the alternative projects, together with an accounting of intangible and secondary impacts, such as social or environmental or even aesthetic effects.

It is vitally important to develop cost (including effort and schedule) estimation models that can be used very early in the systems engineering life-cycle process. Cost-effectiveness analysis is accomplished through use of a very similar approach.
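The highway example — converting reduced travel time into economic units through a conversion factor and then discounting — might be sketched as follows; every figure here is hypothetical:

```python
HOURS_SAVED_PER_YEAR = 120_000   # hypothetical annual travel-time saving
VALUE_PER_HOUR = 15.0            # conversion factor ($/hour): the sensitive input
RATE = 0.07                      # discount rate
YEARS = 20                       # study time horizon

annual_benefit = HOURS_SAVED_PER_YEAR * VALUE_PER_HOUR
present_benefit = sum(annual_benefit / (1 + RATE) ** n for n in range(1, YEARS + 1))

# A project whose discounted cost is below present_benefit passes the
# benefits-exceed-costs test; halving VALUE_PER_HOUR halves the benefit,
# which is why determining the conversion factor is a sensitive issue.
```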
and alternatives must be generated and defined carefully. we have to determine how many dollars per time unit are gained by the reduction of the travel time. They are based on the premise that a choice should be made by comparing the benefits or effectiveness of each alternative with the monetary costs of fuIl implementation of the alternative. Benefit is an economic term that is generaIly understood to be a monetary unit. and the responses to these provide valuable input for estimating WBS and costs. are computed for each alternative. An accounting of intangible and secondary costs. There are many components that comprise a WBS or CBS for a systems engineering life cyele. It then becomes a part of the statement ofwork (SOW) for the contractor selected for the program effort. such as the total costs and benefits. 2.5 Cost-Benefit and Cost-Effectiveness Analysis Cost-benefit and cost-effectiveness analysis are methods used by systems engineers and others to aid decision makers in the interpretation and compari­ son. for each of the alternative projects. Overall performance measures. objectives for the project must be identified. We will describe an approach for CBA and then indicate modifications to adapt it to COEA. provide the basis for reliable cost estimation for the work break­ down structure. of each alternative project. tables containing a detailed explanation of the costs and benefits. Comparisons of projects may also be made with respect to two different quantified units. It is vitally ímportant to develop cost (ineluding effort and schedule) estimatíon models that can be used very early in the systems engineeríng life­ cyele process. 8. Cost-effectiveness analysisis accomplished through use of a very similar approach. and schedule are relatively new endeavors. effectiveness. Use cost est~mates to evaluate project personnel on their performance. Assign the inítíal cost estimatíon task to the final system developers. 7. 
such as dollars (of cost) and human lives (for benefits). Planning for and communication of results is the last effort in a CBA. The purpose of these models are to predict product or servíce life-cyele costs and efforts as soon as possible. We may maximize the cost-benefit ratio. It is important that all assumptions made in the study are elearly stated in the report. aesthetic aspects. Delay finalizíng the initial cost estimates until the end of a thorough study of the conceptual system designo 3.5. ínto monetary benefit units. 6. 4. Decision making is accomplished through selection of a preferred ­ project or alternative course of action. Whíle these guidelines were establíshed specifically for ínformatíon system software. and relevant constraints and assumptions used to bound the CBA. models have litde utility íf the mixture of personnel experience or expertíse changes or if the development organizatíon attempts to develop a new type of system or a user organization specífies a new system type with which the systems engineering organization is very unfamíliar. and complex formulas. and references 88-96 are particularly recommended. We also made sorne comments concerning valuation. these models are validated from databases of cost factors that reftect characteristics of a particular systems engineering organization. 2. The impacts that cannot easíly be quantified are assessed for each alternative project. measured in human lives saved. safety. Interpretation 3. the discount rates that have been us~d. Management should carefully study and appraise cost estimates. The report should be especially clear with respect to costs and benefits that have been ineluded in the study. 5. 4. Carefully monitor the progress of the project under development. approaches used to quantify costs and benefits.1. and simple arithmetic formulas rather than guessing. Further complicating economic benefit evaluation are equity con­ siderations. and equity considerations. 
We are able to use the resulting effectiveness indices in conjunction with cost analysis to assist in making the tradeoffs between ·quantitative and qualitative attributes of the alternative 291 projects. An interesting study of cost estimatíon [97] provides níne guidelines for cost estimatíon. Methods such as decision analysis and multiattribute utility theory can be used to evaluate effectiveness. consíder the dífficultíes in volved in transformíng additional safety benefits of a proposed project. and other important issues. We may select the project that maximizes benefits for a given costo We may select the project that has minimum costs for a gíven level ofbenefits. which may require the costs and benefits of the project to be allocated in different amounts to different groups. Modeling and estimatíon of costs. there ís every reason to believe that they have more general applicabilíty. and net value analysis is then used to compare benefits and costs.2. 3. such as social and environmental impacts. íntuítion. We may maximize the net benefits. or benefit minus cost. These usually inelude intangible and secondary or indirect effects. Do not rely on cost estimation software for an accurate estimate. as well as to pro vide informatíon about the costs and effectíveness of various approaches to the life cyele and íts management. There are a variety of criteria that can be used. Refinement of the alterna ti ves is accomplished through more de­ tailed quantitative analysis of costs and benefits. Anticípate and control user changes to the system functíonality and purpose. This often takes the form of a report on both the quantitative and the qualitative parts of the study. This study identifies the use of cost estímates for selecting projects . costs and benefits that have been exeluded from the study. legal considerations. standards. Rely on documented facts. 3. Evaluate progress on the project under development through use of índependent auditors. 
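The discounting and comparison arithmetic described above can be sketched in a few lines of code. The following is only an illustration of the criteria, not a method from the text; the project data, discount rate, and function names are all hypothetical.

```python
# Minimal sketch of present-worth comparison of alternative projects.
# All project data here are invented, for illustration only.
def present_worth(cash_flows, rate):
    """Discount a list of yearly amounts (year 0 first) to present worth."""
    return sum(a / (1.0 + rate) ** t for t, a in enumerate(cash_flows))

def evaluate(benefits, costs, rate):
    """Return two common CBA performance measures for one alternative."""
    pw_b = present_worth(benefits, rate)
    pw_c = present_worth(costs, rate)
    return {"net_benefit": pw_b - pw_c, "bc_ratio": pw_b / pw_c}

# Rank two hypothetical alternatives by net present benefit.
projects = {
    "A": ([0, 60, 60], [100, 0, 0]),   # (benefit stream, cost stream)
    "B": ([0, 40, 90], [110, 0, 0]),
}
ranked = sorted(projects,
                key=lambda p: evaluate(*projects[p], 0.10)["net_benefit"],
                reverse=True)
```

Ranking by net present benefit is shown; the same structure supports the other criteria, such as the benefit-cost ratio.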
Which of these criteria is the most appropriate depends, of course, upon the number of alternatives being considered and upon other characteristics of the issue at hand. The report may include a ranking or prioritization of the alternative projects, or a recommended course of action.

4.5.6 Summary

In this section we have examined a number of issues surrounding economic systems analysis and the costing of products and services. Costs and benefits are expressed in common economic units insofar as possible, and economic discounting is used to convert costs and benefits occurring at various times to values at the same time. We also made some comments concerning valuation. There are a number of studies available addressing this important subject.

4.6 RELIABILITY, AVAILABILITY, MAINTAINABILITY, AND SUPPORTABILITY MODELS

As anyone who has driven a car, traveled by plane, or worked with a computer can appreciate, people worry about system failures. Failures cost time and money and can even be dangerous. Failures can be sudden and catastrophic, or gradual, in their appearance. As you know, reliability is a quantitative term that depends on describing failures using probability concepts.
Associated with cost estimation, we suggest, is the evaluation of the costs of quality, the benefits of quality, and the costs of poor quality. From a life-cycle viewpoint, much of the cost of any system depends on how frequently failures occur, how likely they are to occur, how difficult and expensive it is to prevent or repair failures, and how many resources must be dedicated to keep the system working. Reliability, availability, maintainability, and supportability (RAMS) are quantitative concepts that you can use to model and analyze these concerns. There are human reliability concerns as well as machine reliability concerns.

A system "fails" when it violates one or more of its technical specifications. There are many alternative words used for failure or fault; among these are error, defect, bug, miss, and slip. Sometimes these terms are used interchangeably, and sometimes they are used to imply different types of failings. We can think of two different types of failure:

• Catastrophic failure, which refers to a failure that is sudden, complete, and generally irreversible
• Drift failure, or degradation failure, which refers to a failure associated with a gradual change in the system response over time, usually due to component drift or degradation, such that the system no longer meets specifications after some point in time

Each of these is important. Blanchard and Fabrycky [98] note that an acceptable definition of reliability is based on four important ingredients:

• Probability
• Satisfactory performance
• Time
• Prescribed operating conditions, or technical specifications, as contained in the system technical requirements specifications part of system definition

Formally, the reliability of a system, subsystem, or component may be defined as the probability that the system will perform in a satisfactory manner for a specified period of time when used under prescribed operating conditions. Satisfactory performance is determined on the basis of the requirements for the system, as is time. Hence, we again see the importance of the system requirements and formulation efforts in systems engineering.

An understanding of probability concepts is basic to reliability modeling. From a mathematical perspective, probability is a quantitative term that represents the long-term chance of an event happening in a number of trials. In basketball, for example, coaches rate players on their ability to successfully make free-throw shots using probability basics. A 90% shooter will, over many attempts, average nine successful shots for every ten attempted; the relative frequency of occurrence of successful free throws is 0.9, or 90%. This same idea is used in the reliability definition. If a component of a system is 0.8, or 80%, reliable for at least 100 hours under standard operating conditions, then, over the course of many trial runs, identical components will operate successfully for at least 100 hours eight times for every ten attempts. You also know that these components fail, on average, two times before 100 hours for every ten attempts. Every time you put one of these components into operation, you don't know when that specific component will fail; you only know that, ideally, components will operate successfully more often than they will fail. From an individual component perspective, reliability is the probability that the component will not fail during the specified time period of operation; there is still a 20% chance that any given component will fail sometime before its first 100 hours of operation.

Reliability also deteriorates with time and is very dependent on the operational environment. Extremes of hot or cold weather, exposure to sand, dust, or water, and damage from shock or vibration can all adversely impact reliability, both before and during operation.

A very important parameter in reliability models is the failure rate. The failure rate is the fraction of components failing per unit time. Formally, the failure rate, denoted by lambda, is defined by

λ = f/T     (4.6.1)

where f is the number of system, subsystem, or component failures observed in some specified operating time T.
Coaches simply count the number of successful shots out of the total number of shots attempted to compute the free-throw percentage, or probability of making free-throw shots. But how is the failure rate determined? Gathering field data from maintenance and service records can help determine the failure rate. Another way is to conduct tests before fielding in order to calculate the failure rate. You can expect that only a sample from some total number of identical components will be tested, because it would be wasteful to wear out more items than necessary. Still, for most components and systems, you can expect to observe that failures occur at different points of time during operation. The more observed failure data that are available, the more accurately you can select the type of distribution that best represents how the population of identical components performs. The Poisson, binomial, normal, exponential, and other probability density functions are often used, depending upon the theoretical basis for one of these being an appropriate failure and reliability density function.

Consider the situation shown in Figure 4.50, the traditionally assumed "bathtub" time dependency of the failure rate. It consists of a decreasing failure rate region due to early failures, a constant failure rate region where failures are due to chance, and an increasing failure rate region due to aging; the exponential failure density function applies in the constant-rate region.

Figure 4.50 Traditionally assumed "bathtub" time-dependency of the failure rate.

Initially, there is a large failure rate, and this is corrected through replacing initially failed system components. This is sometimes called the period of "infant mortality," and failures here are usually those associated with poor quality. Following this period, the system enters a "useful life" period where the failure rate is essentially constant. Failures during this useful life period are generally random and are due to sudden catastrophic events, such as electrical failures or storms.

The distribution of failures of a system as a function of time is modeled by a probability density function f(t); by definition, the entire area under this curve is equal to unity when integrated over all time. If a system has a constant failure rate λ, the exponential density function

f(t) = λe^(−λt),  with f(0) = λ     (4.6.2)

is used. The mean of this distribution is θ = 1/λ, which we shall identify below with the mean time between failures:

θ = 1/λ = MTBF     (4.6.3)

The exponential is a reasonable failure density function in that it assumes that the rate of change of the failure density is proportional to the density itself, or df(t)/dt = −λf(t). The parameter λ in the above relations is usually called the failure rate or, sometimes, the hazard rate.

Now, suppose that you examine some time interval of interest, called the mission time or operating time, from t = 0 to t = T. The probability that a failure will occur by time T equals the ratio of the area under the failure density curve between 0 and T to the total area under the curve. Because the total area under the curve equals 1, the area under the curve from t = 0 to t = T is the probability that the system will fail in time T. This probability is the failure function, and it is usually called the unreliability, F(t). The event-probability relations for failure and reliability at time t are given by

F(t) = ∫ from 0 to t of f(t) dt = 1 − e^(−λt)     (4.6.4)

R(t) = ∫ from t to ∞ of f(t) dt = e^(−λt)     (4.6.5)

If a system has a constant failure rate, then the reliability of that system at its mean life, t = θ = 1/λ, is R(t = 1/λ) = e^(−1) = 0.37. This exponential failure distribution is generally valid for systems, such as modern VLSI systems, that do not show significant deterioration over time. The probability of such a system failing in a small time interval is then independent of whether the system has been in operation for a long or a short period of time.
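As a brief sketch (not from the text; the failure rate value is merely illustrative), the exponential reliability and unreliability functions can be evaluated directly:

```python
import math

# Sketch: exponential reliability R(t) = exp(-lam*t) and unreliability
# F(t) = 1 - R(t). lam is the constant failure rate per hour (hypothetical).
def reliability(lam, t):
    return math.exp(-lam * t)

def unreliability(lam, t):
    return 1.0 - reliability(lam, t)

# At the mean life t = 1/lam, reliability drops to e^-1, about 0.37.
r_mean_life = reliability(0.001, 1000.0)
```

This reproduces the observation above that reliability at the mean life is about 0.37, independent of the particular value of λ.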
In practice, you will usually have to approximate the true probability distribution of failures by using a standard known distribution that fits well with the sample data. We can define the inverse of the failure rate as the mean life, θ, or the mean time to failure (MTTF), or, as it is more commonly called for repairable systems, the mean time between failures (MTBF). For the constant failure rate case,

λ = (number of failures)/(total operating hours) = 1/MTBF

Clearly, we could also express the failure rate, or the MTBF, in other units, such as the number of failures per thousand operating hours.

Because a system at any time t has either failed or not failed, we have

R(t) + F(t) = 1     (4.6.6)

and because R(t) = ∫ from t to ∞ of f(t) dt = 1 − ∫ from 0 to t of f(t) dt, you can differentiate to find that

dR(t)/dt = −f(t)     (4.6.7)

You can derive an expression for the exact, or instantaneous, failure rate λ(t) rather easily. Consider the situation of N identical components whose failures over time are represented by some probability density function f(t). If N_t is the number of components operating at time t, then the number of components that fail in a time increment Δt is N_t − N_(t+Δt). The fraction of components failing in the time increment of interest is this number divided by the number of operating components at the start of the time period, or (N_t − N_(t+Δt))/N_t; to calculate the fraction of components failing per unit time, you just divide by the time period, to get the expression (N_t − N_(t+Δt))/(N_t Δt). But you want this in reliability terms, so divide the numerator and denominator by N, the total number of components. Because R(t) = N_t/N, the average fraction of components failing per unit time is −[R(t + Δt) − R(t)]/[R(t) Δt]. Because you want the instantaneous failure rate, you take the limit of this average failure rate as the increment of time approaches zero:

λ(t) = lim as Δt → 0 of −[R(t + Δt) − R(t)]/[R(t) Δt]     (4.6.8)

You recognize the definition of a derivative in the expression above, and this allows you to write

λ(t) = H(t) = −[1/R(t)] dR(t)/dt     (4.6.9)

This quantity, usually called the hazard function H(t), is the instantaneous failure rate of the system; it is a measure of the fractional number of system failures per unit time. Substituting equation (4.6.7), you now have a very useful expression for the instantaneous failure rate in terms of any probability density function and its associated reliability function:

H(t) = λ(t) = f(t)/R(t)     (4.6.10)

You can express the failure rate in a number of different units, depending on the components or system you are analyzing. For example, you can use failures per component-hour for a mechanical system, errors per line of code for software, errors per page of documentation, errors per job instruction, or failures per cycle of operation.

Another important parameter in reliability modeling is the mean time to failure, or MTTF. The MTTF is simply the expected value of t for the probability density function f(t). Using the definition of expectation,

MTTF = E[t] = ∫ from 0 to ∞ of t f(t) dt

For a constant failure rate, the MTTF is the reciprocal of the failure rate, so that MTTF = MTBF = 1/λ.

Example. In your professional effort as a systems engineer, you manage an electronics assembly plant that uses a large number of a particular component. When a component fails, it is repaired and the system is again functional. A new supply source can potentially provide these components at a reduced price. The new source advertises extensive quality control and initial testing that, they claim, yield a low constant failure rate of 0.00067 failures per component hour, with an MTBF of 1500 hours. To compare the new source with your current supplier, you ask your plant operators to collect reliability data on a sample of 100 of the current components in use, for 1000 hours of operation. Because the manufacturer of the current components also accomplishes debugging before delivery to your plant, you decide to assume that the current components fail randomly in time, due to chance, at some constant failure rate that can be approximated by the exponential distribution. You want to answer the following two questions relative to the components you are currently having tested:

1. What is their MTBF?
2. What reliability do they exhibit for a mission duration of 70 hours?
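As a numerical sketch of the MTTF expectation integral above (hypothetical failure rate, simple midpoint rule; the function name and step sizes are our own, not from the text):

```python
import math

# Sketch: numerically check MTTF = integral of t*f(t) dt = 1/lam for the
# exponential density f(t) = lam*exp(-lam*t), via a midpoint rule.
def mttf_numeric(lam, t_max=20000.0, n=200000):
    dt = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt                      # midpoint of the slice
        total += t * lam * math.exp(-lam * t) * dt
    return total
```

Truncating the integral at twenty mean lives leaves only a negligible tail, so the numeric result agrees closely with 1/λ.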
Solution a. Using the formulas developed for reliability, failure rate, and mean time between failures, and assuming the exponential distribution f(t) = λe^(−λt), you derive the following results. You calculate the reliability from

R(t) = ∫ from t to ∞ of λe^(−λt) dt = [−e^(−λt)] evaluated from t to ∞ = e^(−λt)

From these expressions you can easily show that the hazard, or failure, rate is constant for the exponential failure density function:

λ(t) = f(t)/R(t) = λe^(−λt)/e^(−λt) = λ

And for the mean time to failure you obtain

MTTF = ∫ from 0 to ∞ of t λe^(−λt) dt = 1/λ

As your plant operators collect the test data, you create the spreadsheet shown in Figure 4.51. The data were collected at discrete time intervals of 100 hours, and the entire previous 100-hour period is considered as part of the successful operation time for any newly failed component. At each check time, the estimated failure rate is the total number of failures divided by the total component hours of operation, and the estimated MTBF is its reciprocal.

Figure 4.51 Reliability data and spreadsheet calculations for exponential failures.

Time of Check (Hours) | New Failures | Working Components | Hours of Operation of Failed Components | Hours of Operation of Working Components | Total Component Hours of Operation | Total Failures | Estimated Failures per Component Hour | MTBF (Hours)
0 | 0 | 100 | 0 | 0 | 0 | 0 | - | -
100 | 10 | 90 | 1000 | 9000 | 10000 | 10 | 0.00100 | 1000.00
200 | 9 | 81 | 1800 | 16200 | 18000 | 19 | 0.00106 | 947.37
300 | 6 | 75 | 1800 | 22500 | 24300 | 25 | 0.00103 | 972.00
400 | 7 | 68 | 2800 | 27200 | 30000 | 32 | 0.00107 | 937.50
500 | 5 | 63 | 2500 | 31500 | 34000 | 37 | 0.00109 | 918.92
600 | 3 | 60 | 1800 | 36000 | 37800 | 40 | 0.00106 | 945.00
700 | 2 | 58 | 1400 | 40600 | 42000 | 42 | 0.00100 | 1000.00
800 | 4 | 54 | 3200 | 43200 | 46400 | 46 | 0.00099 | 1008.70
900 | 2 | 52 | 1800 | 46800 | 48600 | 48 | 0.00099 | 1012.50
1000 | 4 | 48 | 4000 | 48000 | 52000 | 52 | 0.00100 | 1000.00

The estimated failure rate settles at about 0.00100 failures per component hour, for an MTBF of 1000 hours. Thus, the MTBF for the currently used components is only 1000 hours, which is less than that advertised by the new supply source (MTBF = 1500 hours).

Solution b. Because the failures are exponentially distributed, you can determine the reliability for a mission duration of 70 hours, where you assume a constant failure rate of 0.0010 failures per component hour for the current components:

R = e^(−λt) = e^(−(0.0010)(70)) = e^(−0.07) = 0.932

For the new supplier, the advertised constant failure rate is 0.00067 failures per component hour, so that for the same 70-hour mission

R = e^(−λt) = e^(−(0.00067)(70)) = e^(−0.0469) = 0.954

Therefore, there is a 95.4% chance that any individual component from the new source will survive a mission duration of 70 hours, and you decide to change to the new supplier to improve reliability and also to reduce costs.

It turns out that this same reliability result also holds for any subsequent 70-hour period of operation, as long as the failure rate remains exponentially distributed. This result is reasonable because, in the useful-life region, where failures are due only to chance and a constant failure rate applies, reliability depends only on the mission duration. From this result, you reach the important conclusion that useful-life reliability is independent of component age, so long as wearout does not begin; if a component lasts throughout one 70-hour mission, it will also have a 95.4% probability of surviving the next 70-hour mission.

When a component enters wearout, its failure rate is no longer constant, as it was during useful life. Because a component in wearout usually has an increasing failure rate, reliability in this region can be modeled by using the normal distribution; for any given normal distribution with a mean and standard deviation, you can calculate all the typical reliability formulas for any mission duration that includes wearout considerations. Continuing with our example of components with a chance failure rate of 0.001 failures per component hour, we make further tests of the current components and find that the mean of their wearout distribution is 1000 hours, with a standard deviation of 200 hours, and we ask the following questions:

1. What is the reliability of the component for 100 hours of continuous operation?
2. What is the reliability of the component for 900 hours of continuous operation?
3. What is the reliability of the component for a mission duration of 70 hours, given that it has been operating continuously for 300 hours?

Answers to these questions can be quickly found using a graphical depiction of the three missions, or time periods of interest.
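The spreadsheet arithmetic just described can be sketched in code. The check data below are taken from the example; the function name and structure are our own. Note the spreadsheet's convention: components that fail during an interval are credited with the full elapsed time, while components that failed in earlier intervals are dropped from the totals.

```python
# Sketch of the Figure 4.51 spreadsheet estimate of failure rate and MTBF.
# Each entry is (check time in hours, new failures observed at that check).
checks = [(100, 10), (200, 9), (300, 6), (400, 7), (500, 5),
          (600, 3), (700, 2), (800, 4), (900, 2), (1000, 4)]

def failure_rate_history(checks, n_start=100):
    alive, total_failed, rows = n_start, 0, []
    for t, new_failures in checks:
        # Survivors of earlier intervals (including this interval's
        # failures) are each credited with the full elapsed time t.
        total_hours = alive * t
        total_failed += new_failures
        lam = total_failed / total_hours
        rows.append((t, lam, 1.0 / lam))   # (time, failure rate, MTBF)
        alive -= new_failures
    return rows
```

Running this reproduces the table's final estimate of about 0.00100 failures per component hour, an MTBF of 1000 hours.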
Figure 4.50 illustrated a characteristic failure rate curve that depicts the typical life cycle of a component, consisting of three regions: debugging, a useful life period of constant failure rate due to chance only, and a wearout region of increasing failure rate due to both chance and wearout. Also indicated there are the appropriate failure rate distributions for calculating reliability in each region. Figure 4.52 depicts the three missions of interest: only chance failures apply before wearout begins, and both chance and wearout failures apply thereafter.

Figure 4.52 Graph of reliability mission time lines. Only chance failures apply from t = 0 until wearout begins at t = 300 hours; chance and wearout failures apply from t = 300 hours onward, with the wearout mean at t = 1000 hours.

If the failure data follow the normal distribution, the wearout failure density function is given by

f(t) = [1/(σ√(2π))] exp[−(1/2)((T − M)/σ)²]

where T = time, M = mean wearout time, and σ = standard deviation of the wearout times. The mean and standard deviation can be calculated from the time of each component failure, T_i, and the number of components, N, in the sample:

M = (1/N) Σ T_i
σ = √[Σ (T_i − M)²/(N − 1)]

and this provides a basis for the calculation. We proceed in the following manner. We first note that the standard normal variable Z, which gives the position of T as a number of standard deviations to the left or right of the mean, can be calculated as Z = (T − M)/σ. This value is used to enter a table of values of the cumulative normal distribution, from which the area representing the value of the unreliability, Q, up to any time T can be found. Then the reliability is obtained from

R = 1 − Q

Another convenient way to find the value of the reliability for any time T is to use a spreadsheet's normal distribution function with the appropriate specified parameters. The cell equation for reliability would be R = 1 − NORMDIST(T, M, S, TRUE), where S is the standard deviation and TRUE is a parameter that returns the cumulative distribution function.

Because the wearout failures follow the normal distribution, the shape of the normal curve allows you to obtain the time when the components first enter wearout as

Tw = M − 3.5σ = 1000 − 3.5(200) = 300 hours

Now that we have obtained this information about the wearout failures, we can calculate the answers to questions 1, 2, and 3 as follows.

Solution c. For a mission of 100 continuous hours of operation, none of the mission is during the wearout period. Hence we can just assume a constant failure rate and use the exponential distribution to find the reliability for these 100 hours:

R(100) = e^(−λt) = e^(−(0.001)(100)) = e^(−0.1) = 0.905

Solution d. We now calculate the reliability for a continuous 900 hours of operation, from time t = 0 until t = 900. We note that 600 hours of this mission are under wearout, from the beginning of wearout at t = 300 hours until the end of the mission at t = 900 hours. Therefore we have to consider the reliability due to chance failures over the entire mission and the reliability due to wearout in the wearout region, so that the total reliability is the product of the two failure modes:

R_total = R_chance × R_wearout

First, we find the reliability due to chance for the entire 900 hours of operation:

R_chance(900) = e^(−(0.001)(900)) = e^(−0.9) = 0.4065

Then we find the reliability due to wearout. For T = 900 hours, Z = (900 − 1000)/200 = −0.5, so that Q = 0.3085 and

R_wearout = 1 − 0.3085 = 0.6915

or, in spreadsheet form, R_wearout = 1 − NORMDIST(900, 1000, 200, TRUE) = 0.6915. The total reliability for the 900-hour mission is then

R_total = 0.4065 × 0.6915 = 0.281
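The same chance-plus-wearout arithmetic can be sketched without a spreadsheet by using the error function to evaluate the normal CDF. The parameter values below are the example's (λ = 0.001 per hour, wearout mean 1000 hours, σ = 200 hours); the function names are our own.

```python
import math

# Sketch: chance (exponential) times wearout (normal) reliability.
def norm_cdf(x, mean, sd):
    """Cumulative normal distribution, like NORMDIST(x, mean, sd, TRUE)."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def total_reliability(t, lam=0.001, mean=1000.0, sd=200.0):
    r_chance = math.exp(-lam * t)            # chance failures over [0, t]
    r_wearout = 1.0 - norm_cdf(t, mean, sd)  # ~1.0 until wearout is reached
    return r_chance * r_wearout

# Conditional case: 70-hour mission for a component already 300 hours old.
r_cond = (math.exp(-0.001 * 70)
          * (1.0 - norm_cdf(370.0, 1000.0, 200.0))
          / (1.0 - norm_cdf(300.0, 1000.0, 200.0)))
```

The function reproduces the worked answers: roughly 0.905 for the 100-hour mission and 0.281 for the 900-hour mission.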
Solution e. Finally, we calculate the total reliability, considering both chance and wearout failures, for the third question: given that a component has been in operation for 300 hours, we want to find its reliability for the next 70 hours. In this case, the mission starts just as wearout begins, and the entire mission takes place during wearout. Using conditional probability, because the component must be capable of surviving to the beginning of the time period of interest and because the entire mission, or period of interest, is within wearout, we use the following formula to calculate the total reliability:

R_total = R_c × R_w(T + t)/R_w(T)

where T is the age of the component at the beginning of the period of interest, or mission, t is the length of the period of interest, or mission time, R_c is the reliability associated with chance failures over the mission, and R_w is the reliability associated with wearout failures. We compute the total reliability as

R_total = e^(−(0.001)(70)) × [1 − NORMDIST(370, 1000, 200, TRUE)]/[1 − NORMDIST(300, 1000, 200, TRUE)]
        = 0.932 × (0.9992/0.9998) = 0.9314

Even though this entire mission takes place during wearout, the result is quite reasonable: 70 hours is a relatively short time of operation given the chance failure rate of 0.001 failures per component hour, and the component has only just barely begun to experience the increasing failure rate of the wearout region. We note that these example calculations are for a situation where we have components whose reliability data indicate an exponential distribution of chance failures and a normal distribution of wearout failures. The results will not necessarily apply to other situations.

Advanced Reliability Models. Computing the probabilities of catastrophic failure and of drift failure is needed in order to fully develop a reliability model. The probability of drift failure depends very much on the type of system that is being considered. The approaches that are taken usually involve Monte Carlo methods, or methods based on an assumed m-variate normal density for the failure rates of the individual components. The first approach is a simulation-based approach that requires simulation of many actual systems; the second approach is based on assumed theoretical failure rate densities. For large systems, things become rather involved. Catastrophic failures are usually more of a problem than drift failures, because drift failures can generally be prevented through routine maintenance.

Catastrophic failure analysis is often approached through the use of block diagrams. In these, a system is disaggregated into a number of subsystems, or components, associated with particular functional relationships, and for each block we determine the possible functionality conditions that may exist. Figure 4.53 illustrates several simple systems comprised of elements, or subsystems, in series, in parallel, and in series-parallel form. In the series case, the system is assumed to undergo a (catastrophic) failure if any one of the individual subsystems fails. For all cases, we assume that the failure probability for subsystem j is independent of that for a different subsystem k.

Figure 4.53 Several networks for reliability calculations. For three elements in series, R = R_A R_B R_C; for three elements in parallel, F = F_A F_B F_C and R = 1 − F = 1 − (1 − R_A)(1 − R_B)(1 − R_C).

We easily see that the overall reliability for the series connection case is given by the product of the individual reliabilities,

R(t) = Π from i = 1 to N of R_i(t)     (4.6.11)

For the particular case of exponential failure probabilities and constant failure rates λ_i, this becomes

R(t) = e^(−t Σ from i = 1 to N of λ_i)     (4.6.12)

For the parallel case, the overall system fails only if all of the subsystems fail. We easily obtain this result when we recall that the failure probability cumulative distribution function, or unreliability, of subsystem i is F_i(t) = 1 − R_i(t), so that the failure, or unreliability, distribution function for a system of N parallel elements is given by

F(t) = Π from i = 1 to N of F_i(t) = Π from i = 1 to N of [1 − R_i(t)]     (4.6.13)

and the reliability is R(t) = 1 − F(t). For N identical parallel elements, each with constant failure rate λ, this is

R(t) = 1 − [1 − e^(−λt)]^N     (4.6.14)

Many other configurations can be established. Often, for example, a system fails if more than N − K of its N components are not working.
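The series and parallel block-diagram relations can be sketched directly; the reliability values used below are hypothetical.

```python
import math

# Sketch of the Figure 4.53 relations: series blocks multiply
# reliabilities, parallel blocks multiply unreliabilities.
def series(rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    f = 1.0
    for r in rs:
        f *= (1.0 - r)   # all elements must fail for the system to fail
    return 1.0 - f

# Exponential series case: product of e^(-lam_i t) is e^(-t * sum(lam_i)).
r_series_exp = series([math.exp(-0.001 * 50), math.exp(-0.002 * 50)])
```

For exponential subsystems, the series product reproduces the constant-rate result: the system failure rate is the sum of the subsystem failure rates.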
For the parallel case, we have so far assumed that all parallel elements are initially armed, or energized. The failure, or unreliability, distribution function for a system of N parallel elements is given by

F(t) = PROD(i=1..N) F_i(t)    (4.6.17)

where we again assume that the subsystems have independent failure probabilities. Often, however, a single system is switched on only after the then operating system develops a fault. The overall system fails if the first system fails at time t1, having been started at time 0; the second system fails at time t2, having been started at time t1; and so forth, until the last system failure, at time tN, occurs at time t. A relatively good source of reliability information is reference [99].

When systems can be repaired, reliability alone does not tell the whole story, and we turn to availability. It turns out that there are at least three types of availability. Inherent availability refers to the probability that a system, when used under prescribed operating conditions in an ideal environment, will operate satisfactorily according to specifications up to some point in time. Preventive or scheduled maintenance is not included, and the mean down time (MDT) is just the MTTR. The inherent availability can be written as

A_I = MTBF/(MTBF + MTTR)

where MTBF is the mean time between failures and MTTR is the mean time to repair.
When a component or system fails, it is then subject to failure detection, diagnosis, and correction in order to restore the system to operational functionality. This follows the detection, diagnosis, and correction trilogy generally used for risk planning and error correction. Achieved availability represents the probability that the system, when used under prescribed operating conditions in an ideal environment, will operate satisfactorily according to specifications up to some point in time. Preventive or scheduled maintenance is included in achieved availability, and the achieved MDT is the MTTR plus the planned preventive or scheduled maintenance time. The achieved availability can be written as

A_A = MTBM/(MTBM + MAMT)

where MTBM is the mean time between maintenance and MAMT is the mean active maintenance time. Operational availability is the probability that a system, when used under prescribed operating conditions in an actual environment, will operate satisfactorily according to specifications when it is called upon. Here the mean down time is the MTTR plus the planned maintenance time plus logistics time, and the operational availability can be written as

A_O = MTBM/(MTBM + MMDT)

where MMDT is the mean maintenance down time. This leads us to consider systems that have a failure density, or rate, lambda and a repair density, or rate, mu. If we assume that these are constant in mean value and that the appropriate densities are exponential, then we can show that the event probabilities for the system being operational at time t and being failed at time t are given by the expressions

A(t) = mu/(lambda + mu) + [lambda/(lambda + mu)] e^(-(lambda + mu)t)    (4.6.13)
F(t) = [lambda/(lambda + mu)] [1 - e^(-(lambda + mu)t)]    (4.6.14)

This is really just the availability of a composite system that is comprised of a single system and a single repair facility. For example, we may consider a parallel standby system in which subsystem 2 is only switched in if system 1 fails, system 3 is only switched in if system 2 fails, and so forth. We can show that, for all N, the reliability of such a switched arrangement is always greater than the reliability of the redundant parallel structure, as is intuitively obvious because we only switch one system on after the lower-order system has failed; you will find it interesting to derive this result. We could obtain similar results for two subsystems and a single repair facility, or for two subsystems with two repair facilities, and so forth; as you might well imagine, things become rather involved. These issues form a part of the mathematical theory of operations research and have been explored for quite some time now. Many other configurations can be established.
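A quick numerical sketch of the three availability measures; all of the time figures below are hypothetical illustrations:

```python
def inherent_availability(mtbf, mttr):
    # A_I = MTBF / (MTBF + MTTR): ideal environment, corrective maintenance only.
    return mtbf / (mtbf + mttr)

def achieved_availability(mtbm, mamt):
    # A_A = MTBM / (MTBM + MAMT): preventive maintenance included.
    return mtbm / (mtbm + mamt)

def operational_availability(mtbm, mmdt):
    # A_O = MTBM / (MTBM + MMDT): MMDT also includes logistics delays.
    return mtbm / (mtbm + mmdt)

# Hypothetical figures (hours): failures every 1000 h with 8 h repairs,
# maintenance every 800 h, 12 h active maintenance, 20 h total down time.
a_i = inherent_availability(1000, 8)
a_a = achieved_availability(800, 12)
a_o = operational_availability(800, 20)
```

Because the down-time term grows from MTTR to MAMT to MMDT, the three measures typically satisfy A_O <= A_A <= A_I, as they do for these illustrative numbers.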
Alternately, you could utilize a parallel redundant system where failure will occur if fewer than K components out of N are working or, equivalently, if more than N - K of the N components are not working. For N identical components, each with reliability R(t), we obtain

R_system(t) = SUM(i=K..N) C(N, i) [R(t)]^i [1 - R(t)]^(N - i)    (4.6.16)

as the overall reliability. For the switched standby case, where we have a constant and equal MTBF for each subsystem and an exponential failure density, we can show that the failure and reliability probabilities are given by

R(t) = e^(-lambda t) SUM(i=0..N-1) (lambda t)^i / i!,  F(t) = 1 - R(t)

which, for the case where N = 2, reduces to R(t) = (1 + lambda t) e^(-lambda t). The system fails only if all subsystems fail. The term A(t), the probability that the system is operational at time t, is generally called the system availability (4.6.18).
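Equation (4.6.16) and the standby result can be sketched as follows; the numerical arguments in the checks are illustrative:

```python
from math import comb, exp, factorial

def k_out_of_n_reliability(k, n, r):
    # Eq. (4.6.16): the system works iff at least K of N identical
    # components, each of reliability r, are working.
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def standby_reliability(n, lam, t):
    # Switched standby with equal constant failure rates:
    # R(t) = exp(-lam*t) * sum_{i=0}^{N-1} (lam*t)^i / i!
    return exp(-lam * t) * sum((lam * t)**i / factorial(i) for i in range(n))
```

Setting K = N recovers the series product R^N, and K = 1 recovers the parallel formula, which is a convenient consistency check on (4.6.16).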
In this section we have discussed models for reliability, availability, and maintainability, or RAM. Reliability and availability are important needs that must be addressed in the definition, development, and deployment phases of the system life cycle, and each of the three availability definitions is important. One of the outcomes of considering reliability during the requirements phase of the life cycle is specification of such important parameters as R(t), F(t), MTBF, MTTR, MTBM, MAMT, and MMDT in order to meet operational needs. This involves tradeoffs across reliability and maintainability, and our efforts in the next chapter are very useful in making these tradeoffs. Often, reliability, availability, and maintainability, or quality control, specialists are selected as part of a systems engineering team for just this purpose. A RAM planning effort generally integrates reliability, availability, and maintainability efforts with other operational requirements, and a RAM model may be used throughout the definition, development, and deployment phases of the systems engineering life cycle. This generally requires that we monitor probabilistic uncertainties associated with the various facets of the reliability, availability, and maintainability studies. One of the major difficulties in performing RAM studies is that of obtaining reliable data for the models that may be constructed and in verifying the models themselves.

The objectives of a RAM plan, which may be converted into a set of activities, or requirements, are as follows:
1. To ensure compatibility between operational requirements and RAM requirements
2. To associate RAM requirements with appropriate organizational levels through use of the functional analysis results of the definition phase of the systems life cycle
3. To estimate RAM needs for the various system elements
4. To perform a failure mode and effect analysis (FMEA)
5. To enable detection, diagnosis, and correction of faults as they occur
A failure mode and effect analysis (FMEA), sometimes called a failure mode, effect, and criticality analysis (FMECA), is often performed during the latter phases of the system definition phase in order to identify possible problems that could result from system failure. An FMEA will generally include the following:
1. Identification of each system or component that is likely to fail, or item identification
2. Description of most probable failure modes
3. Identification of possible failure effects
4. Identification of probability of occurrence of failures
5. Identification of criticality of failures
6. Diagnosis of failure causes
7. Identification of corrective actions and preventative measures

Determination of maintenance-resource requirements is a related need. It is often difficult to estimate the MTTR. A useful relation for the expected MTTR of N different systems is given by

E(MTTR) = [SUM(i=1..N) MTTR_i lambda_i] / [SUM(i=1..N) lambda_i]    (4.6.19)

and we can compute this if we know the MTTR of each of the N units. References [33], [98], and [99] provide additional details concerning this important subject. There are a number of other important "ilities," such as usability, that are not discussed here.
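Equation (4.6.19) is a one-liner in code; the repair times and failure rates below are hypothetical:

```python
def expected_mttr(mttrs, failure_rates):
    # Eq. (4.6.19): E(MTTR) = sum(MTTR_i * lambda_i) / sum(lambda_i),
    # i.e., the failure-rate-weighted mean repair time over N systems.
    num = sum(m * lam for m, lam in zip(mttrs, failure_rates))
    return num / sum(failure_rates)

# Hypothetical fleet: three units with different repair times (hours)
# and failure rates (failures per hour).
e_mttr = expected_mttr([2.0, 5.0, 10.0], [0.01, 0.002, 0.001])
```

Note that units that fail more often dominate the average, which is why the weighting uses the failure rates rather than a simple mean of the repair times.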
Part of an integrated logistics support (ILS) effort in the U.S. military concerns development of a system maintenance plan (SMP) as part of an overall systems engineering management plan (SEMP).

4.7 DISCRETE EVENT MODELS, NETWORKS, AND GRAPHS

There are many situations in the modeling and analysis of systems engineering efforts when it is desired to be able to represent the flow of materials, people, or ideas from one "state" to another. Flow is necessarily used in a very broad sense. Interest could be in the assignment or transportation of supplies, inventory, repair parts, or other physical quantities as needed to accomplish some objective. Alternately, concern might be with the flow of people, in some waiting lines or queues, or of telephone calls. In a management decision situation, one might desire to know the structure of some decision situation, or the structure of an organizational plan that might enable accomplishment of some desired goal. In each of these situations, there is need for use of one or more of the many tools that fit under the general "network and graph" category. There are several categories of methods that might be discussed and several taxonomies that might be used to partition these categories into more easily comprehended subunits. One particular taxonomy concerns whether or not there is probabilistic uncertainty associated with the particular state that will follow the present state; this is the taxonomy that will be used here. It allows us to consider two principal subtopics:
• Deterministic graphs, structured modeling, and network flows
• Stochastic graphs and network flows
This topic is not mutually exclusive and independent of topics considered in our other chapters.

As we saw in our earlier discussions in Section 4.3, there are four primitives, or fundamental entities, associated with a net or graph:
1. A set, P, of elements called points
2. A set, R, of elements called directed lines
3. A function, f, whose domain is R and whose range is contained in P
4. A function, s, whose domain is R and whose range is contained in P
The set P contains the elements, or points, and the set R contains the elements, or lines; the functions f and s serve to identify the "first" and "second" points on each line. Two axioms, which are very unrestrictive, are assumed when developing a theory of graphs: it is always assumed that the set of points P is finite and nonempty, and that the set of lines, R, is finite. These axioms make it impossible to describe a network with no points; it is possible, however, to have a set of isolated points with no lines between them. To develop the concept of a digraph, it is necessary to add two additional axioms for completeness: there are no loops, and no two distinct lines are parallel. A digraph thus satisfies the same four primitives and two axioms as a network, and a relation is a network with no parallel lines. While this may seem rather abstract, it is these assumptions that enable development of the important concept of a minimum-edge digraph.

This characterization suggests a binary relation, and it is possible to represent a digraph by a binary matrix A, with the set of points P serving as both the vertical and the horizontal index set. An entry in the matrix A is defined as a_ij = 1 if p_i R p_j and a_ij = 0 otherwise. This matrix A is typically called an adjacency matrix. A simple prototype of a minimum-line adjacency matrix and the associated digraph is shown in Figure 4.54. It is easily shown that if we multiply the matrix (A + I), formed by adding the identity matrix to A, by itself, we obtain a matrix, (A + I)^2, that describes reachability for all paths of length 2 or less.
Matrix representation of digraphs now becomes meaningful, and a very useful one as well. The minimum-line, or minimum-edge, adjacency matrix is the matrix that describes reachability for all paths of length 1. If we add the identity matrix, I, to A, we obtain the matrix that describes reachability for all paths of length 0 and length 1. We continue to multiply this expression by (A + I) until successive powers produce identical matrices; then, for sufficiently large r,

(A + I)^(r-1) = (A + I)^r = (A + I)^(r+1) = P

Reachability is a very intuitive concept. If there is a path from p_i to p_j, we say that p_j is reachable from p_i, or that p_i reaches p_j; the number of lines in the path from p_i to p_j is called the length of the path. For the digraph of Figure 4.54, for example, p_4 is reachable from p_5 by a path of length 2 (Example 4.7.1). You may find it helpful to verify that the digraph corresponds to the adjacency matrix shown in the figure, and you may also wish to use this figure for reference as we discuss some further graph-theoretic concepts. There are a number of "relations" and important properties that a relation must possess. Among the important relations for graph-theoretic developments are: reflexive, irreflexive, symmetric, asymmetric, transitive, intransitive, nontransitive, and complete. Thus, a digraph is seen to be an irreflexive relation, a concept that greatly facilitates the analysis and interpretation of graphs.
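The repeated Boolean multiplication of (A + I) until the powers stabilize can be sketched as follows; the three-point chain used for the check is illustrative and is not the Figure 4.54 digraph:

```python
def bool_multiply(X, Y):
    # Boolean matrix product: entry (i, j) is 1 iff some k links i -> k -> j.
    n = len(X)
    return [[1 if any(X[i][k] and Y[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

def reachability_matrix(A):
    # Form (A + I), then square repeatedly until successive powers agree;
    # the fixed point is the reachability matrix P (transitive closure).
    n = len(A)
    M = [[1 if i == j else A[i][j] for j in range(n)] for i in range(n)]
    while True:
        M2 = bool_multiply(M, M)
        if M2 == M:
            return M
        M = M2

# Illustrative 3-point chain p1 -> p2 -> p3:
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
P = reachability_matrix(A)   # p3 becomes reachable from p1
```

Squaring at each step doubles the path length covered, so the fixed point is reached after about log2(n) multiplications rather than n.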
In any digraph, consider two points, p_i and p_j. We can say that either p_i reaches, or does not reach, p_j (p_i R p_j or not p_i R p_j); the first relation will be true if there is a directed path from point i to point j, and the second will hold if there is not. If the relation being considered is transitive, in the sense that p_i R p_j and p_j R p_k infer p_i R p_k, then many entries of the reachability matrix can be inferred rather than elicited. If a digraph is not transitive, many of the commonly used graph-theoretic and structural modeling techniques are no longer valid.

4.7.1 Network Flows

A good systems engineering analyst should have the capability of modeling systems using graph-theoretic concepts. There are many uses where graph-theoretic notions are potentially useful. These include development of a variety of charts for project planning and scheduling, such as Gantt, PERT, DELTA, and CPM charts. These can be useful in representing the structure of objectives, the command structure of an organization, or the organization of a complex piece of equipment. It is also possible to represent various routing and transportation issues by a series of directed graphs; these are generally considered to be "transportation" problems or "network modeling" problems. These result in zero-one integer linear programming problems (occasionally mixed integer linear programming problems) that can, in principle, be solved through a combination of the algorithms for graph theory noted here and an integer programming package.
The process of structural modeling generally proceeds as follows:
1. A set of elements relevant to an issue is identified. These may have been obtained by an idea generation method or by using some other approach.
2. A meaningful contextual relation, with which to structure the set of elements, is selected.
3. Data relevant to the existence or nonexistence of the relation between every pair of elements are collected. Several options are available here: a strict binary relation with entries 0 and 1 may be used; in some cases, where there are potential transitivity difficulties with use of a strict binary relation, it is more reasonable to obtain a "signed digraph" by using the three-level relations +1, 0, and -1, where +1 indicates an enhancing relation, 0 indicates no relation, and -1 indicates an inhibiting relation. The process of data collection may be computer directed or may be interactively directed by the humans seeking to establish a structural model.
4. A reachability matrix of the resulting structural model is constructed, using responses concerning relations between elements and transitive inference to fill in entries in the reachability matrix. This may be done in a computer-directed-and-controlled way after all questions concerning relatedness have been posed and answered, or software may be written such that the reachability matrix is constructed after each data entry or set of data entries.
5. The computer determines the minimum-edge adjacency matrix and displays the resulting structure for possible iteration and modification. This can be done in a totally computer-controlled-and-directed way, or it can be accomplished in an iterative fashion after each data entry is obtained, using one of the available algorithms for determining an adjacency matrix from a reachability matrix.
6. Often, it will be desirable to iterate and refine the structural model that is obtained, such that there results a final structural model in the form of a digraph that can then be annotated with whatever signs and symbols are desired in order to convey the desired interpretation of the structural model.

For the digraph of Figure 4.54, successive Boolean powers of (A + I) converge to the reachability matrix P, also called the transitive closure of A. It turns out that there is only one minimum-edge transitive adjacency matrix that corresponds to a given reachability matrix, although there are many intransitive adjacency matrices that will result in the same reachability matrix. It is for this reason that it is generally essential to consider transitive relations only when working with directed graphs; difficulties of this kind often arise when negatively transitive contextual relations are used.

Figure 4.54 Adjacency matrix and associated digraph.
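One simple way to recover the unique minimum-edge matrix from an acyclic reachability matrix is to delete the self-loops and every line implied by a two-step path; a sketch, valid for loop-free reachability matrices:

```python
def minimum_edge_matrix(R):
    # For an acyclic reachability matrix R: strip the diagonal, then drop
    # every line (i, j) that is implied by a two-step path i -> k -> j.
    # What remains is the unique minimum-edge transitive adjacency matrix.
    n = len(R)
    S = [[R[i][j] if i != j else 0 for j in range(n)] for i in range(n)]
    return [[1 if S[i][j] and not any(S[i][k] and S[k][j] for k in range(n))
             else 0 for j in range(n)] for i in range(n)]

R = [[1, 1, 1],   # reachability matrix of the chain p1 -> p2 -> p3
     [0, 1, 1],
     [0, 0, 1]]
A_min = minimum_edge_matrix(R)   # recovers just the two direct lines
```

Because the reachability matrix is transitive, any line reachable through a longer path is also reachable through some two-step path, which is why checking a single intermediate point suffices.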
The resulting structural model of network routings may then serve as the basis for an optimization problem in which it is desired to extremize some cost function, such as the time required to go between two points in the network or the distance covered in servicing some given number of nodes. There are various end uses to which the results of graph theory may be put, and it is also possible to obtain representations in the forms of trees or more general hierarchical structures. The process of structural modeling is one of the major results of graph theory that is especially useful in practice.

Geoffrion [100] has noted that a modeling environment needs five quality- and productivity-related properties in order to best support the development of modeling applications:
1. A modeling environment should nurture the entire modeling lifecycle, not just a part of it.
2. A modeling environment should encourage and support those who use it to speak the same paradigm-neutral language.
3. A modeling environment should be hospitable to decision and policy makers, as well as to modeling professionals.
4. A modeling environment should facilitate the maintenance and ongoing evolution of those models and systems that are contained therein.
5. A modeling environment should facilitate management of all the resources contained therein.
One of the major uses to which graph-theoretic and structural modeling concepts may be put is that of obtaining interaction matrices. Interaction matrices provide a framework for identification and description of patterns of relations between sets of elements, such as needs, objectives, alterables, and constraints. Figures 4.55 through 4.57 illustrate concepts useful in constructing interaction matrices. The theory of digraphs and structural modeling is described in several of the works we discussed earlier in Section 4.3, and these references should be consulted for much greater and more in-depth discussion of structured modeling.

Figure 4.55 Minimum-edge adjacency matrices for several interaction matrix types: (a) nondirected graph, (b) directed graph, (c) signed digraph, and (d) weighted digraph.

Figure 4.57 Sample self- and cross-interaction matrices for quality function deployment, relating customer requirements, system specifications, system design, and system production.
Figure 4.56 Interaction matrices for organizational policy assessment and quality function deployment, relating strategic management policy, organizational objectives, competitiveness assessment, and management control processes.

Basically, a table is set up in which each entry represents a possible linkage between two elements, and the existence of a relationship is indicated in the table. Self-interaction matrices are used to explore and describe relations, or interactions, between elements of the same set. Cross-interaction matrices provide a framework for study and representation of relations between the elements of two different sets, such as in the adjacency matrices shown in Figure 4.55. Development of an interaction matrix encourages us to consider every possible linkage, and it significantly enhances insight into the connectivity of problem elements.
Interaction matrices are generally applicable in all circumstances where the structure of a set of elements has to be explored. They provide a simple aid to exploring the structure of all the problem elements, and the consistency of several interaction matrices of elements relating to the same problem may be checked. In general, any type of relationship may be used, as long as the meaning of the relation considered is clear to all those involved in setting up the interaction matrix; information about the strength of the relation may also be included. Depending on the type of matrix, the type of elements, and the degree of specificity desired, directed or nondirected graph relationships might be considered. For example, alterables might be related in that they belong to the same subsystem, or in that they tend to change in a similar nondirected way; alternatively, it might be that change in one alterable directly affects another alterable, and appropriately directed relations might then be used. In an alternatives-objectives cross-interaction matrix, it is important to check whether, for each objective, there is at least one alternative that helps achieve it. Nondirected self-interaction matrices have a triangular form, while directed self-interaction matrices are square and cross-interaction matrices are rectangular. Interaction matrices are generally useful in the definition phases and issue formulation steps of a systems effort; they may also be used to a considerable extent in the analysis effort throughout all of the phases of the life cycle. Many structural models take the form of a tree, or a slightly more complex but still hierarchical structure, and many take the form of influence diagrams, which we discussed earlier. The IDEF approach, discussed briefly in Chapter 3, represents one approach to the development of system structure. Structural modeling is a general conceptual framework for modeling and a prelude to many other forms of modeling; the usual approach is to first represent a mental model of a situation by identifying a number of elements and an appropriate contextual relationship.
The following activities are typically followed in setting up an interaction matrix:
1. Determination of the Type of Relationship. We assume that one, two, or more sets of elements, such as needs, objectives, alterables, constraints, or stakeholders, are given. A decision is made as to whether the interactions between the elements of one set or the linkages between elements of two or more different sets are to be explored, and a contextual relation is selected.
2. Setting Up the Interaction Matrix Framework. The elements are listed and lines are drawn in such a way that the result is a table in which there is a box for each possible combination of elements. Computer assistance may be helpful for setting up and displaying large matrices.
3. Completion of Each Entry in the Interaction Matrix. The entries in the matrix are completed one by one, each time asking the question, "Are these two elements directly linked according to the specified relation, or not?" A positive answer may be indicated by writing a "1" or an "x" in the appropriate box, or by blackening the box; a negative answer may be indicated by writing a "0" or by leaving the box blank.
4. Revision of the List of Elements and Scanning of the Pattern of Relations That Are Displayed. The process of completing the matrix entries may lead to the identification of important intermediate elements that are missing from the original list; if desired, such an element is added to the matrix, and elements may be redefined, requiring revision of related matrix entries. Certain patterns may appear in the matrix, revealing either (a) clusters of interrelated elements or (b) elements appearing to be quite isolated from the others.
5. Translation of the Matrix into an Appropriate Graph. It may be worthwhile to construct such a graph from a self-interaction matrix, provided that the matrix is not too large and there are not too many interactions. The graph, essentially containing the same information as the matrix, can be directed or nondirected, depending on the relation used. Because a graph conveys both direct and indirect linkages, it is generally a better way to communicate structure than a matrix. Graphical analysis methods might be used to reorder the matrix or graph to show the structure more clearly, and computers may be used to perform analysis of (binary) matrices and to help structure a matrix in such a way that an informative graph may easily be drawn.
6. Analysis and Formulation of Conclusions. On the basis of the matrix, and possibly the corresponding graph, conclusions may be drawn concerning the major subsets of the problem, the agencies involved, and so on.
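The first three activities above can be sketched as code; the element names and the relation entries here are invented purely for the illustration:

```python
# Illustrative self-interaction matrix: the elements and the contextual
# relation ("directly enhances") are made up for this sketch.
elements = ["needs", "objectives", "constraints", "alterables"]
related = {("needs", "objectives"), ("objectives", "constraints")}

# Entry (i, j) answers: "are elements i and j directly linked
# according to the specified relation, or not?"
matrix = [[1 if (row, col) in related else 0 for col in elements]
          for row in elements]
```

Because the relation here is directed, the matrix is square rather than triangular, matching the distinction drawn in the text between directed self-interaction and nondirected self-interaction matrices.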
If transitivity can be assumed, it is possible to uniquely determine a reachability matrix by asking a limited number of contextual relatedness questions about the elements and using the transitivity property to infer many of the remaining responses. This enables construction of a reachability matrix, or a minimum-edge adjacency matrix, according to the approach chosen.
4.7.2 Stochastic Networks and Markov Models

The mathematical modeling approach we have described up to this point is deterministic, in that it can be used to represent processes that are determined without an intervention from a chance mechanism. Indeterminism, which results from uncertainty, can be accommodated as well: most of the tools of graph theory, which deal essentially with deterministic models, can be modified to enable consideration of probabilistic and stochastic effects. A Markov model is just a digraph that is weighted such that the weights associated with each edge correspond to the probabilities of a change in state between adjacent nodes in the directed graph. A Markov decision model consists of the following:
1. A statistical description of the typically not totally controllable dynamics of a process
2. A specification of the effects of each action alternative on the process dynamics
3. A specification of the (typically single scalar-valued) objective performance index
Markov processes are the finite-state stochastic counterpart of optimum-systems control, which typically deals with the infinite-state-value problem. Determination of machine repair and replacement strategies, inventory and stock management, and the management of queuing systems are among the many problems that can be treated as Markov decision processes. Mathematical analysis of simple Markov process models involves just the solution of systems of linear equations and associated matrix-vector multiplication.
There are many available extensions to the simple Markov process model discussed here. It is possible, for example, to introduce the concept of a "stochastic shortest route model," in which the availability of routings at each stage is probabilistic and where these probabilities depend upon the states that precede a particular stage. This results in a dynamic programming-type formulation of a Markov process and solution of the resulting optimization problem. Steps in the solution of the typical Markov optimization problem generally involve the following:
1. Definition of the problem and determination of a digraph that represents the problem structure, and the encoding of a consistent set of state transition probabilities onto the various edges of the digraph
2. Determination of alternative courses of action and their impact upon the transition probabilities
3. Specification of the (typically single scalar-valued) objective performance index
4. Verification and validation of the resulting optimization model
5. Selection of the optimum alternative course of action using one of the existing algorithms for this purpose
6. Sensitivity analysis of the results
While it would be possible to provide much additional detail concerning stochastic optimization, this is not our intent here. This level of sophistication should probably be sought only after experience has been gained with the operational concerns associated with use of simpler analysis activities. Ultimately, for large problems, simulation will often be the preferred course of action relative to analytical solution of Markov process models, and the discussion of discrete event digital simulation should be referred to for a treatment of this topic.

Example 4.7.2. Let us consider a very simple repair or replacement exercise in which a production machine is inspected at the end of each day's effort. It can be found in one of four states:
1. As good as new, no faults
2. Operable, but with minor faults
3. Operable, but with major faults
4. Inoperable
If nothing relative to maintenance is accomplished, the state of a new machine will generally evolve from state 1 to state 2 to state 3 to state 4, where it will remain indefinitely. Data are obtained relative to the transitions from one state to the other; these results are described in Figure 4.58. We note that it is not possible to go from a state of higher deterioration to one of lower deterioration, in that the associated probabilities, not shown on the graph, are each 0. With these data, we can calculate the probability of being in any of the four possible states on any chosen day, given the state on the previous day, when nothing is done relative to maintenance or replacement.

Figure 4.58 State transition diagram for Example 4.7.2, with no repair or replacement.
The expected cost of implementing a strategy is + C(2}P(2) + C(3)P(3} + C(4)P(4) = 1000P(2) + 4000P(3) + 6000P(4) E(C} = C(1}P(1) and so we obtain Cor this strategy. N) + (~)P(3.···········. ·• 1 i ~ ~ r··············.\! State 1 and recognize this as a set of first-order linear difference equations. and 3. .1"l'''I"lL J . O} = O for K = 2... the costs in state 3 become $4000 when we repair the machine and $6000 if we replace it.58 to produce that shown in Figure 4. for large N. N) = 1. N) + (i)P(2.. N) = O.. P(2.. N) + 1) = (6)P(1.. ~ . with N = O. 2... we surely would not wish to replace the machine because there is not even a minor fault.60 and 4. Suppose that the production cost increases are as follows: State 1 No costs State 2 Expected cost increase of $1000 per day State 3 Expected cost increase of $3000 per day. Day O Day 1 Figure 4. C Replace the machine such as to return it to state 1 with probability 1. N) + (i)P(2. we obtain. N) + (})P(3...59 ! Day 2 Day 3 Day 4 Day 5 Day 6 State transition diagram for the /Ido nothing/l policy... N} = 1.. N) i 1 ' ... Thus. = (¡Hi)N-l = (:6)Gr. the only option if the maéhine gets to state 4 is to replace it. N) = O for K = 1.)P(1. It is easy to show that we obtain solutions like P(2. and this costs an additional $2000. N + 1) = (¡)P(1. In each case a production delay of one day is introduced. N) + (MP(2. the only question is which of the three options should be chosen if we are in states 2 or 3. N) + P(4. The costs in state 4 are $6000 for a replacement.··············. N > O + 1) = (¡)P(1... N) • ... Suppose that maintenance. One policy that we might adopt is to do nothing until the machine becomes unreparable and then repair it.\ r be done or we will have no production at all from that point on. P(2. N) = P(4. E(C} = $2454. The production costs increase as the condition of the machine deteriorates. P(1. Suppose that the replacement costs are $4000. N) P(4. If we are in state 1. N + 1) = P(4. N) = 7/11.)P(l.. 
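The day-by-day probability evolution just described can be sketched numerically. Since Figure 4.58 is not reproduced here, the transition probabilities below are illustrative stand-ins (a valid stochastic matrix with the same structure: no transitions to lower deterioration, state 4 absorbing), not the values from the figure.

```python
# Probability evolution under the "do nothing" policy for a four-state
# deterioration model. The fractions below are illustrative assumptions,
# not the values of Figure 4.58; state 4 is absorbing.
P_DO_NOTHING = [
    [0.0, 7/8, 1/16, 1/16],  # from state 1 (as good as new)
    [0.0, 3/4, 1/8,  1/8 ],  # from state 2
    [0.0, 0.0, 1/2,  1/2 ],  # from state 3
    [0.0, 0.0, 0.0,  1.0 ],  # from state 4 (inoperable, unrepairable)
]

def evolve(p0, matrix, days):
    """Iterate P(K, N+1) = sum over J of p(J, K) * P(J, N) for the given days."""
    p = list(p0)
    for _ in range(days):
        p = [sum(p[j] * matrix[j][k] for j in range(4)) for k in range(4)]
    return p

# Start at day zero in state 1: P(1, 0) = 1 and P(K, 0) = 0 for K = 2, 3, 4.
p200 = evolve([1.0, 0.0, 0.0, 0.0], P_DO_NOTHING, 200)
```

With no repair or replacement, essentially all of the probability mass ends up in the absorbing state 4, mirroring the conclusion that P(4, N) approaches 1 for large N.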
After inspecting a machine and detecting and diagnosing a fault, the machine operator can choose from among combinations of three possible courses of action:

A. Do nothing at all.
B. Overhaul the machine, such as to return it to state 2 with probability 1.
C. Replace the machine, such as to return it to state 1 with probability 1.

There are costs associated with maintenance and replacement, and the production costs increase as the condition of the machine deteriorates, presumably due to increased production difficulties or the need to correct low-quality production.

One policy that we might adopt is to do nothing until the machine becomes unrepairable and then replace it. This will alter the state transition diagram of Figure 4.58 to produce that shown in Figure 4.60, in which there is replacement when the machine becomes inoperable. We can easily write the difference equations for the probability evolution from this diagram. Recalling that P(1, N) + P(2, N) + P(3, N) + P(4, N) = 1, we obtain, for large N, the steady-state values P(1, N) = 2/11 and P(2, N) = 7/11. The expected cost of implementing a strategy is

E(C) = C(1)P(1) + C(2)P(2) + C(3)P(3) + C(4)P(4)

and so we obtain, for this strategy, E(C) = $2454.

We could instead adopt the strategy of replacing the machine if it is in state 4 and repairing it if it is in state 3. Figure 4.61 illustrates the resulting state transition diagram: there is replacement when the machine becomes inoperable, and there is renovation when the machine enters state 3. The difference equations expressing the state transition probabilities change accordingly, and the solution for large N is of particular interest. For large-N steady-state conditions we obtain P(1, N) = P(3, N) = P(4, N) = 2/21 and P(2, N) = 5/7. With

E(C) = 1000P(2) + 4000P(3) + 6000P(4)

the expected costs of this strategy are $1666.67 per day. It turns out that this is the best possible strategy. We see that this strategy results in the machine being in state 2 for a much longer fraction of time than the strategy of only replacing the machine when it is inoperable. We can compute other strategies if we desire.

Here we have enumerated all strategies and selected the one with minimum cost. While possible for this simple example, this would be very computationally intensive for more complex problems, and the mathematical programming techniques of operations research may be used to advantage. It is generally necessary to use dynamic programming to obtain solutions over a finite-time horizon, or to use linear programming in the case of infinite-time horizon problems. We have presented a brief overview of Markov decision processes here; much more detail is available in reference 101.

4.7.3 Queuing Models and Queuing Network Optimization

Queuing theory is the applied mathematical analysis of probabilistic concerns; it has been developed to study problems related to queues or waiting lines. Typically, queuing phenomena arise when there are, at certain times, more customers desiring a service than can be served at once. A typical, elementary queuing system consists of customers, waiting and being served, and a service facility with one or more servers. Both customer arrivals and the service time for each customer are governed by a chance process. The state of the system changes with the arrival of a customer or with the departure of one or more customers that have been served. Given reliable data, an analysis based on queuing theory can yield general insight and precise forecasts that are much more veridical than comparable results obtained from use of more intuitive methods. Queuing theory enables the analyst to compute expected values and distributions of queue length, waiting times, and server occupancy. Application of queuing theory requires a proper statistical description of the customer arrival and service time distributions, plus information on queuing and service disciplines.

There are six primary defining characteristics of queuing processes:

1. Arrival pattern of customers
2. Service pattern of servers
3. Queue discipline
4. System capacity
5. Number of service channels
6. Number of service stages

In most cases, knowledge of these six characteristics provides an adequate description of a queuing system.

The arrival pattern, or input to a queuing system, is often measured in terms of the average number of arrivals per some unit of time (the mean arrival rate), or by the average time between successive arrivals (the mean interarrival time). These quantities are clearly related, and knowledge of either one is usually sufficient to enable description of the system input. In the event that the stream of input is deterministic, that is to say, completely known and thus void of uncertainty, the arrival pattern is fully determined by either the mean arrival rate or the mean interarrival time. On the other hand, if there is uncertainty in the arrival pattern, then these mean values provide only measures of central tendency for the input process, and further characterization is required in the form of the probability distribution associated with this random process.

Another factor of interest concerning the input process is the possibility that arrivals come in batches instead of one at a time. In the event that more than one arrival can enter the system simultaneously, the input is said to occur in bulk or in batches. In the bulk-arrival situation, not only may the time between successive arrivals of the batches be probabilistic, but also the number of elements in a batch.

It should be noted that entering the system does not necessarily mean entering into service; it may instead require joining the line when immediate service is not available. It is also necessary to know the reaction of a system element, such as personnel, to waiting. If a person decides not to enter the queue upon arrival, or if the queue is too long, the person is said to have "balked." A person may, on the other hand, decide to wait no matter how long the queue becomes; or an arrival may enter the queue, but after a time lose patience and decide to leave. In this case the arrival is said to have "reneged." In the event that there are two or more parallel waiting lines, people may switch from one to another; that is, they jockey for position. These three situations are all examples of queues with impatient customers. The circumstances under which arriving customers will balk may not be known exactly.

Much of the discussion concerning the arrival pattern is also appropriate in discussing service patterns. Service patterns can also be described by a rate, or as the time required to service a customer. One important difference exists, however. When one speaks about service rate or service time, these terms are conditioned on the fact that the system is not empty, that is, that there is someone in the system requiring service. If the system is empty, the service facility is idle. Service may also be single or batch: one generally thinks of one person being served at a time by a given server, but there are many situations where customers may be served simultaneously by the same server, such as by a computer with parallel processing capabilities. Thus it follows that a probability distribution for queue lengths would be the result of two separate processes, namely arrivals and services. These are generally, though by no means always, assumed to be mutually independent phenomena. Even if the service rate is high, it is most likely that some customers will be delayed by waiting in the line. In general, customers arrive and depart at irregular intervals; hence, the queue length will assume no definitive pattern unless arrivals and service are deterministic.

Queue discipline refers to the process through which customers are selected for service when a queue has formed. The most common discipline that can be observed in everyday life is first come, first served, that is, a first-in first-out (FIFO) queue. This is certainly not the only possible queue discipline. Some others in common usage are the last-in first-out (LIFO) queue, which is applicable to many inventory systems when there is no obsolescence of stored units, since it is easier to reach the nearest items, which are the last in the queue; and service in random order, independent of the time of arrival to the queue (SIRO). There also exist a variety of priority schemes, where arrivals are given priorities upon entering the system, and the ones with higher priorities are selected for service ahead of those with lower priorities, regardless of their time of arrival to the system.

In some queuing processes there is a physical limitation to the amount of waiting room, so that when the line reaches a certain length, no further customers are allowed to enter until space becomes available through a service completion; that is, there is a finite limit to the maximum queue size. These are referred to as finite queuing situations. A queue with limited waiting room can be viewed as one with forced balking, where a customer is forced to balk if the customer arrives at a time when queue size is at its limit.

The number of service channels refers to the number of parallel service stations that can service customers simultaneously. A queuing system might have only a single stage of service, or it may have several stages. An example of a multistage queuing system would be a routine equipment maintenance procedure, where each equipment undergoing maintenance must proceed through several stages in order to complete the maintenance process. In some multistage queuing processes, recycling may occur. Recycling is common in manufacturing processes where quality control inspections are performed after certain stages, and parts that do not meet quality standards are sent back for reprocessing.

To accomplish a queuing analysis, a minimum of three distinct subsystems must be identified:

1. The set of people or things that require service, that is to say, the calling population
2. One or more implicit or explicit waiting facilities, known as queues
3. A service facility that services the calling population

One can see from the discussion thus far that there exists a wide variety of queuing systems that can be encountered. The six characteristics of queuing systems are generally sufficient to describe completely a process under study. Before performing any mathematical analysis, it is absolutely necessary to describe adequately the process being modeled; knowledge of the six characteristics of a queuing process is essential in this task.
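For the simplest case, a single server with well-behaved probabilistic arrivals and service, the steady-state expected values have closed forms. The M/M/1 formulas below (including Little's law) are standard queuing-theory results, not derived in this text; they illustrate the kinds of quantities named above: queue length, waiting time, and server occupancy.

```python
# Steady-state metrics of a single-server queue with Poisson arrivals and
# exponential service times (M/M/1). Standard textbook formulas.

def mm1_metrics(arrival_rate, service_rate):
    """Return (server occupancy, mean number in system, mean time in system)."""
    if arrival_rate >= service_rate:
        raise ValueError("steady state requires arrival_rate < service_rate")
    rho = arrival_rate / service_rate   # fraction of time the server is busy
    L = rho / (1.0 - rho)               # expected number of customers in system
    W = L / arrival_rate                # expected time in system (Little's law)
    return rho, L, W

rho, L, W = mm1_metrics(2.0, 3.0)  # e.g., 2 arrivals and 3 services per hour
```

With these illustrative rates the server is busy two-thirds of the time, two customers are in the system on average, and a customer spends one time unit in the system on average.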
Two distinct aspects of a queuing subsystem need to be considered: the queuing configuration and the queue discipline. A queuing configuration may involve single versus multiple queues, and we may have serial or parallel queues when the service that is being sought involves more than a single activity. There may be a single queue for parallel servers, or individual queues for each. A queue discipline may involve first-in first-served protocols, last-in first-served protocols, or random service protocols. Balking, which gives a person the option of not joining a queue if it is too long, and reneging, which allows a person to leave a queue after once joining it, are among the many possible queue discipline options.

The most critical aspect in modeling the calling population is the probability function that determines the arrival of customers from the population to a queuing subsystem. The calling population may be finite or infinite; the simplest calling population to deal with is one that is infinite. It can also be homogeneous or nonhomogeneous, depending upon the type of service that is required. We can also consider single arrival systems, in which all customers arrive one at a time, or batch arrival systems, in which all customers arrive at one time.

For the service facility, we can have single or parallel service facilities: a single server, multiple identical servers in parallel, or parallel sets of servers that are in series. Servers in a series may or may not have queues between each server. The simplest service facility subsystem is that of a single server with single arrivals, a very well behaved probabilistic arrival pattern, and a very well behaved probabilistic service time. Service rates for multiple servers may be homogeneous, in the sense of identical service time for the same service activity, or nonhomogeneous. A most critical aspect of a service system is the probability function that determines the service rates. These three subsystems are very highly interdependent in operation, and it is not possible to meaningfully analyze one aspect of queuing without considering the other two.

A general classification scheme with three parameters is used to identify the type of problem. This notation takes the form U/V/W, where specific symbols or letters substituted in the three positions describe standard models. The first parameter refers to the type of arrival distribution, the second one refers to the departure (or service time) distribution, and the third one refers to the number of parallel service channels. M is used to represent a Poisson arrival or exponential distribution, D is used for a deterministic arrival or service distribution, and G is used for some other general distribution of arrival or service times. For example, an M/M/2 queue means that both interarrival times (times between individual arrivals) and service times can be described using an exponential distribution (indicated by "M"), while there are two parallel service channels. Sometimes three additional parameters are used to specify the service discipline or priorities; we then have a classification system U/V/W/X/Y/Z, where the meaning of U, V, and W is as before, X indicates queue capacity (the maximum number of customers allowed in the system), Y indicates the queue priority system, and Z indicates the number of series servers. There are many variations of this. This list of symbols is not complete: there is no indication of a symbol to represent bulk arrivals, nothing to represent series queues, and no symbol to denote any state dependence. Some notation does exist for such models, but much of it is not in standard use. Also, there are queuing models for which no symbolism has ever been developed; this is often true for those problems less frequently analyzed in the literature.

Queuing theory has been applied successfully in a variety of fields, for the design of new systems as well as for improving existing systems. While it can be an extraordinarily theoretical and complicated subject, most of the successful applications involve only a small portion of the available theory. Most solutions obtained through use of queuing theory apply to steady-state queuing phenomena: the forms of the arrival and service time distributions are assumed to be constant, and the system is supposed to have stabilized. As already noted, discrete event simulation may be used as an alternative approach to obtaining the solution to queuing problems that become analytically intractable.
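The U/V/W classification described above can be sketched with a few lines of code. The symbol meanings follow the text (M, D, G); the function name and the output wording are, of course, just an illustration.

```python
# Expand a U/V/W ("Kendall") queue code such as "M/M/2" into a description.
SYMBOLS = {
    "M": "Poisson arrivals / exponential times",
    "D": "deterministic",
    "G": "general distribution",
}

def describe_queue(notation):
    """Map the three positions to arrival, service, and channel count."""
    arrival, service, channels = notation.split("/")
    return (f"arrivals: {SYMBOLS[arrival]}; service: {SYMBOLS[service]}; "
            f"{int(channels)} parallel service channel(s)")

description = describe_queue("M/M/2")
```

As the text notes, no such three-symbol code captures bulk arrivals, series queues, or state dependence; those models need the longer, less standardized notations.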
Several steps are generally associated with the solution of a queuing problem:

1. Collection of Information. The problem is defined, characteristics of the queuing phenomenon are phrased, and criteria to evaluate the merits of alternative solutions are formulated. Alternative solutions are specified, and restrictions imposed on the system by its environment are identified. Possible priority rules and other alternative policies are specified. Data on the statistical properties of events are collected from theory or by direct observation. Behavior is observed closely to determine whether and when the personnel balk, shift queues, or leave the service facility when they perceive long waiting lines. When the system does not exist yet, observation of comparable events or a comparable solution may be possible.

2. Specification of the Structure of the Three Subsystems of a Queuing System. Major elements and events are structured. These include customer arrivals, queues, and serving facilities.

3. Identification of Appropriate Mathematical Descriptions. A statistical description of random events is chosen for each event, generally on the basis of available data. Hypothesis testing techniques may be used to identify the description that best fits the data. At this point in the analysis and modeling effort, past experience allows the systems analyst to decide whether the complexity of the queuing problem and the resulting model allows analytical treatment. If it is elected to continue with a formal queuing analysis, other parameters are determined to specify the equations describing the queuing system, and step 4 is accomplished. If this is not appropriate, then attention should shift to use of a digital simulation approach.

4. Solution of the Queuing Model for Each Alternative. Models for each alternative course of action, plan, or design are specified, and results are obtained for each of these. The task of the queuing analyst is to accomplish two analytic efforts: the analyst must determine the values of appropriate measures of effectiveness for a given process, or must design an "optimal" system according to some criterion. To accomplish the first task, the analyst must relate waiting delays and queue lengths to the given properties of the input stream and the service procedures. To accomplish system design or system evaluation, there are three system responses of interest: some measure of the waiting time that a customer might be forced to endure; an indication of the manner in which customers may accumulate; and a measure of the idle time of the servers. Because most queuing systems have stochastic elements, these measures are often random variables, and their probability distributions, or their expected values, need to be found. Most analytical queuing studies are restricted to equilibrium or steady-state conditions; the primary interest is in steady-state queuing behavior.

5. Verification and Validation of the Queuing Model and Analysis. The systems analyst uses the results of queuing theory to compute expected queue length, waiting times, waiting costs and server costs, and other performance indices for the situation(s) on which data are available. The results, or a portion of the results, are compared with actual observations. If serious discrepancies appear, the model is adjusted appropriately and the analysis is repeated.

6. Selection of the Best Alternative. When clear criteria have been specified, a performance index for each alternative solution can be computed; the results of the queuing analysis are used to compare alternatives with respect to their merits. The alternative course of action with the largest payoff (or smallest, if a minimization-type extremization is sought) is selected. Often a minimum total cost solution for a queuing system has to be identified: the analyst would probably want to balance customer waiting time against the idle time of servers according to some inherent cost structure. If the costs of waiting and of idle service can be obtained directly, a general expression giving total cost as a function of controllable parameters can be derived, a minimum overall cost solution is identified, and this resolves the queuing analysis issue. For example, waiting costs and server costs may be computed for a system with a variable number of servers. When closed-form analytical solutions have been obtained, mathematical programming approaches might be used to determine the optimum design or solution in terms of a given performance function; mathematical programming may be used to find the minimum cost solution within the allowable range of the variable parameters.

7. Evaluation and Refinement of Results. It would generally always be desirable to perform a sensitivity analysis that allows variation of the various assumptions, concerning waiting costs for example. Hypothesis testing might be used to select the most appropriate statistical descriptions for the various random elements in the system to be analyzed. When there is a need for determining state-dependent control laws in a queuing system, a Markov decision model also might be useful.

The presence of several conditions makes use of queuing models and the associated queuing analysis quite appropriate:

1. There is a problem related to queues or waiting lines in an existing system.
2. The problem is well-structured and relatively simple.
3. Sufficient data to conduct the analysis can be obtained.
4. Various design options have to be compared with respect to their queuing-related aspects, and precise results are desirable.

There are also several validity concerns that should be addressed when you use queuing models:

1. Necessary simplifications and adaptations of queuing events to fit standard mathematical descriptions may affect the validity of results.
2. Application of analytical results of queuing theory is restricted to relatively simple problems, because of the difficulties in establishing an appropriate model and in conducting a queuing analysis for other than simple models.
3. Costs of waiting may be very difficult or impossible to estimate.
4. The mathematics needed for a queuing analysis may easily render the model, and quite likely the results of the analysis as well, incomprehensible for other than queuing specialists.

To accomplish system design or system evaluation is thus quite a complex undertaking. Discrete event digital simulation is an alternative for queuing problems that are too complicated to be handled easily; while an analytical solution, when obtainable, is generally less time-intensive and expensive, discrete event digital simulation is the more appropriate method for complicated problems, and the results of a discrete event digital simulation have to be analyzed and understood. Queuing theory is most appropriate for analysis and refinement of alternative designs.
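The minimum-total-cost balancing act in step 6, customer waiting cost against server cost, can be sketched for a multiserver queue. The Erlang C expressions used here are standard M/M/c steady-state results, not taken from this text, and the cost rates in the example are made-up illustration values.

```python
import math

def erlang_c(a, c):
    """Probability an arrival must wait; offered load a = lambda/mu, c servers."""
    rho = a / c
    if rho >= 1.0:
        return 1.0
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c)
    return top / ((1.0 - rho) * summation + top)

def mean_queue_length(lam, mu, c):
    """Expected number of waiting customers in an M/M/c queue."""
    rho = lam / (mu * c)
    if rho >= 1.0:
        return math.inf   # no steady state: waiting line grows without bound
    return erlang_c(lam / mu, c) * rho / (1.0 - rho)

def best_server_count(lam, mu, wait_cost, server_cost, max_c=20):
    """Enumerate server counts and pick the minimum total-cost configuration."""
    def total(c):
        return wait_cost * mean_queue_length(lam, mu, c) + server_cost * c
    return min(range(1, max_c + 1), key=total)
```

For example, with 8 arrivals and 3 services per hour per server, a waiting cost of 25 per customer-hour, and a server cost of 15 per hour, the enumeration selects four servers under these assumed rates.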
For example, the results of a queuing analysis could be used to determine (a) the optimum number of channels to maintain and (b) the service rates at which to operate these channels. Also, to design the waiting facility, it is necessary to have information regarding the possible size of the queue; there may also be a space cost that should be considered, along with customer-waiting and idle-server costs, to obtain the optimal system design. In each case, the analyst would strive to solve the problem by analytical means. When this fails, which will often occur when complex systems are considered, discrete event digital simulation is used. Choice of modeling method, and interpretation of the results from the study, are thus central concerns. Some appropriate references for additional study of queuing include references 102 and 103.

4.7.4 Discrete Event Digital Simulation

In no way can simulation be considered as a tool of last resort by the systems analyst. Simulation models allow systems engineers to engage in "what-if" and "if-then" type exercises to a degree not possible using totally analytical methods. There are many other potential benefits to simulation, but the ability to explore the implications of many alternative courses of action is perhaps the principal one that allows more effective planning and decision making. There are a variety of simulation types that are possible, and we have already examined several of these. Our concern in this subsection is with probabilistic simulations that involve a finite, or limited, number of state values. This is generally known as discrete event digital simulation [104]. In a later section of this chapter, we will discuss time-series modeling and simulation.

Central to methods of discrete event digital simulation is the notion of an activity, an elementary task that requires time to complete. The collection of activities in a given situation is known as a process. An event is an instant in time at which a given activity begins or ends. Events, therefore, are ordered instants in time at which the system undergoes perceptible change, and activities are always preceded and succeeded by other activities; thus the use of the word "event" may be somewhat inappropriate, although it is very common. These changes could represent customers being serviced, equipment being repaired, or any of a multitude of changes of potential interest.

Of importance also are the notions of attributes and entities. An entity is simply an item of interest in the process itself. These entities could be temporary or permanent; examples of the former are telephone calls and parts in an inventory, and examples of the latter include telephone networks and repair depots. Entities, events, and activities all have characteristics that are of interest. These characteristics are called attributes. Average repair time for a particular procedure would be one example of an attribute.

Discrete event digital simulations may be based, or focused, on activities or events. Existing simulation languages such as GASP and SIMSCRIPT accomplish modeling by focusing on events. Each event is processed by transferring control to a specially written event-processing routine that accomplishes necessary housekeeping in order to record event occurrence and to create other events caused by the occurrence of the now current event. The event scheduling approach to discrete event simulation emphasizes detailed consideration of the occurrences that result when individual events occur. The activity scanning approach emphasizes study of the activities in a simulation to determine which can be begun or terminated each time that an event occurs. In a process interaction approach, or block-oriented approach, the processes represented are chronological sequences of activities; languages such as GPSS and CSL are block-oriented process approaches which focus on activities. A particular subclass of the activity-focused, discrete-event simulation languages is the entity progress approach, which allows focus on the progress of an entity through a system: the progress of an entity, from its initial arrival event through to its final departure event, is tracked. SIMULA and GPSS are examples of this approach. SIMULA, in particular, enables one to follow the interactions of an entity in the system, from an arrival event to its departure event. Clearly both are appropriate representation methods, and newer languages such as SLAM and GERT allow implementation by either approach.

The method for advancing time in a discrete event simulation model is the same, regardless of whether an event-oriented or block-oriented approach is used. Discrete event simulation languages keep track of the current simulated time through use of a specifically defined variable that allows time to be passed from the time of the event currently being processed to the time associated with the next event. It is important to note that only the times at which events occur need to be recorded; these are the only time instants at which it is possible for the outputs of the simulation model to undergo changes. It is this execution of the model through discrete advances in time, from one event to the next succeeding event, that gives the method "discrete event simulation" its name.
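The time-advance mechanism just described, a clock that jumps from the current event to the next event on a future-event list, can be illustrated with a minimal single-server example. This is a generic sketch in Python, not GASP, SIMSCRIPT, or GPSS syntax, and the arrival and service rates are arbitrary illustration values.

```python
import heapq
import random

def simulate(n_customers, lam=1.0, mu=1.5, seed=1):
    """Event-scheduling loop for one server; returns mean time in system."""
    rng = random.Random(seed)
    fel = []                 # future event list: (event time, tie-break, kind)
    seq = 0
    t = 0.0
    for _ in range(n_customers):          # pre-schedule all arrival events
        t += rng.expovariate(lam)
        heapq.heappush(fel, (t, seq, "arrival"))
        seq += 1
    waiting = []             # arrival times of queued customers
    busy = False
    arrived_at = 0.0         # arrival time of the customer now in service
    times_in_system = []
    while fel:
        clock, _, kind = heapq.heappop(fel)   # advance clock to next event
        if kind == "arrival":
            if busy:
                waiting.append(clock)
            else:
                busy = True
                arrived_at = clock
                heapq.heappush(fel, (clock + rng.expovariate(mu), seq, "departure"))
                seq += 1
        else:                                 # departure event
            times_in_system.append(clock - arrived_at)
            if waiting:
                arrived_at = waiting.pop(0)
                heapq.heappush(fel, (clock + rng.expovariate(mu), seq, "departure"))
                seq += 1
            else:
                busy = False
    return sum(times_in_system) / len(times_in_system)
```

Note that only event instants are visited; nothing in the model changes between them, which is exactly the property the text attributes to discrete event time advance.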
The event scheduling approach to discrete event simulation emphasizes de­ tailed consideration of occurrences that result when individual events occur. we wilI discuss time. and associated discrete event simulation techniques. 4. Many issues that involve information processing in humans and organiz­ ations can be associated with the analysis of change over time based on ordered sequences of observations. Discrete event digital simulation is. modeling is a particularly important compo­ nent of a realistic time-series analysis. Pictorial or graphical representation 2. Others are based upon mathematical approaches and formal reason­ ing. and schema. Sorne are based upon the information that constitutes the expert judgment of an individual or a group. Many time series are not well-behaved. Finite-state modeling. one important ingredient is a library of software tools that will support the user in determining how important variables will likely behave in the future based upon their behavior in the past and the assumption of a model structure to relate system inputs and outputs. II I!III 1'". The actual simulation model is constructed and the simulation is runo Suspected errors in the structure of the model are corrected and the modeling process iterated until the model is judged to be a good representation of the real system. It is important that an information system provide several ways of encoding dynamic behavior. in these situatioÍls. It is this execution of the model through the discrete advances in time. Flow diagram or graphic representation 4. Among these are references 105-107. or statistical estimation theory. Much experience has indicated that the analysis that can be accomplished using graph and queuing concepts becomes extraordinarily difficuIt for other than simple models. Often these provide early dues to sorne impending crisis. in which there exist a limited number of states or events. ~ I !:! ~:Iilllll~i! M li'1. 3. 
such as to allow evaluation of each potential alternative course of action; the sensitivity effects of these changes upon recommended courses of action are determined.

4.7.5 Time-Series and Regression Models

There are many occasions when people wish to make forecasts of possible future states and events. Many techniques are useful for the analysis of change over time; some forecasting models are very crude, and some are sophisticated. Here, we will be concerned with quantitative forecasting approaches based on ordered time series of observations. A time-series analysis takes into account the nature of an observed process as it evolves through time. A time series is constructed so as to reflect the way a system behaves over time due to changes in input to the system, and there are several methods that might be used to accomplish this.

There are at least five approaches to modeling observed phenomena:

1. Verbal representation
2. Pictorial or graphical representation, such as a flow diagram
3. Finite-state modeling, in which there exist a limited number of states or events, one of which can be used to characterize each of a system's state variables
4. Control theoretic modeling in the form of differential equations, perhaps with unspecified parameters that are to be determined as part of the modeling and analysis effort
5. Discrete event and queuing network representations

The first two of these relate primarily to artificial intelligence and expert system based approaches, where various forms of knowledge representation may be used: production rules, cognitive maps, scripts, and schema. The last of these relates to the discrete event and queuing representations we discussed earlier in this chapter. There are at least two ways of representing a time series of an observed variable. If we choose serial dependencies in discrete time as the way of representing observations,
we obtain what is called an autoregressive model and what are called input-output transfer functions. This is the common method used in time-series analysis and in control theory, and it is the approach emphasized in this section. Representation of many real-world phenomena in terms of time series is a very practical and useful approach for understanding and predicting system behavior. Usually, time-series analysis is considered to be an area of statistics; it is rich in the choice of models potentially offered the user. Methods 3 and 4 are representations appropriate for the use of time-series analysis. By definition, a sequence of such ordered observations, spaced at regular time intervals, constitutes a time series. Regression analysis techniques are among the most commonly employed and useful techniques, and those based on forecasting models are of interest here. Many time series are not well-behaved: any persistent regularities and variances are hidden from all except perhaps the most experienced observer, yet their detection is very important. Measurement error is associated with a noninfinite time interval. In order to develop an appropriate set of algorithms to enable modeling in terms of graphs, careful consideration must be devoted to basic data structures.

Evaluation of Alternatives. Each of the alternative courses of action is programmed into the discrete event digital simulation model and the model is exercised. Parameters within the discrete event simulation model that are subject to change are changed, causing the outputs of the simulation model to undergo changes.

Selection of Best Alternative. If the intent of the simulation is to enable selection of a best alternative policy, this is accomplished. If the intent is to describe an existing situation without selection of an appropriate new alternative, this and the preceding step are unnecessary.

Probability theory, which provides the mathematical basis for the models that describe random phenomena, is a necessary ingredient in any statistical analysis. Thus, statistics may be defined as the collection and analysis of data from random observations. Statistical procedures can vary from the drawing and assessment of a few simple graphs, with perhaps "eyeball" estimations of fit and average values, to very complex mathematical analyses that use very large computers and sophisticated models. Many statistical methods are designed to use data to identify a parameter within a system or perhaps even the underlying probability functions that are responsible for some random phenomena. Problems in this area are known as system identification problems. It is sometimes useful to describe a probability function by a mathematical relation that contains a few unknown parameters, and various approaches can be used to "identify" these parameters.

The most-used form of probability density function is the normal or Gaussian density function which, in the single-variable case, depends on two parameters only, namely, the mean and the variance. When it is needed to know various internal and structural aspects of system behavior, it is necessary to also use various variance, correlation, and covariance terms to describe a Gaussian process. The covariance is a statistical measure of the association between variables. The definition of the autocorrelation function of a variable x(t) is

φ_x(t1, t2) = E{x(t1) x(t2)}

where the symbol E denotes probabilistic expectation and is defined by

E{x(t)} = ∫ x(t) p[x(t)] dx(t)

with the integral taken over all values of x(t). Often it is necessary to compute the cross-correlation function of two time variables x(t) and y(t). We define this as

φ_xy(t1, t2) = E{x(t1) y(t2)}

In general, this expectation must be taken over an ensemble of records, such that only a probabilistic definition and interpretation of this relation can be given. For an ergodic random process, the expectation operator in the foregoing two relations can be replaced by a time average over a sufficiently long period of time. Many physical processes are stationary in the sense that the time average of the product of two random processes is a function only of the time difference in the age variable of the two processes. A condition that is stronger than stationarity is ergodicity; an ergodic process is always stationary. An ergodic autocorrelation function is symmetrical in the time-difference variable, and it is consequently an even function. It can be shown that the variance of an observation is the mean square value of the observation minus the square of the mean of the observation. The variance function is always nonnegative, and the square root of the variance is commonly known as the standard deviation. It turns out that there are a variety of measures of "average," such as the mean, median, and mode of the observation. In most of the applications of interest here, we will be concerned with processes that evolve over time. A state-space approach is more powerful than the input-output analysis approach but will often require more complex mathematics and a more detailed knowledge of the structure and interactions of the system being modeled. In any area that is appropriate for statistical analysis, there is an essential random nature to the observations that are taken. An often-used measure of "spread" is known as the variance.
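The time-average versions of these quantities for an ergodic record can be sketched in a few lines. This illustrative code (the function names are ours, not the text's) also demonstrates the identity stated above: the variance equals the mean square value minus the square of the mean, so the zero-lag autocorrelation equals the variance plus the squared mean.

```python
def mean(xs):
    """Sample mean of a record."""
    return sum(xs) / len(xs)

def mean_square(xs):
    """Average of the squares of the observations."""
    return sum(x * x for x in xs) / len(xs)

def variance(xs):
    """Average squared deviation from the mean; equals
    mean_square(xs) - mean(xs)**2."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def autocorrelation(x, lag):
    """Time-average estimate of phi_x(tau) = E{x(t) x(t + tau)}.
    For an ergodic process the ensemble expectation may be replaced
    by an average over a sufficiently long record, as in the text."""
    n = len(x) - lag
    return sum(x[t] * x[t + lag] for t in range(n)) / n
```

At lag zero, `autocorrelation(x, 0)` is just the mean square value of the record, consistent with the relations given above.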
Another very important average measure of an observation concerns the variability of one piece of data from others. One purpose of a statistical analysis is the summarization, or standardized representation in terms of various norms, of data or information, such as a time series. You may obtain a representation of the "central value" of a random phenomenon by using the average value of the observations. The average of the square of the observations is known as the mean square value. The variance is often a very useful measure of "spread" in a set of observations: while the variance of a set of observations is a measure of their spread or dispersion, it will have a smaller value for a random variable x whose observations are always close to the mean value. The variance function is a particular case of a covariance function. The correlation function is often used also, and there exists a large body of knowledge concerning measurement of correlation functions. By definition, an ergodic process is one for which the time average moments of the process and the ensemble average moments are the same. An ergodic process is always stationary; the converse is not necessarily true. When random processes are ergodic, we have

φ_xy(t1 - t2) = φ_xy(t1, t2)

The time difference variable t1 - t2 is generally replaced by a single variable such as τ.

The major potential problem with an input-output representation is that there is no general way in which we can become familiar with other than the input-output behavior of a system through use of input-output data. When internal aspects matter, a state-space model representation is appropriate. The input to the model is data that represent observations of the relevant variables, and various approaches can be used to "identify" the parameters of the model.

In regression analysis, it should first be ascertained that a sufficient number of joint observations of the values of all the variables considered are available; generally, the number of data points should be not smaller than 10 times the number of variables. Often, it is not possible to directly observe the values of variables. Generally, a regression analysis equation describes the value of one variable, the dependent variable, y, as a function of other independent variables. The most widely used estimation method is generally referred to as "least squares," to indicate that it determines those coefficient values that will yield the smallest possible value for the sum of the squares of the differences between observed values of the dependent variable and values computed from the estimated relationship. In mathematical notation, if the function f(x) is to be determined such as to best express y as a function of the set of state variables x = [x1, x2, ..., xn], and we have N observed values y = [y1, y2, ..., yN], then we determine the unspecified coefficients in f(x) such that the sum from i = 1 to i = N of the squared error expression [yi - f(xi)]^2 is minimal.

In one selection procedure, an equation is estimated using the most significant variables only; a ranking of the levels of significance of the respective coefficients in the regression equation is determined, through use of hypothesis testing techniques, and the process is repeated until the likelihood that any of the remaining coefficients actually represents no relation at all is smaller than some preset value or level of significance. If, for example, the postulated relation is y = a x^b z^c, then the logarithm is taken on both sides to yield

Y = ln(y) = ln(a) + b ln(x) + c ln(z), or Y = A + bX + cZ

where X = ln(x) and Z = ln(z). The values of a, b, and c that best fit the nonlinear equation are not generally the values that best fit the linear logarithmic equation.
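The logarithmic transformation above can be demonstrated directly. This is an illustrative sketch, not from the text: the data, the coefficient values, and the use of NumPy's least-squares solver are our assumptions. With noise-free synthetic data generated from y = a x^b z^c, the fit in the log domain recovers the parameters exactly; with noisy data, as the text cautions, the log-domain fit is no longer the best fit to the original nonlinear equation.

```python
import math
import numpy as np

# Assumed "true" multiplicative model y = a * x**b * z**c (illustrative).
a_true, b_true, c_true = 2.0, 1.5, -0.5
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 200)
z = rng.uniform(1.0, 10.0, 200)
y = a_true * x**b_true * z**c_true

# Taking logarithms gives ln(y) = ln(a) + b*ln(x) + c*ln(z), which is
# linear in the transformed variables X = ln(x), Z = ln(z).
A = np.column_stack([np.ones_like(x), np.log(x), np.log(z)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)

a_hat = math.exp(coef[0])   # recover a from A = ln(a)
b_hat, c_hat = coef[1], coef[2]
```

Because the synthetic data are noiseless, `a_hat`, `b_hat`, and `c_hat` recover the assumed parameters to numerical precision.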
This aspect of regression analysis is often not emphasized to the extent appropriate for identification of useful models. Regression techniques are used to obtain a mathematical model that specifies the relations between a set of variables. Regression is concerned with the determination of those parameter values in a given equation, in which the dependent variable, y, is described as a function of other independent variables, such that use of the equation results in the best possible fit to observed data. Regression analysis equations may be helpful for interpolation or extrapolation, and they may be used as a part of a more complicated mathematical description of some problem. The result of a regression analysis may also be useful as evidence to support or reject hypothetical theories about the existence of relations between variables in a system. Regression analysis and estimation theory also include the search for an appropriate structural equation or, alternately, an input-output model that best replicates observed data. In estimation theory, noise-corrupted observations must be made, and it is desired to "filter" the observed data in order to best separate data from noise. There are a number of generalizations on the basic least squares estimation criterion, especially to include the dynamic evolution of observations over time and the notion of weights. These extensions make regression analysis and estimation theory problems virtually indistinguishable.

The covariance may have any numerical value, positive or negative. It is positive when increases in x are generally associated with increases in y. The mean square value is just the variance plus the square of the mean value; at zero age variable, the autocorrelation function therefore equals the mean square value. The autocorrelation and cross-correlation functions may take on positive or negative values; essentially the only restriction is that the autocorrelation function must be nonnegative for zero difference in the age variable. The statistical technique known as analysis of variance, which is closely related to regression analysis, is generally concerned with disaggregation of the components of variance, covariance, and correlation functions into components that arise from specific causes.

The following activities are associated with the solution of a typical regression problem:

1. Determination of Candidate Variables and Data Collection. The dependent variables that need to be described as a function of other variables are defined. Usually, one needs to determine which of the candidate independent variables should be taken into account in order to obtain a good description of variations in the dependent variable. This step also involves obtaining the needed data.

2. Postulation of a Mathematical Model or Structure. The form of the postulated equation may be linear, multiplicative, exponential, logarithmic, or some other appropriate mathematical form. An initial postulate is made, usually guided by intuition and existing theory and knowledge. Often, generally when possible, transformations are performed such that a linear relationship between the transformed variables results, as when the assumed model structure has the multiplicative form y = a x^b z^c. It should be remarked that even though the logarithm of all the data is taken so that the postulated relationship between the transformed data becomes linear, the resulting equation is linear in the transformed variables X, Y, and Z but not in the original variables x, y, and z.

One approach to variable selection calls for first taking all of the candidate variables into account and then estimating the associated coefficient values and their uncertainty. Then, through use of hypothesis testing techniques, it is determined which of the coefficients is most likely to represent no relation at all between the dependent and independent variables. The state variable corresponding to this is then dropped from further consideration in the analysis, and the procedure is repeated. Another approach is based on a procedure that is inverse to the one just described.
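The least-squares criterion, minimizing the sum of squared differences between observed and computed values of the dependent variable, has a closed form in the simplest one-variable linear case. This is an illustrative sketch (the function name is ours) of that textbook result, not a procedure given in the original text.

```python
def fit_line(xs, ys):
    """Closed-form least-squares estimates of m and q in y = m*x + q,
    minimizing sum_i (y_i - (m*x_i + q))**2 over the observations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Centered sums: slope is the ratio of the cross-deviation sum
    # to the squared-deviation sum of x.
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    q = my - m * mx
    return m, q
```

For data lying exactly on a line, the estimates reproduce the line; for noisy data they give the best fit in the squared-error sense described above.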
the autocorrelation function.

3. Choice of Estimation and Selection Method. Hypothesis testing is generally used as part of the regression analysis process. Depending on the criticality of obtaining a good regression equation, other structural models or different selection procedures may be used. In the forward procedure, one state variable at a time is added to the regression equation until it is observed that the addition of one more state variable does not lead to an "appreciable" improvement in the goodness of fit of the resulting regression equation; it is also possible to add more than one state variable at a time, in decreasing order of initial significance.

4. Determination of the Regression Curve. This step involves use of a subportion of the regression analysis program in which algorithms for parameter estimation have been encoded.

5. Iteration and Sensitivity Analysis. Various iterative and sensitivity forms of testing should be performed. The end result is the appropriate regression equation. Figure 4.62 illustrates the flow of these steps in regression analysis.

The methods of regression analysis and estimation theory are closely related methodologically to optimization methods because, in each case, parameters are determined such as to lead to extreme values of a performance index. Data and theory need to be combined in order to determine an equation that best expresses the relations between and among a set of observed variables. Regression analysis and estimation theory are often used in conjunction with other forms of mathematical modeling in order to attain extrapolations of likely futures or trends, or an interpolation of likely values occurring between data points. These structural aspects of regression analysis, and of systems engineering in general, are underexplored relative to areas more subject to complete analytical exploration.

In order to determine the completeness and usefulness of a regression analysis, it is important that the answers to the following questions be "yes":

1. Have you taken all the important explanatory variables into account?
2. Do the obtained results make sense to you, and can this assertion be validated in some manner?
3. Are the results of the analysis useful to you in clarifying the structure of the problem and in leading to enhanced wisdom relative to its resolution?

The user of regression analysis and estimation theory techniques should also be concerned with several observations that affect model validity:

1. The results of regression analysis and estimation theory will be unreliable if they are based on an insufficient number of observations.
2. Results that are obtained through use of these approaches in situations in which there exists little theoretical knowledge should be examined very carefully, because there is no guarantee that a regression relation that displays an excellent fit to observed data will really have any predictive power at all. Causation is required for prediction, and a good fit obtained using regression approaches only assures high correlation; this caveat often seems overlooked!

The following are conditions under which the use of regression analysis and estimation theory techniques may be appropriate:

1. A mathematical model of observed phenomena is desired and there exists no established theory to explain variations in a variable; consequently, the determined model must be primarily data-based as contrasted with theory-based.
2. It is desired to use data as a basis for suggesting theoretical relations between variables.
3. Data are corrupted by observation noise.
4. Data need to be critically examined to test the validity of a postulated hypothetical theory or assumption.
5. Parameter values for an assumed structural model need to be determined on the basis of empirical evidence.

Trend extrapolation and time-series forecasting, which are very closely related to estimation and regression theory, are widely used as the basis for projection of the future in terms of a series of historical observations of one or more observed variables over time. In applying them, it is generally assumed that a time series of observations of sufficient length and quality is available. This length and quality depend upon the purpose of the forecast, in particular the time length of the extrapolation into the future that is needed and the dynamics of the process being modeled. The time interval over which observations are available should be long enough to allow detection of trends of potential interest. A rough sketch of all, or a portion of, the observed data as they evolve over time is an initial and very helpful approach leading to identification of readily apparent characteristics of a time series; often cyclic components, general trends, and random fluctuations can be identified, at least in a preliminary way. This may be very helpful in the selection of appropriate structural forms or characteristics for an initial time-series model. The success of a modeling effort is critically dependent upon success in choosing an appropriate structural model, and statistical tests are conducted to judge the adequacy of an assumed time-series model. Information on the physical process and the environment into which it is embedded
may be of much assistance in enabling the selection of an appropriate mathematical model for the time-series representation. There is no general systematic approach for finding the appropriate structural model to use to represent a time series. The essential difference between time-series forecasting and regression analysis is that time is necessarily considered as the independent variable in the former case. The criteria for inclusion or exclusion of variables in an estimation or regression algorithm must necessarily be strongly dependent upon the purpose for which the resulting model is ultimately to be used. The general form of a time-series model is

Y_t = f(Y_{t-1}, Y_{t-2}, Y_{t-3}, ..., Y_{t-n}) + ε_t

where Y_t indicates the potentially transformed observation of the time series at time t. The following steps provide an elementary description of the process of time-series analysis:

1. Preliminary Review of Observed Data and the Environment in Which the Process Evolves. Poor data quality may make even the optimum fit a very poor one, and the time interval between observations should be sufficiently small to enable isolation of phenomena of interest.

2. Identification of a Structural Model for the Time Series. Once a tentative structural model has been identified, the undetermined coefficients or parameters of this model are estimated in order to make the output of the model best fit the observed data. For example, parameters for a linear differential equation of low order may be identified in order to provide a best fit to observed data.

3. Verification and Validation of an Assumed Model. Often, a time-series model is purposely based only on part of the available data; the model may then be used to predict behavior for the next 5 days, say, where data are available but where the data have not been used to identify the model. After the model has been verified in this way, the data previously unused to identify model parameters may be used to refine the parameters of the model.
The remaining portion of the data is then used to verify the model, such that some statistical error measure is minimized. As in regression analysis, there is no automatic assurance that the "best" model is necessarily very good. Both the functional form and order of the structural model and the criterion used to best fit the data to the model output are subject to change; another candidate structural model is subjected to experimentation if the one under test is shown to be inadequate. The expression ε_t represents a white-noise driving term, and other statistical analysis methods are used to characterize the error term associated with it. In this sense, time-series forecasting is a subset of regression analysis, and the subject doubtlessly deserves a separate treatment.

Simply stated, the basic objectives in time-series forecasting are as follows:

1. To identify a mathematical structural relation that might potentially explain observed phenomena
2. To best identify unspecified parameters within this structure
3. To best estimate a parameter within the time interval of observation
4. To use the determined relation in order to extend the observed data into the future, based on the assumption that past trends will continue

The typical results or final products of a time-series analysis include the following:

• A projection or forecast of one or more future values of one or more variables that are of interest
• An identified mathematical function or structural model describing past observations, which is potentially useful as an aid in forecasting
• Evidence that can be used to support or reject assumptions or theories about the mechanisms that govern the past behavior of one or more variables that are of interest

In the actual use of time-series analysis algorithms, a model might, for example, be identified based on 30 days worth of data and then verified against data withheld from identification.
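The practice of identifying a model on part of the data and verifying it on the remaining portion can be sketched as follows. Everything in this example is illustrative and not from the text: the synthetic first-order autoregressive data, the least-squares coefficient estimate, and the function names are our assumptions.

```python
import random

def ar1_series(a, n, seed=1):
    """Generate x_t = a*x_{t-1} + w_t with zero-mean unit white noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def fit_ar1(x):
    """Least-squares estimate of a, minimizing sum_t (x_t - a*x_{t-1})**2."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# Identify the model on the first 200 observations; withhold the last 5.
data = ar1_series(0.8, 205)
train, hold = data[:200], data[200:]
a_hat = fit_ar1(train)

# One-step-ahead predictions over the withheld segment, for verification.
prev = train[-1]
errors = []
for actual in hold:
    pred = a_hat * prev
    errors.append(actual - pred)
    prev = actual
```

The size of the residuals on the withheld segment gives a statistical error measure by which the assumed structural model can be judged; if it is inadequate, another candidate model would be tried.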
4. Actual Forecasting. The appropriate time-series model is used to forecast values of appropriate variables, such as forecast values of variables of interest.

Examination of Figure 4.62 indicates that the time-series analysis process is essentially the same as the regression analysis process. It would be impossible to provide details on all of the methods appropriate for time-series forecasting here; there have been many studies in this area. The useful tools include:

• Least-squares curve fitting
• Regression analysis
• Estimation theory
• Hypothesis testing
• System identification
• MA, AR, and ARMA time-series analysis

The models to be used are the assumed models for the physical or organizational process being represented. Generally, these will be difference or differential equations of appropriate order. With each of these, the general problem of constructing a mathematical representation for observed phenomena is fundamental to system identification. Simply stated, a first-order autoregressive model (AR) is one represented by the difference equation x_t = a1 x_{t-1} + w_t, where w_t is a zero-mean white noise forcing function. A first-order moving average (MA) process is one that evolves over time from the model x_t = w_t + b1 w_{t-1}. It should be noted that time-series and regression models may not be fully useful for predicting the effect of options that were in effect over the time period for which data were obtained.

As we noted before, it is the task of the system planner attempting conceptual specification of system architecture to enable the fulfillment of some desired performance goals: to provide a product or service within reasonable cost and other constraint conditions. These overall performance goals or objectives are typically translated, through use of the systems engineering process, into a set of expected values of performance.
Although observed descriptive behavior is generally used as the basis for identification of a system, which is the process of constructing a model that describes observed system behavior, it is very important to note that the uses for system identification are primarily normative. That is to say, we need to identify, or estimate, the characteristics of systems at some future time in order to evolve optimal policies for these systems over this future time horizon. The purpose of the optimal policy is to accomplish some meaningful goal. The ultimate goal of systems control is to provide a certain function, reliability, and safety.

There are three basic types of models used for time-series forecasting: autoregressive models, moving average models, and autoregressive moving average models that represent a hybrid approach. The general zero-mean autoregressive process of order N is that process which evolves from the model

x_t = a1 x_{t-1} + a2 x_{t-2} + ... + aN x_{t-N} + w_t

where w_t is zero-mean white noise. The process x_t generated by the first-order version of this model is known as a first-order autoregressive process, and it is also a Markov process. For an Nth-order MA process, we have the model

x_t = w_t + b1 w_{t-1} + b2 w_{t-2} + ... + bN w_{t-N}

The autoregressive moving average (ARMA) process is just a combination of the AR and the MA processes and can be written as

x_t = a1 x_{t-1} + a2 x_{t-2} + ... + aN x_{t-N} + w_t + b1 w_{t-1} + b2 w_{t-2} + ... + bN w_{t-N}

There are also variations of these. For example, it is possible to obtain adaptive estimation and adaptive time-series analysis algorithms in which estimated values are used to tune the parameters of the estimation or time-series analysis algorithms. The tools useful for implementation of approaches to time-series analysis are just computer implementations of algorithms associated with the methods discussed here. For almost all intents and purposes, statistical estimation theory, regression analysis, and time-series analysis methods have a great deal in common; generically, all of these describe similar methods that are used for essentially similar purposes.

The presence of several conditions makes use of time-series analysis appropriate:

1. There is a need for forecasting future values of critical variables.
2. There exist sufficient past data about these variables.
3. It is reasonable to assume that the information contained in historical data is the best and most reliable source of future predictions.
4. It is not necessary, or not possible, to fully specify an appropriate model on the basis of accepted theories about the causes of change in important variables.

On the basis of this, it should also be cautioned that:

1. The environment may change over the forecasting time interval, and unless this is accommodated, poor results will often occur, especially if implementation of the options being evaluated changes the structure of the model used for prediction and this change is not recognized.
2. It may be difficult to cope with information that is not easily quantified, and a consequence of this may be to ignore such information, to the detriment of the analysis.
3. Earlier comments concerning validity cautions regarding use of regression analysis and estimation theory are applicable here as well.
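The AR, MA, and ARMA difference equations above can be simulated directly. The following sketch is illustrative only (the function name and the unit-variance Gaussian choice for the white noise w_t are our assumptions); passing empty coefficient lists reduces the general ARMA recursion to a pure MA or pure AR process.

```python
import random

def simulate_arma(a, b, n, seed=0):
    """Simulate x_t = sum_i a_i x_{t-i} + w_t + sum_j b_j w_{t-j},
    with w_t zero-mean, unit-variance Gaussian white noise.

    a == [] gives a pure MA process; b == [] gives a pure AR process.
    """
    rng = random.Random(seed)
    x_hist = [0.0] * len(a)   # x_{t-1}, ..., x_{t-N}
    w_hist = [0.0] * len(b)   # w_{t-1}, ..., w_{t-M}
    out = []
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)
        x = (sum(ai * xi for ai, xi in zip(a, x_hist)) + w
             + sum(bj * wj for bj, wj in zip(b, w_hist)))
        out.append(x)
        # Shift the histories, keeping only the required lags.
        x_hist = ([x] + x_hist)[:len(a)]
        w_hist = ([w] + w_hist)[:len(b)]
    return out
```

For instance, `simulate_arma([0.5], [0.3], 400)` generates an ARMA(1, 1) record whose sample mean is near zero, consistent with the zero-mean white noise driving term.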
This is accompanied by an error analysis in which statistical uncertainties of the random functions are taken into account to enable computation of moments, and so on. Generically, this is the task of the system designer attempting concept realization in the form of operational system specifications, and the task of a man-machine intelligent system to examine future issues in such a way as to be able to do the following:

1. Identify task requirements to determine issues to be examined further and those not to be considered further
2. Identify a set of hypotheses or alternative courses of action that may resolve the identified future issues
3. Identify the impacts of the alternative courses of action
4. Interpret the impacts in terms of the objectives for the task at hand
5. Select an alternative for implementation and implement the resulting control or policy
6. Monitor performance to enable determination of how well the integrated system is functioning

In a similar way, identification of task requirements involves an effort to determine what we have earlier called the contingency task structure, that is to say, the environment into which the task is embedded, the experiential familiarity of the problem solver with the task, the specific task at hand, and the general objectives for the system.

System identification needs abound in many of the above six activities. Identification of the impacts of alternative courses of action can only be accomplished through use of some model of system operation. If that model has unspecified parameters associated with it, then there is a fundamental requirement for what is generally called system (parameter) identification. This will generally require data, and perhaps an optimization process, to best fit the model to the data.

There appear to be three fundamental activities associated with any given time-series analysis and system identification effort:

1. Characterization of the generic type of effort involved
2. Determination of the structure of the specific system to be identified
3. Identification of parameters within this structure

Each of these three activities is related: the results of the first activity clearly influence the second, which in turn influences the third. There are a number of potential ways that may be used for process characterization. Among them are the following:

1. Nature of the Process Involved. In this characterization we determine whether the system with which we are dealing is causal or noncausal, dynamic or static, discrete event or continuous event, finite state or infinite state.
2. Nature of the Input-Output Relations Involved. In this characterization we represent the basic process involved in the form of a set of structural laws that are assumed correct, together with a set of behavioral structural assumptions, the parameters of which need estimation or identification. Representation of time-series analysis and system identification problems in terms of the nature of the input-output relations involved is usually not a conceptually difficult task, although it may be very tedious to accomplish.
3. Information Imprecision and Uncertainties Involved. In this characterization we determine the nature of the uncertainties involved and the degree of precision and completeness that is associated with the process; the choice among representations depends upon the method chosen to represent uncertainty and imprecision.

Once the nature of the process that is involved has been characterized, we use contextual relations to determine a structure for the model, and this structural representation and parameter determination influence the estimation and identification issue characterization that is used.

In a very real sense, this entire text concerns the model-building process. Our last chapter was concerned with issue formulation, and this includes such elements as problem definition (needs, constraints, and alterables), along with value system design and system synthesis. These are just the fundamental formulation, analysis, and interpretation steps of systems engineering, as we have discussed so often here. The past several sections have examined various approaches to the determination of models of various aspects of system behavior, and have discussed specification of model structure and of parameters within that structure. There are many modeling issues that arise in the use of time-series modeling, as well as in the use of related approaches in systems identification. Our intent has been primarily to describe various methodologies for model making, rather than the associated algorithms and what is presently available in terms of software implementation for them. This chapter has only scratched the surface of a number of important issues in systems analysis and, as with other topics in this chapter, much additional information can be found in references 108-113.

This section will therefore discuss the very important subject of appraisal of system models; both valuation and validation of system models will be discussed. To appraise a model we need: (1) to determine who wants a model and what the model is wanted for, considering objectives and their measures as well as policies and their measures, and thus specifying what is wanted of the model; (2) to elaborate the problem so that we can begin formalizing a model; and (3) to determine the contextual relations that give a structure for the model. We will not discuss statistical evaluation of models here; Chapter 22 of reference 33 contains a description of approaches for statistical evaluation in systems engineering.
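The three fundamental activities of a time-series identification effort can be made concrete for the autoregressive models discussed earlier: activity 2 (structure determination) reduces to choosing the model order, and activity 3 (parameter identification) to a least-squares fit. The sketch below is one common way to do this, not a procedure prescribed by the text; the data-generating coefficients and the AIC-style order-selection penalty are illustrative assumptions.

```python
import math
import random

def simulate_ar(coeffs, n, seed=1):
    """Simulate x_t = sum_k coeffs[k] * x_{t-1-k} + w_t with unit white noise."""
    rng = random.Random(seed)
    x = [0.0] * len(coeffs)
    for _ in range(n):
        past = sum(c * x[-1 - k] for k, c in enumerate(coeffs))
        x.append(past + rng.gauss(0.0, 1.0))
    return x

def fit_ar(x, order):
    """Least-squares AR fit via the normal equations (Gaussian elimination)."""
    rows = range(order, len(x))
    A = [[sum(x[t - 1 - i] * x[t - 1 - j] for t in rows) for j in range(order)]
         for i in range(order)]
    b = [sum(x[t] * x[t - 1 - i] for t in rows) for i in range(order)]
    n = order
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        a[i] = (M[i][n] - sum(M[i][j] * a[j] for j in range(i + 1, n))) / M[i][i]
    return a

def residual_variance(x, a):
    """One-step prediction error variance of a fitted AR model."""
    res = [x[t] - sum(a[i] * x[t - 1 - i] for i in range(len(a)))
           for t in range(len(a), len(x))]
    return sum(r * r for r in res) / len(res)

x = simulate_ar([0.5, 0.3], 4000)   # assumed true structure: order 2
# Structure determination: penalized fit quality (an AIC-like score) by order.
scores = {p: len(x) * math.log(residual_variance(x, fit_ar(x, p))) + 2 * p
          for p in (1, 2, 3)}
best = min(scores, key=scores.get)
```

The penalized score illustrates why structure determination (the order) must precede parameter identification: the same least-squares machinery yields different parameter sets for each candidate structure, and the first-activity choice shapes everything downstream.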
4.8 EVALUATION OF LARGE-SCALE MODELS

Even if a model is verified, there is no assurance that it is valid in the sense that predictions made from the model will occur. There are no objective criteria for model validity, and model usefulness cannot be determined by objective truth criteria. A model can be valid only with respect to well-defined and stated functions and purposes, and thus there is little likelihood of the development of a general-purpose, context-free simulation model. Because data concerning the results of policies not implemented are generally not available, and also because of inherent noise and observation error, there is no way to completely validate a model, at least other than with respect to policies that have been implemented. It is difficult to build a model that does not reflect the outlook and bias of the modeler, and there are no one-to-one relationships between the structure of a complex nonlinear feedback system and its behavior.

Nevertheless, there are several steps that can be used to validate a model. These include a reasonableness test, in which we determine that the overall model as well as model subsystems respond to inputs in a reasonable way. Validation can also be achieved in part if we can determine that the structure of the model corresponds to the structure of the elements obtained in the problem definition. The model should also be valid according to the statistical time series used to determine parameters within the model. Finally, the model should be epistemologically valid, in that the policy interpretations of the various model parameters, structure, and recommendations are consistent with the professional, ethical, and moral standards of the group affected by the model. It would be impossible to present an in-depth discussion of this topic without first presenting considerable material concerning optimization, and such a presentation would not be consistent with the goals of this text; nevertheless, a qualitative understanding of what can be accomplished with respect to this phase of model validation is quite important.

A simulation model is generally the most appropriate when the following four conditions are satisfied:

1. The system under study is too complex to be described by a simple analytical model.
2. It is either impossible or very costly to observe the effects of implementing certain policies in the real world.
3. There is no reasonable straightforward analytical technique for solution of the system model.
4. The process under investigation has many state variables and/or is highly nonlinear in its behavioral patterns.

The majority of large-scale and large-scope systems certainly satisfy these four conditions.

Verification of a model is needed to ensure that the model behaves in a gross fashion as the model builder intends. A credible model is one that has been verified and validated, and model credibility depends to a considerable extent upon the interaction between the model and the model user. To determine model credibility we must examine the evidence that will be required before the model user can use the model as an aid to decision making. As indicated in the preceding paragraphs, a model, to be of ultimate value, should be useful in the decision-making process. It is much more likely that relevant data will be available for these analyses in operational situations than for strategic issues.

An essential complicating problem in a large-scale system is the need to correctly represent the structure of a system rather than just to accurately reproduce observed data. Recall our earlier comments that a model has structure, function, and purpose. Only by obtaining proper system structure can there be a proper understanding of the underlying cause-and-effect relationships; in this way we are able to show how problems are created so that corrective actions may be taken and control policies established. Thus we want to postulate correctly the forces operating between various subsystems of a complex system. In a complex, large-scale system this process is very difficult because of the large number of individual subsystems, organizational and technical, that need to be structured to form the overall system model. For this reason the earlier chapters placed considerable emphasis on techniques appropriate for the model specification steps of problem definition, value system design, and system synthesis.

In estimating or identifying parameters, such as those needed to validate and calibrate system models, two subprocesses may immediately be recognized. One is the formulation of the system structure, and the second is the determination of system parameters which determine behavior within the system structure. Observed data may be used to help formulate the system structure. Observation of basic data and estimation or identification of parameters are essential steps in the construction and validation of system models, and in system identification. If we are given a sample from a population whose specification involves one or more unknown parameters, we may form estimates of these parameters, as opposed to using some other method such as judgment or the result of a coin toss. There are quite a number of different types of estimators of a given parameter, and estimation theory is concerned with the properties of different estimators; these types are often remarkably similar. Parameter estimation is a very important subject with respect to model validation. The simplest estimation procedure, both conceptually and from the point of view of implementation, is the least square error estimator, which will be discussed in our next chapter. Many advanced estimation algorithms are available and in actual use.
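As a concrete instance of the least square error estimator just mentioned, the sketch below calibrates the two parameters of an assumed linear model against noisy observations using the closed-form least-squares solution. The model form, data, and parameter values are invented for illustration.

```python
import random

def least_squares_line(us, ys):
    """Closed-form least-squares fit of y = m*u + c."""
    n = len(us)
    mu, my = sum(us) / n, sum(ys) / n
    sxx = sum((u - mu) ** 2 for u in us)
    sxy = sum((u - mu) * (y - my) for u, y in zip(us, ys))
    m = sxy / sxx
    return m, my - m * mu

rng = random.Random(42)
us = [u / 10 for u in range(200)]
# Observations of an assumed "true" process y = 2u - 1 plus observation error.
ys = [2.0 * u - 1.0 + rng.gauss(0.0, 0.5) for u in us]
m, c = least_squares_line(us, ys)
```

Note that the fit recovers parameters only within the assumed structure y = m*u + c; as the surrounding discussion stresses, a good fit says nothing about whether that structure correctly represents the system.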
However, there are also conditions under which it may be questionable whether a simulation model can be developed which will be appropriate. These conditions are as follows:

1. The issue context is not reasonably well specified.
2. The problem definition, value system design, and system synthesis steps have not been adequately completed or, for some reason, cannot be completed.
3. The elements related to the policy questions to be asked are not readily accessible and measurable at acceptable costs.
4. The needed databases are inconsistent and inadequate.
5. There are short-term deadlines, or the cost of modeling outweighs potential benefits.
6. More appropriate techniques exist.

Selection of a poor structure will complicate system parameter identification and design, and will inhibit or prohibit proper system operation. Observed data may be used to help formulate the system structure, and unknown system parameters may then be determined within that structure to minimize a given error criterion. Almost any simple model with two or three constant parameters can be tuned to fit a curve, and it is not true that only one model will produce given observed data; the mere reproduction of a historical set of data is not in itself sufficient to make the model useful for explaining behavior. We must consider structure, function, and purpose when devising a model if it is to be a truly useful one.

There are limitations and problems associated with current systems analysis and modeling approaches. These include documentation problems, error propagation, data source adequacy, sensitivity considerations for variations in internal model parameters as well as exogenous variables, and model transferability from one setting and one environment to another. There may also be problems associated with building the proper connecting links between the formulation elements, including identification of these elements. Insightful, responsible, and appropriate model development should result in models that are both relevant to large-scale and large-scope issues. Brewer [114], who initially discussed many of these factors, also suggests that the following evaluation scorecard of potential models be used to evaluate their usefulness:

1. Intention and context demarcation
   a. Is the major purpose for which the model is to be built described?
   b. Are model appraisal criteria appropriate for the model intentions? Have we connected the intended use and criteria?
2. Specification and problem elaboration
   a. Have logicotheoretical strengths and weaknesses of the model been considered? Are the underlying structural assumptions well treated?
   b. Have data aspects of the model received explicit consideration?
3. Overall appraisal function
   a. Theoretical appraisal: Is the model structurally sound? Have structure, function, and purpose been considered?
   b. Technical appraisal: Does the model reproduce historical data? How accurately have the parameters within the structure been tuned?
   c. Ethical appraisal: Does the model, or the results from it, offend moral codes, duty, or principles?
   d. Pragmatic appraisal: Is the model realistic and applicable with respect to policy or forecasting questions?
4. Validation and performance appraisal
   a. Have we assessed the model validation techniques we propose to use?
   b. Will we assess model validity with respect to the intended purpose of the model?
5. Information collection and management
   a. Have data collection and management procedures been carefully considered and scrutinized?
   b. Has technical evaluation of the database been considered?

4.9 SUMMARY

This extensive chapter has examined many types of systems analysis and modeling methodologies. We have presented many lists of steps to support systems modeling and have provided a number of detailed examples of some of the modeling approaches, their use in intelligently constructing a model and linking it to the decision maker, and cautions concerning their use. A brief discussion has been devoted to questions of model evaluation. The relation of systems analysis and modeling to the other steps of systems engineering has been indicated previously; this is a particularly important topic. The problems at the end of the chapter provide the opportunity to construct and evaluate a number of simulation models. We will discuss some aspects of sensitivity analysis in our next chapter.

In this chapter we have also wished to consider an analysis-related problem that can be used to model decisions under uncertainty. Here the selection of one of the courses of action leads to a chance situation in which the outcome is uncertain. A simple illustration of such a situation is represented in Figure 4.63, in which a decision node leads to chance nodes, with risk events, their probabilities, and their outcomes. This is the sort of structure for which the efforts in the latter portions of Chapter 5 are most useful.

Figure 4.63. A decision tree model of actions and risk event outcomes.

PROBLEMS

4.12 Prepare a brief paper in which you discuss amplitude and time-scale considerations associated with the determination of a KSIM model and its associated solution.

4.13 Determine a system dynamics model for energy supply and demand with respect to a single energy source. A competitive substitution variable (S), and a substitute fraction (SF), may determine changes in demand resulting from the use of alternative energy resources, such as the ratio of cost to average cost or substitution. Sales revenue from business income may be used to finance the search for new energy resources. Please comment upon the potential abilities of system dynamics modeling in accomplishing each of these.
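The decision-under-uncertainty structure of Figure 4.63 is evaluated by folding back expected values: each chance node is reduced to its probability-weighted outcome, and the decision node then selects the best action. The actions, probabilities, and outcomes below are invented for illustration; they are not taken from the figure.

```python
def expected_value(chance_node):
    """Expected outcome of a chance node given (probability, outcome) pairs."""
    return sum(p * v for p, v in chance_node)

# Hypothetical decision: each action leads to a chance node whose risk-event
# outcomes carry the listed probabilities.
actions = {
    "A": [(0.6, 100.0), (0.4, -20.0)],
    "B": [(0.9, 40.0), (0.1, 0.0)],
}
values = {name: expected_value(node) for name, node in actions.items()}
best = max(values, key=values.get)
print(best, values[best])  # prints: A 52.0
```

Expected value is only one folding-back criterion; a risk-averse decision maker might instead maximize expected utility, which is among the topics taken up in Chapter 5.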
4.1 Determine a feedback control system model for the epidemic propagation problem. Contrast and compare this with the system dynamics model of an epidemic.

4.2 Consider development of a system dynamics model to predict the percentage of children in public schools and the percentage in private school in a city. Both schools exist and will accept all who apply. The private school is very short of money and has high tuition. As tuition rates are increased in the private school, enrollments decrease. What other important factors do you believe should be incorporated in the model?

4.3 A system dynamics model for energy resource utilization might consist of the following level variables: level of reserves (NR), usage rate (URA), level of technology (T), and demand (DE). The exogenous variable for this model may be assumed to be normal demand (ND) for a given energy source; normal demand might be assumed to be an exponentially increasing function of time. This normal demand may be modified by actual developments in order to generate actual demand (DE). A representative model would appear to consist of a basic supply-demand loop consisting of actual cost (AC), demand (DE), average usage rate (URA), and reserves (NR). Technology (T) is basically the result of research and development financed through revenue (R) and demand (DE). Research and development may also lead to exploration for new resources, and newly developed techniques from this research and development may be so used. This may be accomplished by means of exploration (EX). Provision should also be made for the discovery and development of new reserves of nonrenewable resources. This would be accomplished in such a way that, as reserves are depleted, the increased cost of distribution and pollution abatement is reflected in the total cost of the resource; the reserves-to-cost link may actually be indirectly implemented through two other loops representing distribution (D) and pollution abatement (PA). Indicate how this basic model could be expanded to incorporate a variety of energy sources by cascading several simple models of the sort you have just developed and coupling them through some average cost and availability relationships.

4.4 Determine a system dynamics diagram for pertinent elements in your educational institution. Be sure to specify your assumed problem boundary and the elements (faculty, student enrollments, faculty promotions and discharges, research support, etc.) used to construct your system dynamics diagram.

4.5 Among the several uses to which a model may be put are the following: (a) to display the perception and belief of a person concerning what has happened and what will happen in the world; (b) same as (a), but for a group; (c) to learn more about a particular subject; (d) to predict or forecast the future; and (e) to determine policy.

4.6 Determine the bounds for the future probabilities in Example 4.1 for the case where event j is inhibiting. What do the results of Example 4.1 become for the case where event i precedes event j in time?

4.7 Show that there are N^2 probabilities that must be estimated to employ the classical cross-impact analysis approach, and 2N - 1 questions to be answered if we use the direct tree probabilities suggested here.

4.8 Determine a KSIM model for the four elements of a highway transportation problem: quality of maintenance by the highway department (M), vehicle volume (V), vehicle speed (S), and present road conditions. Use the model to examine two theories: (a) larger maintenance funds result in better roads that carry more people at higher speeds, and (b) increased volume damages highways and results in high maintenance costs and low speed.

4.9 Prepare a brief paper in which you contrast and compare cross-impact analysis with KSIM. For what purposes would one method be applicable and the other inapplicable? For what purposes would they both be equally applicable?

4.10 Reexamine the solution to the KSIM exercise considered in the text, in which you use x(1 - x) for the x ln x expression in the KSIM differential equation.

4.11 A person borrows $20,000 at a bank to buy a car. The loan arrangement calls for a four-year loan and for a quoted interest rate of 6%. The total interest is computed as 4 times 0.06 times $20,000, or $4,800. The total to be repaid is $24,800 and, spread over 48 months, the monthly payment is $516.67. What is the actual, or effective, rate of interest in this case? Based on the time value of money relations developed in this chapter, derive the algorithm that expresses the relationship between annual interest rate, amount borrowed, duration in months of the loan, and constant monthly payments in order to fully amortize a mortgage. What do the results of this problem become if there is a certain amount of the principal, called the balloon amount, which is not retired over time and which becomes due upon expiration of the mortgage?

4.14 How much should you be willing to pay now for a promissory note to pay $100,000 thirty years from now if the discount rate is 6%? 10%?

4.15 Suppose that you need to decide whether to accept investment A or investment B as the best investment. Each investment requires an initial investment of $10,000. Investment A will return $30,000 in a period of two years; investment B will return $20,000 in one year. Suppose that the opportunity cost of capital (OCC), or discount rate, is 20% per year. Please compare the net present worth of the two investments and the internal rate of return of the two. Which project is the best according to these criteria? Suppose that the opportunity cost of capital is considered to be a variable. Is there some OCC where the two investments have the same value? Explain the implications of this.

4.16 Please indicate how you might modify the net present worth criterion such that you could consider a borrowing interest rate that is different from a lending interest rate.

4.17 Costs and benefits can be calculated from the perspective of an individual or firm (private sector analysis), the economic system (public economic analysis), or a complete sociopoliticoeconomic system (social cost-benefit analysis). Market prices would be used for benefits and costs in the first analysis, and shadow prices would be used in the second. The third analysis would necessarily attempt to weigh various potentially noncommensurate factors into the analysis. Please write a brief paper on how we might go about each analysis.

4.18 Identify relevant factors that will enable you to bring about a cost-effectiveness analysis of a make or buy decision for (a) database management software, (b) a complete office automation system, and (c) a decision support system [115]. Discuss issues associated with expanding the approach such that it is applicable to each of these situations.

4.19 Prepare for and conduct a cost-effectiveness analysis of the potential decision to purchase a personal computer. Please do this from the perspective of an individual, the organization in which the individual is employed, and society.

4.20 Two projects each cost $300,000 over a period of two years and are directed at technologies that will supply the same service. The technology resulting from one project has a 0.6 probability of supplying the service at a cost of $1 per unit and a 0.40 probability of supplying it at a cost of $2 per unit. The worth to the project developer of being able to supply the service at $1 per unit is estimated at $2M, and the worth of supplying it at $2 per unit is $1M. Comparable risks and outcomes result from project B. What is the expected worth of the two projects, and which should be selected if only one project is to be developed? What is the probability that the technology produced by the nonfunded project is at least as good as that developed by the funded project? What does this say in terms of risk management?

4.21 A company wishes to explore the possibilities of implementing a computer-integrated manufacturing (CIM) process for its production efforts. There are tangible and intangible benefits to a CIM effort, and you should discuss these. Please prepare a cost-benefit analysis or cost-effectiveness analysis of a CIM effort in terms of productivity, profitability, and competitiveness.

4.22 The most elementary approaches to cost-effectiveness analysis are intended for application to those situations where the following conditions exist: (a) enough is known about the technologies under development to develop credible forecasts and estimates of their performance; (b) there is no transfer of information between projects; and (c) the performance of each product does not depend on the performance of the others. These criteria are generally met by large-scale projects directed at the final stages of technology development, but are not met by many basic research efforts.

4.23 You are buying a Sport Utility Vehicle (SUV) which costs $25,000. You put $5000 down on it, which will leave you with a $460/month payment for 48 months. The manufacturer projects the resale value of the truck to be $12,000. (You plan to sell the car after four years.) You also estimate that it will cost you $100/month to operate, $200/month to insure, and $50/month to maintain. Based on this information, draw the cash flow diagram (CFD) for the period you will own your SUV. What is the value, in present-day dollars, of how much you will lose or gain by buying this SUV?

4.24 Build a cost breakdown structure for a project based on the following figures:
a. Research and development costs: 3.2 million during FY93-94
b. Factory modifications: 2.2 million in FY94
c. Advertising: 1.5 million during FY95-97, 0.8 million annually after that
d. Production costs: 180 million per year
e. Replacement model research: 0.5 million in FY98
f. Required service/maintenance support: 1.2 million per year
g. Distribution costs: 0.2 million per year
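The car-loan problem above (a quoted 6% add-on rate producing a $516.67 payment for 48 months on $20,000) turns on the distinction between the add-on rate and the rate that actually equates the payment stream to the amount borrowed. The sketch below solves for that monthly rate; bisection is one of several workable root-finders, not the specific algorithm the chapter asks you to derive.

```python
def payment(principal, monthly_rate, months):
    """Constant monthly payment that fully amortizes the principal."""
    r = monthly_rate
    return principal * r / (1.0 - (1.0 + r) ** -months)

def effective_monthly_rate(principal, pmt, months):
    """Solve payment(principal, r, months) == pmt for r by bisection."""
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if payment(principal, mid, months) < pmt:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Quoted 6% add-on: interest = 4 * 0.06 * 20000 = 4800, repaid over 48 months.
pmt = 24800 / 48                     # about 516.67 per month
r = effective_monthly_rate(20000, pmt, 48)
print(round(12 * r * 100, 2))        # nominal annual rate, in percent
```

Because interest is charged on the full $20,000 even as the balance declines, the effective annual rate comes out well above the quoted 6%, close to double it.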
4.25 The ACME Corporation faces a decision whether to lease or purchase a piece of manufacturing equipment. The equipment can be expected to have a useful life of 5 years. The alternatives are as follows:

1. Lease: The machine can be leased for 5 years at an annual rental fee of $58,000, payable in five equal annual installments at the beginning of each year. The lessor agrees to install the equipment ready to use, to remove it at the end of five years, and to see that it is kept in working order. The ACME Corporation will have to spend about $1000/year for ordinary maintenance and $500/year for insurance.

2. Purchase: The purchase price is $400,000, and installation costs will be $40,000. To maintain the equipment is expected to require $3000 per year, with taxes and insurance of $1200 per year. After its useful life, the equipment can be sold as scrap with a net realization of $30,000.

Alternative investment opportunities would produce a 5% return to the ACME Corporation at this time. Assume that the annual outlays occur at the beginning of each year. What should the corporation management do? Use a present values comparison.

4.26 You are the systems engineer working on the design of a new subway system. One part of your system is the set of automatic token vending machines. Each machine operates for an average of 9000 hours between chance failures (MTBF). What is the chance failure rate (lambda)? Determine the reliability of the machine for the next 5000 hours. The mean time for wearout failures on these vending machines is 27,000 hours, with a standard deviation of 3500 hours. You want a maintenance plan whereby the machines are overhauled before any wearout failures occur. A vending machine has been in operation for 14,000 hours without failure. When is the latest possible time you can do this?

REFERENCES

[1] Gass, S. I., and Sisson, R. L. (Eds.), A Guide to Models in Government Planning and Operations, Sauger Books, Potomac, MD, 1975.
[2] Atherton, D. P., and Borne, P. (Eds.), Concise Encyclopedia of Modelling and Simulation, Pergamon Press, Oxford, UK, 1992.
[3] Sage, A. P. (Ed.), Concise Encyclopedia of Information Processing in Systems and Organizations, Pergamon Press, Oxford, UK, 1990.
[4] Gordon, T. J., and Hayward, H., Initial Experiments with the Cross-Impact Matrix Method of Forecasting, Futures, Vol. 1, No. 2, December 1968, pp. 100-116.
[5] Gordon, T. J., Cross-Impact Matrices: An Illustration of Their Use for Policy Analysis, Futures, Vol. 1, No. 6, December 1969, pp. 527-531.
[6] Enzer, S., Delphi and Cross-Impact Techniques: An Effective Combination for Systematic Futures Analysis, Futures, Vol. 3, No. 1, March 1971, pp. 48-61.
[7] Enzer, S., Cross-Impact Techniques in Technology Assessment, Futures, Vol. 4, No. 1, March 1972.
[8] Duval, A., Fontela, E., and Gabus, A., Cross-Impact Analysis: A Handbook of Concepts and Applications, in Baldwin, M. M. (Ed.), Portraits of Complexity, Battelle Monograph 9, Battelle Memorial Institute, Columbus, OH, 1975, pp. 202-222.
[9] Helmer, O., Looking Forward: A Guide to Futures Research, Sage Publications, Beverly Hills, CA, 1983.
[10] Porter, A. L., Roper, A. T., Mason, T. W., Rossini, F. A., and Banks, J., Forecasting and Management of Technology, Wiley, New York, 1991.
[11] Enzer, S., and Alter, S., Cross-Impact Analysis and Classical Probability: The Question of Consistency, Futures, Vol. 10, No. 3, 1978, pp. 227-239.
[12] Enzer, S., A Case Study Using Forecasting as a Decision-Making Aid, Futures, Vol. 2, No. 4, December 1970, pp. 341-362.
[13] Quinlan, J. R., Inferno: A Cautious Approach to Uncertain Inference, The Computer Journal, Vol. 26, No. 3, 1983, pp. 255-266.
[14] Sage, A. P., and Botta, R., On Human Information Processing and Its Enhancement Using Knowledge-Based Systems, Large Scale Systems, Vol. 4, 1983, pp. 208-223.
[15] Gettys, C. F., and Willke, T. A., The Application of Bayes' Theorem When the True Data State is Uncertain, Organizational Behavior and Human Performance, Vol. 4, 1969, pp. 125-141.
[16] Edwards, W., Phillips, L. D., Hays, W. L., and Goodman, B. C., Probabilistic Information Processing Systems: Design and Evaluation, IEEE Transactions on Systems Science and Cybernetics, Vol. SSC-4, 1968, pp. 248-265.
[17] Pearl, J., Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley, Reading, MA, 1984.
[18] Pearl, J., Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA, 1988.
[19] Schum, D. A., Evidential Foundations of Probabilistic Reasoning, Wiley, New York, 1994.
[20] Dempster, A. P., Upper and Lower Probabilities Induced by a Multivalued Mapping, Annals of Mathematical Statistics, Vol. 38, 1967, pp. 325-339.
[21] Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
[22] Shafer, G., The Combination of Evidence, International Journal of Intelligent Systems, Vol. 1, No. 3, 1986, pp. 155-179.
[23] Toulmin, S., Rieke, R., and Janik, A., An Introduction to Reasoning, Macmillan, New York, 1979.
[24] Suppes, P., A Probabilistic Theory of Causality, North-Holland, Amsterdam, 1970.
[25] Otte, R., A Critique of Suppes' Theory of Probabilistic Causality, Synthese, Vol. 48, No. 2, 1981, pp. 167-189.
[26] Lagomasino, A., and Sage, A. P., An Interactive Inquiry System, Large Scale Systems, Vol. 9, 1985, pp. 231-244.
CHAPTER 5

Interpretation of Alternative Courses of Action and Decision Making

People make decisions. This is a central fact that must be remembered even in today's world of increasing computer automation, where it often seems that machines are making decisions for us. For example, your request for a cash advance at an automated teller kiosk is disapproved. But you know better than to get mad at the machine, because the machine, unless it is malfunctioning, is just executing a policy that was decided by people and programmed into the automated teller's software logic.

Many major efforts called for throughout all phases of a systems engineering life cycle involve the making of decisions. The systems engineering life cycle is intended to enable evolution of high-quality, trustworthy systems that have appropriate structure and which also provide functional support for identified purposeful client objectives. This chapter is about helping people make better decisions in general, and about helping systems engineers with the decision-making tasks they typically encounter in interpreting the impacts of alternative courses of action, or in working with clients in so doing.

This activity goes by several names: decision assessment, decision making (or decision-making), and even decision taking in the United Kingdom and elsewhere. All of these terms are used throughout the decision literature. Without question, decision-making is the most common term. Decision assessment is perhaps the least common of the several noted. We feel that it is an appropriate term because we are interested in accomplishing much more than an analysis of decisions that have been, or which could be, made.
Experiment 26: Lung Volumes and Capacities

Pulmonary function tests are a broad range of tests that measure lung volumes and capacities; they are usually done in a health care provider's office or a specialized facility. In other words, they measure how well the lungs take in and exhale air and how efficiently they transfer oxygen into the blood. Lung volumes and capacities are related to a person's age, weight, gender, and body position. Vital capacity decreases with age, in the supine position as compared with the erect (sitting or standing) posture, and with restrictive and obstructive lung diseases. Residual volume, on the other hand, increases with age and with obstructive lung diseases such as emphysema. Perhaps the oldest device, and the most commonly used lung function screening study, is spirometry. Spirometry measures how well the lungs exhale; the information gathered can help identify obstructive lung disease (such as chronic obstructive pulmonary disease, COPD). A wet spirometer measures lung volumes on a simple mechanical principle; the volumes it records are defined below and illustrated in Figure 2.

Figure 1: Wet spirometer

Relation to the Case

In the case, the patient presents with cough and shortness of breath, and his chest x-ray showed infiltrates consistent with pneumonia, which will certainly affect his breathing. It is therefore important to perform pulmonary function tests such as spirometry to measure the patient's lung volumes and capacities. This can help determine the degree of impairment the lungs are experiencing, and the same measurements can be repeated after treatment of the disease.

Objectives of the Experiment
1. To define and measure the various lung volumes and capacities.
2. To compare the values obtained between a female and a male subject.

The different lung volumes and capacities were measured with a wet spirometer, with the subject standing. Each measurement was done in three trials and the average was taken.
The ones measured were as follows:

Tidal Volume (TV) – The subject was asked to inhale a normal breath, then place the mouthpiece of the spirometer between the lips and exhale normally into the spirometer.

Expiratory Reserve Volume (ERV) – After exhaling normally, the mouthpiece was placed between the lips and the subject was asked to forcefully exhale all the additional air possible.

Inspiratory Reserve Volume (IRV) – The subject was asked to breathe in as deeply as possible, then place the mouthpiece and exhale normally. The tidal volume is subtracted from the value recorded here to get the IRV.

Vital Capacity (VC) – The subject breathed in maximally, placed the mouthpiece, and then forcibly exhaled all the air possible.

Inspiratory Capacity (IC) – After exhaling normally, the subject breathed in as deeply as possible, placed the mouthpiece, and exhaled normally.

Figure 3: Testing of the subjects involved

Marco   Trial 1          Trial 2          Trial 3          Average
TV      0.4              0.5              0.7              0.53 L
ERV     0.7              0.7              1.0              0.80 L
IRV     1.1 - 0.4 = 0.7  1.4 - 0.5 = 0.9  1.2 - 0.7 = 0.5  0.70 L
VC      3.0              3.5              4.5              3.67 L
IC      1.1              1.4              1.2              1.23 L

Figure 4: Tabulated results of the male subject's lung volumes and capacities

Isay    Trial 1          Trial 2          Trial 3          Average
TV      0.3              0.2              0.2              0.23 L
ERV     0.6              1.0              0.8              0.80 L
IRV     0.5 - 0.3 = 0.2  0.4 - 0.2 = 0.2  0.4 - 0.2 = 0.2  0.20 L
VC      1.4              1.4              1.2              1.33 L
IC      0.5              0.4              0.4              0.43 L

Figure 5: Tabulated results of the female subject's lung volumes and capacities

The measurement of the different lung volumes and capacities with the wet spirometer was done with the subject standing so that the abdominal organs do not interfere with the diaphragm as it contracts and moves downward, pushing the abdominal contents out of its way. The various lung volumes and capacities measured using the spirometer were the tidal volume, expiratory reserve volume, inspiratory reserve volume, vital capacity and inspiratory capacity.
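The tabulated averages, and the capacity relation IC = TV + IRV, can be checked with a short script. This is a sketch only, using the male subject's trial values; the variable names are ours:

```python
# Trial values in litres, read off the male subject's table above.
marco = {
    "TV":  [0.4, 0.5, 0.7],
    "ERV": [0.7, 0.7, 1.0],
    "IRV": [0.7, 0.9, 0.5],   # already TV-corrected, as in the table
    "VC":  [3.0, 3.5, 4.5],
    "IC":  [1.1, 1.4, 1.2],
}

avg = {k: round(sum(v) / len(v), 2) for k, v in marco.items()}
print(avg)   # {'TV': 0.53, 'ERV': 0.8, 'IRV': 0.7, 'VC': 3.67, 'IC': 1.23}

# The capacity identity IC = TV + IRV holds here within rounding (0.53 + 0.70 = 1.23).
assert abs(avg["IC"] - (avg["TV"] + avg["IRV"])) < 0.05

# In principle VC = TV + IRV + ERV; comparing 3.67 L with 0.53 + 0.70 + 0.80 = 2.03 L
# shows how much deeper the subject breathed on the dedicated VC trials.
print(avg["VC"] - (avg["TV"] + avg["IRV"] + avg["ERV"]))
```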
The tidal volume, or resting tidal volume, is the volume of air entering the lungs during a single inspiration, which is approximately equal to the volume leaving the lungs on the subsequent expiration; its normal value is about 500 ml (0.5 L). The expiratory reserve volume is the additional volume of air that can be expired after the resting tidal expiration, amounting to about 1500 ml. Meanwhile, the amount of air that can still be inspired after a resting tidal inspiration is termed the inspiratory reserve volume, which is about 3000 ml. The capacities, on the other hand, are sums of two or more lung volumes. Inspiratory capacity is the maximum volume of air that can be inspired from the end-expiratory position; it is called a capacity because it is the sum of the tidal volume and the inspiratory reserve volume. The vital capacity is the maximal volume of air that a person can expire, regardless of the time required, after a maximal inspiration; it is the sum of three volumes: tidal volume, inspiratory reserve volume, and expiratory reserve volume. A variant on this method is the forced vital capacity (FVC), in which the person takes a maximal inspiration and then exhales maximally as fast as possible. The apparatus that measures the FVC also measures the volume expired in the first second, the forced expiratory volume in 1 second (FEV1); normal persons can expire approximately 80 percent of the FVC in 1 second. These measurements are useful diagnostic tools for patients with obstructive lung disorders, who cannot expire a normal fraction of the FVC in 1 second because of their narrowed airways. By contrast, patients with restrictive lung diseases show a reduced vital capacity but a normal FEV1/FVC ratio. Based on the tabulated results above, there is a clear difference between the lung volumes and capacities recorded for the male and female subjects.
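The obstructive-versus-restrictive rule of thumb described above can be sketched as a tiny screening function. This is illustrative only: the 0.8 cutoff is the "approximately 80 percent" figure quoted in the text, the predicted-FVC comparison is an assumed stand-in for proper reference values, and real interpretation uses age- and height-adjusted norms:

```python
def screen(fev1_l: float, fvc_l: float, predicted_fvc_l: float) -> str:
    """Very crude spirometry screen based on the FEV1/FVC ratio.

    Obstructive pattern: the subject cannot expire ~80% of the FVC in 1 s.
    Restrictive pattern: the ratio is normal but the FVC itself is reduced.
    """
    ratio = fev1_l / fvc_l
    if ratio < 0.8:
        return "obstructive pattern"
    if fvc_l < 0.8 * predicted_fvc_l:   # assumed threshold for "reduced" FVC
        return "restrictive pattern"
    return "normal"

print(screen(2.0, 4.0, 4.5))   # low ratio -> obstructive pattern
print(screen(2.8, 3.2, 4.5))   # normal ratio, low volume -> restrictive pattern
```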
Usually the lung volumes and capacities of males are larger than those of females; even when males and females are matched for height and weight, males have larger lungs. Because of this gender-dependent difference in lung size, different normal tables must be used for males and females. As the tables above also show, the female subject's lung volume results were considerably lower than normal. This could be because the subject had a cough and colds during the experiment, which affected her breathing and, in turn, the measured volumes. It should also be noted that a spirometer can only measure the lung volumes and capacities listed above. The total lung capacity and the residual volume cannot be measured directly with a spirometer: the residual volume cannot be expired, since this air remains in the lungs even after a forcible expiration, and since the total lung capacity is the sum of the four distinct lung volumes including the residual volume, it cannot be measured either. In this experiment, we were able to define and measure the various lung volumes and capacities with the use of a wet spirometer. However, the residual volume and the total lung capacity were not measured, since the residual volume, one of the components of the total lung capacity, is the volume of air remaining in the lungs even after forceful expiration. We also observed that the lung volumes and capacities of males and females vary: males usually exhibit larger lung volumes and capacities than females because of the gender-dependent difference in lung size.

Kisner, C. and Colby, L. (1996). Therapeutic Exercise: Foundations and Techniques, 3rd ed. F. A. Davis Company.
Can a ball be decomposed into a finite number of point sets and reassembled into two balls identical to the original? The Banach–Tarski paradox is a theorem in set-theoretic geometry, which states the following: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces.[1] A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), the cut pieces of either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox". The reason the Banach–Tarski theorem is called a paradox is that it contradicts basic geometric intuition. "Doubling the ball" by dividing it into parts and moving them around by rotations and translations, without any stretching, bending, or adding new points, seems to be impossible, since all these operations ought, intuitively speaking, to preserve the volume. The intuition that such operations preserve volumes is not mathematically absurd and it is even included in the formal definition of volumes. However, this is not applicable here because in this case it is impossible to define the volumes of the considered subsets. Reassembling them reproduces a volume, which happens to be different from the volume at the start. Unlike most theorems in geometry, the proof of this result depends in a critical way on the choice of axioms for set theory. 
It can be proven using the axiom of choice, which allows for the construction of non-measurable sets, i.e., collections of points that do not have a volume in the ordinary sense, and whose construction requires an uncountable number of choices.[2] It was shown in 2005 that the pieces in the decomposition can be chosen in such a way that they can be moved continuously into place without running into one another.[3]

Banach and Tarski publication

In a paper published in 1924,[4] Stefan Banach and Alfred Tarski gave a construction of such a paradoxical decomposition, based on earlier work by Giuseppe Vitali concerning the unit interval and on the paradoxical decompositions of the sphere by Felix Hausdorff, and discussed a number of related questions concerning decompositions of subsets of Euclidean spaces in various dimensions. They proved the following more general statement, the strong form of the Banach–Tarski paradox:

Given any two bounded subsets A and B of a Euclidean space in at least three dimensions, both of which have a nonempty interior, there are partitions of A and B into a finite number of disjoint subsets, A = A1 ∪ A2 ∪ ⋯ ∪ Ak and B = B1 ∪ B2 ∪ ⋯ ∪ Bk (for some integer k), such that for each (integer) i between 1 and k, the sets Ai and Bi are congruent.

Now let A be the original ball and B be the union of two translated copies of the original ball. Then the proposition means that the original ball A can be divided into a certain number of pieces which can then be rotated and translated in such a way that the result is the whole set B, which contains two copies of A. The strong form of the Banach–Tarski paradox is false in dimensions one and two, but Banach and Tarski showed that an analogous statement remains true if countably many subsets are allowed.
The difference between dimensions 1 and 2 on the one hand, and 3 and higher on the other, is due to the richer structure of the group E(n) of Euclidean motions in higher dimensions: E(n) is solvable for n = 1, 2 and contains a free group with two generators for n ≥ 3. John von Neumann studied the properties of the group of equivalences that make a paradoxical decomposition possible and introduced the notion of amenable groups. He also found a form of the paradox in the plane which uses area-preserving affine transformations in place of the usual congruences. Tarski proved that amenable groups are precisely those for which no paradoxical decompositions exist. Since only free subgroups are needed in the Banach–Tarski paradox, this led to the long-standing von Neumann conjecture.

Formal treatment

The Banach–Tarski paradox states that a ball in ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembling. Its mathematical structure is greatly elucidated by emphasizing the role played by the group of Euclidean motions and introducing the notions of equidecomposable sets and a paradoxical set. Suppose that G is a group acting on a set X. In the most important special case, X is an n-dimensional Euclidean space (for integral n), and G consists of all isometries of X, i.e. the transformations of X into itself that preserve the distances, usually denoted E(n). Two geometric figures that can be transformed into each other are called congruent, and this terminology will be extended to the general G-action. Two subsets A and B of X are called G-equidecomposable, or equidecomposable with respect to G, if A and B can be partitioned into the same finite number of respectively G-congruent pieces. This defines an equivalence relation among all subsets of X.
Formally, if there exist non-empty sets A1, …, Ak and B1, …, Bk such that

A = A1 ∪ ⋯ ∪ Ak and B = B1 ∪ ⋯ ∪ Bk,

with the Ai pairwise disjoint and the Bi pairwise disjoint, and there exist elements g1, …, gk of G such that

gi(Ai) = Bi for each i between 1 and k,

then it can be said that A and B are G-equidecomposable using k pieces. If a set E has two disjoint subsets A and B such that A and E, as well as B and E, are G-equidecomposable, then E is called paradoxical.

Using this terminology, the Banach–Tarski paradox can be reformulated as follows: A three-dimensional Euclidean ball is equidecomposable with two copies of itself. In fact, there is a sharp result in this case, due to Raphael M. Robinson:[5] doubling the ball can be accomplished with five pieces, and fewer than five pieces will not suffice.

The strong version of the paradox claims: Any two bounded subsets of 3-dimensional Euclidean space with non-empty interiors are equidecomposable. While apparently more general, this statement is derived in a simple way from the doubling of a ball by using a generalization of the Bernstein–Schroeder theorem due to Banach, which implies that if A is equidecomposable with a subset of B and B is equidecomposable with a subset of A, then A and B are equidecomposable.

The Banach–Tarski paradox can be put in context by pointing out that for two sets in the strong form of the paradox, there is always a bijective function that can map the points in one shape into the other in a one-to-one fashion. In the language of Georg Cantor's set theory, these two sets have equal cardinality. Thus, if one enlarges the group to allow arbitrary bijections of X, then all sets with non-empty interior become congruent. Likewise, one ball can be made into a larger or smaller ball by stretching, or in other words, by applying similarity transformations. Hence, if the group G is large enough, G-equidecomposable sets may be found whose "sizes" vary. Moreover, since a countable set can be made into two copies of itself, one might expect that using countably many pieces could somehow do the trick.
On the other hand, in the Banach–Tarski paradox, the number of pieces is finite and the allowed equivalences are Euclidean congruences, which preserve the volumes. Yet, somehow, they end up doubling the volume of the ball! While this is certainly surprising, some of the pieces used in the paradoxical decomposition are non-measurable sets, so the notion of volume (more precisely, Lebesgue measure) is not defined for them, and the partitioning cannot be accomplished in a practical way. In fact, the Banach–Tarski paradox demonstrates that it is impossible to find a finitely-additive measure (or a Banach measure) defined on all subsets of a Euclidean space of three (and greater) dimensions that is invariant with respect to Euclidean motions and takes the value one on a unit cube. In his later work, Tarski showed that, conversely, non-existence of paradoxical decompositions of this type implies the existence of a finitely-additive invariant measure. The heart of the proof of the "doubling the ball" form of the paradox presented below is the remarkable fact that by a Euclidean isometry (and renaming of elements), one can divide a certain set (essentially, the surface of a unit sphere) into four parts, then rotate one of them to become itself plus two of the other parts. This follows rather easily from a F2-paradoxical decomposition of F2, the free group with two generators. Banach and Tarski's proof relied on an analogous fact discovered by Hausdorff some years earlier: the surface of a unit sphere in space is a disjoint union of three sets B, C, D and a countable set E such that, on the one hand, B, C, D are pairwise congruent, and on the other hand, B is congruent with the union of C and D. This is often called the Hausdorff paradox.
Connection with earlier work and the role of the axiom of choice

Banach and Tarski explicitly acknowledge Giuseppe Vitali's 1905 construction of the set bearing his name, Hausdorff's paradox (1914), and an earlier (1923) paper of Banach as the precursors to their work. Vitali's and Hausdorff's constructions depend on Zermelo's axiom of choice ("AC"), which is also crucial to the Banach–Tarski paper, both for proving their paradox and for the proof of another result: Two Euclidean polygons, one of which strictly contains the other, are not equidecomposable. They remark: "The role this axiom plays in our reasoning seems to us to deserve attention." They point out that while the second result fully agrees with our geometric intuition, its proof uses AC in an even more substantial way than the proof of the paradox. Thus Banach and Tarski imply that AC should not be rejected simply because it produces a paradoxical decomposition, for such an argument also undermines proofs of geometrically intuitive statements. However, in 1949, A. P. Morse showed that the statement about Euclidean polygons can be proved in ZF set theory and thus does not require the axiom of choice. In 1964, Paul Cohen proved that the axiom of choice cannot be proved from ZF. A weaker version of the axiom of choice is the axiom of dependent choice, DC, and it has been shown that the Banach–Tarski paradox is not a theorem of ZF, nor of ZF+DC.[6] Large amounts of mathematics use AC. As Stan Wagon points out at the end of his monograph, the Banach–Tarski paradox has been more significant for its role in pure mathematics than for foundational questions: it motivated a fruitful new direction for research, the amenability of groups, which has nothing to do with the foundational questions.
In 1991, using then-recent results by Matthew Foreman and Friedrich Wehrung,[7] Janusz Pawlikowski proved that the Banach–Tarski paradox follows from ZF plus the Hahn–Banach theorem.[8] The Hahn–Banach theorem does not rely on the full axiom of choice but can be proved using a weaker version of AC called the ultrafilter lemma. So Pawlikowski proved that the set theory needed to prove the Banach–Tarski paradox, while stronger than ZF, is weaker than full ZFC.

A sketch of the proof

Here a proof is sketched which is similar but not identical to that given by Banach and Tarski. Essentially, the paradoxical decomposition of the ball is achieved in four steps:

1. Find a paradoxical decomposition of the free group in two generators.
2. Find a group of rotations in 3-dimensional space isomorphic to the free group in two generators.
3. Use the paradoxical decomposition of that group and the axiom of choice to produce a paradoxical decomposition of the hollow unit sphere.
4. Extend this decomposition of the sphere to a decomposition of the solid unit ball.

These steps are discussed in more detail below.

Step 1

Cayley graph of F2, showing decomposition into the sets S(a) and aS(a−1). Traversing a horizontal edge of the graph in the rightward direction represents left multiplication of an element of F2 by a; traversing a vertical edge of the graph in the upward direction represents left multiplication of an element of F2 by b. Elements of the set S(a) are green dots; elements of the set aS(a−1) are blue dots or red dots with blue border. Red dots with blue border are elements of S(a−1), which is a subset of aS(a−1).

The free group with two generators a and b consists of all finite strings that can be formed from the four symbols a, a−1, b and b−1 such that no a appears directly next to an a−1 and no b appears directly next to a b−1.
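The reduction of "forbidden" adjacent inverse pairs, and the group operation of concatenate-then-reduce, are easy to make concrete. In the sketch below, a, a−1, b, b−1 are encoded as the characters 'a', 'A', 'b', 'B' (an encoding of our choosing):

```python
# Encode a, a^-1, b, b^-1 as the characters 'a', 'A', 'b', 'B'.
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(w: str) -> str:
    """Delete adjacent inverse pairs until none remain (stack-based, one pass)."""
    out = []
    for ch in w:
        if out and out[-1] == INVERSE[ch]:
            out.pop()               # cancel the pair, e.g. 'aA' -> ''
        else:
            out.append(ch)
    return "".join(out)

def multiply(u: str, v: str) -> str:
    """The group operation of F2: concatenate, then reduce."""
    return reduce_word(u + v)

# Example: (a b a b^-1 a^-1)(a b a b^-1 a) reduces to a b a a b^-1 a.
print(multiply("abaBA", "abaBa"))   # abaaBa

# Left-concatenating a onto an element of S(a^-1) can land outside S(a):
print(multiply("a", "Ab"))          # b
```

The second example illustrates why aS(a−1) sweeps up strings beginning with symbols other than a.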
Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string. For instance: abab−1a−1 concatenated with abab−1a yields abab−1a−1abab−1a, which contains the substring a−1a, and so gets reduced to abab−1bab−1a, which contains the substring b−1b, and so gets reduced to abaab−1a. One can check that the set of those strings with this operation forms a group with identity element the empty string e. This group may be called F2.

The group F2 can be "paradoxically decomposed" as follows: Let S(a) be the set of all non-forbidden strings that start with a and define S(a−1), S(b) and S(b−1) similarly. Clearly,

F2 = {e} ∪ S(a) ∪ S(a−1) ∪ S(b) ∪ S(b−1),

but also

F2 = S(a) ∪ aS(a−1) and F2 = S(b) ∪ bS(b−1),

where the notation aS(a−1) means take all the strings in S(a−1) and concatenate them on the left with a. This is at the core of the proof. For example, there may be a string aa−1b in the set aS(a−1) which, because of the rule that a must not appear next to a−1, reduces to the string b. Similarly, aS(a−1) contains all the strings that start with a−1 (for example, the string aa−1a−1, which reduces to a−1). In this way, aS(a−1) contains all the strings that start with a−1, b and b−1, as well as the empty string e. Group F2 has been cut into four pieces (plus the singleton {e}), then two of them "shifted" by multiplying with a or b, then "reassembled" as two pieces to make one copy of F2 and the other two to make another copy of F2. That is exactly what is intended to be done to the ball.

Step 2

In order to find a free group of rotations of 3D space, i.e. one that behaves just like (or "is isomorphic to") the free group F2, two orthogonal axes are taken (e.g. the x and z axes). Then, A is taken to be a rotation of arccos(1/3) about the x axis, and B a rotation of arccos(1/3) about the z axis (there are many other suitable pairs of irrational multiples of π that could be used here as well).[9] The group of rotations generated by A and B will be called H. Let ω be an element of H that starts with a positive rotation about the z axis, that is, an element of the form ω = B^(kn) A^(kn−1) ⋯ A^(k2) B^(k1) with k1 > 0.
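The two generating rotations of Step 2 can be written out numerically, assuming the choice of arccos(1/3) for both angles (so cos θ = 1/3 and sin θ = 2√2/3). The sketch below checks that a non-trivial reduced word such as the commutator ABA−1B−1 moves the point (1, 0, 0), while a word that cancels to the identity fixes it:

```python
import math

c, s = 1 / 3, 2 * math.sqrt(2) / 3    # cos and sin of arccos(1/3)

A  = [[1, 0, 0], [0, c, -s], [0, s, c]]    # rotation about the x axis
Ai = [[1, 0, 0], [0, c, s], [0, -s, c]]    # A^-1
B  = [[c, -s, 0], [s, c, 0], [0, 0, 1]]    # rotation about the z axis
Bi = [[c, s, 0], [-s, c, 0], [0, 0, 1]]    # B^-1

def apply(m, v):
    """Multiply a 3x3 matrix by a vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def word(*ms):
    """Apply a word of rotations to (1, 0, 0), rightmost factor first."""
    v = [1.0, 0.0, 0.0]
    for m in reversed(ms):
        v = apply(m, v)
    return v

# A non-trivial reduced word moves (1,0,0), consistent with H being free:
p = word(A, B, Ai, Bi)
assert max(abs(x - y) for x, y in zip(p, [1, 0, 0])) > 1e-6

# A word that cancels, such as A A^-1, fixes (1,0,0) up to round-off:
q = word(A, Ai)
assert max(abs(x - y) for x, y in zip(q, [1, 0, 0])) < 1e-9
```

A numerical check on a handful of words is of course no proof of freeness; that is what the induction argument below establishes.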
It can be shown by induction that ω maps the point (1, 0, 0) to (a, b√2, c)/3^N, for some integers a, b, c and N ≥ 1. Analyzing a, b and c modulo 3, one can show that b ≠ 0. The same argument repeated (by symmetry of the problem) is valid when ω starts with a negative rotation about the z axis, or a rotation about the x axis. This shows that if ω is given by a non-trivial word in A and B, then ω is not the identity rotation. Therefore, the group H is a free group, isomorphic to F2. The two rotations behave just like the elements a and b in the group F2: there is now a paradoxical decomposition of H. This step cannot be performed in two dimensions since it involves rotations in three dimensions. If two rotations are taken about the same axis, the resulting group is commutative and does not have the property required in step 1. An alternate arithmetic proof of the existence of free groups in some special orthogonal groups using integral quaternions leads to paradoxical decompositions of the rotation group.[10]
Step 3
The unit sphere S2 is partitioned into orbits by the action of our group H: two points belong to the same orbit if and only if there is a rotation in H which moves the first point into the second. (Note that the orbit of a point is a dense set in S2.) The axiom of choice can be used to pick exactly one point from every orbit; collect these points into a set M. The action of H on a given orbit is free and transitive and so each orbit can be identified with H. In other words, every point in S2 can be reached in exactly one way by applying the proper rotation from H to the proper element from M. Because of this, the paradoxical decomposition of H yields a paradoxical decomposition of S2 into four pieces A1, A2, A3, A4 as follows: A1 = S(a)M ∪ M ∪ B, A2 = S(a−1)M ∖ B, A3 = S(b)M, A4 = S(b−1)M, where we define S(a)M = {s(x) : s ∈ S(a), x ∈ M} and likewise for the other sets, and where we define B = a−1M ∪ a−2M ∪ ⋯ (The five "paradoxical" parts of F2 were not used directly, as they would leave M as an extra piece after doubling, owing to the presence of the singleton {e}!)
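The heart of Step 2, that no non-trivial word in A and B acts as the identity, can be spot-checked for short words with exact integer arithmetic. Storing a point (a, b√2, c)/3^N as the integer triple (a, b, c) turns each rotation by arccos(1/3) (the standard choice of angle) into an integer map. This is only a finite verification sketch under those assumptions, not a replacement for the induction:

```python
from itertools import product

# Rotations by arccos(1/3): 'A' about the x axis, 'B' about the z axis,
# with 'a', 'b' their inverses. A point (a, b*sqrt(2), c)/3**N is stored
# as the integer triple (a, b, c); each step multiplies the denominator by 3.
def apply(gen, p):
    a, b, c = p
    if gen == 'A': return (3 * a, b - 2 * c, 4 * b + c)
    if gen == 'a': return (3 * a, b + 2 * c, c - 4 * b)
    if gen == 'B': return (a - 4 * b, 2 * a + b, 3 * c)
    if gen == 'b': return (a + 4 * b, b - 2 * a, 3 * c)

INVERSE = {'A': 'a', 'a': 'A', 'B': 'b', 'b': 'B'}

def check(max_len=6):
    """For every reduced word whose rightmost (first-applied) letter is
    B or B^-1, verify that the middle coordinate of the image of (1,0,0)
    is nonzero mod 3 - so the word cannot be the identity rotation."""
    for n in range(1, max_len + 1):
        for word in product('AaBb', repeat=n):
            if word[-1] not in 'Bb':
                continue  # must start with a rotation about the z axis
            if any(INVERSE[x] == y for x, y in zip(word, word[1:])):
                continue  # skip non-reduced words
            p = (1, 0, 0)
            for gen in reversed(word):  # rightmost letter acts first
                p = apply(gen, p)
            assert p[1] % 3 != 0, word
    return True

print(check())  # -> True
```

For example, B alone sends (1, 0, 0) to (1, 2√2, 0)/3, stored as (1, 2, 0), whose middle coordinate is 2 mod 3, matching the base case of the induction.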
The (majority of the) sphere has now been divided into four sets (each one dense on the sphere), and when two of these are rotated, the result is double of what was had before: S2 = A1 ∪ aA2 and also S2 = A3 ∪ bA4.
Step 4
Finally, connect every point on S2 with a half-open segment to the origin; the paradoxical decomposition of S2 then yields a paradoxical decomposition of the solid unit ball minus the point at the ball's center. (This center point needs a bit more care; see below.) N.B. This sketch glosses over some details. One has to be careful about the set of points on the sphere which happen to lie on the axis of some rotation in H. However, there are only countably many such points, and like the case of the point at the center of the ball, it is possible to patch the proof to account for them all. (See below.)
Some details, fleshed out
In Step 3, the sphere was partitioned into orbits of our group H. To streamline the proof, the discussion of points that are fixed by some rotation was omitted; since the paradoxical decomposition of F2 relies on shifting certain subsets, the fact that some points are fixed might cause some trouble. Since any rotation of S2 (other than the null rotation) has exactly two fixed points, and since H, which is isomorphic to F2, is countable, there are countably many points of S2 that are fixed by some rotation in H. Denote this set of fixed points as D. Step 3 proves that S2 ∖ D admits a paradoxical decomposition. What remains to be shown is the Claim: S2 ∖ D is equidecomposable with S2. Proof. Let λ be some line through the origin that does not intersect any point in D. This is possible since D is countable. Let J be the set of angles, α, such that for some natural number n, and some P in D, r(nα)P is also in D, where r(nα) is a rotation about λ of nα. Then J is countable. So there exists an angle θ not in J. Let ρ be the rotation about λ by θ.
Then ρ acts on S2 with no fixed points in D, i.e., ρ^n(D) is disjoint from D, and for natural m < n, ρ^n(D) is disjoint from ρ^m(D). Let E be the disjoint union of ρ^n(D) over n = 0, 1, 2, ... . Then S2 = E ∪ (S2 ∖ E) ~ ρ(E) ∪ (S2 ∖ E) = (E ∖ D) ∪ (S2 ∖ E) = S2 ∖ D, where ~ denotes "is equidecomposable to". For step 4, it has already been shown that the ball minus a point admits a paradoxical decomposition; it remains to be shown that the ball minus a point is equidecomposable with the ball. Consider a circle within the ball, containing the point at the center of the ball. Using an argument like that used to prove the Claim, one can see that the full circle is equidecomposable with the circle minus the point at the ball's center. (Basically, a countable set of points on the circle can be rotated to give itself plus one more point.) Note that this involves a rotation about a center other than the origin, so the Banach–Tarski paradox involves isometries of Euclidean 3-space rather than just SO(3). Use is made of the fact that if A ~ B and B ~ C, then A ~ C. The decomposition of A into C can be done using a number of pieces equal to the product of the numbers needed for taking A into B and for taking B into C. The proof sketched above requires 2 × 4 × 2 + 8 = 24 pieces: a factor of 2 to remove fixed points, a factor of 4 from step 1, a factor of 2 to recreate fixed points, and 8 for the center point of the second ball. But in step 1, when moving {e} and all strings of the form a^n into S(a−1), do this to all orbits except one. Move {e} of this last orbit to the center point of the second ball. This brings the total down to 16 + 1 pieces. With more algebra, one can also decompose fixed orbits into 4 sets as in step 1. This gives 5 pieces and is the best possible.
Obtaining infinitely many balls from one
Using the Banach–Tarski paradox, it is possible to obtain k copies of a ball in the Euclidean n-space from one, for any integers n ≥ 3 and k ≥ 1, i.e.
a ball can be cut into k pieces so that each of them is equidecomposable to a ball of the same size as the original. Using the fact that the free group F2 of rank 2 admits a free subgroup of countably infinite rank, a similar proof yields that the unit sphere Sn−1 can be partitioned into countably infinitely many pieces, each of which is equidecomposable (with two pieces) to Sn−1 using rotations. By using analytic properties of the rotation group SO(n), which is a connected analytic Lie group, one can further prove that the sphere Sn−1 can be partitioned into as many pieces as there are real numbers (that is, 2^ℵ0 pieces), so that each piece is equidecomposable with two pieces to Sn−1 using rotations. These results then extend to the unit ball deprived of the origin. A 2010 article by Valeriy Churkin gives a new proof of the continuous version of the Banach–Tarski paradox.[11]
Von Neumann paradox in the Euclidean plane
In the Euclidean plane, two figures that are equidecomposable with respect to the group of Euclidean motions are necessarily of the same area, and therefore, a paradoxical decomposition of a square or disk of Banach–Tarski type that uses only Euclidean congruences is impossible. A conceptual explanation of the distinction between the planar and higher-dimensional cases was given by John von Neumann: unlike the group SO(3) of rotations in three dimensions, the group E(2) of Euclidean motions of the plane is solvable, which implies the existence of a finitely-additive measure on E(2) and R2 which is invariant under translations and rotations, and rules out paradoxical decompositions of non-negligible sets. Von Neumann then posed the following question: can such a paradoxical decomposition be constructed if one allows a larger group of equivalences? It is clear that if one permits similarities, any two squares in the plane become equivalent even without further subdivision.
This motivates restricting one's attention to the group SA2 of area-preserving affine transformations. Since the area is preserved, any paradoxical decomposition of a square with respect to this group would be counterintuitive for the same reasons as the Banach–Tarski decomposition of a ball. In fact, the group SA2 contains as a subgroup the special linear group SL(2,R), which in its turn contains the free group F2 with two generators as a subgroup. This makes it plausible that the proof of Banach–Tarski paradox can be imitated in the plane. The main difficulty here lies in the fact that the unit square is not invariant under the action of the linear group SL(2, R), hence one cannot simply transfer a paradoxical decomposition from the group to the square, as in the third step of the above proof of the Banach–Tarski paradox. Moreover, the fixed points of the group present difficulties (for example, the origin is fixed under all linear transformations). This is why von Neumann used the larger group SA2 including the translations, and he constructed a paradoxical decomposition of the unit square with respect to the enlarged group (in 1929). Applying the Banach–Tarski method, the paradox for the square can be strengthened as follows: Any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps. As von Neumann notes:[12] "Infolgedessen gibt es bereits in der Ebene kein nichtnegatives additives Maß (wo das Einheitsquadrat das Maß 1 hat), das gegenüber allen Abbildungen von A2 invariant wäre." "In accordance with this, already in the plane there is no non-negative additive measure (for which the unit square has a measure of 1), which is invariant with respect to all transformations belonging to A2 [the group of area-preserving affine transformations]." 
To explain further, the question of whether a finitely additive measure (that is preserved under certain transformations) exists or not depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. The points of the plane (other than the origin) can be divided into two dense sets which may be called A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the A points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the A points), and therefore there is no measure that "works". The class of groups isolated by von Neumann in the course of study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable.
Recent progress
• 2000: Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL(2,R) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists.[13] More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL(2, R) contains a punctured neighborhood of the origin. Then all sets in the family A are SL(2, R)-equidecomposable, and likewise for the sets in B.
It follows that both families consist of paradoxical sets. • 2003: It had been known for a long time that the full plane was paradoxical with respect to SA2, and that the minimal number of pieces would equal four provided that there exists a locally commutative free subgroup of SA2. In 2003 Kenzi Satô constructed such a subgroup, confirming that four pieces suffice.[14] • 2011: Laczkovich's paper[15] left open the question of whether there exists a free group F of piecewise linear transformations acting on the punctured disk D ∖ {(0,0)} without fixed points. Grzegorz Tomkowicz constructed such a group,[16] showing that the system of congruences A ≈ B ≈ C ≈ B ∪ C can be realized by means of F and D ∖ {(0,0)}. • 2017: It has been known for a long time that there exists in the hyperbolic plane H2 a set E that is a third, a fourth and ... and a 2^ℵ0-th part of H2. The requirement was satisfied by orientation-preserving isometries of H2. Analogous results were obtained by John Frank Adams[17] and Jan Mycielski,[18] who showed that the unit sphere S2 contains a set E that is a half, a third, a fourth and ... and a 2^ℵ0-th part of S2. Grzegorz Tomkowicz[19] showed that the Adams and Mycielski construction can be generalized to obtain a set E of H2 with the same properties as in S2. • 2017: Von Neumann's paradox concerns the Euclidean plane, but there are also other classical spaces where the paradoxes are possible. For example, one can ask if there is a Banach–Tarski paradox in the hyperbolic plane H2. This was shown by Jan Mycielski and Grzegorz Tomkowicz.[20][21] Tomkowicz[22] also proved that most of the classical paradoxes are an easy consequence of a graph theoretical result and the fact that the groups in question are rich enough. • 2018: In 1984, Jan Mycielski and Stan Wagon[23] constructed a paradoxical decomposition of the hyperbolic plane H2 that uses Borel sets. The paradox depends on the existence of a properly discontinuous subgroup of the group of isometries of H2.
A similar paradox was obtained by Grzegorz Tomkowicz,[24] who constructed a free properly discontinuous subgroup G of the affine group SA(3, Z). The existence of such a group implies the existence of a subset E of Z3 such that for any finite subset F of Z3 there exists an element g of G such that g(E) = E △ F, where E △ F denotes the symmetric difference of E and F.
References
1. ^ Tao, Terence (2011). "An introduction to measure theory" (PDF): 3. 2. ^ Wagon, Corollary 13.3 3. ^ Wilson, Trevor M. (September 2005). "A continuous movement version of the Banach–Tarski paradox: A solution to De Groot's problem". Journal of Symbolic Logic. 70 (3): 946–952. doi:10.2178/jsl/1122038921. JSTOR 27588401. 4. ^ Banach, Stefan; Tarski, Alfred (1924). "Sur la décomposition des ensembles de points en parties respectivement congruentes" (PDF). Fundamenta Mathematicae (in French). 6: 244–277. 5. ^ Robinson, Raphael M. (1947). "On the Decomposition of Spheres". Fund. Math. 34: 246–260. This article, based on an analysis of the Hausdorff paradox, settled a question put forth by von Neumann in 1929. 6. ^ Wagon, Corollary 13.3 7. ^ Foreman, M.; Wehrung, F. (1991). "The Hahn–Banach theorem implies the existence of a non-Lebesgue measurable set" (PDF). Fundamenta Mathematicae. 138: 13–19. 8. ^ Pawlikowski, Janusz (1991). "The Hahn–Banach theorem implies the Banach–Tarski paradox" (PDF). Fundamenta Mathematicae. 138: 21–22. 9. ^ Wagon, p. 16. 11. ^ Churkin, V. A. (2010). "A continuous version of the Hausdorff–Banach–Tarski paradox". Algebra and Logic. 49 (1): 81–89. doi:10.1007/s10469-010-9080-y. Full text in Russian is available from the page. 12. ^ On p. 85. Neumann, J. v. (1929). "Zur allgemeinen Theorie des Masses" (PDF). Fundamenta Mathematicae. 13: 73–116. 13. ^ Laczkovich, Miklós (1999). "Paradoxical sets under SL2(R)". Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 42: 141–145. 14. ^ Satô, Kenzi (2003). "A locally commutative free group acting on the plane". Fundamenta Mathematicae.
180 (1): 25–34. doi:10.4064/fm180-1-3. 16. ^ Tomkowicz, Grzegorz (2011). "A free group of piecewise linear transformations". Colloq. Math. 125: 141–146. 17. ^ Adams, John Frank (1954). "On decompositions of the sphere". J. London Math. Soc. 29: 96–99. 18. ^ Mycielski, Jan (1955). "On the paradox of the sphere". Fund. Math. 42: 348–355. 19. ^ Tomkowicz, Grzegorz (2017). "On decompositions of the hyperbolic plane satisfying many congruences". Bulletin of the London Mathematical Society. 49: 133–140. doi:10.1112/blms.12024. 20. ^ Mycielski, Jan (1989). "The Banach–Tarski paradox for the hyperbolic plane". Fund. Math. 132: 143–149. 21. ^ Mycielski, Jan; Tomkowicz, Grzegorz (2013). "The Banach–Tarski paradox for the hyperbolic plane (II)". Fund. Math. 222: 289–290. 22. ^ Tomkowicz, Grzegorz. "Banach–Tarski paradox in some complete manifolds". Proc. Amer. Math. Soc. doi:10.1090/proc/13657. 23. ^ Mycielski, Jan; Wagon, Stan (1984). "Large free groups of isometries and their geometrical uses". Enseign. Math. 30: 247–267. 24. ^ Tomkowicz, Grzegorz. "A properly discontinuous free group of affine transformations". Geom. Dedicata. doi:10.1007/s10711-018-0320-y.
Vaginal Infections 1. What Is Vaginal Infection (Vaginitis)? Vaginitis is inflammation of the vaginal mucosa and is among the most frequent reasons for consulting a gynecologist. Vaginitis affects 90% of adolescent women, and two or more infections coexist in 30% of mature women1,2,3. Normally, vaginal secretion is a clear, mucus-like fluid that keeps the vaginal environment moist. It may increase under natural conditions such as pregnancy, sexual arousal, and ovulation, yet this causes no complaint and is entirely usual. Thus, an increase in vaginal secretion does not necessarily signal a medical problem. However, if itching, irritation, and a bad odor accompany a rise in the amount of vaginal secretion and a change in its texture and colour, and if these symptoms have persisted for more than 2-3 days, the secretion might be indicating the presence of a problem. Vaginitis is generally not a life-threatening disease. Nonetheless, it may cause more serious health issues unless treated accurately and in time. 2. How Does Vaginitis Develop? Three factors are responsible for most (90%) cases of vaginitis: - Fungi (Candida albicans) - Bacteria (Gardnerella vaginalis) - Parasites (Trichomonas vaginalis) Apart from these, bacteria of the Chlamydia and Mycoplasma groups, as well as Neisseria gonorrhoeae, Escherichia coli, Giardia lamblia, Balantidium coli, Entamoeba histolytica and Ureaplasma urealyticum, are microorganisms that may also cause vaginitis4. Disruption of the normal vaginal environment (flora) and a change in pH are the leading factors in the development of vaginitis. Mixed vaginal infections are spoken of when two of these factors, and sometimes all of them, are present together. 3. What Are The Types of Vaginitis That Are Frequent?
The most frequent types of vaginitis are candidal vulvovaginitis caused by fungi, bacterial vaginosis caused by bacteria, and trichomonal vaginitis caused by parasites. What Is Candidal Vulvovaginitis (Vaginal Fungal Infection)? The agent of candidal vulvovaginitis, a frequent kind of vaginitis, is mainly the yeast Candida albicans. Candida albicans is the most important of the fungi present in the skin, mucous membranes, and normal flora of healthy individuals that cause infection when the organism's natural resistance collapses. This kind of vaginitis occurs at least once in the lifetime of about 75% of adult women, and twice or more in 40-50% of them1. What Are The Symptoms of Candidal Vulvovaginitis? Its main symptom is severe itching and irritation around the vagina. Alongside this, there is also redness and swelling of the outer genitals, and a small amount of thick, cheese-like discharge2. What Are The Reasons Lying Behind Candidal Vulvovaginitis? These fungi, which normally exist in the mouth, throat, bowel (colon), and vaginal flora, cause disease in conditions that change the body's balance, such as pregnancy, diabetes, and obesity, and when birth control products, spermicides, intrauterine devices (IUD), or heavy courses of antibiotics are used5. How To Cure Candidal Vulvovaginitis? These infections are treated with antifungal products, administered orally or by the vaginal route. What Is Vaginal Infection Originating From Bacteria (Bacterial Vaginosis)? Bacterial vaginosis, also known as Gardnerella vaginalis infection or non-specific vaginitis, is the most frequent bacterial vaginal infection observed in women of reproductive age.
Throughout this disease, the usual vaginal flora changes: the helpful bacteria named Lactobacillus, which must exist in a healthy flora, disappear, and other sorts of bacteria (Peptococcus sp., Prevotella sp., G. vaginalis and Mobiluncus sp.) increase6. Since bacterial vaginosis is asymptomatic in about 50% of cases, its frequency cannot be determined precisely. Still, bacterial vaginosis has been detected in 15%-19% of patients who present to outpatient clinics with gynecologic disorders, in 10%-30% of pregnant women, and in 24%-40% of individuals who carry a sexually transmitted disease1. What Are The Symptoms of Bacterial Vaginosis? Bacterial vaginosis produces no symptoms in about 50% of cases. The most telling symptom is a fishy-smelling vaginal discharge. Itching and irritation can be observed in some cases. What Are The Reasons Lying Behind Bacterial Vaginosis? Risk factors include use of an IUD, vaginal douching, and pregnancy. How To Cure Bacterial Vaginosis? In its treatment, various antibiotics taken orally or applied by the vaginal route are used. What Is Trichomonal Vaginitis? The cause of trichomonal vaginitis is Trichomonas vaginalis, a sexually transmitted parasite. Trichomonas vaginalis affects nearly 180 million people around the world each year7. What Are The Symptoms of Trichomonal Vaginitis? Symptoms intensify especially during pregnancy and after menstruation. The most frequent symptom is a profuse, foul-smelling discharge, generally yellow-green in colour. Pain during sexual intercourse, burning while urinating, and, in some patients, an ache in the lower abdomen are among the other symptoms. What Are The Reasons Lying Behind Trichomonal Vaginitis? As trichomonal vaginitis is a sexually transmitted disease, it emerges as a result of sexual intercourse with an individual who carries it.
How To Cure Trichomonal Vaginitis? It is treated with antiprotozoal remedies, taken orally or used by the vaginal route against the parasites. Since it is a sexually transmitted disease, the sexual partner should also be treated, both for the success of the treatment and to keep the recurrence rate as low as possible. Sexual intercourse should be avoided during treatment8. What Are The Effects That Facilitate Vaginitis To Emerge? Causes such as - Wearing synthetic, tight clothes - Pregnancy (hormonal changes) - Diabetes - Taking birth control pills - Inaccurate hygiene habits - Immune deficiency are the factors promoting the development of vaginitis. What Are The Precautions To Take To Protect From Vaginitis? Hygiene is the primary factor in avoiding disease. It is useful to change underclothes daily and to boil and iron them. Frequently washing the vagina with soap or intimate douches damages its protective layer and invites inflammatory disease. Prolonged use of pads and tampons is also a negative factor. Public baths and over-chlorinated pools should not be used, and public toilets and shared toilet materials should not be utilized. In cleaning the external genitals, washing and drying must be done from front to back. This prevents germs from being carried to the vagina from the perianal region, which is rich in microbes. The choice of clothes is a vital factor in avoiding disease. Very tight clothes, such as tight trousers and underwear made of synthetic fibre, particularly encourage the growth of fungi, as they let very little air in and raise the heat and moisture around the vagina. Clothes such as a swimming suit worn wet for too long are harmful for the same reason. Therefore, loose dresses and cotton underwear should be preferred. Chemical contact might cause vaginitis by changing the vaginal environment through local allergy and hypersensitivity.
This is why the use of scented toilet paper should be avoided. Antibiotics should only be used under a doctor's supervision, as their prolonged and uncontrolled use is one of the frequent causes of vaginitis. Eating habits may also pave the way for vaginitis: sugary foods in particular may enable the disease, since feeding on them raises blood glucose. How Should Sex Life Be During Treatment of Vaginitis? Throughout treatment of any type of vaginitis, sexual intercourse should be avoided, because the friction it causes allows infections to develop, and pain during intercourse is a common symptom of every kind of vaginitis. 1. Egan, ME, Lipsky, MS. Diagnosis of Vaginitis. American Family Physician, 2000 Sep 1; 62(5): 1095-104. 2. Hetal B Gor. Vaginitis: Differential Diagnoses & Workup. Updated: May 19, 2010. 3. Coco AS, Vandenboscheet M. Postgraduate Medicine, 2000; 107(4): 63-74. 4. Family Practice News. A Clinical Update In Treatment of Bacterial Vaginosis. 5. Mardh, PA., Tchoudomirova, K., Elshiby, S., Hellberg, D. Symptoms and signs in single and mixed genital infections. Int J Gynecol Obstet 1998; 63: 145-152. 6. Balcı O, Çağar M. Vajinal enfeksiyonlar [Vaginal infections]. Journal of the Turkish Gynecology and Obstetrics Association, 2005; 2(5): 14-20. 7. Jack D. Sobel. Vaginitis. The New England Journal of Medicine. December 25, 1997. 8. Sexually Transmitted Diseases Treatment Guidelines, 2006, MMWR.
Meaning of HARMONISE Pronunciation: 'hârmu`nIz WordNet Dictionary 1. [v] bring into consonance, harmony, or accord while making music or singing 2. [v] bring into consonance or accord; "harmonize one's goals with one's abilities" 3. [v] bring into consonance or relate harmoniously; "harmonize the different interests" 4. [v] sing or play in harmony 5. [v] write a harmony for 6. [v] go together; "The colors don't harmonize"; "Their ideas concorded" HARMONISE is a 9 letter word that starts with H. Synonyms: accord, agree, chord, concord, consort, fit in, harmonize, reconcile See Also: accommodate, adjust, alter, blend, blend in, change, check, compose, conciliate, correspond, fit, gibe, go, jibe, key, match, proportion, realise, realize, reharmonise, reharmonize, relate, set, sing, tally, write
Title: Use regular expressions in VB .NET
Description: This example shows how to use regular expressions in VB .NET. It uses the Regex class to validate text entered by the user.
Keywords: regular expression, string, parsing, parse
Categories: Strings, Algorithms
When the user changes the text in the txtTestExp text box, the program makes a Regex object, passing the constructor the regular expression to match. It calls the object's IsMatch method, passing it the text that the user entered, and sets the text box's background color to yellow if the text doesn't match the expression. (The Regex class lives in the System.Text.RegularExpressions namespace, so the file needs Imports System.Text.RegularExpressions.)

Private Sub txtTestExp_TextChanged(ByVal sender As _
 System.Object, ByVal e As System.EventArgs) Handles _
 txtTestExp.TextChanged
    Dim reg_exp As New Regex(txtRegExp.Text)
    If reg_exp.IsMatch(txtTestExp.Text) Then
        txtTestExp.BackColor = Color.White
    Else
        txtTestExp.BackColor = Color.Yellow
    End If
End Sub

The example program starts with the regular expression "^([2-9]{3}-)?[2-9]{3}-\d{4}$". The pieces of this expression have the following meanings:
^ : Match the beginning of the string
[2-9]{3}- : Match the characters 2 through 9 exactly 3 times, followed by a -
([2-9]{3}-)? : Match the expression "[2-9]{3}-" zero or 1 times
[2-9]{3}- : Again, match the characters 2 through 9 exactly 3 times, followed by a -
\d{4} : Match any digit exactly 4 times
$ : Match the end of the string
The result is that the string must have the form 222-2222 or 222-222-2222. Also note that the Regex class has a shared IsMatch method that you can use without creating a Regex object. Creating an object as in this example allows the object to compile the regular expression so it can later evaluate the expression more quickly. This is useful if you need to use the expression many times.
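For comparison, the same validation can be sketched with Python's re module (an illustrative translation, not part of the original VB example; the function name is mine):

```python
import re

# Same pattern as the VB .NET example: an optional "NNN-" prefix
# (digits 2-9), then three digits 2-9, a dash, and four digits.
PHONE = re.compile(r"^([2-9]{3}-)?[2-9]{3}-\d{4}$")

def is_valid(text):
    """Return True if text has the form 222-2222 or 222-222-2222."""
    return PHONE.fullmatch(text) is not None

print(is_valid("222-2222"))      # True
print(is_valid("222-222-2222"))  # True
print(is_valid("123-4567"))      # False: '1' is outside [2-9]
```

As with the VB Regex object versus the shared IsMatch method, re.match with a plain pattern string also works, but re.compile keeps a compiled pattern object around, which pays off when the expression is evaluated many times.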
*Last Updated on January 31, 2016 Foundation Construction The foundation transmits the wind turbines' dead load and wind load into the ground. Our foundations are always circular. Depending on the site, the ground can only absorb a certain amount of compressive stress, so the foundation surfaces are adapted accordingly. Circular foundations are designed based on this elementary realization and as a rule are installed as shallow foundations. Advantages of a circular foundation are:
Alliteration is the use of the same consonant sounds in words that are near each other. It is the sound, not the letter, that is important: therefore ‘city’ and ‘code’ do not alliterate, but ‘kitchen’ and ‘code’ do. The Poetry Archive How alliterate are you? Which are alliterations? Cupboard’s contents Crowded kitchen Kitchen knife Chef’s knife Crowded room Keep calm Church choir Cherry cola
Friedrich Nietzsche, The Gay Science. Die fröhliche Wissenschaft. First published in 1882. 348. The Origin of the Learned. The learned man in Europe grows out of all the different ranks and social conditions, like a plant requiring no specific soil: on that account he belongs essentially and involuntarily to the partisans of democratic thought. But this origin betrays itself. If one has trained one's glance to some extent to recognise in a learned book or scientific treatise the intellectual idiosyncrasy of the learned man (all of them have such idiosyncrasy), and if we take it by surprise, we shall almost always get a glimpse behind it of the "antecedent history" of the learned man and his family, especially of the nature of their callings and occupations. Where the feeling finds expression, "That is at last proved, I am now done with it", it is commonly the ancestor in the blood and instincts of the learned man that approves of the "accomplished work" in the nook from which he sees things; the belief in the proof is only an indication of what has been looked upon for ages by a laborious family as "good work". Take an example: the sons of registrars and office clerks of every kind, whose main task has always been to arrange a variety of material, distribute it in drawers, and systematise it generally, display, when they become learned men, an inclination to regard a problem as almost solved when they have systematised it. There are philosophers who are at bottom nothing but systematising brains; the formal part of the paternal occupation has become its essence to them. The talent for classifications, for tables of categories, betrays something; it is not for nothing that a person is the child of his parents.
The son of an advocate will also have to be an advocate as investigator: he seeks, as a first consideration, to carry the point in his case; as a second consideration, he perhaps seeks to be in the right. One recognises the sons of Protestant clergymen and schoolmasters by the naive assurance with which, as learned men, they already assume their case to be proved when it has but been presented by them staunchly and warmly: they are thoroughly accustomed to people believing in them; it belonged to their fathers' "trade"! A Jew, contrariwise, in accordance with his business surroundings and the past of his race, is least of all accustomed to people believing him. Observe Jewish scholars with regard to this matter: they all lay great stress on logic, that is to say, on compelling assent by means of reasons; they know that they must conquer thereby, even when race and class antipathy is against them, even where people are unwilling to believe them. For in fact, nothing is more democratic than logic: it knows no respect of persons, and takes even the crooked nose as straight. In passing we may remark that in respect to logical thinking, in respect to cleaner intellectual habits, Europe is not a little indebted to the Jews; nobody more so than the Germans, who are a lamentably déraisonnable race who to this day must always have their "heads washed" first. Wherever the Jews have won influence they have taught men to analyse more subtly, to argue more acutely, to write more clearly and purely: their task was ever to bring a people "to listen to reason".
e (mathematical constant)
From Wikipedia, the free encyclopedia

[Figure: graph of y = 1/x. Here, e is the unique number larger than 1 that makes the shaded area under the curve from x = 1 to x = e equal to 1.]

The constant can be characterized in many different ways. For example, e can be defined as the unique positive number a such that the graph of the function y = a^x has unit slope at x = 0.[3] The function f(x) = e^x is called the (natural) exponential function, and is the unique exponential function equal to its own derivative. The natural logarithm, or logarithm to base e, is the inverse function to the natural exponential function. The natural logarithm of a number k > 1 can be defined directly as the area under the curve y = 1/x between x = 1 and x = k, in which case e is the value of k for which this area equals one (see figure). There are alternative characterizations.

The numerical value of e begins

2.71828182845904523536028747135266249775724709369995... (sequence A001113 in the OEIS).

The constant has historically been typeset as "e", in italics, although the ISO 80000-2:2009 standard recommends typesetting constants in an upright style.

Compound interest

Bernoulli studied a question about compound interest: an account starts with $1.00 and pays 100% interest per year. If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25^4 = $2.4414…, and compounding monthly yields $1.00 × (1 + 1/12)^12 = $2.613035…. If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n)^n. Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly (n = 52) yields $2.692597…, while compounding daily (n = 365) yields $2.714567…, just two cents more.
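The compounding figures quoted above can be checked numerically; a minimal sketch in Python (the function name is illustrative, not from the article):

```python
import math

def compound(n):
    """Value of $1.00 after one year at 100% interest, credited n times."""
    return (1.0 + 1.0 / n) ** n

for n in (1, 2, 4, 12, 52, 365):
    print(f"n = {n:>3}: ${compound(n):.6f}")

# The values approach Euler's number as n grows:
print(f"e      :  {math.e:.6f}")
```

Running this reproduces the dollar amounts in the text ($2.25 for n = 2, $2.4414… for n = 4, and so on), with the account value creeping toward e under ever-finer compounding.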
The limit as n grows large is the number that came to be known as e; with continuous compounding, the account value will reach $2.7182818….

Bernoulli trials

[Figure: graphs of the probability P of not observing independent events, each of probability 1/n, after n Bernoulli trials, and of 1 − P, versus n; as n increases, the probability that a 1/n-chance event never appears after n tries rapidly converges to 1/e.]

The probability of the event never occurring in n trials is

(1 − 1/n)^n.

This is very close to the limit

lim (n→∞) (1 − 1/n)^n = 1/e.

Optimal planning problems

A stick of length L is broken into n equal parts. The value of n that maximizes the product of the lengths is then either[15]

n = ⌊L/e⌋ or n = ⌈L/e⌉.

The stated result follows because the maximum of the product (L/n)^n over real n occurs at n = L/e (equivalently, the maximum value of x^(1/x) occurs at x = e; Steiner's problem, discussed below). An associated quantity measures the information gleaned from an event occurring with a given probability, so that essentially the same optimal division appears in optimal planning problems like the secretary problem.

The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π enter:

n! ~ √(2πn) · (n/e)^n.

As a consequence,

e = lim (n→∞) n / (n!)^(1/n).

Standard normal distribution

The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function

φ(x) = (1/√(2π)) · e^(−x²/2).

The constraint of unit variance (and thus also unit standard deviation) results in the 1/2 in the exponent, and the constraint of unit total area under the curve φ(x) results in the factor 1/√(2π).[proof] This function is symmetric around x = 0, where it attains its maximum value 1/√(2π), and has inflection points at x = ±1.

In calculus

[Figure: the graphs of the functions x ↦ a^x are shown for a = 2 (dotted), a = e (blue), and a = 4 (dashed). They all pass through the point (0, 1), and the red line of slope 1 through that point is tangent only to e^x there.] The value of the natural log function for argument e, i.e. ln(e), equals 1.
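The 1/e limit in the Bernoulli-trials discussion above can be verified numerically; a short Python sketch (the function name is illustrative):

```python
import math

def miss_probability(n):
    """Probability that an event of chance 1/n never occurs in n independent trials."""
    return (1.0 - 1.0 / n) ** n

for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: {miss_probability(n):.6f}")

# (1 - 1/n)^n converges to 1/e = 0.367879... as n grows.
print(f"1/e        : {1.0 / math.e:.6f}")
```

Already at n = 100 the probability agrees with 1/e to about two decimal places, illustrating how quickly the convergence mentioned in the figure caption sets in.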
The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms.[16] A general exponential function y = a^x has a derivative given by a limit:

d/dx a^x = lim (h→0) (a^(x+h) − a^x)/h = a^x · lim (h→0) (a^h − 1)/h.

The parenthesized limit on the right is independent of the variable x: it depends only on the base a. When the base is set to e, this limit is equal to 1, and so e is symbolically defined by the equation:

d/dx e^x = e^x.

Another motivation comes from considering the derivative of the base-a logarithm,[17] i.e., of log_a x for x > 0:

d/dx log_a x = lim (h→0) (log_a(x + h) − log_a x)/h = (1/x) · lim (u→0) (1/u) · log_a(1 + u) = (1/x) · log_a e,

where the substitution u = h/x was made. The a-logarithm of e is 1 if a equals e. So symbolically,

d/dx log_e x = 1/x.

The logarithm with this special base is called the natural logarithm and is denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations.

There are thus two ways in which to select such special numbers a. One way is to set the derivative of the exponential function a^x equal to a^x, and solve for a. The other way is to set the derivative of the base-a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e.

Alternative characterizations

[Figure: the five colored regions are of equal area, and define units of hyperbolic angle along the hyperbola xy = 1.]

The following four characterizations of e can be proven equivalent:

1. The number e is the limit

e = lim (n→∞) (1 + 1/n)^n.

2. The number e is the sum of the infinite series

e = Σ (n = 0 to ∞) 1/n!,

where n! is the factorial of n.

3. The number e is the unique positive real number such that

∫ (1 to e) (1/x) dx = 1.

4. If f(t) is an exponential function, then the quantity τ = f(t)/f′(t) is a constant, sometimes called the time constant (it is the reciprocal of the exponential growth constant or decay constant). The time constant is the time it takes for the exponential function to increase by a factor of e: f(t + τ) = e · f(t).
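The limit and series characterizations above converge at very different rates, which can be compared numerically; a small Python sketch (helper names are illustrative):

```python
import math

def e_by_limit(n):
    """Characterization 1: (1 + 1/n)^n."""
    return (1.0 + 1.0 / n) ** n

def e_by_series(terms):
    """Characterization 2: partial sum of 1/k! for k = 0 .. terms-1."""
    return sum(1.0 / math.factorial(k) for k in range(terms))

# The factorial series converges far faster than the limit:
print(abs(e_by_limit(1_000_000) - math.e))  # error on the order of 1e-6
print(abs(e_by_series(18) - math.e))        # error near machine precision
```

A million steps of the limit still leave an error around e/(2n), while fewer than twenty terms of the series already exhaust double-precision accuracy, which is why digit computations of e use the series rather than the limit.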
The exponential function f(x) = e^x is its own derivative, and therefore its own antiderivative as well:

d/dx e^x = e^x,  ∫ e^x dx = e^x + C.

The exponential functions y = 2^x and y = 4^x intersect the graph of y = x + 1, respectively, at x = 1 and x = −1/2. The number e is the unique base such that y = e^x intersects y = x + 1 only at x = 0. We may infer that e lies between 2 and 4.

The number e is the unique real number such that

(1 + 1/x)^x < e < (1 + 1/x)^(x+1)

for all positive x.[18] Also, we have the inequality e^x ≥ x + 1 for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential for which the inequality a^x ≥ x + 1 holds for all x.[19] This is a limiting case of Bernoulli's inequality.

Exponential-like functions

[Figure: the global maximum of x^(1/x) occurs at x = e.]

Steiner's problem asks to find the global maximum for the function

f(x) = x^(1/x).

This maximum occurs precisely at x = e. For proof, the inequality e^y ≥ y + 1 from above, evaluated at y = x/e − 1 and simplified, gives e^(x/e) ≥ x. Taking the 1/x power of both sides, e^(1/e) ≥ x^(1/x) for all positive x.[20]

Similarly, x = 1/e is where the global minimum occurs for the function f(x) = x^x, defined for positive x. More generally, for the function f(x) = x^(x^n) with n > 0, the global minimum for positive x occurs at x = e^(−1/n). The infinite tetration x^(x^(x^⋯)) converges if and only if e^(−e) ≤ x ≤ e^(1/e).

Number theory

The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion is infinite.[22] (See also Fourier's proof that e is irrational.)

Complex numbers

The exponential function e^x may be written as a Taylor series

e^x = Σ (n = 0 to ∞) x^n/n! = 1 + x + x²/2! + x³/3! + ⋯.

Evaluated at an imaginary argument, this series gives Euler's formula, e^(ix) = cos x + i sin x. Furthermore, using the laws for exponentiation,

(cos x + i sin x)^n = (e^(ix))^n = e^(inx) = cos nx + i sin nx,

which is de Moivre's formula. The expression cos x + i sin x is sometimes referred to as cis(x). The expressions of sin x and cos x in terms of the exponential function can be deduced:

sin x = (e^(ix) − e^(−ix))/(2i),  cos x = (e^(ix) + e^(−ix))/2.

Differential equations

The general function y(x) = C e^x is the solution to the differential equation:

y′ = y.

Representations

The number e can be represented by the limit and the infinite series given above. Less common is the simple continued fraction

e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, …] (sequence A003417 in the OEIS).

Another continued fraction expansion for e converges three times as quickly.[citation needed]

Stochastic representations

In addition to exact analytical expressions for representation of e, there are stochastic techniques for estimating e. One such approach begins with an infinite sequence of independent random variables X1, X2, …, drawn from the uniform distribution on [0, 1].
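This stochastic approach can be sketched as follows: count how many uniform draws are needed before their running sum exceeds 1, then average that count over many trials. A minimal Python sketch, assuming this standard uniform-sum estimator (function names are illustrative):

```python
import random

def draws_to_exceed_one(rng):
    """Number of Uniform(0,1) draws needed for the running sum to pass 1."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        count += 1
    return count

def estimate_e(trials=200_000, seed=1):
    """Average the draw count over many trials; its expectation is e."""
    rng = random.Random(seed)
    return sum(draws_to_exceed_one(rng) for _ in range(trials)) / trials

print(estimate_e())  # close to 2.71828
```

The estimate converges slowly (its standard error shrinks like 1/√trials), so this is a curiosity rather than a practical way to compute digits of e.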
Let V be the least number n such that the sum of the first n observations exceeds 1:

V = min{ n : X1 + X2 + ⋯ + Xn > 1 }.

Then the expected value of V is e: E(V) = e.[24][25]

Known digits

The number of known digits of e has increased substantially during the last decades. This is due both to the increased performance of computers and to algorithmic improvements.[26][27]

Number of known decimal digits of e
Date   Decimal digits   Computation performed by
1690   1                Jacob Bernoulli[7]
1714   13               Roger Cotes[28]
1748   23               Leonhard Euler[29]
1853   137              William Shanks[30]
1871   205              William Shanks[31]
1884   346              J. Marcus Boorman[32]
1949   2,010            John von Neumann (on the ENIAC)
1961   100,265          Daniel Shanks and John Wrench[33]
1978   116,000          Steve Wozniak on the Apple II[34]

Since that time, the proliferation of modern high-speed desktop computers has made it possible for amateurs, with the right hardware, to compute trillions of digits of e.[35]

In computer culture

For instance, in the IPO filing for Google in 2004, rather than a typical round-number amount of money, the company announced its intention to raise $2,718,281,828, which is e billion dollars rounded to the nearest dollar. Google was also responsible for a billboard[36] that appeared in the heart of Silicon Valley, and later in Cambridge, Massachusetts; Seattle, Washington; and Austin, Texas. It read "{first 10-digit prime found in consecutive digits of e}.com". Solving this problem and visiting the advertised (now defunct) web site led to an even more difficult problem to solve, which in turn led to Google Labs, where the visitor was invited to submit a résumé.[37] The first 10-digit prime in e is 7427466391, which starts at the 99th digit.[38]

References

1. ^ Oxford English Dictionary, 2nd ed.: natural logarithm
2. ^ Encyclopedic Dictionary of Mathematics 142.D
3. ^ Jerrold E. Marsden, Alan Weinstein (1985). Calculus. Springer. ISBN 978-0-387-90974-5.
4. ^ Sondow, Jonathan. "e". Wolfram MathWorld. Wolfram Research. Retrieved 10 May 2011.
5.
^ a b c O'Connor, J.J.; Robertson, E.F. "The number e". MacTutor History of Mathematics.
6. ^ Howard Whitley Eves (1969). An Introduction to the History of Mathematics. Holt, Rinehart & Winston. ISBN 978-0-03-029558-4.
9. ^ Lettre XV, Euler à Goldbach, dated November 25, 1731, in: P.H. Fuss, ed., Correspondance Mathématique et Physique de Quelques Célèbres Géomètres du XVIIIème Siècle (St. Petersburg, Russia: 1843), vol. 1, pp. 56–60; see especially p. 58: "… (e denotat hic numerum, cujus logarithmus hyperbolicus est = 1), …" (… e denotes that number whose hyperbolic [i.e., natural] logarithm is equal to 1 …).
10. ^ Remmert, Reinhold (1991). Theory of Complex Functions. Springer-Verlag. p. 136. ISBN 978-0-387-97195-7.
11. ^ Euler, Meditatio in experimenta explosione tormentorum nuper instituta.
12. ^ Leonhard Euler, Mechanica, sive Motus scientia analytice exposita (St. Petersburg (Petropoli), Russia: Academy of Sciences, 1736), vol. 1, ch. 2, cor. 11, par. 171, p. 68: "Erit enim … ubi e denotat numerum, cuius logarithmus hyperbolicus est 1." (So it [i.e., c, the speed] will be …, where e denotes the number whose hyperbolic [i.e., natural] logarithm is 1.)
13. ^ Grinstead, C.M. and Snell, J.L. Introduction to Probability Theory (published online under the GFDL), p. 85.
15. ^ Steven Finch (2003). Mathematical Constants. Cambridge University Press. p. 14.
17. ^ This is the approach taken by Kline (1998).
18. ^ Dorrie, Heinrich (1965). 100 Great Problems of Elementary Mathematics. Dover. pp. 44–48.
19. ^ A standard calculus exercise using the mean value theorem; see for example Apostol (1967), Calculus, §6.17.41.
22. ^ Sandifer, Ed (Feb 2006). "How Euler Did It: Who proved e is Irrational?" (PDF). MAA Online. Archived from the original (PDF) on 2014-02-23. Retrieved 2010-06-18.
23.
^ Hofstadter, D.R. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books (1995). ISBN 0-7139-9155-0.
24. ^ Russell, K.G. (1991). "Estimating the Value of e by Simulation". The American Statistician, Vol. 45, No. 1 (Feb. 1991), pp. 66–68.
26. ^ Sebah, P. and Gourdon, X. The constant e and its computation.
27. ^ Gourdon, X. Reported large computations with PiFast.
28. ^ Roger Cotes (1714). "Logometria". Philosophical Transactions of the Royal Society of London, 29 (338): 5–45; see especially the bottom of p. 10: "Porro eadem ratio est inter 2,718281828459 &c et 1, …" (Furthermore, by the same means, the ratio is between 2.718281828459… and 1, …).
30. ^ William Shanks, Contributions to Mathematics (London, England: G. Bell, 1853), p. 89.
32. ^ J. Marcus Boorman (October 1884). "Computation of the Naperian base". Mathematical Magazine, 1 (12): 204–205.
33. ^ Daniel Shanks and John W. Wrench (1962). "Calculation of Pi to 100,000 Decimals" (PDF). Mathematics of Computation. 16 (77): 76–99 (78). doi:10.2307/2003813. JSTOR 2003813. "We have computed e on a 7090 to 100,265D by the obvious program."
35. ^ Alexander Yee. "e".
36. ^ "First 10-digit prime found in consecutive digits of e". Brain Tags. Retrieved 2012-02-24.
37. ^ Shea, Andrea. "Google Entices Job-Searchers with Math Puzzle". NPR. Retrieved 2007-06-09.
38. ^ Kazmierczak, Marcus (2004-07-29). "Google Billboard". Retrieved 2007-06-09.
Clean Water & Food Security

Collectively, over 1 billion people are affected by food insecurity and the inability to access clean water. Driven by economic, environmental, and social factors such as crop failure, overpopulation, and policy, food insecurity and lack of clean water are issues faced around the world. For this challenge, we're asking you to build a solution that increases accessibility and alleviates hunger in a sustainable way in communities and regions where food and water are not readily available.