My understanding of how neurons transmit signals is pretty basic - dendrites receive signals (both excitatory and inhibitory), transmitting them to the cell body where, if a sufficient depolarization builds up, the axon fires an action potential, sending the signal along. This entire process is carried out by various ion channels, especially voltage gated ones. My question is this - do we know how sensitive the speed and reliability of the process are to genetic variations that control how these ion channels are built? Is it robust enough to permit small variations, or is any change catastrophic? If it permits small variations, do we know how variable the human population is for features like signal fidelity (low rate for failing to fire when the neuron should, and low rate for firing in the absence of stimulus) and transmission speed?
Considering the difficulty of studying humans, is this sort of information known for mice, rats, primates, or etc?
• 2
$\begingroup$ Action potentials are pretty robust in most circumstances, but minor tweaks at various stages of the process, whether conduction, neurotransmitter release, or post-synaptic responses, sometimes affecting only certain populations of cells, are thought to be behind many neurological disorders and conditions, including schizophrenia, autism, epilepsy, multiple sclerosis, and some peripheral neuropathies. Of course, many of these conditions have both environmental and genetic influences, so it is often difficult to attach a specific genetic cause. $\endgroup$ – Bryan Krause Jun 11 '17 at 21:37
• 1
$\begingroup$ There is a certain amount of variation in the genes that regulate ion channel/receptor structure, much of which has little or no effect on neuronal function. But as @BryanKrause says, many are known that do cause neuronal disorders. A very interesting question, but I don't know the genetics well enough to provide a decent answer. $\endgroup$ – Oliver Houston Jun 12 '17 at 15:56
• 1
$\begingroup$ @OliverHouston What I was trying to get at, although maybe not too clearly, is that a lot of the variation in those genes does not deterministically cause disorders, but that variation may predispose certain individuals to those disorders. For Sean: it isn't directly related to your question, but on the issue of robustness you could read some papers on multiple sclerosis. MS is a demyelinating disorder, which eventually impairs conductivity, but it actually takes quite a bit of demyelination before the symptoms are noticeable. $\endgroup$ – Bryan Krause Jun 12 '17 at 16:22
• 2
$\begingroup$ Got it - based on your SE background I see you are mostly a physics/math person. I would highly recommend the "NEURON" simulation which is freely available and documented in various places. There are pre-existing models that you could tweak to test some of your hypotheses. For example, if you are talking about conduction, rather than action potential initiation, you will find that one of the important factors in both threshold and reliability is voltage-gated sodium channel concentration. However, more Na+-VGCs will mean both a lower threshold and more reliable signalling. $\endgroup$ – Bryan Krause Jun 12 '17 at 16:36
• 1
$\begingroup$ Also note that in biology if you ask about variability: "do we know how sensitive the speed and reliability of the process are to genetic variations" - the answer is almost always going to depend somewhat on the phenotype. If a variation doesn't cause any disorder, there is no selective pressure to prevent that type of variation, so over generations that variation will tend to increase on average. The presence of some low-efficacy genetic causes for some of the disorders I mentioned is good evidence that there is some variability in the population, though. $\endgroup$ – Bryan Krause Jun 12 '17 at 16:38
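The effect described in the comments, that a lower firing threshold (standing in, loosely, for a higher voltage-gated sodium channel density) gives more reliable signalling, can be illustrated with a toy model. This is a minimal leaky integrate-and-fire sketch, not a NEURON simulation; all parameters (input drive, leak factor, noise level) are invented for illustration only:

```python
import random

def simulate_lif(threshold, n_trials=200, seed=0):
    """Toy leaky integrate-and-fire neuron: count how often a fixed
    noisy input drives the membrane potential past threshold."""
    rng = random.Random(seed)
    fires = 0
    for _ in range(n_trials):
        v = 0.0                            # membrane potential (arbitrary units)
        for _ in range(100):
            v += 0.5 + rng.gauss(0, 0.3)   # input current plus noise
            v *= 0.9                       # leak back toward rest
            if v >= threshold:
                fires += 1
                break
    return fires / n_trials

# A lower threshold (loosely, more sodium channels) fires more reliably
# on the same noisy input.
reliable = simulate_lif(threshold=4.0)
unreliable = simulate_lif(threshold=6.0)
```

With these made-up parameters the deterministic part of the input settles near 4.5, so a threshold of 4.0 is crossed on essentially every trial, while 6.0 is only crossed when noise cooperates, which is the sense in which threshold changes trade off sensitivity against spurious firing.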
HIGH GOVT / Online
Includes a brief study of the American political heritage; modern political and economic systems; events leading to the writing of the Declaration of Independence and the Constitution; and topics such as political parties, interest groups, and voting behavior. Examines legislative, executive, and judicial branches of the federal, state, and local governments, as well as civil rights.
Required textbook (Sold separately):
• Remy, Richard C. (2003). United States Government: Democracy in Action (Texas ed.). Columbus, OH: Glencoe McGraw-Hill. ISBN 0-07-828568-2 or 978-0-07-828568-4
Course: HIGH GOVT / Online
Schedule Number: 56520
Instructor(s): Jody Bolin
Units: 0.5 Academic Credits
Lessons/Exams: 1 final exam
Tuition & Mandatory Fees:
9th - 12th Grade Courses $225.00 |
Chew This: 10 Best Foods for Healthy Teeth
Foods for Healthy Teeth
Teeth are among the most important parts of our body, as they help in chewing food and breaking it into small particles. They also add beauty to our face with their dazzling shine. So it becomes very important to maintain good oral hygiene. Brushing is a very important task that you should perform twice a day to maintain healthy teeth and avoid mouth odor. But a few studies have shown that, besides brushing, one can include some foods in their diet to maintain good oral health. Here is the list of foods for healthy teeth.
1. Whole grains
Try to consume more foods that contain whole grains, like brown rice, bran, pasta, and cereals, as they are rich in iron and vitamin B, which are very beneficial for your gums. Plus, they also contain manganese, which can help in strengthening your bones and teeth. And the fiber content of whole grains can also help in easing the process of digestion.
2. Milk products
Cheese, yogurt, cottage cheese, and other milk products are very helpful for maintaining your oral health. Milk is known as the best source of calcium, and foods that contain a high amount of phosphorus and calcium can help in preventing tooth decay and can strengthen your teeth.
Also read: 7 Health Benefits of Eating Peanuts During Winter
3. Celery
Celery tastes bland, and for this reason it is not loved by many people. But you can also use it as a natural toothbrush, as it helps in removing bacteria and leftover food particles from your teeth. And its high content of vitamins C and A can further help in maintaining the health of your gums.
4. Sprouted beans
Sprouted beans are very healthy if eaten early in the morning. Sprouts not only help in improving metabolism but also help in losing fat. Besides this, sprouts are good for your teeth, as their rich fiber content stimulates your salivary glands, and they provide minerals that can help in preventing tooth decay.
Also read: 8 Ways to Get Enough Calcium Without Drinking Milk
5. Nuts
Nuts like peanuts, cashews, walnuts, and pistachios contain a high amount of calcium, magnesium, and phosphorus, which are very good for your dental health. Try to have a small bowl of nuts as an appetizer to maintain good health.
6. Lentils
Lentils like moong dal and masoor dal are a very rich source of protein and also aid in the process of digestion. The roughage present in lentils also helps in maintaining gum health and increasing salivation, and it neutralizes the alkalis and acids present in the mouth, thus preventing tooth decay.
Also read: 4 Diet Secrets to Get Glowing Skin and Beautiful Hair During Winter
7. Carrots
Carrots are very good for your teeth, as they increase the production of saliva, which can reduce the risk of cavities. They are also a good source of soluble fiber and vitamin A, which are very beneficial for your health. Try to add this super vegetable to your diet to reap its benefits.
8. Apples
Apples are natural sweeteners that contain a high amount of fiber and water, which is really good for your teeth. The crunchiness of apples induces the secretion of more saliva, which can reduce the chances of cavities and also helps in removing bacteria and leftover food. And eating one apple daily is really good for your gum health.
Also read: 10 Foods That One Should Avoid and Eat on an Empty Stomach
9. Cucumber
No doubt that cucumber is one of the best appetizers as it aids in the process of weight loss and also helps in boosting oral health by increasing the secretion of saliva. It also removes the bacteria and existing food particles from your teeth to prevent tooth decay.
10. Leafy green vegetables
Most of us are aware of the various health benefits of green vegetables, like weight loss and improved digestion. But a few veggies, like broccoli, kale, and spinach, are especially good for your teeth, as they contain a high amount of minerals and vitamins. They are also considered a great source of calcium, which can create a shield around your teeth to strengthen your enamel. Apart from this, green vegetables also contain folic acid, which can aid in the treatment of gum disease.
So, these were a few foods for healthy teeth that you should include in your diet to maintain good oral hygiene. |
Posted by Kingtous on March 17, 2019
A: There is always a lot of confusion about this concept. (And the naming does not help!) The other answers presented so far are not correct.
Firstly, we have to understand that the underlying problem is almost always a graph - So, the difference is not whether the problem is a tree (a special kind of graph), or a general graph!
The distinction instead is how we are traversing to search for our goal state. It also includes whether we are storing closed list, or not.
So, the basic difference is:
• If doing graph search, keep a “closed” list, that is, a list of nodes where the search has been completed.
• If doing tree search, we don’t keep this closed list.
The advantage of graph search obviously is that if we finish the search of a node, we will never search it again, while we may do so in tree search.
The disadvantage of graph search is that it uses more memory, which we may or may not have.
In other words, graph search vs. tree search is a simple space vs. time tradeoff.
Now, about the naming:
Graph search is called graph search because, when we observe the traversal structure, we observe a GRAPH: this node leads us to another node that we saw before, and so on.
Tree search is called tree search because, when we observe the traversal structure, we observe a TREE. We observe a tree even if the underlying problem structure is a graph. This is because when we observe a node, we have no recollection of having seen it earlier; we don't store that list. So, the same node in the underlying problem structure can appear multiple times (as different nodes) in the tree.
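The distinction can be made concrete with a short sketch. This is an illustrative breadth-first search in Python (the function name and the tiny example graph are mine, not from the answer); the only difference between the two modes is whether a closed list is kept:

```python
from collections import deque

def bfs(start, goal, neighbors, use_closed_list=True):
    """Breadth-first search. With use_closed_list=False this behaves like
    'tree search': the same underlying node can be expanded again and
    again, because nothing records that its search was completed."""
    frontier = deque([start])
    closed = set()
    expansions = 0
    while frontier:
        node = frontier.popleft()
        if use_closed_list:
            if node in closed:
                continue          # already searched: graph search skips it
            closed.add(node)
        expansions += 1
        if node == goal:
            return expansions
        for nxt in neighbors(node):
            frontier.append(nxt)
    return None

# A tiny graph with a cycle among A, B, and C.
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': []}
graph_search = bfs('A', 'D', graph.__getitem__, use_closed_list=True)
tree_search = bfs('A', 'D', graph.__getitem__, use_closed_list=False)
# graph_search expands fewer nodes than tree_search on the same problem.
```

Note that on a cyclic graph an uninformed tree search could revisit states indefinitely if the goal were unreachable; the closed list trades memory for that guarantee, which is exactly the space vs. time tradeoff described above.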
• https://ai.stackexchange.com/questions/6426/what-is-the-difference-between-tree-search-and-graph-search?answertab=oldest#tab-top |
Approach 1: Dijkstra's
Treating the original graph as a weighted, undirected graph, we can use Dijkstra's algorithm to find all reachable nodes in the original graph. However, this won't be enough to solve examples where subdivided edges are only used partially.
When we travel along an edge (in either direction), we can keep track of how much we use it. At the end, we want to know every node we reached in the original graph, plus the sum of the utilization of each edge.
We use Dijkstra's algorithm to find the shortest distance from our source to all targets. This is a textbook algorithm, refer to this link for more details.
Additionally, for each (directed) edge (node, nei), we'll keep track of how many "new" nodes (new from subdivision of the original edge) were used. At the end, we'll sum up the utilization of each edge.
Please see the inline comments for more details.
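The code the inline comments refer to is not reproduced on this page. Here is a sketch of the approach just described, assuming the problem is LeetCode 882, "Reachable Nodes in Subdivided Graph", where each edge (u, v, cnt) is subdivided into cnt new nodes and we count all nodes reachable from node 0 within max_moves:

```python
import heapq
from collections import defaultdict

def reachable_nodes(edges, max_moves, n):
    # Traversing original edge (u, v) with cnt subdivision nodes costs
    # cnt + 1 moves end to end.  (n, the number of original nodes, is
    # kept for signature parity with the problem but is not needed here.)
    graph = defaultdict(dict)
    for u, v, cnt in edges:
        graph[u][v] = cnt
        graph[v][u] = cnt

    # Dijkstra from node 0 over the original nodes, keeping only
    # distances within the move budget.
    dist = {0: 0}
    pq = [(0, 0)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for nei, cnt in graph[node].items():
            nd = d + cnt + 1
            if nd <= max_moves and nd < dist.get(nei, float('inf')):
                dist[nei] = nd
                heapq.heappush(pq, (nd, nei))

    # Every original node we reached counts once.
    ans = len(dist)

    # For each edge, sum the utilization of its subdivision nodes,
    # approached from both endpoints, capped at cnt.
    for u, v, cnt in edges:
        from_u = max(0, max_moves - dist[u]) if u in dist else 0
        from_v = max(0, max_moves - dist[v]) if v in dist else 0
        ans += min(cnt, from_u + from_v)
    return ans
```

On the problem's first sample, edges [[0,1,10],[0,2,1],[1,2,2]] with max_moves = 6, this returns 13: three original nodes plus 7 + 1 + 2 partially used subdivision nodes.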
Complexity Analysis
• Time Complexity: O(E log E), where E is the length of edges.
• Space Complexity: O(E).
Analysis written by: @awice. |
Germinating Seeds
what is ideal temperature for seed germination?
greenthumbde added on December 28, 2010 | Answered
I have a grow light set up in my office with several house plants thriving. I want to utilize these lights to germinate seeds for spring veggie and flower plants. I have a thermometer in one of the larger 10-inch pots and it shows a constant 63°. Is this adequate or does the seed starting media need to be warmer? What is the optimal temperature to receive the best results?
Thank you!
Answered on December 29, 2010
The germinating temperature range is 65 to 75 degrees F for most vegetables. Some cool-season vegetables such as cabbage, broccoli, cauliflower, and peas tend to do well when started at temperatures of about 55 degrees F. Once they have sprouted, keep the light 4-6 inches above the plants; otherwise they will get spindly reaching for the light. Give them 14-16 hours of light per day. One way to get optimum light on your plants is to make a foil tent over the light so that its light reflects all around the plants.
Answered on December 28, 2010
Most seeds need to germinate at a warm, constant temperature. I have found that many people do not know about "nicking." Nicking is required prior to germinating very hard seeds: if the outer shell of the seed is very hard, you must use a sharp razor knife to cut a pie-shaped nick out of the shell without damaging the inner seed. Once nicked (if needed), the top of a hot water heater provides a warm, constant temperature. Simply place the seeds in warm water for a little while, then place them in a folded, moist paper towel. Keep it moist until you see sprouting with two or more seeds, then move the sprouting seeds to a very rich, sandy loam soil in a small starter container. Keep the soil moist and provide sunlight or artificial light. Tend the starter container, feeding and watering, but do not overwater or burn the new sprouts with the artificial light. When the new sprouts and root base start to outgrow the small container, you can transplant to larger containers. Remember that plants go into a form of shock with transplanting, so you need to add liquid vitamins; many companies have products specific for your plants.
What is the JavaScript >>> operator and how do you use it?
1>>>0 === 1
-1>>>0 === 0xFFFFFFFF    (but -1>>0 === -1)
1.7>>>0 === 1
0x100000002>>>0 === 2
1e21>>>0 === 0xDEA00000    (but 1e21>>0 === -0x21600000)
Infinity>>>0 === 0
NaN>>>0 === 0
null>>>0 === 0
'1'>>>0 === 1
'x'>>>0 === 0
Object>>>0 === 0
(*: well, they're defined as behaving like floats. It wouldn't surprise me if some JavaScript engine actually used ints when it could, for performance reasons. But that would be an implementation detail you wouldn't get to take any advantage of.)
(In reality there's little practical need for this as hopefully people aren't going to be setting array.length to 0.5, -1, 1e21 or 'LEMONS'. But this is JavaScript authors we're talking about, so you never know...)
+1 for array.length = LEMONS. Have to find a really annoying person so I can play this trick on their code :3
+2 in depth description and table, -1 because array.length validates itself and can't be arbitrarily set to anything that is not an integer or 0 (FF throws this error: RangeError: invalid array length).
Although JavaScript's Numbers are double-precision floats(*), the bitwise operators (<<, >>, &, | and ~) are defined in terms of operations on 32-bit integers. Doing a bitwise operation converts the number to a 32-bit signed int, losing any fractions and higher-place bits than 32, before doing the calculation and then converting back to Number.
Great explanation and great examples! Unfortunately this is another insane aspect of Javascript. I just don't understand what's so horrible about throwing an error when you receive the wrong type. It's possible to allow dynamic typing without allowing every accidental mistake to create a type casting. :(
However, the spec deliberately allows many Array functions to be called on non-Array (eg. via, so array might not actually be a real Array: it might be some other user-defined class. (Unfortunately, it can't reliably be a NodeList, which is when you'd really want to do that, as that's a host object. That leaves the only place you'd realistically do that as the arguments pseudo-Array.)
In this case this is useful because ECMAScript defines Array indexes in terms of 32 bit unsigned ints. So if you're trying to implement array.filter in a way that exactly duplicates what the ECMAScript Fifth Edition standard says, you would cast the number to 32-bit unsigned int like this.
It doesn't just convert non-Numbers to Number, it converts them to Numbers that can be expressed as 32-bit unsigned ints.
Oh, and @yoshi, please don't! Let's not scare off those trying to learn...
So doing a bitwise operation with no actual effect, like a rightward-shift of 0 bits >>0, is a quick way to round a number and ensure it is in the 32-bit int range. Additionally, the triple >>> operator, after doing its unsigned operation, converts the results of its calculation to Number as an unsigned integer rather than the signed integer the others do, so it can be used to convert negatives to the 32-bit-two's-complement version as a large Number. Using >>>0 ensures you've got an integer between 0 and 0xFFFFFFFF.
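For readers coming from other languages, the ToUint32 conversion that >>>0 performs can be sketched outside JavaScript. This Python helper is my own illustration and covers only numeric inputs (JavaScript additionally applies ToNumber to strings, null, and objects first), but it reproduces the numeric examples above:

```python
import math

def to_uint32(x):
    """Emulate ECMAScript's ToUint32, the conversion x>>>0 applies to a
    Number: NaN and +/-Infinity map to 0; otherwise truncate toward
    zero and reduce modulo 2**32."""
    if x != x or x in (float('inf'), float('-inf')):  # NaN or +/-Infinity
        return 0
    return int(math.trunc(x)) % (2 ** 32)

# to_uint32(-1) gives 0xFFFFFFFF, and to_uint32(1.7) gives 1,
# matching -1>>>0 and 1.7>>>0 in JavaScript.
```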
|
Implementing the 3Rs
• Replacement: methods which avoid or replace animal use - our principal goal
• Reduction: ensuring that the minimum number of animals is used to answer the scientific question, using effective experimental design and statistical analysis to optimise numbers and avoid wasting animals
• Refinement: reducing suffering and improving welfare throughout animals' lives, including procedures, housing, husbandry and care
We strongly promote fuller implementation of all 3Rs, and we recognise the work done by other bodies to develop and validate humane alternatives, and to address the current crisis with reproducibility and translatability in the life sciences.
Many of the RSPCA's long-term initiatives address refinement, as this 'R' is a key area of expertise for us and our work in this field can benefit millions of animals immediately.
Our current refinement work falls into three main areas:
We believe that the lives and experiences of laboratory animals matter, and we value all animals used in research and testing equally.
The RSPCA believes it is essential that students whose future careers may involve animal use fully understand the ethical, welfare and scientific issues involved. We are currently working with UK doctoral schools to prioritise animal welfare, ethics and the 3Rs within the syllabus in further and higher education.
Download the 'R of replacement' educational resource (PDF 2.95MB).
See our work on biotechnology and genetically altered animals. |
Govern AI technology in healthcare to keep patients safe
AI can be a boon to healthcare providers, but groups like the American Medical Association recommend putting policies in place to protect patient data and privacy.
AI technology in healthcare has shown tremendous potential to help providers improve patient outcomes and deliver on advancements to fight and cure diseases. Its ability to quickly process vast amounts of data and deliver meaningful insights within seconds based on advanced algorithms makes it the perfect tool to assist physicians. However, despite the excitement that AI has been receiving in fields like oncology and population health analysis, policymakers are not willing to let AI run wild without governance due to concerns around patient privacy and safety.
The American Medical Association (AMA) is one group that has recently acknowledged the need to draft policy around AI technology in healthcare to address concerns around patient safety. In a release, the AMA detailed its policy recommendations on augmented intelligence. The recommendations detailed the benefits AI has to offer medicine, as well as highlighted the concerns related to the design, use and implementation of AI in healthcare.
The AMA's recommendations for the use of AI technology in healthcare include:
• defining key priority areas where AI would provide the most benefits;
• taking into consideration physicians' perspectives as part of the design and implementation of AI in healthcare;
• ensuring patient safeguards and privacy when it comes to designing and deploying AI;
• promoting an understanding of what AI has to offer, but also educating patients on its limitations; and
• determining any legal implications from healthcare AI relating to its safety, effectiveness and oversight.
The AMA is hardly the first to raise concerns around the use of AI technology in healthcare in recent months. In May, the New England Journal of Medicine published an article highlighting some concerns researchers at Stanford University had about the ethical challenges associated with the use of AI in healthcare.
Today most AI tools that are considered by physicians for patient treatment require access to patients' medical records, which are often stored in different systems in the hospital. AI processes the information it accesses and, once specific algorithms are applied, the results are provided in the form of suggested treatment plans, diagnosis predictions, or other clinical insights. Permission for AI to access patient records is given by default since most AI tools are hosted within hospital systems. However, the greater concern is around patient privacy and acknowledging that AI technology in healthcare has much higher risk factors when it comes to security and data privacy.
Although the AMA's recommendations cover most of what physicians and patients would be interested in seeing in a policy around healthcare-related AI technology, security and privacy will likely be the two most pressing areas AI vendors have to address. Some of the factors contributing to the concerns around privacy are directly related to what provides AI its data and power. These factors include:
• moving patient data outside of the hospital firewalls in order to process it in the AI vendor's own data centers;
• increasing data breach risks as the result of centralizing vast amounts of health data;
• increasing the attack surface by introducing more systems that touch patient data; and
• mixing nonclinical data, such as social media and other patient-generated information, in order to link lifestyle and behaviors to illness and wellness.
Technological innovations in AI are proving to be the next big thing in healthcare. Although adoption of AI technology in healthcare is still limited to health systems like the Mayo Clinic, the results of its early use in the likes of Google, Apple and IBM have proved to advance investments in this arena. Regardless of how big the tech companies investing in AI for healthcare are, developing appropriate policies and standards will help ensure accountability, safety and security when it comes to its use in patient care, data protection and privacy.
This was last published in July 2018
Dig Deeper on Health care business intelligence systems
How can price discrimination benefit both consumers and producers?
1 Answer
Jan 1, 2016
Basically, it's because producers get to sell more and consumers get to buy at prices they can afford.
There are three degrees of price discrimination, in Microeconomic Theory.
First degree is when each unit can be sold at a different price, what some might call one-to-one sales (or one-to-one marketing). This is very uncommon (though it might happen with club membership fees, for example) and is also named "perfect price discrimination".
Second degree is when the seller differentiates their products so that consumers will decide where they fit, while third degree is when the product is not differentiated, but the consumers are allocated into categories.
In these two cases, in one way or another, consumers will find what suits their needs and is more easily affordable, while the company (seller) will more easily reach a greater demand; that is, if prices were not discriminated, their number of customers would probably be lower.
Think of it like this: it is better to have people buying a can of soda for $2 and not buying the $5 two-liter bottle than simply not buying the $5 two-liter bottle. |
The Health Benefits of Cucumber (Cucumis Sativus)
Frequent consumption of cucumber has long been linked to a reduced risk of many adverse health conditions.
As a member of the botanical family Cucurbitaceae, it is made up of 95 percent water. Cucumbers are naturally low in calories, fat, cholesterol, and sodium.
Surveys have indicated that increasing consumption of plant foods like cucumber decreases the risk of heart disease, obesity, diabetes, and mortality while promoting a healthy complexion, lower weight, and increased vitality.
Cucumbers are extremely beneficial for overall wellness, especially during the dry season when dehydration is on the high side; since it is mostly made of water and important nutrients that are essential for the human body.
Mineral Constituents
The flesh of a cucumber is rich in vitamin A, vitamin C, and folic acid, while the hard skin of cucumber is rich in fibre and a range of minerals, including magnesium, molybdenum, and potassium. In addition, cucumber contains silica, a trace mineral that contributes greatly to strengthening our connective tissues.
Benefits of Cucumber
Cucumbers are effective for healing many skin problems, under-eye swelling, and sunburn. Cucumbers also contain ascorbic and caffeic acids, which prevent water loss; this is why cucumber is routinely applied topically to burns and dermatitis.
It has been useful for diabetic patients for many years. Cucumbers possess a hormone required by the beta cells during insulin production.
Cucumber helps lower blood pressure and keeps the body healthy and functioning. Its health benefits also include prevention of constipation, a reduction in kidney stones, and a bright, glowing complexion.
By Adeyemi Bamidele Ezekiel |
10 Crazy Cool Facts About Your Heart | Arena District Athletic Club
Steve Levert
Mar 07, 2019
10 Crazy Cool Facts About Your Heart
Other than the occasional doctor’s visit, how often are you thinking about your heart? Considering the amount of work this pint-size muscle performs on a daily basis, it’s a wonder we don’t thank it by the minute. From the gallons of blood it pumps to the impact our emotions can have on it, here are some remarkable facts sure to renew your appreciation for that tireless ticker.
1. Heart at Work
Beginning a mere four weeks after conception, the heart works ceaselessly (it doesn’t even pause when you sneeze, despite popular belief). From there, it beats about 100,000 times a day and 40 million times a year (the exact figure varies by sex, since a woman’s average heartbeat is faster than a man’s by eight beats per minute). This adds up to 3 billion heartbeats over the course of 75 years.
2. Fully Charged
If separated from the body, the heart will continue to beat because it has its own electrical supply. This is due to electrical impulses sent from the myocardium (the “muscle” of the heart), causing the heart to contract when signaled by the sinoatrial node at the top right atrium. The sinoatrial node is also referred to as the heart’s “natural pacemaker.”
3. Pump Power
Though it may be small — weighing between seven and 15 ounces — the heart is mighty. It pumps about 100 gallons of blood through the body each hour. That’s enough to fill 1,600 drinking glasses, which clocks in at 2,000 gallons each day. That means that over the course of 75 years the heart will pump about 1 million barrels of blood.
4. Head to Toes
The heart pumps blood to all 75 trillion cells in the body (with the exception of the corneas). When the body is at rest, it takes only six seconds for the blood to go from the heart to the lungs and back; eight seconds for it to go to the brain and back; and 16 seconds for it to go from the heart to the toes and back again.
5. Going the Distance
In just one day the heart pumps blood through 12,000 miles of vessels, which is four times the distance from New York to California. Even more incredible? If all of these vessels were placed end to end, they would extend about 60,000 miles, which is enough to circle the earth more than twice.
6. Heart Disease Knows No Age
Heart disease — the biggest killer of Americans — is no longer reserved for the middle-aged and elderly. Research within the past couple of years has found that obese children as young as 8 years old are developing thickened heart tissue, a precursor to heart disease. Researchers stress the importance of increased physical activity to counteract these findings.
7. Heartbreak Is Real
When you lose someone close to you, the emotional burden can take a physical toll on your heart. Stress cardiomyopathy, or “broken heart syndrome,” occurs when an emotionally stressful event induces symptoms of a heart attack, such as shortness of breath and chest pains. Another study found that those who have lost a partner also have a higher risk of atrial fibrillation, or developing an irregular heartbeat, within the first year. The risk is especially high among younger people and if the loss was sudden or unexpected.
8. Happiness Might Not Help
Interestingly, joyous occasions can likewise trigger “broken heart syndrome,” though it’s much more rare. A recent study in the European Heart Journal found that happy events like surprise parties or winning the lottery can put sudden stress on the heart — a condition that’s been dubbed “happy heart syndrome,” which appears to primarily affect postmenopausal women.
9. Cold and Wind Are Hard on Your Heart
Physical exertion in cold or windy weather — even activities like shoveling or walking through the snow — can put a heavy burden on your heart as your internal body heat drops. The American Heart Association recommends wearing warm layers and to always don a hat in cold or windy weather to keep heat from escaping and prevent your heart from having to overcompensate.
10. Heart Cell Phenomena
Heart cells stop dividing shortly after birth, reducing the risk of mutations. This is why heart cancer is so rare. However, because heart cells don’t regenerate, the heart is also unable to heal itself and will carry lasting scars from any heart damage. That said, emerging research is now finding that blood cells may be “reprogrammed” to act as heart muscle cells to regenerate a heart, which could reduce the need for heart transplants in the future.
By Paige Brettingen
|
Biblical Archaeology Review 11:1, January/February 1985
Challenge to Sun-Worship Interpretation of Temple Scroll’s Gilded Staircase
By Jacob Milgrom
In “The Case of the Gilded Staircase,” BAR 10:05, Professor Morton Smith attempts to prove that the Temple envisioned by the Essenes had a gilded staircase to reach the roof of the Temple where members of the Dead Sea sect worshipped the sun. As can be expected from Smith’s well-attested erudition, his contribution is informative and insightful. However, his thesis that the Essenes worshipped the sun must be rejected in toto.
Smith himself is fully aware of the major obstacle to his thesis: The Essenes were a fundamentalist sect that interpreted the Torah (the Pentateuch) literally. Their Temple Scroll—the same document that prescribes the Temple’s gilded staircase—also expressly cites the Deuteronomic prohibition against worshipping the sun (Temple Scroll, Column 55:17–18). Smith suggests that the sect may have rationalized its blatantly heretical behavior by claiming that it “reverenced” rather than worshipped the sun.
|
Car Culture
Sidewalk Labs has four design ideas that could change traffic and the way we use our streets
What if the streets outside your door were packed with pedestrians and cyclists, and cars were less important?
Spencer Platt/Getty Images
Traffic is a problem all over America. In our urban centers, traffic causes noise pollution as well as actual pollution and wastes time, money and resources. In a word, traffic sucks. Even in cities that were designed around the automobile, like Los Angeles, the sheer number of cars on the road forces municipalities to make infrastructure decisions that can make life for pedestrians and cyclists miserable.
On Friday a group called Sidewalk Labs published a document outlining what it thinks the four principles of modern street design should be and some of the ideas are pretty wild. Others just make too much sense to ignore.
The first principle that Sidewalk Labs suggests is that different streets should be tailored specifically for different roles. This means that there would ideally be streets for pedestrians and cyclists only, streets for regular cars, streets for transit and streets for autonomous and connected vehicles.
Laneways would be 35 feet wide and almost exclusively for pedestrian use.
Sidewalk Labs
In some ways, this is a good idea. Each mode of transportation has different needs, some of which are mutually exclusive. Cars, for example, need more space and that space can't be shared as easily or safely with pedestrians. On the other hand, streets that prioritize above-ground light rail -- like Portland, Oregon's streetcar system -- don't necessarily jibe well with cyclists but would be fine with cars or pedestrians.
The second principle being put forth is that streets should be separated by speed. This means that the bigger and wider streets would handle the transit and vehicle traffic, but there would be fewer of them, so the distance traveled would be greater.
Pedestrian-focused "laneways" would be more common, so the physical distance traveled would be much less, but the speed of travel for someone on foot is very low, so the time spent en route wouldn't be dramatically different from that of other modes of travel.
The Labs' third principle of design is that urban planners should incorporate flexibility into street space. This would be done with something called dynamic pavement. This is another name for pavement with lots of LEDs and sensors in it that could be programmed to show different markings at different times.
For example, during low-traffic density hours, a street typically geared toward bikes and pedestrians might be temporarily restriped to offer parking to cars. Those cars aren't moving quickly, so the narrower, slower street type wouldn't restrict them too much. Open parking spaces could be lit up to help cars find them quickly. When demand changes and parking isn't as much of a priority, the road markings would change.
Sidewalk Labs' idea for dynamic roads would change parts of the street to fit different purposes at different times. Sometimes areas would be used for parking, at others they'd be pedestrian-only and LEDs embedded in the pavement would denote the change.
Sidewalk Labs
This is something that is done on a more limited basis today, but with street signs. For example, in downtown Los Angeles, the lane closest to the curb is usually designated as a parking lane, but during peak traffic hours, parking isn't allowed, and that lane is used to lessen the overall density of cars on the road, hopefully speeding up traffic. Being able to change markings dynamically would bring a lot more flexibility to bear on the problem of traffic.
The fourth and final principle that Sidewalk Labs suggests is likely to be the most controversial to motorists. It advises that the city "recapture street space for the public realm, transit, bikes and pedestrians."
What does that mean? This principle takes the other three principles and wraps them all up together. It suggests that most of the space devoted to cars and parking be removed and repurposed for pedestrians, cyclists and public transit to make the city more livable and pleasant.
The problem with this idea, in our opinion, is that it can make access to businesses or residences difficult or impossible. What if, for example, you live in the middle of a busy downtown center -- we'll use Los Angeles as an example again -- and you have a vehicle that you park underground?
This diagram breaks down the different types of roads that Sidewalk Labs envisions cities adopting, with each street having a different width, a different speed limit and a different purpose.
Sidewalk Labs
Your building is hemmed in by a pair of streets that get redesignated as a "laneway" and an "accessway," which means that getting your car out of the garage and out of downtown becomes infinitely more complicated and time-consuming. Ditto if you are coming into downtown by car because you live somewhere with limited access to transit and you need to go to a doctor's appointment, but that doctor's office is on a street designated as a laneway.
A lot of these ideas will make more sense when the ratio of human-driven cars versus autonomous cars shifts in favor of autonomous vehicles, but that is years if not decades off. Until then you'd have to rely on people to understand and obey these new kinds of traffic control ideas and devices.
All that being said, it's one of the more interesting -- and, in some ways at least, more practical -- looks at how to reimagine the way we use our city streets. We've seen time and again that infrastructure as it exists now and as it is traditionally built has difficulty meeting the needs of a population that is both growing and increasingly mobile. Maybe Sidewalk Labs is on the right track toward finding a solution. |
Robotic Surgery: A Surgical Essay
1432 Words, Nov 21st, 2015
How It Works
The da Vinci was approved by the FDA in July 2000. It is operated by a surgeon using hand controls on a computer console away from the patient. The robot has three or four arms; a tiny video camera is attached to one arm, and the others are tipped with tiny surgical instruments. The da Vinci is the biggest thing in operating rooms nowadays.
Machines Role
Robotic surgery is a minimally invasive surgery, much like laparoscopic surgery, which is done through a small incision. While laparoscopy is limited in imaging and instrument motion, computer-enhanced technology has advanced enough to overcome these limitations. With the development of robotic surgery come 3-dimensional imaging for advanced vision and the EndoWrist, an instrument that rotates 360 degrees. The 3-dimensional imaging is a major part of the machine's role because it helps the surgeon see precisely where they are and what they are about to work on inside the human body. It gives them a closer look inside. The machine also…
|
LINUXMAKER, OpenSource, Tutorials
Presentation Layer
The presentation layer (data representation layer, data provision layer) converts the system-dependent representation of the data (for example ASCII, EBCDIC) into an independent form and thus enables syntactically correct data exchange between different systems.
Tasks such as data compression and encryption are also part of the presentation layer. The presentation layer ensures that data sent from the application layer of one system can be read by the application layer of another system. If necessary, the presentation layer acts as a translator between different data formats, using a data format understood by both systems: ASN.1 (Abstract Syntax Notation One).
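As a minimal illustration of this translation role, the sketch below uses Python's built-in codecs to stand in for the presentation layer itself; `cp500` is a standard Python codec name for an EBCDIC code page:

```python
# Minimal illustration of the presentation layer's job: translating between
# system-dependent encodings (here EBCDIC and ASCII) so that both ends read
# the same text. Python's codec machinery stands in for the OSI layer.
text = "HELLO"
ebcdic_bytes = text.encode("cp500")  # EBCDIC code page used by IBM mainframes
ascii_bytes = text.encode("ascii")   # ASCII, used by most other systems

assert ebcdic_bytes != ascii_bytes   # same text, different wire representations

# The receiving side's "presentation layer" converts the EBCDIC bytes back,
# so its application layer sees the original data regardless of encoding.
decoded = ebcdic_bytes.decode("cp500")
print(decoded)  # HELLO
```

Real presentation-layer protocols such as the ones listed below negotiate a shared transfer syntax (e.g. ASN.1 encodings) rather than hard-coding one, but the principle is the same.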
Protocols and standards: ISO 8822 / X.216 (Presentation Service), ISO 8823 / X.226 (Connection-Oriented Presentation Protocol), ISO 9576 (Connectionless Presentation Protocol) |
Heat wave over Greenland: February 27, 2014
Another look at the heat wave over the Arctic Circle and Greenland.
This is from today, February 27, 2014. Look at the above-average temperatures in the Arctic. Greenland's temperature was 8 degrees Celsius warmer than the 1979-to-2000 average. Areas near Alaska showed temperature departures in the range of 15 to 20 degrees Celsius above average. Meanwhile, a zone of cold Arctic air has moved south over the US, with temperature differentials setting up conditions ripe for extreme weather.
(Climate Change Institute map showing an Arctic heat anomaly 2.68°C above the already warmer-than-normal 1979-to-2000 average. Image source: Climate Reanalyzer.)
Almaden Reservoir near San Jose shows the strain of California's megadrought.
The governor has declared a drought "state of emergency."
Bakersfield, California where many farms have been abandoned.
California's Folsom Lake in July 2011
California's Folsom Lake in February 2014
Greenland Ice Sheet is melting
Greenland is a vast store of ice. Nearly two miles thick at its center, it contains enough ice to raise the world’s sea levels by 23 feet. Satellite observations from July 2012 revealed a dramatic and unprecedented level of ice melt in Greenland. Scientists say an estimated 97 percent of Greenland's ice sheet surface, from its thin, low-lying coastal edges to its center, thawed at some point in mid-July.
The Greenland Ice Sheet is Starting to melt.
Reports coming in over the past decade show that the vast two mile high Greenland ice sheet is starting to melt.
Thousands of melt ponds appearing on Greenland's ice sheet.
One melt pond the size of a lake, draining off to the ocean. There are thousands of these.
Unfortunately, we may be too late to avoid a catastrophic global sea level rise. Under the current climate change effect, the Greenland ice sheet is sagging and deforming, filling with melt ponds and flows that flush through to its base, and slipping toward the ocean at an ever increasing pace.
Research conducted by Arctic scientists shows that the ice sheet’s speed is increasing by a rate of about 2-3 percent per year. This speed of increase results in the disgorging of vast volumes of icebergs and melt waters into the North Atlantic. An average of about 500 cubic kilometers of icebergs and melt waters are now flowing into the ocean from Greenland alone. But with the pace of ice sheet melt and movement picking up, we are at the beginning of a very risky situation. The melt forces eventually reach a tipping point. The warmer water greatly softens the ice sheet. Floods of water flow out beneath the ice. Ice ponds grow into great lakes that may spill out both over top of the ice and underneath it. Large ice dams start to form. All this time ice motion and melt is accelerating.
We will reach a tipping point, and a surge of water and ice will enter the Atlantic Ocean. Tsunamis of melt water bearing vast icebergs will contribute to sea level rise. And then the weather starts to get really nasty.
Extraordinarily Rapid Arctic Amplification.
Despite the various reassurances, what we have seen over the past seven years or so is an extraordinarily rapid amplification of heat within the Arctic. Arctic sea ice continues its death spiral, hitting new record lows at various times at least once a year. Heat keeps funneling into the Arctic, resulting in heatwaves that bring 90 degree temperatures to Arctic Ocean shores during summer and unprecedented Alaskan melts during January. And we see periods during winter when sea ice goes through extended stretches of melt, as we did just last week in the region of Svalbard. One need only look at the temperature anomaly map for the last 30 days to know that something is dreadfully, dreadfully wrong with the Arctic:
The human greenhouse gas effect is powerful. About 100 parts per million of additional CO2 was enough to end the last Ice Age. Today, CO2 has risen by 120 ppm and continues to rise by 2-3 parts per million each year, even as other rising greenhouse gasses, primarily methane, add to the warming. Because CO2 can remain in the atmosphere for a century or longer, its increasing concentration warms our climate over long periods of time. Through its absorption and emission of energy back toward Earth’s surface, increased atmospheric CO2 traps more heat in the climate system.
Its warming effect, however, is amplified by positive feedbacks such as increased water vapor, reduced reflectivity of the ice, changes in cloud characteristics, and CO2 exchanges with the ocean and terrestrial ecosystems. The current pace and path of emissions makes a bad situation worse, as a CO2 rise to at least 480 ppm is predicted by mid-century. End-of-century estimates come in at the catastrophic level of 800 ppm of CO2 and related greenhouse gasses.
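The mid-century figure is consistent with simple arithmetic on the growth rate quoted above. A rough sketch follows; the ~400 ppm baseline around 2014 and the strictly linear growth rate are simplifying assumptions for illustration, not output from a climate model:

```python
# Naive linear projection of atmospheric CO2 from the figures in the text.
# Assumptions (illustrative only): ~400 ppm baseline in 2014,
# constant growth of 2-3 ppm per year.
START_YEAR, START_PPM = 2014, 400.0

def project(year, ppm_per_year):
    """Linear CO2 projection from the assumed 2014 baseline."""
    return START_PPM + ppm_per_year * (year - START_YEAR)

low, high = project(2050, 2.0), project(2050, 3.0)
print(f"2050 estimate: {low:.0f}-{high:.0f} ppm")  # 472-508 ppm, spanning ~480 ppm
```

In reality the annual increment has itself been growing, so a linear projection is, if anything, conservative.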
Severe weather across the U.S. Southeast, Midwest, and East Coast February 21, 2014
A potent area of low pressure will intensify Thursday and Friday to produce lots of rain, snow, wind, and possibly some flying monkeys. Travelers are advised to avoid strangers on the yellow brick road.
A potent area of low pressure will intensify Thursday and Friday (February 20-21, 2014) to produce severe weather across the U.S. Southeast, Midwest, and the East Coast. The area of low pressure will also be responsible for producing blizzard conditions across parts of southern Minnesota and into north/central Iowa. Many spots will see over six inches of snow with winds gusting as high as 60 miles per hour (mph). Ahead of the front, temperatures are extremely warm. With a potent cold front pushing into an area seeing temperatures 10-20 degrees above average, you can bet that severe weather will likely develop. The biggest threat for severe weather today will occur across Mississippi, Alabama, Louisiana, Tennessee, Kentucky, Indiana, Illinois, and Ohio. By Friday, the threat shifts to the U.S. East Coast.
Jennifer Francis: "Understanding the Jetstream".
The extreme weather trend in the Northern Hemisphere is recent, so the science that can explain what is happening is still tentative. The first hypothesis blamed a slowing of the Northern Hemisphere’s polar jet stream. That sounded plausible, and was published in 2012 in Geophysical Research Letters. The paper, “Evidence linking Arctic amplification to extreme weather in mid-latitudes”, was written by Jennifer Francis of Rutgers University and Stephen Vavrus of the University of Wisconsin-Madison. I think that their theory will turn out to be right. That is not good news....
Jennifer Francis - Understanding the Jetstream
The fact is that the Arctic has been warming faster than anywhere else on Earth, and the difference in temperature between the Arctic and the temperate zone has been shrinking. Since that difference in temperature is what drives the jet stream, a lower difference means a slower jet stream.
A fast jet stream travels in a straight line around the planet from west to east, just like a mountain stream goes straight downhill. A slower jet stream, however, meanders like a river crossing a plain. The loops it makes extend much further south and north than when it was moving fast.
In a big southerly loop, you will have Arctic air much further south than usual, while there will be relatively warm air from the temperate air mass in a northerly loop that extends up into the Arctic. Moreover, the slower-moving jet stream tends to get “stuck”, so that a given kind of weather—snow or rain or heat—will stay longer over the same area.
Hence the “polar-vortex” winter in North America this year, the record snowfalls in Japan in 2012 and again this winter, the lethal heat waves in the eastern U.S. in 2012—and the floods in Britain this winter.
“They’ve been pummelled by storm after storm this winter [in Britain],” said Francis at the American Association for the Advancement of Science conference in Chicago last week. “It’s been amazing what’s going on, and it’s because the pattern this winter has been stuck in one place ever since early December.” There’s no particular reason to think that it will move on soon, either.
Climate Change 2013: Greenland Ice Sheet & Northern Polar Jet Stream - Peter Sinclair
Enormous chunks of ice melting into the sea.
Total Precipitable Water - Global
SSMI/SSMIS/TMI-derived Total Precipitable Water - Global
Helicopter flyover Thames Valley (Flooding in England).
Birds-Eye View Of River Thames As It Reaches Highest Level Since 1883. Video recorded February 10, 2014. The UK is experiencing its most exceptional period of rainfall in 248 years, with hundreds of flood alert warnings covering much of the country and hundreds of homes left inundated.
Another major storm in the Atlantic expected to develop hurricane force winds.
Another major winter storm brewing in the North Atlantic. This is the latest snowstorm to blanket New England in a foot of snow. Picking up speed and moving east to Europe and possibly Great Britain.
An animated loop of today's visible satellite imagery combined with lightning density has been created, and it shows this rapidly intensifying system with thunderstorms developing ahead of the strong cold front : http://go.usa.gov/BVJm
"Rapidly intensifying low pressure off the mid-Atlantic coast is expected to develop hurricane force winds within the next 24 hours in the W Atlantic. The top portion of the image contains segments of the OPC 24-Hour W Atlantic Surface and Wind/Wave forecasts produced earlier today. A composite forecast of the low track through 120 hours from today's 12Z OPC charts is in the lower left, and a part of the 18Z Atlantic Surface Analysis is in the lower right.
The forecast central pressure of the low center at 24 hours is 965 hPa with winds in the south quadrant up to 65 kt. OPC is forecasting significant wave heights up to 36 ft in the vicinity of the hurricane force winds. Full versions of these charts are available on the main OPC website:
http://go.usa.gov/ByF3 ~ NOAA NWS Ocean Prediction Center
More rain expected in the UK : Update on England Floods
I learned a new word today. "Extratropical cyclone".
The Environment Agency has issued 24 severe flood warnings.
The latest warnings are for various places along the south coast of England.
The Met Office said that following the heavy rain that had fallen in many places during the day, it expected "potentially damaging" severe gales in southern England during the evening and into Saturday morning.
BBC Weather's Peter Gibbs said that with gusts of up to 80 mph likely, there was a danger of high tides bringing fresh coastal flooding.
About 2,200 armed forces personnel - regulars and reserves - are helping the flood relief effort and a further 3,000 are on standby to respond within two hours. Flood defenses in Gloucester are succeeding in holding back the water, according to the Environment Agency. Council staff in Hampshire have been moved from their "normal day jobs" to help the flood relief effort. The prime minister said UK businesses were offering "free help" to those affected by the flooding. Major supermarkets are providing supplies such as waders, food parcels, batteries and torches.
Police have appealed to drivers across Northern Ireland to take extra care as rain and snow disrupt travel.
Heavy rain brings widespread flooding to England.
After weeks of above average rainfall and a series of heavy winter storms that had the UK in its cross-hairs, the aftermath is what one would see after a hurricane: major damage to infrastructure, loss of thousands of cars, and property damages in the billions due to flooding. Most of southern England is in a high level flood alert, with more rain on the way. The severe weather has caused disruption for commuters which could take months to resolve, with thousands of acres of land affected, and residents who have been flooded for weeks. The situation has reached a critical level.
The Thames Barrier is a series of giant metal gates downstream of central London that can be closed against tidal surges; it is what protects London from tidal flooding. The barrier has closed almost as many times in the past six weeks as in the whole of the 1990s: the wettest period of weather for a century has seen it closed a record 29 times since the beginning of the year, compared with 35 times in the entire decade of the 1990s. With thousands of homes along the Thames threatened by flood water, the emergency work to prevent flooding continues, including the distribution of tens of thousands more sandbags by the Royal Marines. As the waters continued to rise, some residents tried to help others who wanted to leave, while some were concerned about how they would get to work today, with cars trapped and the rail line to London closed.
England has had its wettest January since 1766. Its southwest coast has been battered repeatedly by storms, and a large area of the low-lying Somerset Levels in the southwest has been under water for more than a month. The disaster has sparked a political storm. Prime Minister David Cameron's Conservative-led government is facing criticism for allegedly failing to dredge rivers and take other flood-prevention measures.
Cameron and Deputy Prime Minister Nick Clegg visited flood-hit areas Monday as the government struggled to take charge of the crisis. Cameron denied that the government had been slow to respond.
As of 7.52am GMT, Fire crews in Surrey have rescued 150 people over the last 24 hours as police warn that residents in around 2,500 homes are at risk. Chief superintendent Matt Twist, borough commander for the flooded areas in north Surrey, described the floods as an “extremely challenging situation”.
11-metre waves are also pounding the coast of Ireland, Spain, Portugal and France causing extensive damage. |
Children develop a foundation of understanding of what numbers are and what they mean in a practical way. They count objects and divide them up and find how many there are altogether. They also gradually acquire an understanding of weight, length, capacity and shape.
We count anything and everything – how many children are in today and how many grapes are left on the plate for example. We sing number songs and spot numbers while we are out and about. The children know their bikes by the number on the seat and they do number jigsaws and puzzles. They experience capacity when using the sand and water, by seeing how many spades it takes to fill the bucket. We measure each other and how tall plants have grown.
You can help your child by:
• Sharing number rhymes and stories.
• Using mathematical language during everyday activities, e.g. pairing up socks, laying the table, sorting the crayons, counting the stairs, weighing ingredients (or themselves!), measuring how tall they are, etc.
• Helping them to work out problems for themselves. |
Persephone provides easy ways to align and compare maps of the same or different kind. If you are comparing two (or more) genetic maps, the identical markers (having the same internal MARKER_ID) will be connected by a line (connector).
Comparing genetic maps
If one of the genetic maps is loaded from an external file (meaning no internal IDs are available), the connection between markers will be made based on marker names. Remember that a marker can have several names, but only one of them is considered the primary name; the others are regarded as synonyms. The primary name is displayed as a label on the map, but the synonyms are also used in determining whether markers should be linked.
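The name-based matching described above can be sketched roughly as follows. The class and function names here are hypothetical illustrations of the matching rule, not Persephone's actual internals:

```python
# Hypothetical sketch of name-based marker matching: two markers are linked
# if any of their names (primary name or synonyms) overlap.
# Class and field names are illustrative, not Persephone's actual API.
class Marker:
    def __init__(self, primary_name, synonyms=()):
        self.primary_name = primary_name          # shown as the map label
        self.names = {primary_name, *synonyms}    # all names used for matching

def should_link(a, b):
    """True if the two markers share at least one name."""
    return bool(a.names & b.names)

m1 = Marker("RM220", synonyms=["rm220a"])
m2 = Marker("rm220a")
print(should_link(m1, m2))  # True: linked via the synonym
```

When internal MARKER_IDs are available for both maps, the comparison on IDs replaces this name lookup entirely.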
If there are more than two maps on the stage, the connectors between all the plates may run across the entire stage, cluttering up the display. An option in (Tools>Settings>Plate display) can configure the application so that it connects only neighboring plates (see Tools and Settings). |
Fun facts
The sun is 330,330 times larger than the Earth.
The storage capacity of the human brain exceeds 4 terabytes.
The Earth’s atmosphere weighs about 5.5 quadrillion tons.
The banana tree cannot reproduce itself. It can ONLY propagate by the hand of man.
Stannous fluoride, which is the cavity fighter in toothpaste, is made from recycled tin.
Sound travels fifteen times faster through steel than through air.
Sound at the right vibration can bore holes through a solid object.
Scientists are now able to grow “beating” heart tissue in a lab. |
Our plans provide no more than kcals per day and include room to pick and choose from our healthy snack recipes if you like, or if your lifestyle requires a higher calorie intake. However, extremely low-fat diets limit the intake of essential fatty acids and fat-soluble vitamins.
These vitamins and minerals all work together: bones require calcium for strength, vitamin D helps the body absorb calcium, and magnesium helps available calcium make its way through the bloodstream. Saturated fats are found mainly in fatty cuts of meat, sausages, pies, butter, ghee, cheese, cakes and biscuits. Orange and yellow fruits and vegetables are rich in vitamin C and carotenoids, including beta-carotene. Wine can be consumed in small amounts with meals.
However, take it easy on the peanut butter; it is very high in calories and incredibly easy to eat in excessive amounts. A balanced diet is one that provides the optimum proportions of carbohydrates, proteins and fats, but that is not enough on its own. There are many things that affect food choices, for example, personal preferences, cultural backgrounds or philosophical choices such as vegetarian dietary patterns. Unhappy with her options, she discovered a day plan to help ease symptoms. Below are some tips for eating smart at school and nutrition information for teenagers. Spend a little bit of time each week planning some healthy meals and snacks and then write your shopping list. Whole foods cooked in wholesome, healthy ways are the best choice for giving your body the vitamins and minerals it needs to last you a lifetime.
Reducing the sugar intake from extra foods, like sodas and candy, makes more sense than reducing sugars from nutrient-rich foods like milk. You may think you're eating smarter by sprinkling sea salt on your roasted vegetables instead of table salt. What does healthy eating look like? Your child's muscles, organs and immune system are composed mostly of protein, making it an essential part of a well-balanced diet for your busy little one. Eat at least five portions of fruit and vegetables every day. Political food writer suggests that foods that are the most profitable are not necessarily the best for health and that dietary advice, by way of the food pyramid, is hard to interpret. Body isn't optimal for this kind of eating strategy. In reality, it takes a bit of work to eat a healthy, balanced diet, so I'll walk you through the process.
Having steady blood-sugar levels — also known as tight glycemic control — has been linked with beneficial health outcomes including weight loss, better energy levels throughout the day, and a reduced risk of chronic disease. Dietary sources of energy, solid fats, and added sugars among children and adolescents in the United States. If your kids eat breakfast outside the home, talk with them about how to make healthy selections. Pulses, for example baked beans, kidney beans, lentils and chick peas, can be classified as either a protein food or a vegetable. With more than a billion people malnourished and food production driving climate change, biodiversity loss and pollution, a transformation of the global food system is urgently needed.
Our dietitian has costed out a week of healthy meals and snacks for under £ for two people. Dietary sources are meat, unrefined grains, broccoli, garlic, and basil. It allows one beef burger and two servings of fish a week, but most protein comes from pulses and nuts.
Turn off your phone, which could lead you to mindless eating. Milk, cheese, yogurt, meat, poultry, grains, fish. Refined carbs can also be found in lots of other processed foods — they appear on nutrition labels as refined flour or just flour Swap the white bread and rice in your meals for whole grains. They are very greasy, but those fats are not the ones that your body needs.
|
A parachute is a recreational, sporting, or military device used to slow the descent of a falling object or person. The word is derived from the French word "para," which means to shield, and the French word "chute," which means to fall.
Modern parachutes are most often ram-air parafoil parachutes. These parachutes are specially designed to allow air to flow into the canopy, creating lift. The ram-air parafoil parachute gets its name from the aerodynamic principles of the airfoil.
The first written account of a parachute design was found in Leonardo da Vinci's (1452-1519) sketchbook, dating back to 1495.
Fausto Veranzio built a device similar to Leonardo's, which he used to jump from a tower in Venice, Italy, in 1617.
The Montgolfier brothers successfully dropped animals from their hot-air balloons in 1783, using an early version of the parachute.
All of the parachute devices prior to 1783 were designed using a rigid frame covered in cloth. It wasn't until Sebastian Lenormand jumped from a tower at the Montpellier Observatory in 1783 that a device was used that looked similar to the modern parachute. Lenormand constructed his device from a round, 14-foot piece of linen cloth, and he named the device a parachute.
Jean-Pierre François Blanchard (1753–1809) is credited with bringing popularity to parachuting. His fame as a hot-air balloonist was established when he became the first to pilot a balloon across the English Channel, in 1785. He soon began experimenting with parachuting by dropping a dog from his balloon. Between 1797 and 1802, Blanchard became well known for his high-altitude jumps in a basket suspended beneath a parachute. His highest jump was from 7,900 feet (2,400 meters). Blanchard is credited with being the first to use a folded parachute made of lightweight silk.
The first recorded emergency jump with a parachute was from a burning hot-air-balloon on July 14, 1808 by Jordaki Kuparento, a Polish balloonist, over Warsaw, Poland.
In 1964, Domina Jalbert of Florida invented a square canopy called the Ram Air Parafoil. The Ram Air worked by allowing air to pass through a double-surface glider, providing better maneuverability and increased lift. It is the most popular sporting parachute today, used in many air sports such as paragliding, powered parachuting, powered paragliding and skydiving.
The major parts of the parachute are the canopy, the risers, the container, and the suspension lines.
Canopies are made from a soft fabric such as woven nylon. They come in a variety of shapes ranging from square to round.
When the canopy is not in use it is stored in a container. Skydivers strap the container onto their body with the canopy inside. During freefall the skydiver pulls a rip-cord that releases the canopy from inside the container. The container is also known as a harness.
Directly attached to the container/harness are the risers. The brakes/steering lines are attached to the rear risers. Risers are made from heavy-duty thick fabric.
At the top of the risers are the suspension lines. These lines run from the riser to the canopy. They are often made from Spectra or Vectran which provide little or no stretch.
Parachutes can be steered by pulling on one or both of the brake lines that are attached to the rear risers. Pulling on one of the brake lines causes a turn.
When landing, the pilot pulls on both brake lines simultaneously to increase the angle of attack. The increased angle of attack slows the forward momentum of the parachute. Pulling both brake lines during landing is called 'flaring' the chute.
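As a rough illustration of the physics involved (not taken from the text above), a canopy's steady descent speed can be estimated from the standard drag equation, with drag balancing weight. Every number below is an assumed example value, not a measured one:

```java
import java.util.Locale;

// Illustrative back-of-the-envelope estimate: at steady descent, drag
// balances weight, so the equilibrium descent speed under a canopy follows
// v = sqrt(2*m*g / (rho * Cd * A)). All parameter values are assumed.
public class DescentSpeed {
    public static void main(String[] args) {
        double mass = 90.0;   // jumper plus rig, kg (assumed)
        double g = 9.81;      // gravitational acceleration, m/s^2
        double rho = 1.225;   // air density at sea level, kg/m^3
        double cd = 1.3;      // drag coefficient, assumed for a ram-air canopy
        double area = 25.0;   // canopy area, m^2 (assumed)

        double v = Math.sqrt(2 * mass * g / (rho * cd * area));
        System.out.printf(Locale.ROOT, "steady descent speed ~ %.1f m/s%n", v);
        // Flaring raises the angle of attack, briefly increasing effective
        // Cd*A and trading forward speed for lift, which slows the landing.
    }
}
```

With these example numbers the estimate comes out to roughly 6.7 m/s; a real canopy's behavior depends on wing loading and trim, which this sketch ignores.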
Parachutes are used for a variety of reason but are most commonly used in skydiving. Skydivers generally leave the aircraft between 3000 feet (900 meters) and 13,000 feet (4000 meters). After a brief freefall the skydiver will deploy the chute and begin to pilot the parachute to the drop zone.
Parachutes are also commonly used for military operations. Most modern militaries have special forces that are deployed behind enemy lines through the use of parachutes. These 'paratroopers' often leave the aircraft at altitudes of 500 feet (150 meters) or less. The chute is deployed by the use of a static line that is attached to the aircraft.
A less common use of a parachute is during disasters. When disaster strikes in remote areas, the use of a parachute can be used to drop logistics into devastated areas that are hard to reach by other means.
Parachutes can also be used to slow the forward momentum of an aircraft or race car. Chutes used to brake forward momentum are called 'drag chutes' or 'drogue chutes'. The space shuttle brakes with the use of a drag chute after it lands.
Besides skydiving, modified versions of the parachute are used in other air sports such as paragliding, powered parachuting, and powered paragliding. The designs are based upon the Ram Air Parafoil design but differ from skydiving parachutes. Skydiving parachutes are designed to handle an impact load when the skydiver deploys their chute at 125 mph. In paragliding, the chute is designed to be more aerodynamic, with a better lift-to-drag ratio. Powered parachutes are designed to carry a large load.
Why the ice on Europa may not be nice
The moon of Jupiter may have a massive underground ocean, but a new report says that studying it could be extremely difficult
What's that surface really like? We'll have to wait and see... (NASA/JPL-Caltech/SETI Institute)
In the search for extraterrestrial life, two of the most promising (and nearest) targets are a pair of gas giant moons: Saturn's Enceladus and Jupiter's Europa. What makes these two such charmers?
Water, what else?
You see, both of these moons have shown evidence of having underground oceans. And unlike the possible buried Martian lake discovered a while back, these oceans would be absolutely enormous. Perhaps rivaling the amount of salt water in the largest oceans on Earth. Wow! What are we waiting for?
These images of Europa's surface offer amazing details ... but leave plenty of mystery as well. (NASA/JPL-Caltech/SETI Institute)
Well, in the case of Europa, a new study out of Cardiff University is suggesting that the moon's welcome would be a bit frosty. And pointy.
Spiky surface
How so? To answer that, here's your new word of the day: Penitentes (say pehn-eh-TEHN-teez).
This is basically a type of ice spike that forms in dry, cold, high-altitude conditions. We actually have them on Earth. Here they are on a high plateau in South America.
On second thought, let's NOT land here. (Wikimedia Commons)
These frozen formations grow in rows, looking kind of like some sort of barrier around a medieval fortress. Which is essentially what they would be if you ever tried to, say, land a space probe on them.
Oh, here's the other thing. Those penitentes that you see up there? They're around a couple of metres (six feet) high. That seems pretty tall ... until you realize that the ones on Europa are estimated to be around eight times that height. Like five-storey-building tall.
Oh dear!
Clipper to confirm
An image of what the Clipper might look like above Europa. (NASA/JPL-Caltech)
If this report is accurate, our chances of landing on Europa and drilling under its ice cap into the ocean below just got a lot slimmer. Fortunately, we don't need to just take all of this on faith.
NASA will launch the Europa Clipper probe some time between 2022 and 2025. This spacecraft's mission will be to inspect the moon up close and examine plumes of gas that are shooting up from the moon's surface for signs of possible life. And, of course, it will also be able to take photos that should help settle once and for all whether the surface of Europa is a chrome dome or a field of spiky ice towers. Fingers crossed!
Functions of the Commission
The functions of the Commission as spelt out in article 194 of the Constitution and elaborated in section 77 of the Local Governments Act, 1997 are to:
1. Advise the President on all matters concerning the distribution of revenue between the Government and local governments and the allocation to each local government of moneys out of the Consolidated Fund.
2. Consider and recommend, in consultation with the National Planning Authority, to the President the amount to be allocated as equalisation and conditional grants and their allocation to each local government.
3. Consider and recommend to the President potential sources of revenue for local governments.
4. Advise the local governments on appropriate tax levels to be levied by local governments.
5. Deal with disputes between local governments over financial matters and tender advice relating thereto to the parties involved, the Minister and the Minister responsible for Finance as may be necessary.
6. Monitor local governments' budgets to ensure that they don't significantly detract from priority programme areas; where budgets detract from PPA, the LGFC is to inform the council and the President through the Minister for appropriate action.
7. Perform such other functions as Parliament may prescribe.
Parliament has passed the Local Government Finance Commission Bill. The new law is intended to strengthen the LGFC to advise government effectively and independently on the right course. The Act will also be the basis for establishing the Commission's rules, regulations and conditions of service, including a pension scheme for its staff.
Crucial roles in fiscal decentralisation
The Commission plays the following crucial roles in fiscal decentralisation:
1. Advocates for substantial growth of unconditional grant relative to the other grants. By so doing it hopes that the vertical fiscal gap between the central government and local governments will be reduced.
2. Ensures that the mechanisms used for allocating the various conditional grants are development-oriented and responsive to people's needs. Agreeing on conditions, as the law requires, has been the point of emphasis. In this way local governments would own the grants and use them to alleviate poverty and induce development from the grassroots.
3. Ensures that "least developed local governments" are assisted through equalisation grant. By so doing horizontal imbalances amongst local governments would be reduced and thus "balanced growth" promoted.
4. Ensures that local revenue raising capacities are enhanced with the aim of gradually making local governments more dependent on local revenue rather than central transfers. The study on revenue enhancement is just one such effort to achieve that.
5. Advises local governments to promote accountability and improve capacities as a way of enhancing efficient utilisation of available resources.
In addition, the Commission promotes a balance between responsibility assignments and the sources of revenue assigned to local governments. The Revenue Sharing Studies conducted in 1999 and 2000 are examples of our effort to achieve that balance.
All this is done to establish an equitable system for allocating financial resources from the Consolidated Fund to local governments and to promote efficient and effective local revenue mobilisation.
Methods of Work
The methods of work as adopted by the Commission are aimed at achieving transparency, promoting partnerships, strengthening consultative and collaborative interfaces as well as ensuring effective dissemination of findings and recommendations.
These are achieved through:
• Holding meetings and discussions within the Commission and with other stakeholders such as local governments, line ministries, donors and NGOs.
• Holding seminars, workshops and conferences as a two-way approach to collecting, disseminating and sharing of information. The Commission has also presented several papers on fiscal decentralisation in international and national conferences within the region.
• Preparing periodic advisory notes to both the Government and Local Governments.
• Constituting committees of stakeholders to co-ordinate specific issues. Currently the Commission chairs a Local Revenue Enhancement Co-ordinating Committee (LRECC) and a Local Government Budget Committee (LGBC).
• Conducting and sponsoring research in specific cases of interest. The research approach is collaborative in nature and the reports are discussed in seminars, workshops and conferences of stakeholders.
• Undertaking study tours to other countries and receiving foreign visitors as a way of promoting exchange of experiences.
Annual Report 2014-2015
Measuring Greenland's Snowfall
Note: We were doing a little housekeeping and came across this update on the Arctic Circle Traverse, written back in June. It holds up as a nice view into what it takes for researchers to "collect data," so here you go. We hope to hear more from Box when he returns from Greenland after retrieving data from his time-lapse cameras observing the Petermann Glacier. Even the best-laid plans can go awry. So it went in April and May, when a series of mishaps beyond their control kept the five-person team led by Jason Box from heading out to the field for their Arctic Circle Traverse (ACT), a National Science Foundation-supported study of snow accumulation on the Greenland Ice Sheet.
Snow storms, the eruption of the Eyjafjallajökull volcano in Iceland (which grounded all flights), and aircraft problems kept the crew on the ground and initially dashed their hopes of getting out on the ice.
"Our biggest challenge was getting into the field," says Box. "We learned that the traverse, while labor-intensive, is more likely to succeed than depending on flights, especially in east Greenland."
Just when things were looking their grimmest, the team got a window of clear weather and set out for 13 days.
Sleeping under "turbulent" Aurora Borealis at night and blazing trails during the day, they successfully traversed roughly 700 kilometers. Over the journey, the team gathered the necessary information to map snowfall rates across the ice sheet, Box said.
They measured snow depth using radar, and took ice cores as well. Isotopes in the cores allow scientists to identify annual snow accumulation; radar and coring used in conjunction provide more specificity than either technique would alone.
As they traversed, a NASA P-3 airplane flew over their line and collected radar data that measured the layering structure of the snow, providing "virtual ice cores."
"It's nice to have the P-3 data, as it will cover a much larger area," says Box. However, the airborne radar doesn't replace actual ice cores, he says.
"So far there is no way to efficiently remotely sense the vertical profiles of density," says Box. "Cores remain necessary in-situ observational data."
The research aims to provide an accurate analysis of snowfall on the Greenland Ice Sheet. Box and his collaborators, Rick Forster (PI on a related NSF grant that seeks to fill holes in the snow accumulation data), Evan Burgess, and Clément Miège (University of Utah), are measuring annual snowfall to better understand how much of the change in ice sheet volume, and in turn global sea level, is due to changes in snowfall versus changes in melt rates.
"We know that melt rates have increased in recent years," the group writes. "Yet, we also know that as climate warms, the atmosphere holds more moisture and consequently, more snow is delivered to the ice sheets. Our project will help better understand the effect on the mass budget of changing mass input from snow accumulation variations in the past 30-60 years. We're like auditors, with really thick parkas on."
Those parkas kept the crew warm as they worked and camped in temperatures as low as -35 C (-31 F) at night and between -25 C (-13 F) and -5 C (23 F) during the day.
Now that they're home, they've hung the parkas in the closet and begun the long task of analyzing the data, says Box.
"The core just made it off the ice sheet, and it needs to be put into the core melter to get the isotope and other chemistry data," he says. "A graduate student, Clement Miege, will spend much of the summer identifying layers in the ground radar data."
The team will present preliminary results at the AGU meeting in San Francisco this December. —Rachel Walker
Java Observer
No one can deny the unprecedented fervor that surrounds Java as the ultimate programming language. It has been highly touted as the language to use for programming all your Internet/intranet applications today.
The biggest manifestation of this is the tremendous number of Web pages -- characterized by dazzling, scrolling graphics -- activated by Java applets.
Is this the only way in which to leverage Java as a programming language? The answer is no. There are two different ways to run a Java executable: either as a standalone application or as an applet that is embedded in a Web page. Significant differences between the two executable models exist and will factor in how Java is used to build systems, now and in the future.
What is an applet?
Let's start the discussion with by far the biggest use of Java to date: the applet. Applets were originally intended to supply small, finite units of functionality for activating Web pages and have held true to that promise to this day. However, as Java has matured, applets are now seen as one way of delivering component-based functionality to Web pages. The original intent was that applets would be small applications, hence the name. As Java matures and its use for creating active Internet and intranet applications increases, this distinction will become irrelevant. Instead the distinction will be based solely on technical characteristics.
Java applets leverage and take advantage of features that are built into Web browsers. This enables developers to write applets that can contain a rich amount of functionality with a minimal amount of code. One of the major uses of applets is to display graphics and images. To incorporate GIF or JPEG files into a Java applet, it is not necessary to write any special decoding methods to interpret the image files. Instead applets can use the browser's built-in decoder to display the images.
Using the functionality built into the browser offers an applet a good degree of extensibility. If a new image file format becomes the hot format and is incorporated into the browser, the Java applet will automatically be able to handle the new format without any custom code having to be written.
An applet has no life outside an HTML page that is being processed by a browser. The browser is thus the "container" of the applet, and the applet depends on the browser to call its methods. There are a lot of restrictions on the functionality of an applet. If you think about it, this is not unreasonable when you consider that an applet is an invited or uninvited guest that borrows your machine to execute and display.
Because applets run inside the browser container, the distinction between restrictions that derive from the browser and those inherent in the language often gets blurred. This has a significant impact on the functionality an applet can have. In a sense, a specification exists between the browser and the applet, driven by both the browser and the Java language. It is dictated by the Java Security Manager component that is part of the browser. When you select the Options Security menu item in Navigator, the Java selections in the dialog control the level of security enforced by the browser. By the way, no real security specification for browsers exists, just a de facto acceptance of what security really means in the context of a browser running foreign code.
Four major types of restrictions are imposed on applets. First, an applet has limited access to the local and remote file systems. This is due partly to the security features of Java and partly to the browser: both Netscape and Internet Explorer simply forbid all access.
Second, an applet class that was loaded from the Internet is not allowed to make any native calls. These remote classes cannot execute any local commands. This restriction is lifted for classes that are used by the applet but are loaded from the local machine. Some examples of such local classes are base classes such as Object and Component. Third, an applet cannot be a network socket server and may only open a socket connection to the server machine that served the applet's HTML page.
Lastly, there are restrictions imposed on inter-applet communication that are also driven by the browser. Netscape Navigator, versions 2.x, limits all inter-applet communications to applets on the same HTML page. Navigator 3.0 further restricts these applets to come from the same directory on the Web server and to have the same level of support for JavaScript.
An applet's life cycle begins when the user visits the Web page that contains the applet. After constructing the applet, the browser allocates a place for it within its viewing area on the screen.
At this juncture an important event occurs: the applet is assigned a peer. What is this thing called a peer? It is the essence of the platform independence that makes Java such a powerful programming language. In a nutshell, a peer is an applet's connection to the local, platform-specific windowing system. The peer is an inherent part of the Java virtual machine for that platform. Java components, which include applets as well as applications, receive much of their functionality by using the facilities of the local windowing system.
Next the browser sends several startup method calls to the applet. The first applet method that is called is init. This method is only called once and all one-time initialization for the applet is done here. Such functionality includes creating the applet's GUI and reading in the parameters. The browser then follows this with a call to the start method which basically puts the applet into the "active" state. This means that the page that houses this applet is now the active page in the browser.
When the user navigates away from this page, the browser calls the stop method, putting the applet into the "inactive" state. The applet keeps running and is not destroyed, because many browsers cache Web pages.
When the applet needs to render its representation in the page, the paint method is called. As applets tend to be graphical in nature, this is where most of the processing action occurs. This method is called the first time the applet is displayed and any time it needs to be repainted.
The destroy method is called when the browser no longer needs the applet. In reality, because most browsers cache the applet when the Web page gets cached, this method is rarely called.
Whew, there is a lot of stuff going on here! In reality, an applet is simply a finite state machine that provides methods that are called on demand by the browser. It is a reactive component completely controlled by the browser and whose main responsibility is to implement the correct methods for it to function correctly.
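The reactive, browser-driven contract described above can be sketched in plain Java. Since the real java.applet API is deprecated in modern JDKs, the LifecycleApplet class and MockBrowser driver below are illustrative stand-ins for the browser and applet, not actual browser or AWT types:

```java
import java.util.ArrayList;
import java.util.List;

// A plain-Java sketch of the browser-driven applet lifecycle. The class and
// method roles mirror the description in the text; the names are illustrative.
class LifecycleApplet {
    final List<String> log = new ArrayList<>();
    boolean active = false;

    void init()    { log.add("init"); }                   // one-time setup: build GUI, read params
    void start()   { active = true;  log.add("start"); }  // page became the active page
    void stop()    { active = false; log.add("stop"); }   // user navigated away; applet stays cached
    void paint()   { log.add("paint"); }                  // render into the browser's viewing area
    void destroy() { log.add("destroy"); }                // browser discards the applet (rarely called)
}

public class MockBrowser {
    public static void main(String[] args) {
        LifecycleApplet applet = new LifecycleApplet();
        // The browser, not the applet, decides when each method runs:
        applet.init();
        applet.start();
        applet.paint();   // first display
        applet.stop();    // user leaves the page; applet is cached, not destroyed
        applet.start();   // user returns (e.g., via the back button)
        applet.paint();
        applet.destroy();
        System.out.println(String.join(",", applet.log));
    }
}
```

Running the driver prints the call sequence, making the finite-state-machine nature of the applet explicit: the applet only ever reacts to calls the container chooses to make.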
What is an application?
An application is basically a program that just happens to be written in the Java language. At a high level, it is just like any application written in another programming language. From the Java perspective, an application runs without a browser, without a security manager and without an applet context.
An application is invoked from the command line by passing the class name of the Java application to the Java runtime, and the application itself determines when execution starts and stops. It is the proactive voice in the execution context.
In general, applications have an easier lifestyle compared to applets. They have complete access to the file system and network, as you would expect from any typical application. They can also invoke native methods. Native methods are basically code written in another language, such as C, that is called from Java. In a way, this permits access to "legacy code" (from the Java perspective, of course) from Java. All of this makes applications far more capable than applets.
If the picture seems rosy for an application, it is for the most part. The one drawback to being an application is that the peer is not created automatically. Fortunately, most initialization activities do not require the presence of a peer. Usually this is not a big issue except in one instance, which is when dealing with images. In this case the peer must be created by the Java application programmer and assigned to the application before any image creation routines can be called. This is just one of the basic rules of Java programming. All languages have them.
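A minimal standalone application, by contrast, looks like any ordinary program. The HelloApp class below is an illustrative sketch (the class name and file contents are invented for the example): execution begins at main when the class name is passed to the Java runtime, and the file access it performs is exactly the kind of thing a browser's security manager would deny an applet:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// A minimal standalone Java application. Unlike an applet, execution starts
// at main() ("java HelloApp") and the program may freely touch the file system.
public class HelloApp {
    public static void main(String[] args) throws IOException {
        // File access that a browser would forbid an applet:
        Path tmp = Files.createTempFile("helloapp", ".txt");
        Files.writeString(tmp, "written by a Java application");
        System.out.println(Files.readString(tmp));
        Files.delete(tmp);  // clean up the temporary file
    }
}
```

Note that Files.writeString and Files.readString require Java 11 or later; on older runtimes the same effect needs Files.write with a byte array.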
As applets are called upon to perform more functionality, the support for interactions between applets, browsers and network resources can be expected to mature and become more refined. One of the more recent advances has been signed applets, which provide the basis for secure electronic commerce such as the Java Wallet technology. As new uses for the Internet and intranets emerge, Java and applets are sure to evolve with them.
Most of the use of Java is directed towards building applets, which enhance and activate Web pages. As more and more businesses and corporations adopt Java as an enterprise language, the use of Java applications will become more prevalent. Certainly, the recent release of the Java Enterprise technologies (Java RMI, JDBC) positions Java applications as distributed-object applications for the enterprise. Applets are the shining star of Java currently, but Java applications are just starting to glow.
About the Authors
Peter Fischer is vice president of technology at eForce Inc., Hayward, Calif., where he serves as the EAI solutions practice leader. He is a recognized industry leader in EAI and e-business integration. He can be reached via e-mail at
Max Dolgicer is director at International Systems Group Inc., a New York City-based consulting firm that specializes in Enterprise Application Integration using leading-edge middleware technologies.
A Study in GIS: An Explanation of the Technology Through the Marauder’s Map
One of the best elements of “Harry Potter” is the fascinating Marauder’s Map. The Map identifies every single person that inhabits the Hogwarts castle itself, any secret passageways and the entirety of the school grounds. The Map could not be fooled by things like invisibility cloaks, but was restricted by the “plottable” parameters of the Castle. Similarly, geographic information systems (GIS) rely on collected geographical data to power the reach and effectiveness of its technology.
GIS tools facilitate the collecting, organizing and analyzing of geographical data so that users can draw on that data to improve sales territory, field service and construction operations. The visualizations these solutions produce help users act on the insights gathered from the data and assess areas of improvement in business operations based on collected map data.
Adds Context to Data Points
GIS tools automate the menial and time-consuming task of wrestling information into and out of an asset depository database. The data that a GIS solution stores, cleans and retrieves is vital for the user who wants to build relationships between distinct data points. This is particularly helpful when users are working on multiple projects with myriad workflows. In the “Harry Potter” books, the data points of the Marauder’s Map are tiny depictions of the physical individuals. In the films, the data points are wandering footprints.
Either way, while the Map itself is static (in that the layout cannot be edited or updated), it's still very capable of handling large amounts of data. Additionally, the Marauder's Map cannot be hoodwinked by invisibility cloaks, Polyjuice Potions or animagus transformations. Even Peeves the Poltergeist shows up on the parchment! While GIS solutions aren't quite as advanced as the Map in identifying who exactly is moving around Hogwarts at any given time, they do have their perks as well.
Two of the key benefits of GIS tools are the clarification of relationships between seemingly unconnected qualities and the isolation of individual aspects of a graphed area. According to G2 Crowd users, the majority of GIS tools integrate nicely with modeling software and Excel-type spreadsheet tools. That means any processed data can be turned into predictive models, with the extra boon that the software works in near real time.
User Roles and Access
GIS solutions come in a variety of skill levels, ranging from very user-friendly to extremely technical, from open-source to tailored very specifically to an industry. While the analytics capabilities of GIS are enhanced through the software's integrations with computer-aided design (CAD) software, building information modeling (BIM) software and other drafting and design software, GIS relies on the distribution of user roles to organize and oversee access to the software. One immediate benefit of restricting access to data that a user painstakingly collects, maps out and analyzes is that they can provide their client with the data they require, along with the freedom for the client to use that cleaned data for their own purposes. GIS users no longer have to micromanage the data; they can simply transfer it between applications or users and concentrate on solving real-world problems quickly and productively.
In order to access the entire contents of the Marauder’s Map, the user must proclaim, “I solemnly swear that I am up to no good.” In order to wipe the contents from the parchment, to hide from view, the user must recite, “Mischief Managed.” Additionally, the Map has a fail-safe: If an unauthorized user attempted to breach the Map’s contents, the Map has the ability to deny entry. Along those same lines, GIS functionality and breadth can also scale according to database need. For example, if a user wants to expand their geological exploratory work for petroleum, GIS facilitates the creation of depositional environment maps with efficiency.
GIS tools can even be used to combine all data and maps into a singular location and disperse it throughout their networks. Users can also use GIS solutions to provide maps, full of up-to-date data and discovered relationships between data points, which can then be distributed to clients and organizations without the fear of unauthorized edits to that data.
Discovers Efficiencies
GIS data is incorporated into a business's reporting system to uncover efficiencies. Do you want your field service workers to find quicker routes and maximize time spent on a site? Do you want to discover the areas that are too concentrated with workers, or not concentrated enough? Typical implementations of GIS databases can result in savings in operational expenses, particularly for anything that relies on digitized conversions of accurate data.
Similarly, the Marauder’s Map gives you one tool to attempt to keep your wits around you as you navigate the depths of the castle. Hogwarts is a castle with a mind of its own. The staircases move, doors pretend to be walls and vice versa, classes can be held in rooms situated in the very top of the tower or down in the lower levels. How can anyone or anything keep track of the entire castle? By taking a look at the Marauder’s Map and discovering secret passageways or unexplored rooms and leveraging those features.
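As a concrete (and heavily simplified) illustration of the routing question above, the sketch below picks the field site nearest a worker using the haversine great-circle distance. The class name, site names and coordinates are invented for the example and are not drawn from any particular GIS product:

```java
// A sketch of the kind of calculation a GIS tool automates when optimizing
// field-service routes: choose the site nearest a worker by great-circle
// (haversine) distance. All names and coordinates below are illustrative.
public class NearestSite {
    static final double EARTH_RADIUS_KM = 6371.0;

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double workerLat = 51.5, workerLon = -0.12;  // worker's current position
        double[][] sites = { {52.2, 0.12}, {51.45, -2.59}, {51.48, -0.13} };
        String[] names = { "Cambridge", "Bristol", "Clapham" };

        int best = 0;
        double bestKm = Double.MAX_VALUE;
        for (int i = 0; i < sites.length; i++) {
            double d = haversineKm(workerLat, workerLon, sites[i][0], sites[i][1]);
            if (d < bestKm) { bestKm = d; best = i; }
        }
        System.out.println("nearest site: " + names[best]);
    }
}
```

A production GIS would of course route along the actual road network rather than straight-line distance, but the straight-line version captures the shape of the computation.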
So What’s Next?
GIS users rely on the software to make their overall professional lives easier. The various applications of geospatial maps are demanding enough; investing in software that can automate the tasks of creating maps, producing designs, analyzing data points, exporting data from platform to platform and managing off-site locations can yield great benefits.
What do you use GIS tools for? If you don’t, what aspects of your field would benefit from automation and a unified platform of ownership? Check out the reviews users have left about various GIS solutions and figure out if any of the solutions would work for you as an individual or as an organization. While you’re there, explore the “Learn” tab of the category page, where you can read myriad business problems solved through the use of the software as well as the various buying considerations that should inform your future investment in a GIS tool.
Regulatory framework
Digital tools and technology are changing the world faster than laws are adapting. This gap affects people's rights, market competition, and the interplay between state and non-state actors. The way such laws are devised needs to change to accommodate this pace. This should start from the question: 'What do people need?' not: 'How can what's already in place be tweaked?'
Any country seriously seeking to embrace digital industry will need to think about new kinds of accountability. Improving the accountability of machine learning or AI should be a priority.
This will require approaches which help policy and legal professionals understand how digital companies and technologies operate, and vice-versa. Increased co-development of laws and technology can enhance the rights and protections afforded to consumers. Similarly, as the lines blur between consumers and service providers, governments should prepare for difficult conversations about revenues, privacy and consent.
Policy interventions
IELTS test: essay «Computers in our life»
Welcome to my website! I am an IELTS tutor over Skype. My name is Anastasia Goncharenko, and here I write about how to pass IELTS on the first attempt. My own IELTS score is 8.5; I managed it, and I will teach you too.
The IELTS test is a requirement for everyone who plans to study or work in an English-speaking country. Writing is one of the most challenging stages of preparing for the IELTS exam. As an IELTS tutor, I regularly update the list of topics I give to students who intend to take the exam. Today we will discuss the IELTS essay topic «Computers in our life».
Computers have been widely used in our daily life. How important is the computer in the development of modern society?
IELTS test: essay 1
Evidence seems to suggest that the 21st century has become the age of computers. They greatly impact us in almost all walks of life. We meet them at home, in transport, in airplanes, at railway stations, at work and even in bed.
It is common knowledge that the computer is a powerful instrument of calculation. Firstly, when we need to examine our profits, we use the computer for help. Secondly, we need the computer to calculate the taxes we should pay. And, thirdly, all engineers, architects and other technical specialists need it to do research.
In addition to professional usage, the computer has brought people further benefits as a perfect communication device. For example, we post our photos and share our thoughts with hundreds of our friends on social networks. Modern human beings are able to chat with each other and discuss every concern, from cuisine and recipes to fashion and dress. And, at last, we can study with the help of Skype and phone our relatives all over the world.
However, there are problems, or, in other words, a negative side to the computer century. For instance, we can lose our sense of reality while surfing the Net and collecting friends, because a user may change their name, age, place of living, sex and photo. In addition, computer games take much time away from real life, studies and sometimes even work. So, from my point of view, we depend excessively on computers nowadays. Our banks, transport and medicine would be paralyzed if the Internet were switched off.
Generally speaking, the advent of the computer has influenced our society in a very strong way. It seems to me that we are on the way to a better world, with great opportunities in every city and even in every village in the countryside.
270 words
IELTS Testing: essay 2
The world has stepped into the computer era. As a symbol of modern society, the computer has come into wide use and greatly promoted the development of our society. Nowadays almost every family in the world has a computer for household use, and despite all their benefits, computers have some disadvantages for society.
It is certainly true that the computer, as we all know, is a powerful instrument of calculation. Companies and factories use it in offices and workplaces to keep information and to perform different kinds of calculation. Besides, every individual user gains many benefits from the computer through the many available applications and programs, like Photoshop or Office. For example, computers can be used to make songs or videos, which are highly popular currently among people.
Furthermore, the computer, as a perfect communication device, has brought human beings much benefit. Thousands of people all around the world use computers to stay in touch with their friends and relatives. Programs such as Facebook and Twitter are very beneficial and help people communicate with each other. In addition, conventional letters have been overtaken by email, which is much faster, cheaper and more convenient.
It is quite true that the use of computers is not without problems. With computers people become lazier and less healthy, and lack of physical activity is a frequent cause of obesity. Moreover, it is argued that monitors emit harmful radiation, which is a huge drawback. But all in all, the advent of the computer brings many exciting prospects to our society. This precious gift will further benefit the world, and hopefully it will help us to create a newer and nicer world.
280 words
IELTS Testing: essay 3
We live in a world of advanced technologies, and computers play a major role. For that reason they have come into wide use and have greatly promoted the development of our society.
To begin with, computers are a powerful instrument of calculation. They can be used to perform different types of tasks impeccably well, faster and more accurately than a human. Secondly, another reason why computers are so important nowadays is that all new inventions are designed using this clever machine.
In addition, computers can aid people in their daily life because they are a perfect communication device. Almost everybody uses computers to communicate with each other via the internet. For instance, talking on Skype is one of the most popular means of communication: we can hear our friends and relatives around the world. There are also many courses that offer online study, thanks to which people with disabilities can receive an education.
However, there are detrimental effects of computers. The main threat to young users is internet games and open access to different kinds of video. The presence of sexual content or violence has a bad influence on children's psyches. Such content should be properly labeled, and parents are obliged to pay attention to this issue.
To sum up, computers are practically irreplaceable and people cannot live without them anymore. In my opinion, computers have contributed a lot to the development of modern society, and if they are used wisely, they will continue to improve the quality of our life.
260 words
News Source:
News Tonight Africa
April 7, 2014
They found that increased carbon dioxide in the atmosphere resulted in the formation of a rhizosphere, a microbe-rich area around plant roots, which further helps in absorbing carbon dioxide. Their findings suggest that desert ecosystems may increase carbon uptake in the future to account for 15% to 28% of the carbon currently being absorbed by land surfaces.
I lived in a suburban house with radioactive thorium in the front yard
The first home my parents purchased was on the southwest side of West Chicago, a small suburb in the western part of DuPage County. While the community was known for the railroad, industry, and a sizable population of Mexican residents, what we did not know was that something in the ground in our front yard would also come to define the suburb.
The 1954 ranch house on a quiet street with no sidewalks was relatively unassuming: the home was just over 1,200 square feet, had a one car garage, three bedrooms, and a decent-sized yard. The self-contained subdivision was near a grocery store and some strip malls and was a ten minute car ride from the suburb’s downtown.
When my parents went to sell the home in 1988, a discovery was made: the front yard had radioactive material from a local plant. A Chicago company produced lanterns and opened a facility in West Chicago in 1932. The radioactive waste material from the plant, thorium, was then offered to the community as fill. The city and residents took the fill and used it all over the suburb. The plant was later acquired by Kerr-McGee and when the radioactive thorium was discovered throughout the community (after years of struggle), a good portion of the community became the Kerr-McGee Superfund site and the last of the contaminated soil was removed in 2015.
This front yard revelation had implications for selling the home: no one would want it. Supposedly, the radioactivity in the front yard was enough to equal that of an x-ray if someone sat between the two trees in the front for 24 hours. Eventually, Kerr-McGee purchased the home and years later, many yards on that street were torn up to remove the radioactive material.
It is hard to know if the radioactivity had any effects on those of us who lived in the house. Nothing obvious has emerged yet. We may have emerged unscathed. It was not Love Canal. Perhaps this could be considered an odd footnote in a suburban upbringing. Yet, at the same time, few suburbanites would expect to find they had purchased radioactive land. Furthermore, few Americans have a personal connection to a decades-long and costly fight (an estimated $1.2 billion) to clean up and remove radioactive thorium.
Why Americans love suburbs #7: closer to nature
A consistent appeal of suburbia for many Americans is to be closer to nature and green space. While suburbanites appreciate their proximity to urban amenities without having to actually live in the big city, they also often appreciate more open space and closeness to nature. American suburbanites may be out of touch with nature and children may be exposed to less nature these days but the suburbs are viewed as offering access to nature just outside the single-family home.
As cities grew in the nineteenth century, they became dirty places. While this is an ongoing issue in many large cities still (think smog in Paris or air quality in Beijing), these growing cities had particular problems including how to construct sewers (Chicago’s efforts in battling excrement helped it grow), dealing with waste from all the horses, and soot (see pictures of Pittsburgh turned dark in the middle of the day). The suburbs offered some distance from the grime of the city and more proximity to pristine nature.
Exactly what kind of nature suburbanites experience is up for debate. As one critic of suburbia suggests, the suburbs often involve “nature band-aids.” Suburbanites may be interested in farms or “agrihoods” but the average suburban dweller has a small plot of land around their home. I am reminded of one situation I discovered in my research on suburban development where residents of a newer subdivision complained vociferously when the adjacent cornfield turned into a new development. This common process of suburban development – more agricultural or rural land or open space is turned into sprawl – can frustrate many residents.
One consistent experience involves using and caring for the lawns that surround many single-family homes. The green lawn is an important symbol of the owner's social class as well as a space for outside recreation. Caring for the lawn is vitally important, and neighborhoods and communities exert pressure. Residents keep their lawns green through measures ranging from watering during droughts to painting their lawns to searching out the best seeds. They often have plenty of trees, prized by suburbanites for their foliage, their role as key symbols of nature, and their ability to define the edges of properties and hide views of others.
Beyond lawns, suburbanites are often interested in parks, forest preserves, and green spaces. Theoretically, these uses limit the possibility that the green space can be turned into other uses. Even somewhat protected green space like a golf course can provoke concerns if it is turned into something else. Additionally, these spaces enhance property values of single-family homes, allow space for children to play, and can become sites of local social activity. Some of these places can offer more authentic nature (less controlled by humans) though many of these sites are carefully kept. Furthermore, even in these preserved spaces, it is difficult to truly escape the suburban noise and evidence of civilization.
Sometimes, nature can be perceived as the enemy of suburbanization. A great example is dealing with water. Flooding is a persistent issue: housing, roadways, and parking lots do not allow water to soak into the ground. Think of the Houston area after a hurricane. In spaces with less human activity, flooding and waterways changing course do not have the devastating or annoying effects that they can have in suburbia. Turning land into suburbia can have the effect of bulldozing over natural ways of dealing with water and instead trying to channel it or eliminate it around homes and other uses. This is not always successful, and much money can be spent on the issue. For example, the Deep Tunnel project in the Chicago region is a massive civil engineering project born out of urban and suburban development.
Of course, the opposite can be true as well: suburbanization can be the enemy of nature. Rachel Carson's influential work emerged from suburban settings. At the same time, nature itself can adapt to suburbanization. The wildland-urban interface can move as creatures like coyotes, deer, baboons, and birds adapt to human activity.
While critics of suburbs may not understand why suburbanites cannot see the ugliness of sprawl, many Americans believe the suburbs offer a little more natural space in which to move and breathe.
How a 9-year-old estimated that Americans use 500 million plastic straws a day
Statistics are often vital to public campaigns to fight social problems. The problem of plastic straws is no exception. Here is how 9-year-old Milo Cress developed the oft-cited statistic:
But as Cress began to dig into research on plastics and the environment, he noticed there wasn’t much data: “I couldn’t find anything on our use of straws in the United States,” he said.
“Why I use this statistic is because it illustrates that we use too many straws,” he said. “I think if it were another number, it still illustrates the fact that there is room for reduction. That’s really my message.”
Sociologist Joel Best, who has written about the social construction of statistics, could have a field day with this.
With all of the debate regarding this figure, couldn't someone with expertise in this field offer a number with more rigor? Even if the number changes a bit, say it goes down to 200 million straws a day, it would not matter much, as either figure is huge. And this is the whole point (as is often the case for advocates against a particular social problem): the big number is intended to shock and spur action.
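A quick back-of-the-envelope check shows why the exact figure matters less than its scale. This sketch assumes a US population of roughly 330 million, which is not a figure from the article:

```python
# Sanity-check the 500-million-straws-a-day figure on a per-person basis.
# The population value is an illustrative assumption, not from the article.
straws_per_day = 500_000_000
population = 330_000_000  # rough US population (assumption)

per_capita = straws_per_day / population
print(f"{per_capita:.2f} straws per person per day")
```

Whether the true number is 200 million or 500 million, the per-capita rate stays on the order of one straw per person per day, which is the rhetorical point.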
Fighting smog not by reducing driving but by insisting on more efficient cars
Smog and air pollution due to vehicles is a familiar sight in many large cities. Yet, Crabgrass Crucible suggests the fight against smog in Los Angeles did not target driving itself but rather automakers:
The ban on fuel oil easily found favor among antismog activists. After all, like the steps with which smog control had begun, it mostly targeted the basin’s industrial zones. Harder to swallow in Los Angeles’s “citizen consumer” politics of this era, even for antismog activists, were solutions that might curtail the mobility associated with cars. Consonant with national trends noted by automobile historian Thomas McCarthy, there was a widespread reluctance to question orthodoxies of road building and suburban development. Even the “militant” activists at the 1954 Pasadena Assembly only went so far as a call to “electrify busses.” By the 1960s, as motor vehicles were estimated to cause nearly 55 percent of smog, there were suggestions for the development of an electric car. Yet Los Angeles smog battlers of all stripes raised surprisingly few questions about freeway building. For many years, Haagen-Smit himself argued that because fast and steady-running traffic burned gasoline more efficiently, freeways were smog remedies. So powerful and prevalent were the presumed rights of Angelenos to drive anywhere, to be propelled, lit, heated, and otherwise convenienced by fossil fuels, that public mass transit or other alternatives hardly seemed worth mentioning.
Once pollution controllers turned their sights to cars, they aimed not so much at Los Angeles roads or driving habits or developers as at the distant plants where automobiles were made. Probing back up the chain of production for smog’s roots, local regulators and politicians established a new way of acting on behalf of citizen consumers. Rather than pitting the residential suburbs of the basin against their industrial counterparts, in an inspired switch, they opened season on a far-flung industrial foe: the “motor city” of Detroit. The APCD’s confrontations with Detroit car makers had begun during the Larson era, but quietly, through exchanges of letters and visits that went little publicized. In 1958, after the nation’s chief auto makers had repeatedly shrugged off Angeleno officials’ insistence on cleaner-burning engines, the Los Angeles City Council went public with its frustration. It threw down the gauntlet: within three years, all automobiles sold within the city limits had to meet tough smog-reducing exhaust standards. Because its deadline had passed, a 1960 burst of antismog activism converged on Sacramento to push through the California Motor Vehicle Control Act. The battle was hard-fought and intense, but the state of California thereby wound up setting pollution-fighting terms for its vast car market. (232-233)
This helps put us where we are today: when the Trump administration signals interest in eliminating national MPG standards for automakers, California leads the way in fighting back.
Ultimately, this is an interesting accommodation in the environmentalist movement. Cars are significant generators of air pollution. Additionally, cars do not just produce air pollution; they require an entire infrastructure that uses a lot of resources in its own right (building and maintaining roads, trucking, using more land for development). Yet, this passage suggests that because cars and the lifestyle that goes with them are so sacred, particularly in a region heavily dependent on mobility by individual cars, the best solution is to look for a car that pollutes less. This leaves many communities and regions in the United States waiting for a more efficient car rather than expending energy and resources toward reducing car use overall. And the problem may just keep going if self-driving cars actually lengthen commutes.
Thorium-contaminated soil out of West Chicago; contaminated groundwater remains
Earlier this week, the last of the thorium-contaminated soil was shipped out of West Chicago:
After more than 30 years and $1.2 billion worth of cleanup work, the final rail cars filled with contaminated materials from the former Kerr-McGee factory site in West Chicago have been shipped out of town.
Mayor Ruben Pineda said the occasion is cause for celebration. On Tuesday, he gathered with officials from the Environmental Protection Agency, the Department of Energy, Weston Solutions, DuPage County and other organizations that have helped with removing thousands of pounds of thorium waste produced by the factory. They watched the rail cars head to Utah, where the materials will be buried in the desert.
However, this is not the end of the thorium saga:
Although the soil is gone, city officials said they are waiting for the federal government to provide about $32 million to resolve issues with the contaminated groundwater at the site.
“We still have a lot of work to do out there,” Pineda said. “If we were to get (the $32 million), we could finish the project relatively quickly and (the factory site) would turn into a beautiful park.”
Both parts of this process – removing the soil and finding the funds to completely finish the job (see earlier posts on the long search for funds) – have been lengthy.
With an end in sight, I wonder how long it will take for the idea that thorium is part of West Chicago's character to dissipate. This has been an ongoing issue for over four decades, and this industrial, working-class suburb has often attracted attention because of the radioactive material. But once the thorium is gone for good, those who lived in the community will move away or pass on, and newer generations will have little or no understanding of or experience with this part of the community's past. Will the community want to remember how it came together to get the thorium out, or would it be better to just forget the whole episode and its negative connotations?
What if car-free central Paris catches on?
It is a day for pedestrians in Paris:
Pi Day: 10 surprising facts to know about the never ending number
Pi, or π, is defined as the ratio of the circumference of a circle to its diameter. Pi is an irrational number, meaning it cannot be written as a simple fraction. Instead, it can be expressed as an infinite, non-repeating decimal (3.14159…) or approximated as the fraction 22/7. It’s represented by the Greek letter “π”.
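The fraction 22/7 mentioned above is only an approximation, and it is easy to see how close it comes. A minimal sketch:

```python
import math

# Pi is irrational; 22/7 is a common rational approximation.
approx = 22 / 7
error = abs(approx - math.pi)

print(f"22/7      = {approx:.6f}")
print(f"pi        = {math.pi:.6f}")
print(f"abs error = {error:.6f}")  # about 0.00126
```

The approximation agrees with pi to two decimal places (3.14...), which is why it shows up so often in classrooms.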
Museums and science centers mark the day with educational programs, music, pi memorization challenges and at least one parade, though many math fans celebrate simply by enjoying a slice of pie.
Here are some nerdy yet fun facts for math and science lovers:
* There is an entire language based on the number pi. Some people love pi enough to invent a dialect in which the number of letters in successive words matches the digits of pi. Mike Keith wrote an entire book, called 'Not a Wake', in this language.
* Before the π symbol was used, mathematicians would describe pi in roundabout ways such as "quantitas in quam cum multiplicetur diameter, proveniet circumferentia," which means "the quantity which, when the diameter is multiplied by it, yields the circumference."
Chord Ear Training
Hearing and understanding chord progressions is something that all musicians need. It is unfortunate that most chord ear training courses teach you to memorize the sound of each triad or 7th chord structure in relation to itself rather than within a key center. While hearing that a Major 7 chord has the sound of a root, 3rd, 5th and 7th is useful in some situations, more often than not you are listening to chord sound within a key center, where each chord's root is not the root of the key. So the individual chord tones should be related to the key, not to the chord.
Modulation with Chord Ear Training
A student of chord ear training should start by developing the skill to hear chords within a key center. Of course, chord progressions do modulate, and over time students should develop the skills needed to hear these modulations. But it is important to remember that many chord progressions do not modulate, and the better you get at hearing within a key, the less you will modulate.
The Types of Chord Ear Training Exercises
The two most common types of chord ear training are singing and listening exercises. The chord ear training singing exercise should first focus on singing the four basic triad arpeggios. In the key of C these four arpeggios would be C major (C, E, G), C minor (C, Eb, G), C diminished (C, Eb, Gb) and C augmented (C, E, G#). You should sing these arpeggios over a drone; in this case you would use a C drone.
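The four triad qualities above differ only in the semitone intervals above the root, which a short sketch can make explicit. Note that this uses a fixed flat-based chromatic scale, so the augmented fifth prints as Ab rather than the enharmonic G# used in the text:

```python
# Spell the four basic triads from semitone intervals above the root.
NOTES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']
INTERVALS = {
    'major':      [0, 4, 7],  # major third + perfect fifth
    'minor':      [0, 3, 7],  # minor third + perfect fifth
    'diminished': [0, 3, 6],  # minor third + diminished fifth
    'augmented':  [0, 4, 8],  # major third + augmented fifth
}

def triad(root, quality):
    i = NOTES.index(root)
    return [NOTES[(i + step) % 12] for step in INTERVALS[quality]]

for quality in INTERVALS:
    print(quality, triad('C', quality))
```

Changing the root (e.g. `triad('F', 'minor')`) spells the same interval pattern from any starting note, which is exactly what singing the arpeggios over different drones trains your ear to do.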
How to get a Drone Sound for Chord Ear Training
Some musicians use an electronic tanpura to create a drone sound for chord ear training. You can purchase a physical tanpura, or there are websites where you can listen to one. Personally I recommend using a MetroDrone because it also allows you to set a distinct rhythm. Remember, over time you want to speed up your ability to sing the aforementioned arpeggios and any other exercise that you do with chord ear training. The MetroDrone also allows you to improve many other aspects of your musicianship. Read more about this here.
Listening Exercises for Chord Ear Training
Listening exercises for chord ear training can be tricky and many ear training programs and courses don’t understand how their exercises can actually hurt a student’s development. The first premise of chord ear training is you tend to hear chords in a key center.
So if you hear a C chord (C, E, G) and then quickly afterwards an F chord (F, A, C), many students get confused, because the C chord established a key center, and an F chord is a very common chord in the key of C. They therefore hear the notes of the F chord as F = 4, A = 6 and C = 1. Many beginning students simply get the F chord wrong when they answer, because they haven't modulated to F to hear the F chord on its own terms.
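The F = 4, A = 6, C = 1 hearing described above can be sketched directly: relate each chord tone to the prevailing key rather than to the chord's own root. This minimal sketch covers only the diatonic notes of C major:

```python
# Map chord tones to scale degrees of the prevailing key (C major only).
C_MAJOR = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

def degrees_in_c(chord_tones):
    """Return each note's scale degree (1-7) relative to the key of C."""
    return [C_MAJOR.index(note) + 1 for note in chord_tones]

print(degrees_in_c(['C', 'E', 'G']))  # [1, 3, 5]
print(degrees_in_c(['F', 'A', 'C']))  # [4, 6, 1] -- the F chord heard in C
```

The F chord comes out as 4, 6, 1 rather than 1, 3, 5, which is precisely the in-key hearing the article argues for.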
Chord Ear Training that Hurts Your Development
So you can see that doing chord ear training can force students into modulating on every chord they hear in an exercise. THIS IS NOT HOW YOU HEAR MUSIC. When you listen to any chord progression a majority of the time you are not modulating to a new key for each chord.
So an exercise that plays you multiple chords and expects you to hear each chord as let’s say 1,3,5 for a Major triad is actually causing you harm because in the long run you want to hear chords in a key center.
Doing Chord Ear Training with a 12 Bar Blues Progression
Let me give you one more example of how you should hear chords. Everyone loves the blues, and all musicians use the blues scale when they improvise over the whole 12-bar progression. They use the blues scale (in C this would be C, Eb, F, F#, G, Bb) because they hear all the chords of the blues progression in one key. If it's a C blues, then you should hear all the chords in C. Many students get the false impression from the exercises in some ear training courses that every chord in a chord progression is a new key center. A 12-bar blues progression is a classic example of why you don't want to learn each chord in chord ear training as a new key center, yet that is exactly what you do when you listen to Major chords one after another and hear them all as 1, 3, 5.
So What Type of Chord Ear Training should I do?
For the reasons stated above I usually don't start students with chord ear training, because of the problem of modulation. If you feel you must start with some chord ear training, use a drone like the MetroDrone. This will ensure that you stay in a key center, or play a cadence in a key center before you identify a chord structure. I suggest reading my article on How to Practice Ear Training. It will help you understand the process.
Recommendations for Chord Ear Training
I recommend that my students first learn the sound of all notes in a key center. Over time I progress them to hearing two notes simultaneously, then three, and so on. These chord ear training exercises actually teach them how and when modulation happens. But for a beginner, I would start with learning how to hear all 12 notes in a key center, then progress through various courses before starting chord ear training. I often recommend two books: a listening course called "Ear Training One Note Complete" and a singing course called "Contextual Ear Training."
Recommended Ear Training
The best way to proceed is to send me an email. Tell me the following things:
• Your musical history.
• What instrument you play
• Your goals as a musician
• How long you have been studying music
• Any strengths or weaknesses in music that you have noticed
• How much time you have to practice every day
• Any severe budget constraints you might have in purchasing ear training courses.
From this information I will give you a recommendation and, if appropriate, suggest some ear training courses to help you in your development.
Hydrolysis of Reactive Dye | Removing Process of Hydrolysis of Reactive Dye | Why Low Affinity Reactive Dyes are Preferred for Dyeing?
Hydrolysis of Reactive Dye:
Under alkaline conditions reactive dyes react with the terminal hydroxyl groups of cellulose. But if a solution of the dye is kept for a long time, its concentration drops, because the dye instead reacts with the hydroxyl group of water. This reaction of the dye with water is known as hydrolysis of the reactive dye. After hydrolysis the dye can no longer react with the fibre, so hydrolysis increases the loss of dye.
This hydrolysis occurs in two stages: the concentration of the reactive dye initially rises and then begins to decrease, whereas the concentration of the hydrolyzed hydroxyl compound increases continuously. The hydroxyl compound can no longer react with the fibre.
1. Hydrolysis of halogen containing reactive dye,
D-R-Cl + H-OH = D-R-OH + H-Cl
2. Hydrolysis of activated vinyl compound containing dye,
D-F-CH2-CH2-OSO3H + H-OH = D-F- CH2-CH2-OH + H2SO4
Fig: Hydrolysis of reactive dye
To prevent hydrolysis, the following precautions are taken:
1. As hydrolysis increases with increasing temperature, the temperature during dissolving and application should not exceed 40°C.
2. Dye and alkali solution are prepared separately and mixed just before using.
3. Dye and alkali should not be kept for long time after mixing.
Why low affinity reactive dyes are preferred for dyeing?
If the reactivity of the dye is increased considerably, the rate of reaction with the fibre increases, so the dyeing can be carried out in a short time. However, the rate of hydrolysis also increases, leading to deactivation of a part of the dye and hence to wastage. If, on the other hand, the reactivity of the dye is decreased, the extent of hydrolysis can be reduced considerably, but the rate of reaction with the fibre also slows. The ultimate object of dyeing is to react as much of the dye as possible with the fibre and to minimize hydrolysis. In practice this is achieved in two stages. The dyeing is first started from an aqueous medium under neutral conditions, when the dye reacts neither with the fibre nor with water.
Then Glauber's salt or common salt is added to exhaust the dye onto the fibre as much as possible. In this respect, this stage of dyeing (exhaustion) resembles the dyeing of cotton with direct dyes. The second step (fixation, or reaction with the fibre) is then carried out by adding alkali (usually soda ash). Since the exhausted dye is already on the fibre, it is more likely to react with the fibre in preference to water. However, the dye remaining in the dye bath (which still contains a substantial amount of the reactive dye) can now react with water, since the bath is alkaline. As already stated, the hydrolyzed dye cannot react further with the fibre, but due to affinity forces it is absorbed by the fibre and retained in it.
During the subsequent washing or soaping, the hydrolyzed dye held only by substantivity is stripped into the washing bath, thereby reducing the wash fastness of the dyeing. If the affinity of the original dye is reduced to a very low value, this problem does not arise, and a rigorous treatment of the dyeing with boiling soap or detergent solution removes almost all of the hydrolyzed dye.
However, if the affinity is very low, substantial exhaustion of the dye bath prior to fixation cannot be achieved. This leaves a larger amount of the reactive dye in the dye bath to be hydrolyzed when the alkali is added. Conversely, if the dye has high affinity for cellulose, like a direct dye, it becomes difficult to remove the hydrolyzed dye from the dyeing, since it too is absorbed by and retained in the fibre by fairly strong affinity forces, though not as strong as the covalent bond formed between the dye and the fibre. Hence in actual practice, low affinity dyes are selected for conversion into reactive dyes.
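The competition between fixation and hydrolysis can be illustrated with a toy model that is not from the article: treat the two as parallel pseudo-first-order reactions, with illustrative (not measured) rate constants. Exhausting the dye onto the fibre before adding alkali effectively raises the fixation rate relative to the hydrolysis rate:

```python
import math

# Toy model: after alkali is added, dye is consumed through two competing
# pseudo-first-order channels: fixation to the fibre (k_fix) and
# hydrolysis by water (k_hyd). Rate constants are illustrative only.
def fixed_fraction(k_fix, k_hyd, t):
    """Fraction of the original dye covalently fixed after time t."""
    reacted = 1.0 - math.exp(-(k_fix + k_hyd) * t)
    return (k_fix / (k_fix + k_hyd)) * reacted

# Dye exhausted onto the fibre first: fixation dominates.
print(round(fixed_fraction(k_fix=0.9, k_hyd=0.1, t=50), 3))
# Dye left in the bath: hydrolysis dominates.
print(round(fixed_fraction(k_fix=0.3, k_hyd=0.7, t=50), 3))
```

In this simple picture the long-time fixed fraction is k_fix / (k_fix + k_hyd), which is why the exhaustion step before fixation matters so much for minimizing wasted dye.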
You have some articles on Reactive Dye:
1. Introduction of Reactive Dye | History of Rective Dye | Uses of Reactive Dye
2. Why So Called Reactive Dye | History of Reactive Dye | Which Fibers Can be Dyed with Reactive Dye?
3. Chemical Structure of Reactive Dyes | Commercial Names of Reactive Dye | Properties of Reactive Dye | Popularity of Reactive Dye
4. Classification of Reactive Dyes
6. Stages/Steps Involved in Reactive Dyeing
7. Stripping Process of Reactive Dye
8. Important Factors of Cold Brand Reactive Dye | Factors of Reactive Dye
9. Different Methods of Reactive Dye Application | Pad-batch Method | Pad Dry Method | Pad Steam Method
10. Knit Dyeing with Reactive Dyes(Hot Brand) | Knit Dyeing with Hot Brand Reactive Dyes
Winter Allergies
Did you know you can get allergies in the winter? Very often winter allergies are caused by indoor allergens such as mold, dust mites and household pets. When the weather is cold outside, an individual may spend more time indoors, and when the furnace kicks on, particles of dust, mold spores and insect parts are dispersed through the air, congesting our noses and triggering a reaction.
• Dust mites are microscopic bugs found in mattresses, bedding and carpets.
• Mold is a fungus that thrives in damp, humid areas such as basements and bathrooms.
• Household pets can cause allergies because individuals are allergic to a protein found in pet dander, saliva and urine.
• Fireplaces and furnaces can blow warm air circulating particles into the room.
Common Symptoms
• Sneezing
• Wheezing
• Congestion
• Coughing
• Itchy eyes and nose
• Watery eyes
• Cold-like symptoms
• Dark circles under the eyes
• Fatigue
Steps and Treatments for Winter Allergies
Keep furnaces clean by changing the filters to keep the air clean, and vacuum the carpet regularly to keep dust mites from accumulating. The cleaner the air circulating in the house, the less likely your allergies will be a problem. If an individual has a history of seasonal allergies, allergists recommend starting medications two weeks before symptoms typically begin.
Over-the-counter medications such as antihistamines can be used to reduce symptoms as well as nasal sprays that may reduce swelling or inflammation. Your primary physician or an allergy physician may prescribe oral medications, nasal sprays or allergy shots for symptoms that are worsening and not responding well to other treatments.
Always consult your physician before taking any medication. Allergies can be triggered at any time and at any age. If you have any questions or concerns about your allergies, contact your primary physician or an allergist at Boys Town National Research Hospital. |
Camera traps set up in the Strzelecki Desert of South Australia captured some unusual behaviour among the region's dingoes last year: despite other food options, the wild dogs were recorded engaging in cannibalism.
The surprising discovery was made by ecologist Paul Meek from the Department of Primary Industries' Vertebrate Pest Research Unit – and it's the first time that camera traps have ever recorded such behaviour.
While dingo control measures stir up much debate among Australia's scientists, the country's largest land predators continue to be eradicated in some areas due to the threat they pose to livestock and some native species. Meek had been testing out a new, more humane, trap that captures and euthanizes the dogs when he made his cannibalism discovery.
After catching a dingo late at night, Meek opted to leave the carcass in place until the morning. "When I returned, it was absolutely decimated – there was just a trail of intestines," he tells New Scientist.
To find out what night-time scavenger had eaten the dead dog, Meek set up cameras near his trap and ended up with a surprising answer: other dingoes had devoured the remains. Camera traps at other carcasses recorded similar behaviour, with hundreds of visits by numerous dogs. Meek even observed dingoes aggressively approaching trapped but still living dogs (though they were seen eating only those that had already died).
Dingoes' eating habits have long been the source of their bad reputation. Their appetite for livestock has seen the dogs poisoned and shot for centuries, and they've been shut out from vast stretches of land by a 5,500-kilometre dingo-proof fence. The opportunistic hunters are also known to prey on vulnerable native species like koalas and wallabies, and have even been blamed – likely unfairly – for the extinction of the thylacine through hunting competition.
Today, however, many experts believe the dogs should be protected. As Australia faces an extinction crisis, these efficient hunters can play an important ecological role, they argue, by controlling much more destructive pests that gobble up native wildlife – like feral cats and invasive red foxes.
Meek has spent years studying Australia's pest animals, as well as the use of remote cameras in monitoring wildlife, but this is the first time that camera traps have recorded cannibalism in dingoes. And it raises the question: why eat your own species?
Some animals do resort to eating their own kind in times of stress or during a food shortage, and when wildlife ecologist Benjamin Allen witnessed dingo cannibalism during a drought in 2009, the lack of food seemed like an obvious explanation. But in Meek's research area, the dingoes were well fed – so why eat other dogs?
A quick look at other corners of the animal kingdom reveals plenty of explanations for cannibalistic behaviour. In some frogs and salamanders, newborn larvae or tadpoles will eat other young or eggs of the same species. Some sharks infamously partake in "intrauterine cannibalism", where unborn sharks eat their sibling embryos in the womb. These behaviours might seem grisly, but they help the young animals get a head start on gaining necessary nutrients, with the added bonus of taking a bite out of future competition. In fact, some young animals get their early boost of energy by eating their parents – or parts of them (say hello to the caecilians).
And let's not forget the femmes fatales of the mantis and spider world. Eating your mate can provide a convenient source of nutrition for a female who's about to devote plenty of energy to producing offspring, and in some species of spider, males who make the ultimate sacrifice actually produce the most offspring!
Cannibalism clearly has its uses. In the case of the dingoes, Meek suspects the high density of dogs in the area meant the carnivores couldn't be too choosy with their food. In an area full of competition, it's not a good idea to pass up an easy meal – even if that meal looks a whole lot like you.
H/t: New Scientist
Top header image: Klaus Stiefel, Flickr |
From giant megamouths to jaw-dropping goblins, some truly bizarre sharks cruise the world's deep oceans. Ancient and highly specialised, these mysterious creatures possess a slew of unique adaptations for life in the depths, making them quite the spectacle when they turn up. While conducting a fish population survey off the Scottish coast recently, a team of scientists unexpectedly caught one such animal: a two-metre false catshark.
Researcher Christopher Bird photographs the shark before its release. Image: Christopher Bird/used with permission
Unofficially known as "sofa sharks" because of their flabby bodies, false catsharks (Pseudotriakis microdon) are a far cry from the Jaws trope. They spend their lives between 200 and 1,500 metres below the surface, sluggishly swimming or hovering along the sea floor. Studying deep-sea animals is no easy task, and there is a lot we don't know about these enigmatic sharks, but interestingly, they're spotted more frequently than you might think.
Several reports have claimed this shark to be "the second in a decade for Scotland", but UK marine fisheries advisor Tom Blasdale clarifies that isn't the case. "As one of the scientists who was on board, I feel I should correct this," he says. "We actually have seen this species before in Scottish waters, quite a few times. It's still an interesting fish to see, though! And I hope its new name, the 'sofa shark', sticks!"
Being an "oceanic couch potato" isn't the worst label that's been pinned on false catsharks over the years. The species was also dubbed "the sea goat" after a specimen in the Canary Islands was found to have a pear, potatoes, a plastic bag and a soda can inside its stomach (way to go, humans). This, combined with other intestinal finds like pufferfish spines, has led scientists to believe that the slow-going sharks indulge in scavenging from time to time.
Image: NOAA Okeanos/Wikimedia Commons
When they're not chowing on leftovers, sofa sharks mainly feed on bony fish, smaller sharks and squid. If you're wondering how a slow predator could land such quick prey, it's a good question. The most likely explanation is that, like Greenland sharks, false catsharks are capable of quick bursts of speed when they need a bit of a boost.
We don't actually know how big these animals can get, but since the largest specimen ever caught measured just under three metres, the Scottish shark was truly quite the find. "It took three of us to get it off the fish belt it was so big," recalls deep-sea ecologist Christopher Bird, who was also aboard the ship.
It's unclear whether or not the shark survived, but Brit Finucci, a PhD candidate at Victoria University of Wellington who studies deep-sea sharks and their relatives, explains that it really depends on what state the animal was in. "Chances are, even if the shark was still alive at the time, it was probably on its way out," she says. "Not only do the animals have to cope with the trauma of being hauled up by net, but there's also significant environmental changes to adjust to, including pressure, temperature and light, as well as being taken out of water. It'd be quite a shock to the system. Simply throwing the animal back overboard while in this state may not increase its chance of survival."
That said, Bird assures us that the team did everything in their power to give the shark a fighting chance. "We quickly measured it, weighed it and took a few blurred pictures, but as the shark was still alive we wanted to get it back in the water as soon as possible. Everybody on board had a great passion for these sharks."
The measurements collected will provide important information that will help us better understand this little-known species. "It was such an amazing experience being able to witness this rare shark and once again proves how much we still don't know about the deep-sea!" says Bird.
For more information on the expedition check out the research blog!
Top header image: NOAA Okeanos/Wikimedia Commons |
1001 Inventions: Discover the Muslim Heritage in Our World is an exhibition which began a tour of the UK this week at the Science Museum in Manchester. Paul Vallely, Associate Editor at the Independent, lists 20 of the most-influential inventions from the Muslim world.
“From coffee to checks and the three-course meal, the Muslim world has given us many innovations that we take for granted in daily life.”
Here are the top Muslim achievements that have shaped our world:
3. The game of chess we know today evolved from the players of Persia earlier than the 10th century. The rook comes from the Persian word rukh, meaning chariot.
(story idea submitted by Priscilla Martinez)
1. Wow, this is quite a lot of things that I associate with the Western world. This is great for Westerners like myself to know, but I would think it might also serve as a fantastic reminder to those radical terrorists who set themselves against so many of the very things their heritage created.
2. The core of the Islamic world was the ancient Near East, first unified under the Persian Achaemenid empire of the 6th-4th centuries BCE, and later by the Greco-Roman Christian civilization. Latin Europe was an outlying, provincial region of that ancient Near East. The Arabs inherited the legacy of the ancient Greeks (that is, what they didn't wantonly destroy) and they learned much from India and China. The Chinese claim that they invented checks in banking, but as this post shows, that occurred during a time when Muslims traded in China, so there may have been an Arab influence; China's economy at that time was flourishing. Some of the claims made here for Arab inventions are really upgrades of ancient Greek inventions. The Arabs were originally desert nomads, engaged in long-range commercial transactions. They learned much from other civilizations and passed it on. The "Arabic" numerals were Indian originally (and some people think they originated in China). True, Europe in medieval times had more respect for Arab civilization than in recent times. It's been said that a UN survey found more books translated into Spanish than into all the languages of the Islamic countries. From the 13th century or so, Islam became very conservative.
• Yes, but not everything was derived from others. How did coffee come from others? The pin-hole camera: he didn't come up with that based off of the Greeks; he didn't agree with the Greeks and had a theory of his own, which he proved correct. Leonardo da Vinci would have used things from Abbas ibn Firnas. And the surgical procedures and instruments they developed, where would they get those from? If these Muslims hadn't discovered or invented these things, the modern world wouldn't be what it is today. Yes, at the beginning they were desert nomads, but they developed, and when the Prophet Muhammad was sent and revealed the Quran, that caused people to seek knowledge and be more involved and care more. Hygiene and cleanliness are crucial aspects of Islam, both physical and spiritual ("cleanliness is half of faith"). It is also crucial for Muslims to seek knowledge, as the Quran mentions, and a lot of scientific discoveries are mentioned in the Quran at a time when there were no instruments to help discover them, like the expansion of the universe: "And it is We who have constructed the heaven with might, and verily, it is We who are steadily expanding it." (51:47)
orbits: "It is He Who created the night and the day, and the sun and the moon. They swim along, each in an orbit." (21:33)
the protective layer around Earth: "We made the sky a preserved and protected roof, yet still they turn away from Our Signs." (21:32)
The development of the baby in the womb is also described in the Quran in detail.
[Edited by Good News Network, omitting YouTube video links.]
Leave a Reply |
Length - examples - page 17
1. Two rulers
2. Mirror
3. Bridge piers
One quarter of the bridge pier is sunk into the ground and two thirds are in the water. The part protruding above the water is 1.20 m long. Determine the height of the bridge pier.
4. Steamer
5. Divide
6. Car consumption
7. Sunflower Field
The trapezoidal sunflower field lies between two parallel paths spaced 230 meters apart. The lengths of the parallel sides of the field are 255 m and 274 m. How many tons of sunflower seed will come from this field if the yield per hectare is 2.25 tons?
8. Blades
The first blade is 2.5 m long and the second 1.75 m. Into how many equally long pieces of the greatest possible length can the two blades be cut? How long is one piece?
9. Conversion of units
Complete the following length data.
10. Fuel economy
How many kilometers can the car travel on the petrol in a cylindrical fuel tank with a diameter of 40 cm and a length of 1 m, when the tank is filled to 60% and the car consumes 15 liters per 100 km?
11. Cycling trip
12. Car model
13. Clotheslines
14. Cableway
15. Rounding
What width and length in centimeters may a rectangular plot of land have if, when the dimensions are rounded to the nearest meter, the width is 5 m and the length 7 m?
16. Clay
How many cubic centimeters of clay are in a pit with dimensions 4 m × 3 m × 3 m?
17. Ruler
How far from Peter does John, who is 2 m tall, stand? Peter looks at John over a ruler held at arm's length, 60 cm from his eye, and on the ruler John's height measures 15 mm.
18. Tree shadow 3
19. Painting a hut
It is necessary to paint the exterior walls of a hut whose floor plan is a 6.16 m × 8.78 m rectangle; the wall height is 2.85 meters. The cottage has five rectangular windows: three measuring 1.15 m × 1.32 m and two measuring 0.45 m × 0.96 m. How many m2 is necessary
20. Earth and Sun
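Several of these exercises reduce to simple fraction arithmetic. As an illustrative sketch (not part of the original page), the bridge-piers problem can be checked with a few lines:

```python
from fractions import Fraction

# Bridge pier problem: 1/4 of the pier is sunk into the ground, 2/3 is in
# the water, and the remaining 1.20 m protrudes above the water.
in_ground = Fraction(1, 4)
in_water = Fraction(2, 3)
above_water_fraction = 1 - in_ground - in_water   # 1 - 11/12 = 1/12
above_water_length = Fraction(120, 100)           # 1.20 m

height = above_water_length / above_water_fraction
print(float(height))  # 14.4 (metres)
```

The two given fractions sum to 11/12 of the pier, so the 1.20 m above water is the remaining 1/12, giving a total height of 14.4 m.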
Do you want to convert length units? |
For powering your smartphone or your Tesla Model 3, there's currently nothing better than the lithium-ion battery. Since its introduction in 1991, the rechargeable lithium battery has been the standard for everyday tech devices and electric-vehicle power. Many of the world's more than 3 million electric vehicles run on lithium-ion batteries. But as the world races toward an electric future, it needs something better than the lithium-ion battery in order to keep pace.
"Lithium is pretty much hitting a wall right now. If you really want to increase energy density, you have to go to a completely different paradigm," said Yifei Mo, a materials science and engineering professor at the University of Maryland. More energy density means cheaper, lighter batteries that last longer on a single charge.
Fortunately, there are battery start-ups trying to build better batteries, ones with lower costs, improved energy densities and better performance for supercharged industrial products and consumer technology, as well as electric vehicles, which would charge more quickly and travel longer distances. Starting this year, several start-ups with batteries they believe are big improvements over current lithium-ion technology will introduce their cells to the commercial market.
Sila is just one of several battery start-ups that recently received major funding to continue tweaking battery tech. Last year the Alameda, California-based company took on $70 million in Series D financing from several investors, including Siemens' global venture firm, to build its first commercial production line for silicon anode batteries. That's exactly one decade after being co-founded by Berdichevsky, a mechanical and energy engineer and the seventh employee at Elon Musk's Tesla, who led the development of the battery system in the Tesla Roadster (the car that SpaceX, also founded by Musk, launched into orbit in 2018).
Emerging variations of the current lithium-ion battery have taken about 10 years of research. Only now are start-ups gearing up for the commercial spotlight, a rollout that will take at least a few years, and possibly even another full decade.
"The material required for one car is the equivalent of 10,000 smartphones or 1,000 smart watches," said Berdichevsky. "We'll be in consumer devices to start. Over the next five years, we'll scale up with automotive partners." One of Sila's current auto partners is BMW.
Current lithium-ion batteries are limited by their component materials and their energy density. New battery technology seeks to improve both the safety and the energy density of lithium-ion batteries, so that there is no risk of fire if the battery overheats or becomes damaged.
Each lithium-ion battery is composed of four essential parts: the anode and cathode — the electrodes that bookend each lithium-ion cell — a liquid electrolyte and a separator. Positive and negative currents are created as the electrolyte carries lithium ions through the separator to and from the anode and cathode. It's this process that generates the charge that's stored in the battery.
If the chemicals making up the anode and cathode — respectively, graphite and some type of metal oxide — heat up too intensely, it can break down the physical separator, which leaves the highly flammable electrolyte exposed. Recall Samsung Galaxy Note 7 phones exploding and you can see, literally, the problem. And maximum lithium-ion cell energy density today is about 260 watt-hours per kilogram; by comparison, the batteries in most current electric vehicles achieve between 220 and 250 watt-hours per kilogram.
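As a rough back-of-the-envelope sketch of what those watt-hour figures mean for vehicle weight (this assumes pack-level density equals the quoted cell-level number, which real packs fall short of):

```python
# Pack mass implied by a target capacity and a gravimetric energy density.
def pack_mass_kg(capacity_kwh, energy_density_wh_per_kg):
    return capacity_kwh * 1000.0 / energy_density_wh_per_kg

# A 75 kWh pack at today's ~260 Wh/kg, versus a hypothetical cell with
# 50 percent more energy density (illustrative numbers only):
print(round(pack_mass_kg(75, 260)))  # 288 kg
print(round(pack_mass_kg(75, 390)))  # 192 kg
```

Roughly 100 kg saved on a single pack, which is why even incremental density gains matter so much to automakers.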
One new battery technology is solid-state, which replaces not only the graphite anode with one made up of lithium metal but also the liquid electrolyte and separator with one solid piece, usually ceramic, glass or flame-retardant polymer. Taking this approach is Solid Power, a Colorado-based manufacturer of solid-state batteries that received $20 million in Series A financing in 2018. According to company executives, the battery they're developing leads to at least 50 percent more energy density.
Secretive Stanford University spinoff QuantumScape is also developing a solid-state battery, in partnership with Volkswagen. Last year Volkswagen increased its stake with a $100 million investment. PitchBook data shows the San Jose-based start-up has a valuation of $1.75 billion. According to a press release announcing the deal, QuantumScape's battery would allow Volkswagen's E-Golf to travel 466 miles — its current range is 186 miles — on a single charge, making it comparable to ranges achieved by conventional gas-powered vehicles. According to Volkswagen, QuantumScape's battery should be faster-charging and much lighter than current lithium-ion batteries.
Yet solid-state batteries probably won't be available en masse until sometime next decade, as one Nissan vice president said last year. Even QuantumScape's press release states a commercial production target for 2025.
The longer timeline for solid-state technology is a symptom of how current battery factories are set up. They're built to handle lithium-ion production with liquid electrolytes, and switching to solid materials is more than a matter of just replacing processes on a factory floor.
"It's an emerging technology in the very, very early stages of commercialization," said Dean Frankel, Solid Power's head of business development. "It just takes time from a scale-up standpoint."
While some start-ups work toward perfecting and scaling up the solid-state battery, others like Sila Nanotechnologies hope to take advantage of current lithium-ion manufacturing processes to bring batteries quickly to market. Instead of creating a solid-state battery, Sila just replaces the graphite anode with one composed of silicon, a material that absorbs lithium ions about four times faster than graphite.
What's more, most lithium-ion batteries with graphite anodes have a charge rate, or C rate, of less than 1, meaning a full charge takes over an hour. Start-ups developing new cells with silicon anodes say the C rates of their batteries are much better, a key differentiator in enabling an electric-vehicle future, since most people don't want to wait around more than an hour for a car to charge when pumping gas takes just minutes.
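For context, a C rate of n means the cell charges its full nominal capacity in roughly 1/n hours. A quick sketch of the implied charge times (this ignores the current taper near full charge, which stretches real-world times):

```python
# Idealized charge time from a C rate: charging at n C fills the
# nominal capacity in about 1/n hours, i.e. 60/n minutes.
def charge_time_minutes(c_rate):
    return 60.0 / c_rate

print(charge_time_minutes(1))   # 60.0 minutes for a 1C graphite-anode cell
print(charge_time_minutes(10))  # 6.0 minutes at the 10x rate claimed for silicon anodes
```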
"We can sustain a charge rate 10 times as fast as a conventional graphite cell," said Robert A. Rango, CEO of Enevate.
The Irvine, California-based company, which is creating next-generation lithium-ion batteries with silicon anodes, is armed with $111 million in funding, including an investment made last year by South Korean battery company LG Chem. Rango said Enevate, whose batteries have been in the works for 10 years, is about a year and a half away from the first commercial deployments of its technology, most likely in electric bikes and scooters.
Still, silicon anode batteries have one potential drawback: Silicon material swells, which means every charge causes the battery to deteriorate. It's a problem both Berdichevsky and Rango said their respective companies have solved.
"Silicon does expand, and that's been one of the challenges of the industry," Rango said. "In our cells, we've been able to contain the expansion. Our cells have specifications that meet electric-vehicle requirements." Those requirements? That a battery is able to charge to 80 percent after it has been charged and discharged 1,000 times.
The long development timeline for these start-ups is a sign of how difficult pushing battery technology can be. And while improvements in the range of electric vehicles is certainly one of the major implications of a better battery, successors to the current lithium-ion battery will most likely be initially found in much smaller items.
"You're talking about a generational technological shift that has to happen," Berdichevsky said. "In 150 years of batteries existing, there have been four commercially relevant chemistries to come to market. And every time you go to these new chemistries, they get harder."
ASTM C623 - 92(2015) en
Standard Test Method for Young's Modulus, Shear Modulus, and Poisson's Ratio for Glass and Glass-Ceramics by Resonance
About this standard
Status: Final
Number of pages: 7
Published on: 01-05-2015
Language: English
1.1 This test method covers the determination of the elastic properties of glass and glass-ceramic materials. Specimens of these materials possess specific mechanical resonance frequencies which are defined by the elastic moduli, density, and geometry of the test specimen. Therefore the elastic properties of a material can be computed if the geometry, density, and mechanical resonance frequencies of a suitable test specimen of that material can be measured. Young's modulus is determined using the resonance frequency in the flexural mode of vibration. The shear modulus, or modulus of rigidity, is found using torsional resonance vibrations. Young's modulus and shear modulus are used to compute Poisson's ratio, the factor of lateral contraction.
1.2 All glass and glass-ceramic materials that are elastic, homogeneous, and isotropic may be tested by this test method. The test method is not satisfactory for specimens that have cracks or voids that represent inhomogeneities in the material; neither is it satisfactory when these materials cannot be prepared in a suitable geometry.
Note 1: Elastic here means that an application of stress within the elastic limit of that material making up the body being stressed will cause an instantaneous and uniform deformation, which will cease upon removal of the stress, with the body returning instantly to its original size and shape without an energy loss. Glass and glass-ceramic materials conform to this definition well enough that this test is meaningful.
Note 2: Isotropic means that the elastic properties are the same in all directions in the material. Glass is isotropic and glass-ceramics are usually so on a macroscopic scale, because of random distribution and orientation of crystallites.
1.3 A cryogenic cabinet and high-temperature furnace are described for measuring the elastic moduli as a function of temperature from –195 to 1200°C.
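The final computation in the method, obtaining Poisson's ratio from the two measured moduli, follows the standard isotropic-elasticity relation ν = E/(2G) − 1. A minimal sketch (the example values are illustrative, not taken from the standard):

```python
# For an isotropic material, Poisson's ratio follows from Young's modulus E
# and the shear modulus G via nu = E / (2G) - 1 (rearranged from E = 2G(1 + nu)).
def poissons_ratio(young_e, shear_g):
    return young_e / (2.0 * shear_g) - 1.0

# Values roughly typical of a soda-lime glass (assumed, not from ASTM C623):
E = 72e9  # Young's modulus, Pa
G = 30e9  # shear modulus, Pa
print(round(poissons_ratio(E, G), 3))  # 0.2
```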
ICS-code 81.040.30
The Voice of the Finishing Industry since 1936
6/1/2018
Are Burn-Off Ovens Safe?
Steelman Industries’ Carlton Mann says burn-off ovens are safe, and he offers some considerations for installing one.
Q: Are burn-off ovens safe to operate in a plant?
A: People do often have concerns about the safety of burn-off (heat-cleaning) ovens in their facilities. After all, they are gas-fired, work at high temperatures and remove combustible materials from fixtures and parts. It seems like there are many ways things could go wrong.
However, burn-off ovens are extremely safe to operate, as proven by the roughly 10,000 that have been in operation around the country during the past 30 years. Here are a few reasons why they are so safe:
• They are built like incinerators, so any fire will be safely contained.
• They have primary and backup water spray systems to control the process and douse any potential fires.
• They have explosion relief doors on top that open to relieve pressure and then close to keep air from entering the oven.
Like any industrial equipment, there are potential dangers, however. To avoid them, the oven should be kept in good repair. This means fixing or replacing parts that are broken and never “jumping out” safety devices such as pressure or limit switches. Usually, the manufacturer will have a service department that can guide you through the repair process and send you the correct components.
You also should check the water sprays before each operation of the oven, making sure you see a fan-shaped mist from the primary sprays and a large fan from the backup. Never load sealed containers or other items that may have unvented cavities into ovens. Examples include structural tubing that might be part of a racking system and water-jacketed components. Also never load uncured paint or solvents into the oven. These materials will decompose all at once and overload the afterburner.
Another way to help avoid danger is to wait until the oven temperature is below 500°F before opening the doors. This assures that any combustible vapor still present in the oven will not ignite. If you open the door and see smoke, flame or glowing embers, close the door and wait until the burning has ended. The primary water sprays can be used to speed the cooling process. Be aware that the cart and parts inside the oven will be hot for some period after the doors are opened; protective gloves and gear should be used for unloading.
When planning for a burn-off oven, consider:
Location. This is the most important consideration. There should be plenty of space in front of the oven for loading and space around the oven for maintenance. An example of a bad installation is when the oven is placed through a wall, so the doors open into the plant, but the setup doesn’t include a door beside the oven for access to the burners and valves. This results in a service nightmare, with numerous long trips down the hall and out a door to get to the other end of the oven.
Burn-off ovens tend to be dirty with ash and pigment that remains after the burn. The pigment sticks tenaciously and is easily tracked around the plant, so you should isolate the oven from your coating line to prevent cross-contamination.
Hooks and fixtures will have a coating of pigment after thermal cleaning, which needs to be removed before placing them back on the line. Be sure to locate the oven near a space where you can spray the parts with a pressure washer.
Some facilities place their ovens outside, under a roof. What they don’t consider is that the oven will get wet if the wind blows when it rains. In these locations, you should at least protect the control side of the oven, although totally enclosing it is best.
Utilities. You will require natural gas or propane fuel for heating the oven and water for control. Be sure that your gas meter and regulator are large enough to handle all the gas-fired equipment in your plant. Also be sure the pressure and piping are adequate for the oven before it’s delivered. It’s very frustrating to have your oven installed only to find out that your gas service is too small. Water usage on ovens is low, usually 1-5 gallons per minute on an as-needed basis. Problems can occur, however, when pressure fluctuates as spray washers are used or tanks are filled. Before your oven is delivered, check your pressure and see if it fluctuates during the day. If it does, it can be stabilized with a pressure tank and check valve. If it’s too low, a pump may be necessary.
Stack. The exhaust stack should go through the roof or through a sidewall and then extend above the roof. If the roof or building wall is made of combustible material, the wall cannot be closer than 2 feet from the stack. For combustible roofs, insulated thimbles are available that reduce this distance requirement.
Permits. You should apply for an operating permit with your state department of environmental quality before you purchase an oven to be sure that you will be allowed to operate it. The oven manufacturer can help you obtain such a permit.
Exhaust fans. If you have exhaust fans in your building, be sure that you have sufficient make-up air. If not, the fans will draw air down the stack and pull smoke out of the oven. This will cause the oven to malfunction because it’s starved for air. We get calls every year when the temperature drops and people close windows and doors that were providing make-up air in warmer weather.
Carlton Mann is a product manager for cleaning ovens at Steelman Industries. Visit |
• Pleasant Grove ISD
Acceptable Use Policy
Computer and technology are used to support learning and enhancing educational instruction. Computer networks and telecommunications allow people to access information from other computers in different locations. It is the policy of Pleasant Grove ISD that all computers be used in a responsible, efficient, ethical, and legal manner. Failure to adhere to this policy and the guidelines established below will result in the revocation of access privileges and/or disciplinary actions involving local, county, state, or federal agencies.
It is the belief of Pleasant Grove ISD that the educational benefits of the Internet for students, teachers, and staff far exceed any potential disadvantages. The majority of sites accessed can provide a wealth of educational opportunities. It is the intent of Pleasant Grove ISD to provide access to such services to further the educational goals and objectives set by the district. Pleasant Grove ISD is in full compliance with the Children's Internet Protection Act (CIPA) through the use of a content filtering device when using district computers. Parents should be aware that students using telecommunications have the potential to access unacceptable sources if they disobey or disregard district rules and guidelines. Even though the vast majority of Internet sites provide useful information, some sites may contain information that is offensive, defamatory, or inaccurate. The intent of Pleasant Grove ISD is for technology resources to be used as a valuable educational tool. |
Early Brigantine: Days of pirates and shipwrecks — A look back at Atlantic County history
Brigantine is a far cry from its romantic past of pirates, brawling whalers and shipwrecks. Legend has it that the beach was the scene of savage duels to the death, treasure burials and international wrecks caused by roaming bandits and their false beacons.
History states that the famous Captain William Kidd (1645-1701 ) haunted the sea lanes off Brigantine and used the Absecon Inlet to hide when the pursuing ships of England, Spain and Holland drew too close for comfort.
Romance then took a turn among the sand dunes when Kidd fell in love with a girl from Sandy Hook and decided to abandon the life of piracy.
Brigantine was noted in the early 1700s for another kind of pirate, the beach pirate, who preyed upon merchant mariners temporarily lost in the fogs of the Atlantic and lured them onto the off-island shoals by means of false lights. Once the beached ships were helpless, the pirates attacked, sacked and killed.
During the 1800s, Brigantine was approachable only by boat. A bridge was planned in 1881, but it was not built until many years later, finally being completed in 1924. This bridge was repeatedly damaged by storms, especially the hurricane of 1944. The current bridge, of modern concrete construction, was built in 1972 and partially rehabilitated in 2007.
The bridge was named the Vincent C. Haneman Memorial Bridge in honor of Justice Haneman of the New Jersey Supreme Court, who was a resident of Brigantine.
Once the bridge was constructed, golfers from the region would wend their way around the town to enjoy a highly rated golf course.
On Aug. 7, 1889, the Brigantine Beach Railroad was incorporated to build tracks from Pomona on the Camden Atlantic Railroad to Brigantine Island, and it was in operation until Oct. 9, 1903.
During those years, a trolley operation ran with sixteen cars and even had a double-decker sightseeing bus of the type still used in London today. Boatloads of visitors would arrive from Atlantic City just in time to take a ride along the dunes of the island; the trolley line was dismantled in 1908.
The first hotel on the island was built in 1838 and was replaced and rebuilt several times due to fires. The latest Brigantine Hotel was built in 1927 and still stands at the edge of the ocean.
Since the city had become primarily a fishing community, housing consisted of small fishing shacks existing well into the 1960s, but they have now been replaced by single homes and many condominiums. Driving to Brigantine today, one would pass several casinos and the municipal marina and continue into the city to enjoy fine restaurants.
The space between contour lines on a topographical map is a contour interval. The contour interval is an even space that represents an increase in elevation. For instance, if the map uses a 20-foot interval, there are co... More »
Web users can download topographical maps of the state of New Jersey from the state's website for the Department of Environmental Protection, Rutgers.edu and the website for the U.S. Geological Survey. The Historical Top... More »
A topographic map shows topography and features of the Earth's surface represented by various symbols, according to the University of Mount Union. The most common symbol of a topographic map is a contour line, which desi... More »
A flat map is a projection of Earth's surface onto a two-dimensional plane, according to NationalAtlas.gov. Because Earth's surface curves, any projection onto a flat map creates distortions. A globe is the most accurate... More »
In geography, spatial distribution refers to how resources, activities, human demographics or features of the landscape are arranged across the surface of the Earth. It is the physical location of salient features of a p... More »
What is Mesothelioma?
Mesothelioma is the most serious asbestos-related disease. What are the common symptoms and causes, and who is at risk?
The UK first prohibited the use of blue asbestos in 1985 with a final ban on all asbestos use in 1999. Today it is still a significant health hazard to workers and a liability for employers. This article considers the most serious asbestos-related disease: Mesothelioma.
What is Mesothelioma?
Mesothelioma is an aggressive and incurable cancer that affects the lining of the lungs (pleura) and, less commonly, the lining surrounding the lower digestive tract (peritoneum). It can be caused by exposure to asbestos dusts of all kinds, but especially to blue asbestos (crocidolite).
It is also a very subtle form of cancer providing only a few noticeable symptoms (shortness of breath, or chronic coughing that can easily be confused with allergies or the common cold) until it becomes extremely advanced.
Mesothelioma is almost exclusively related to asbestos exposure, with symptoms taking 20-50 years to appear. In many cases mesothelioma is discovered by accident when doctors are looking into these symptoms.
In most cases treatment proves ineffective and aims primarily to lessen the effects of the symptoms; mortality is close to 100%, normally within five years of diagnosis.
Common symptoms of Mesothelioma
Common symptoms include chronic cough, chest pain and shortness of breath. If you have a history of asbestos exposure and are experiencing any of the above, have a health check-up.
Quit smoking! Studies show a strong link between smoking and mesothelioma, with smokers being 9,000% more likely to develop the disease.
Causes of Mesothelioma
Approximately 2,500 people in the UK are diagnosed with mesothelioma each year, with men being five times more likely to be diagnosed than women. Most mesothelioma deaths are now a legacy of past occupational exposures to asbestos.
It is believed that 90,000 people in the UK will have died as a result of mesothelioma by 2050.
If asbestos fibres are disturbed they become airborne and can be inhaled.
Asbestos fibres are the right length and diameter to penetrate deep into the air exchange areas of the lungs, and into the pleural cavity. Here the fibres irritate the lining of the pleura which can cause gene mutations that may result in the growth of cancerous cells and develop into mesothelioma.
Where can you find Asbestos?
Asbestos can still be found in any industrial or residential buildings built or refurbished before the year 2000. It is in many of the common materials used in the building trade that you may come across during your work or find in your home.
Who is at risk?
Most trades can be exposed to asbestos-related hazards, especially when working on buildings where the work involves cutting, drilling, demolition, etc. So what does the law require you to do?
The Control of Asbestos Regulations 2012 require duty holders to:
Manage asbestos properly in non-domestic properties:
• Undertake suitable and sufficient assessments in order to identify asbestos (presumption of asbestos/sampling surveys required)
• Refurbishment/demolition survey before structural work
• Asbestos management plan – including re-inspection schedule
• Asbestos register for building
• Provide information, instruction and training to all employees likely to be exposed to asbestos – not just asbestos removal workers
• Protect employees and anyone else who may be affected by exposure
The number of fatalities from mesothelioma is unfortunately expected to rise due to the length of time between exposure to asbestos and the onset of the cancer, making it difficult to diagnose.
However, the likelihood of developing mesothelioma is relative to the duration and intensity of exposure to asbestos.
Inadvertent and short-term contact with asbestos means that exposure to fibres will be minimal, with little chance of long-term health issues such as mesothelioma developing. If you are at all concerned then consult your GP.
Everything you need to know about New Horizons' Pluto trip
Remember Pluto?
Discovered in 1930, it used to be a planet before being downgraded to "dwarf planet" status in 2006 because some bastards redefined what it took to be a planet. Since then, this mass of frozen nitrogen has been sulking about an average 7.5 billion kilometres away, playing orbital swapsies with Neptune.
But for part of 14 July, Pluto had a new best friend: NASA's New Horizons probe.
From pictures to data, here's everything you need to know about this long-range flyby.
A long trip
New Horizons has taken a long time getting to Pluto - launching from Earth on 19 January 2006. In February 2007 it got intimate with Jupiter, using the gaseous giant's gravitational pull to slingshot further out into the solar system, en route to Pluto.
It's hard to put into perspective just what a huge undertaking this mission is, but to get the best shots of Pluto and its moons, New Horizons has a window of between 100 and 150 kilometres (after a journey of five billion kilometres, that's like threading the proverbial needle), with a required time frame of 100 seconds.
Expect this to form the basis of the hardest-ever maths exam question in years to come.
Incredible pictures
One of the coolest aspects of the approach to Pluto has been the increasingly detailed pictures that New Horizons has been beaming back - but the real money shots aren't going to arrive until 00:53 GMT on 15 July. New Horizons was successful in its close range flyby of Pluto, but due to a required shift in the angle of the probe, we have to wait before the images can be beamed back.
These will show the surface of Pluto in more detail than ever before - head to the gallery to see the latest shots (we'll update them as and when they arrive - they've got five billion kilometres to travel).
A lot of science stuff
It's not just pictures that the New Horizons probe is after - this is the list of sensors and gadgets strapped to this deep space voyager and an idea of what they'll be looking for:
• REX (Radio Science EXperiment): Measures atmospheric composition and temperature; passive radiometer.
• LORRI (LOng Range Reconnaissance Imager): Telescopic camera; obtains encounter data at long distances, maps Pluto’s far side and provides high resolution geologic data.
• SWAP (Solar Wind Around Pluto): Solar wind and plasma spectrometer; measures atmospheric “escape rate” and observes Pluto’s interaction with solar wind.
• PEPSSI (Pluto Energetic Particle Spectrometer Science Investigation): Energetic particle spectrometer; measures the composition and density of plasma (ions) escaping from Pluto’s atmosphere.
• VBSDC (Venetia Burney Student Dust Counter): Built and operated by students at University of Colorado; measures the space dust peppering New Horizons during its voyage across the solar system.
Its continuing mission...
New Horizons isn't about to turn tail and head home after its Pluto flyby. Part of NASA's solar system reconnaissance, should the mission prove successful it will mean the US is the first country to reach every planet with a space probe (they'll be looking to stick a flag on them all next), before the probe journeys farther into the Kuiper Belt (the region beyond Pluto and Neptune's orbit) to examine one or two of the ancient, icy mini-worlds in that vast region.
After that? It'll be left to drift into the big inky black, until it encounters another solid object and falls apart, or is intercepted by an alien race and perceived to be a message of hostile intent, sparking the beginnings of an intergalactic war. Hopefully the former.
Ready for my close up
The final image of Pluto before New Horizons readied itself for its flyby. The angle of this final approach means that we won't be getting any communication from New Horizons until just after midnight on 15 July. The photo shows off the dwarf planet's mysterious "heart" - an area of pale dust and ice seen toward the lower half of this image.
Mooning about
The first high-resolution image of Pluto's largest moon, Charon. The lack of craters on the moon's surface leads NASA's scientists to believe that its current surface may have only been formed quite recently.
Young mountains
One of the first high-quality images sent back from New Horizons' flyby, this image shows a range of mountains rising some 3,500 metres above the surface of the icy body of the dwarf planet - likely formed no more than 100 million years ago. Which, in space terms, is quite young.
Hail Hydra
The outermost moon of Pluto still holds many secrets, but this new, blurry image has helped confirm a number of theories - namely its shape, and that it's covered in water ice.
A sense of scale
NASA's graphical representation of the size of Pluto (2,370 km in diameter) and its largest moon Charon (1,208 km in diameter), compared to our own watery Earth.
Ready for launch
An image of New Horizons before it set off on its journey back in 2006.
Dynamic duo
An image of Pluto (on the right) and its moon Charon when New Horizons was six million kilometers away from its flyby target.
We have flyby
Part of the NASA team celebrate the sharpest image of Pluto ever seen. More images to follow.
Free Worksheets and Coloring pages for kids from PreSchool to Higher Grades
Write the Number in Box to Match Occupation/1
Write the Correct Suffix for Root Words/1
Write the Correct Collective Noun/1
Write the Common Noun for Given Proper Noun/1
Write some or Any to Complete the Sentence/1
Write Homophone of each Word/1
Write Countable Noun/1
Use Correct Punctuation Mark at the End of a Sentence/1
Unscramble Antonyms of Words/1
Underline the Preposition in Sentences/1
Year 1 English is an important year for students, schools and parents alike. We understand students' need for worksheets to work on and practise with. That is why we have covered several skills, from Number in Box to Match Occupation to Correct Suffix for Root Words, Common Noun for Given Proper Noun, Complete the Sentence, Punctuation Mark and much more. Register to use our Year 1 free English worksheets without any limits, and become a master of Year 1 English. Check our Year 1 English video lessons if you get stuck.
We also have Free English worksheets for Year 2 to Year 8 and we provide Science and Math Worksheets too.
Denge Sound Mirrors
Shepway, England, United Kingdom
About Denge Sound Mirrors
The Denge Sound Mirrors are fine examples of initial attempts at an early warning system for aircraft.
From 1916 to the mid-1930s, Dr William Sansome Tucker developed an early warning system known as the 'sound mirrors'. These were strange-looking concrete structures, designed to listen for enemy planes arriving from the Continent.
They worked in much the same way as modern radio telescopes do today. There were three designs, built to explore the technology and perfect the concept. These can all be seen on the Dungeness peninsula, although other examples of the dishes can be seen elsewhere in Britain (notably Hartlepool, Seaham, Redcar and Sunderland in the North East; Dover, Romney Marsh and Selsea in the South).
The first version is a 70m curved wall, around 5m high, and the other two are dishes around 5m in diameter. All used the same principle of microphones placed at the focus of each structure. The intention was to set up a string of sound mirrors to determine the direction as well as the distance of approaching planes.
Although the sound mirrors were obsolete by the start of World War II, the concept behind them had the great merit of developing the infrastructure to enable radar to be used efficiently, as it used the same principles of having a string of listening posts throughout the country. A must see for anyone interested in the development of radar and early warning systems. The sound mirrors at Dungeness are only accessible on guided tours, as they lie on an island in the middle of an old gravel pit.
Related Places
Bletchley Park
Bletchley Park was Station X, the central location of British code cracking operations during World War II. Explore
AOF’s Guide on Staying Healthy
The major causes of death, especially in low-income and underserved neighborhoods, are chronic diseases such as obesity, diabetes, heart attack and cancer. Obesity is the condition in which a person is overweight and has a high degree of body fat; it is measured by the Body Mass Index (BMI). Hence, maintaining a healthy body in order to increase life expectancy becomes paramount.
Now, if you are sick and tired of being sick and tired, you need to get into the driver's seat and manage your health: adopting a healthy lifestyle and making the right choices for you and your family are some of the things that can keep obesity and its related chronic diseases at bay. The pharmaceutical and food companies will not do it for you, because that would adversely impact their bottom lines.
First, one of the guiding principles at the American Obesity Foundation (AOF) is our belief in the power of the "Unseen Hands" in the lives of men and women. This belief is evidenced in the health outcomes of many previously sick persons across the spectrum, now living healthier and more purposeful lives.
In adopting a holistic approach to solving endemic challenges such as obesity and obesity-related suffering, AOF supports research findings on the importance of spirituality, whether in an individual's, family's, or community's progression. As such, in addition to embracing nutrition, exercise and fitness, a good state of mind, and healthy lifestyle options, there is a need to embrace spirituality. Start with gratitude: positive thinking is essential to being healthy. You need to clear out your mind, remove all the depressing and negative emotions and fill it with positive thoughts. You can get involved in practices such as meditation and yoga to get your positive energies going.
Moderation in caloric intake (portion control) ought to become the watchword as you make healthy your new happy. Eating the right portion sizes pays off, because big servings lead to calorie overload. Starting your day with oatmeal for breakfast makes portion control easy at lunch. Instead of counting calories, try this simple concept: divide your plate into two sections; fill about half or more with vegetables and/or fruits, and the remainder with roughly equal amounts of starch and a high-protein food like chicken or fish. Maintaining a healthy weight is important for overall health and, even better, this way of eating may help prevent overweight, cancer, heart disease and other common killers.
Taking nutritious foods and drinks is non-negotiable for your kids and others in the household. Fruits and vegetables have been shown to boost immunity and reverse chronic diseases like obesity, heart attack and even cancer.
What to eat: A variety of unprocessed and fresh foods helps kids and adults obtain the right amounts of essential nutrients. It helps to avoid foods high in sugars, fats and salt, which can lead to unhealthy weight gain and related suffering. Eating a healthy, balanced diet is, especially important for growing kids and their development.
Add fruits, such as apples, bananas, oranges, pears, and watermelon; legumes, such as beans, lentils, chickpeas, black-eyed peas; vegetables, such as broccoli, cabbage, and carrots. People whose diets are rich in vegetables and fruits, have a significantly lower risk of obesity, heart disease, stroke, diabetes and certain kinds of cancer.
What to avoid eating: Avoid eating a lot of red meat, palm and coconut oils, sugary foods and beverages, saturated fat – found mostly in foods that come from animals; trans fat (trans fatty acids) – found in foods made with hydrogenated oils and fats, such as stick margarine. Fats and oils are concentrated sources of energy, and eating too much fat, particularly, the wrong kinds of fat, can be harmful to health. For example, people who eat too much saturated fat and trans-fat are at higher risk of heart disease and stroke.
Heart-healthy eating includes fat-free or low-fat dairy products, such as, skim milk, fish high in omega-3 fatty acids, such as salmon, tuna, trout, about twice a week.
Also, reduce your salt intake: When cooking and preparing foods, cut back on the amount of salt and high-sodium condiments (e.g. soy and fish sauces). Avoid snacks that are high in salt and sugars; limit intake of sodas and other drinks high in sugars (e.g. fruit juices, flavored drinks, cordials and syrups).
People whose diets are high in sodium, including salt, have a higher risk of high blood pressure, which can increase their risk of heart disease and stroke. Similarly, those whose diets are high in sugar have a greater risk of becoming overweight or obese, and an increased risk of tooth decay.
Drink plenty of water to prevent dehydration, promote bowel movement and reverse constipation. Make it a daily habit to drink water first thing in the morning and last thing at night. It is possible to prevent or control obesity. The following are some of the different ways you can increase your water intake to help you maintain a healthy body.
Green Tea
Green tea has been found to be very effective for weight loss, without any dieting or weight-loss pills. Boil the green tea leaves and let them simmer for a few minutes. Drink two to three times a day to see visible results in days.
Apple cider vinegar & lime juice
Start your day with honey
Take a teaspoon of honey and mix it in a glass of hot water. Add a teaspoon of lemon juice to this water mixture. Drink this first thing in the morning on an empty stomach. Repeat daily for two to three months to see an effective weight loss.
Drink warm water
If you have the habit of drinking cold water, try to replace it with warm water which will help in eliminating the fat deposits in your body. Drink warm water after every meal and make sure that you leave a gap of half an hour between the food and the water. Never drink water immediately after eating.
Exercise regularly. Remember, fitness is more important than being thin. Importantly, there is more to daily health than weight and the pursuit of being underweight. In this regard, fitness is more important than being thin. If you know, you have heart disease or conditions that make you more vulnerable to an unhealthy heart, discuss with your doctor who would recommend appropriate steps to improve your health.
Get some rest. Getting enough sleep and rest has been shown to help with managing and coping with stress. Relaxing and ability to cope with problems whether at home or in the workplace, can improve emotional and physical health.
Engage in wholesome relationships: A family history of poor eating habits, obesity, smoking, high blood pressure, coronary artery disease, high blood cholesterol, domestic violence, diabetes, sedentary lifestyle, drinking too much alcohol can all be changed. The cycle of negative relationships with self and others does not have to continue because as Maya Angelou, said: “when people know better, they do better”. Since, there’s no sure way to know who is at risk of cardiac arrest, for example, reducing the level of risk is the best strategy. Go for regular checkups, get screened for heart disease and live a heart-healthy lifestyle. Don’t smoke, and use alcohol in moderation. Eat a nutritious, balanced diet and stay physically active.
Stop smoking: Some people smoke because they think it helps them stay thin. However, researchers from the University of Louisville's School of Dentistry, led by David A. Scott, in findings released on this year's World No Tobacco Day, showed that cigarettes lead to infiltration of bacteria in the body, that the mouth is one of the dirtiest parts of the body, and that smoking makes it worse.
Know your risk factors: Be aware of your weight, BMI, blood pressure and cholesterol levels. Above all, listen to your body, and if something isn’t right, talk to a doctor. |
What is a sty?
A sty is a painful red bump on the eyelid, caused by an infected hair follicle or by an oil-producing gland in the eyelid that has become clogged and infected.
To avoid complications, a sty should never be squeezed. Instead, a hot compress should be applied to bring it to a head. It should then break by itself.
Although rare, sties can lead to severe infection and swelling of the eyelids which requires immediate medical intervention. Therefore, it is best to have the child examined if any unusual swelling occurs.
1. Introduction
In the early nineties, the Institute of Phonetics, Faculty of Arts, Charles University, Prague, and the Institute of Radio-Engineering and Electronics, Academy of Sciences, Prague, managed to assemble a complete TTS implementation for the Czech and Slovak languages, using a linear prediction based diphone synthesis. This TTS engine then served as a base for further research and applications in speech synthesis, until the source became too large and complicated to be easily modified, ported to new hardware or operating systems, or to be well understood by anybody except the authors. By the end of 1995, when a need for testing some new prosody modelling hypotheses had arisen, these limitations were slowly becoming a major burden. A new implementation of part of the system was eventually written from scratch (starting in 1996) and it is still expanding, integrating the original results with numerous recent improvements. It was baptized Epos in 1998.
Our primary design goal is to allow the user the ultimate control over the TTS processing. We avoid hard-wired constants; we use configuration options instead, with sensible default values. Most of the language-dependent processing is driven by a rules file, a text file using an intuitive and well-documented syntax. A rules file lists the rules to be applied on a written text structure representation to yield a corresponding spoken text structure representation (in fact it could be the other way round in principle, but somehow no one seems to need that). Some aspects of user-definable behavior don't fit into the concept of a rules file, and are therefore settable with various options in conventional configuration files. Finally, many other external files can be referenced either by the rules file, or a configuration file, such as segment inventories or dictionaries.
Most of these files have to be processed before any actual TTS processing can begin. That's why Epos is implemented as a background process, i.e. as a daemon under UNIX-like OSes and as a service on Windows NT and similar OSes. Epos reserves a TCP/IP port for all communication with client applications, using a custom, quite generic protocol for TTS data flow control called TTSCP. A simple TTSCP client utility named say-epos is provided with Epos, but many more specialized TTSCP clients exist.
Epos currently supports several main speech generation algorithms. A linear prediction coding speech synthesizer written by Ellen Víchová allows voice inventories as small as 25 kilobytes, whereas much larger but high-quality voices are available with a time domain synthesizer. Some additional synthesizers not under GPL are used with Epos; you can at least use the virtual speech synthesis to synthesize your own texts using some of them, if you are connected to the Internet. (Your text is partly processed and sent to our server. Then the generated speech signal is sent back to you.) The last option is to use the semi-free MBROLA speech synthesizer, though the MBROLA synthesizer itself is not a part of Epos. We are constantly working on improving the synthesizers.
Epos offers several interesting facilities for prosody generation. The rule-based (yet surprisingly acceptable) prosody based on the prosody research of Zdena Palková offers maximum reliability; the highly flexible neural network framework developed by Jakub Adámek presents a powerful meld of neural network based and rule based tools; and these facilities can also serve as a signal source for a linear prediction based prosody model implemented by Petr Horák. A lot of integration and assessment work in this area has also been done by Daniel Sobe.
The name Epos is not an acronym. It is Greek, go have a look yourself.
This section still has to be (re)written by somebody. At present, try to look at for additional introductory information or ask the authors by email.
Meanwhile, tell us what kind of introductory information you would like to see here. The documentation is provided for you and we need your feedback.
Resistors (Ohm's Law), Capacitors, and Inductors
From Mech
The symbol for a resistor: Resistor symbol.gif
Real resistors:Real resistor photo.jpg
Try wikipedia for more on resistors and for the resistor color codes.
The relationship between the current through a conductor with resistance and the voltage across the same conductor is described by Ohm's law:
V=IR\,
where V is the voltage across the conductor, I is the current through the conductor, and R is the resistance of the conductor.
The power dissipated by the resistor is equal to the voltage multiplied by the current:
P=IV\,
If I is measured in amps and V in volts, then the power P is in watts.
By plugging in different forms of V=IR, we can rewrite P=IV as:
P=I^2R=\frac{V^2}{R}
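As a quick numeric sanity check (not from the original page; the component values are arbitrary examples), Ohm's law and the equivalent power formulas can be verified in a few lines of Python:

```python
# Ohm's law: V = I * R, and resistor power dissipation P = I * V.
V = 5.0    # volts across the resistor (example value)
R = 220.0  # resistance in ohms (example value)

I = V / R  # current through the resistor, in amps
P = I * V  # power dissipated, in watts

# The two rewritten forms of the power formula agree
# (up to floating-point rounding):
print(P, I**2 * R, V**2 / R)
```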
The symbol for a capacitor:
Capacitor symbol.gif or Capacitor polarized symbol.gif
The capacitor on the right is polarized. The potential on the straight side (with the plus sign) should always be higher than the potential on the curved side.
Real capacitors:Capacior photo.jpg
Notice that the capacitor on the far right is polarized; the negative terminal is marked on the can with white negative signs. The polarization is also indicated by the length of the leads: the short lead is negative, the long lead is positive.
A capacitor is a device that stores electric charges. The current through a capacitor can be changed instantly, but it takes time to change the voltage across a capacitor.
The unit of measurement for the capacitance of a capacitor is the farad, which is equal to 1 coulomb per volt.
The charge (q), voltage (v), and capacitance (C) of a capacitor are related as follows:
q(t)=Cv(t)\,
where q(t) and v(t) are the values for charge and voltage, expressed as a function of time.
Differentiating both sides with respect to time gives:
i(t)=C\frac{dv}{dt}
Rearranging and then integrating with respect to time gives:
v(t')=\frac{1}{C} \int_{t_0}^{t'} i(t)dt + v(t_0)
If we assume that the charge, voltage, and current of the capacitor are zero at t_0=-\infty, our equation reduces to:
v(t')=\frac{1}{C} \int_{-\infty}^{t'} i(t)dt
The energy stored in a capacitor (in joules) is given by the equation:
E=\frac{1}{2}Cv^2
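The integral relationship between capacitor current and voltage can be illustrated numerically. The sketch below (an illustration, not part of the original page; the values are arbitrary) integrates a constant charging current to recover the capacitor voltage:

```python
# Recover v(t') = (1/C) * integral of i(t) dt by forward-Euler integration,
# assuming v = 0 at t = 0 and a constant charging current.
C = 1e-6       # capacitance: 1 microfarad
i = 1e-3       # constant charging current: 1 milliamp
dt = 1e-6      # time step: 1 microsecond
steps = 1000   # simulate 1 millisecond in total

v = 0.0
for _ in range(steps):
    v += (i / C) * dt  # dv = (i/C) dt

# Analytically v = i*t/C = (1e-3 * 1e-3) / 1e-6 = 1.0 volt,
# and the stored energy is 0.5*C*v**2 = 0.5 microjoules.
print(v)
```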
The symbol for an inductor:
Inductor symbol.gif
Real inductors (and items with inductance):Inductors photo.jpg
An inductor stores energy in the form of a magnetic field, usually by means of a coil of wire. An inductor resists change in the current flowing through it. The voltage across an inductor can be changed instantly, but an inductor will resist a change in current.
Unless we are tuning an oscillator or something, we generally don't purposefully add inductors to mechatronics circuits. However, any device with coils, such as motors or transformers, add inductance to a circuit.
The voltage across an inductor is linearly related, by a factor L called the inductance, to the time rate of change of the current through the inductor. The unit for inductance is the henry, which is equal to a volt-second per ampere.
The relationship between the voltage and the current is as follows:
v(t)=L \frac {di}{dt}
If we multiply both sides by dt, we get:
v(t)dt=L\,di
Integrating both sides from t_0 to t' gives:
\int_{t_0}^{t'} v(t)dt = L(i(t')-i(t_0))
which is equal to:
i(t')=\frac{1}{L} \int_{t_0}^{t'} v(t)dt + i(t_0)
Assuming that the voltage, current, and energy of the inductor are all zero at t_0=-\infty reduces the equation to:
i(t')=\frac{1}{L} \int_{-\infty}^{t'} v(t)dt
The energy stored in the inductor is given by:
w(t)=\frac{1}{2} L i(t)^2
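The dual relationship for the inductor can be sketched the same way (the values below are assumed for illustration, not taken from the notes): integrate the applied voltage to get the current, then compute the stored energy as one half L times the current squared:

```python
# Numeric sketch: i(t') = (1/L) * integral of v(t) dt, with i(t0) = 0.
# Assumed example values: a constant 5 V applied across a 10 mH inductor.
L = 10e-3    # inductance in henries
V = 5.0      # constant applied voltage in volts
dt = 1e-6    # integration time step in seconds

i = 0.0
for _ in range(1000):   # integrate over 1 ms
    i += V * dt / L     # di = v(t) dt / L

energy = 0.5 * L * i**2  # stored energy w = (1/2) L i^2, in joules

print(round(i, 6))       # current after 1 ms: V*t/L = (5)(1e-3)/(10e-3) = 0.5 A
print(round(energy, 9))  # (1/2)(0.01)(0.5^2) = 0.00125 J
```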
Elements in Series and Parallel
Resistors connected in series and parallel:
Series parallel resistors.gif
Two Elements in Series and Parallel
         Resistor                          Capacitor                          Inductor
Series   R_{eq}=R_1+R_2\,                  C_{eq}=\frac{C_1 C_2}{C_1+C_2}     L_{eq}=L_1+L_2\,
Parallel R_{eq}=\frac{R_1 R_2}{R_1+R_2}    C_{eq}=C_1+C_2\,                   L_{eq}=\frac{L_1 L_2}{L_1+L_2}
More than 2 Elements in series or parallel
Here we provide the equations for calculating the equivalent resistance of three or more resistors in series or parallel; the same form can be applied to the corresponding equations for capacitors and inductors. Of course, you can always simplify a network of elements by combining two at a time using the equations above.
To find the combined resistance of N resistors connected in series, simply add the resistances:
R_{eq}=R_1+R_2+\cdots+R_N
If the resistors are connected in parallel, the equation is:
\frac{1}{R_{eq}}=\frac{1}{R_1}+\frac{1}{R_2}+\cdots+\frac{1}{R_N}
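Both combination rules are easy to capture as helper functions. This sketch (not from the original notes) handles any number of elements:

```python
def series(*resistances):
    """Equivalent resistance of resistors in series: R_eq = R1 + R2 + ... + RN."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance of resistors in parallel: 1/R_eq = 1/R1 + ... + 1/RN."""
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(100, 220, 330))                 # -> 650
print(parallel(100, 100))                    # -> 50.0
print(round(parallel(1000, 2000, 3000), 1))  # -> 545.5
```

Per the table above, the same two functions also work for inductors, while for capacitors the roles are swapped: series capacitors combine like parallel resistors, and vice versa.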
Proof for Resistors in Parallel equation
Here we provide the derivation for the parallel resistors equation. The corresponding equations for capacitors and inductors can be derived with a similar method.
Parallel resistors.gif
We can prove the equation for parallel resistors by using Kirchhoff's voltage and current laws:
V_s=V_{R1}=V_{R2}\, (KVL)
I_s=I_1+I_2\, (KCL)
Plugging the constitutive law for resistors (I=V/R) into the second equation yields:
I_s=\frac{V_s}{R_1}+\frac{V_s}{R_2}=V_s\left(\frac{1}{R_1}+\frac{1}{R_2}\right)
so the equivalent resistance seen by the source is:
R_{eq}=\frac{V_s}{I_s}=\frac{R_1 R_2}{R_1+R_2}
Hayt, William H. Jr., Jack E. Kemmerly, and Steven M. Durbin. Engineering Circuit Analysis. 6th ed. New York:McGraw-Hill, 2002.
Different types of cancer have been among the leading causes of death in the world. In fact, 5% of the world's population suffers from cancer, and the mortality rate is quite high as well.
Although there are plenty of ways to treat cancer, such as undergoing multiple chemotherapy sessions, the procedure itself is quite damaging to the other cells of your body.
The reason for such a procedure is to kill cancerous cells. However, it is not so forgiving, in that your other cells can also be damaged throughout the therapy. That is why the patient experiences hair loss at a dramatic rate, as well as nausea and vomiting, to name a few side effects.
However, there is a new line of treatment that is way better than the conventional radiation chemotherapy procedures.
Bone marrow stem cell transplants have become one of the leading medical procedures in the treatment of different cancers. The idea is to extract stem cells and transplant them into the patient suffering from the disease.
The cells can either be allogeneic, meaning extracted from a compatible donor (such as a family member), or autologous, meaning extracted from the patient themselves.
Why is the Bone Marrow So Important?
As you probably know from grade school biology, bone marrow is the soft tissue embedded in the center of your bones. It is the site where your main blood cells are produced: the red and white blood cells, as well as the platelets that aid in blood clotting.
The stem cells that you want are predominantly found in a person's bone marrow and in the bloodstream as well. In the marrow they first form immature blood cells; once these mature, they leave the marrow and are distributed to the different parts of the body.
Why Chemotherapy is Dangerous
The reason why chemotherapy has been the de facto procedure in the treatment of cancer is that it kills cancerous cells rapidly. But this is done at the expense of other parts of your body, including the bone marrow.
So, How Does Bone Marrow Stem Cell Transplant Come into Play?
Because of chemotherapy's harmful effects on the patient's body, autologous bone marrow extraction is often out of the equation.
So, the best possible track is to extract the stem cells from another person- may it be from the patient’s direct family or someone who is compatible with them.
Once a donor is found, the patient undergoes a few rounds of chemotherapy (yes, the procedure is still needed). After this, the body will be susceptible to foreign matter, which is why transplanting the new stem cells is going to be more effective in fighting the cancer cells by then.
Although I mentioned that this procedure is better than conventional chemotherapy methods, chemotherapy is also needed just to prime the body to accept the transplanted cells better.
That being said, there are also some cases where the new cells are deemed as “foreign” and might be attacked by the person’s own immune system. Therefore, it is important that the transplanted cells come from a person who is completely compatible with the patient.
For emergency cases 1-800-700-6200 |
• ~ Mechem Engineering ~
To be one of the respectable & reputable water engineering company in Malaysia
How SCBA Aerator Work
Submerged aeration : Results from trapped air in the aerator pipes.
Bio-Disc aeration : Aerobic bacteria absorb oxygen directly from trapped air underwater at atmospheric pressure.
Surface aeration : Waste water from the bottom of the tank is trapped in the aerator pipes and transported to the surface of the tank, where it absorbs additional oxygen.
Mixing action : Rotation of the bio-disc aerator causes air to be transported and mixed throughout the tank. Oxygen is diffused in all parts of the aeration tank. A shorter retention time is gained, in addition to no anaerobic activity in the aeration tank.
IT Doctor: Why is the Computer So Hard to Use?
October 27, 2011, By George Lang
I once had a professor in an undergraduate course who hated Bill Gates; she said, “If you are going to make a profit off something you build and sell, that something ought to work!” She was, of course, referring to the Microsoft Windows operating system and the Microsoft Office suite of applications.
Too many of us (even college professors at times) are intimidated by computer technology; so much so that we choose to avoid it at every turn. Such frustration, fear, and animosity raise the critical question, "Why can't they build a computer that is easy to use?" The secret solution is educating oneself in its use.
Unfortunately, user-friendliness does not come naturally to computer hardware and software manufacturers. There are some exceptions: special folks, like Steve Jobs, who have made a healthy living out of providing it. Jobs is a prime example of the importance of answering the critical question, because it is overwhelmingly evident that consumers flock to the products that do so, and avoid the ones that do not; i.e., a manufacturer's profitability is directly proportional to the degree of inherent user-friendliness its products possess.
Computers automatically perform an incalculable number of very complex tasks over extremely short time periods. Occasionally, something goes wrong and most folks don’t have a clue what to do next. In many cases, a simple reboot of the system will solve the problem; but data loss is always a possibility. Avoiding such circumstances requires users to educate themselves a little deeper into the technologies they use.
Staying in shape is an excellent analogy; a little hard work goes a long way toward solving the problem. But do we blame Nabisco and Hershey for creating the cookies and candy that put us in the mess in the first place? Similarly, many of us would rather sit back and bad-mouth the creative computer geniuses that seem to be the root cause of our misery.
Doctor’s prescription: In a complex society, if you want to get the most out of the technologies you use, you need to spend a little extra time educating yourself about them.
With the developing countries being far behind the developed ones, it becomes important that the latter do not overrun the needs and aspirations of the former
Globalisation is a multifaceted and all-encompassing phenomenon. It is a process that breaks down boundaries between countries and slowly transforms the globe into a unit. Globalisation has economic roots and political consequences, but it has also brought into focus the power of culture in this global environment – the power to bind and to divide in a time when the tensions arising out of integration and separation tug at every issue that is relevant to international relations.
Historically, the process of globalization can be traced to what are known as the ‘voyages of discovery’, when Vasco da Gama and Columbus went around the globe in search of wealth for their rulers. In other words, it began with the emergence of capitalism in Europe about five hundred years ago. Its later development took the forms of colonialism and imperialism. Characteristic of these two systems has been the economic exploitation and plunder of territories that were colonized and of people who were dominated, in order to ensure the accumulation of capital or wealth by the colonial and imperialist powers.
Globalization is carried out through the policies of liberalization, deregulation and privatization. Liberalization policy encourages, among other things, a free market and the free flow of capital. A free market can encourage efficiency and healthy competition, but it is mostly motivated by the desire to accumulate large profits, to the extent that the interests and welfare of people are often compromised. Also, in order to attract capital investments, various governments offer many incentives, which, when benefiting foreign capital, cause losses to the country.
Deregulation strengthens the possibility of free competition by reducing, setting aside or abolishing all restrictions or barriers to economic activities. One of the consequences of deregulation is that the position of the low-income groups is weakened. Regulations, which exempt payment or impose low fees on health or education for the lower income groups, can be cited as an example. If these regulations are removed under the diktats of the IMF, which opposes subsidies, it will surely increase the burden on the economically weaker sections.
It is true that privatization can be a better alternative when the management or administration under the public sector is fraught with inefficiency, waste and corruption. But in practice, it is also well known that privatization does not necessarily offer a better and cleaner management. Management of water, electricity and telephones after privatization has not become any better than before, but the charges and levies for these services have increased to the extent that they have become burdensome on the people.
In addition, there are also pressures from the developed countries – through the IMF, World Bank, World Trade Organization (WTO) and trade blocs – to remove trade barriers in developing countries, so that their doors can be opened wide to capital and commodities from the powerful countries. When this happens, the big capitalist countries get an opportunity to dominate the economy of the developing countries, which are striving for development. In this way, the process of neo-colonialism begins.
When national economies are increasingly inter-linked and capital is mobile, crisis in one country can rapidly spill over to other countries, creating a kind of global domino effect. There can be no denying that this is exactly what has happened in the past. One good example in this context is the South East Asian crisis. Developing countries have been ruined in the process. In such situations, powerful countries step in to ‘help’ these countries and take over the production process with the help of MNCs and international financial institutions.
The idea of the nation-state was perhaps the most significant concept and practical innovation of modern times. But, in the new global age, the very idea of the nation-state is fast becoming obsolete. Economic control by developed countries results in strong political influence over developing countries. These processes beget economic and political structures at the international level, which often play more important roles than national economic and political structures. Dominant economies use these international agencies to expand their interests and global influence. The USA often manipulates the United Nations Security Council to further its global policies. Among the agencies within the economic sphere used by big powers, especially the USA, to influence or control the developing countries of the Third World are the World Bank (WB) and the International Monetary Fund (IMF). When the WB or IMF lend to a developing country, they often impose strong conditions, camouflaged as structural adjustments. Economic or corporate adjustments involve steps to reduce costs and control labour in a way that erodes the competitive advantage of loanee countries. These often result in the reduction of wages or an increase in productivity without an increase in wages. Expenditures grouped under subsidies for the social sector, including education, health and housing, are reduced. Experiences in various countries like Mexico and the Philippines demonstrate how WB and IMF loans have resulted in their being dominated by developed countries via these institutions.
Besides, globalization also encourages the dissemination of what is characterized as global culture. ‘International’ cultural traits have become more popular among youths than ‘national’ ones. Developments in communication and transportation have heralded the emergence of a global mass culture. Increasing “homogenization” is a worldwide cultural fact and a direct consequence of globalization. It is feared that this will lead to the stripping away of the individual identity of cultures and a bland, uniform world.
In terms of dress, fashion, social mores and intellectual practices, most people are fast becoming indiscernible from Westerners. All this, thanks to the “opening up” of the country to the global economy. Western-style individualism is on the rise. The age-old cultural value of social conformism is fast losing relevance. Traditional societies, which prided themselves on their exclusivity, are fast losing their distinctiveness. Some refer to this as the “Mcdonaldisation” or “Coca-Colasation” of the world and view it as one of the supposedly pernicious effects of global capitalism. In the past, this has led to socio-cultural conflicts, and it may happen in the future too.
So, why do the leaders of developing countries accept globalization? Firstly, the waves of globalization are too strong and the influence of the superpowers too great, so much so that the process has become inevitable. Secondly, in the Asian region, many countries and leaders accept capitalism as the best way to achieve progress and development. So they readily accept the influence of globalization that originates from the capitalist countries, although they realize its weaknesses and dangers. Thirdly, some political leaders and leading economic figures believe that globalization, the policies related to it and the capitalist system can provide them and their groups with huge profits. As has been amply shown, many among them have become rich and powerful through greed, corruption and cronyism. That these have also been responsible for the downfall of many of them is altogether a different story.
Globalisation has become the overarching fact that all countries and cultures of the world must contend with. The real challenge of globalization is that of reaping the undeniable opportunities it offers for increasing the general level of prosperity throughout the world through liberalized trade regimes and international cooperation. With the developing countries being far behind the developed ones, it becomes important that the latter do not overrun the needs and aspirations of the former. It is only in this way that the world can hope for lasting peace and prosperity.
Sustainable and Ethical Fashion: What does it mean?
This past weekend I was lucky enough to be able to take part in The Collective speaker series organized by LAB MPLS which occurs annually here in Minneapolis. I spoke about the benefits and drawbacks of current sustainable and ethical fashion options and why we do not have any 100% sustainable choices in the garment industry. I wanted to share all that information and be able to elaborate on a few of the topics as well as offer suggestions on how we can move forward with the knowledge and resources available.
Photo taken by Anna Lee
Photo taken by Anna Lee
What is it?
What is sustainable fashion vs. ethical fashion? How are they different?
Sustainable fashion refers to the fabric, processes and quality (among so many other things) that contribute to less of an environmental impact. Ethical fashion, also known as social fashion, is most often about the producers and their fair and safe working environments, living wages and more that contribute to enhancing quality of life. Ethical fashion can also refer to a company’s overall focus and mission in relation to how they give back to the world and community they operate in. Our dedication to discovering and supporting the businesses that strive for either of these objectives is what we call “Conscious Consumerism”.
Within Sustainable and Ethical fashion, there are many ways that clothing can be labeled Sustainable or Ethical. Examples include: organic, GOTS certified, reused, recycled (either fiber or garment form), deadstock, natural fabrics, long-lasting design, quality, transparent production, etc. Then there are things that don’t necessarily fall into either the sustainable or ethical category but we still associate with positive purchasing and conscious consumerism. These are things like made in the USA, shop small, shop local and independent design.
But what do all of these subcategories mean? What are natural fabrics and what are their benefits and drawbacks? Below are several of the most common sustainable and ethical garment categories along with a brief explanation of their contributions and shortcomings to our closed-loop initiatives.
What is it?: Natural fabrics are textiles whose fibers are made from organic (living) material. Examples of this are cottons, linens, silks, wools, bamboo, and more.
Pros: These are naturally replenishing materials that break down easily once discarded. Depending on the fiber they can be durable, soft, breathable, moisture wicking and overall very comfortable.
Cons: Many natural fabrics require a massive amount of water to sustain crops. Cotton, for example, requires 2,400 liters in order to produce a single t-shirt. Additionally, there is often the use of pesticides and chemicals in growing and production.
Other Facts: Bamboo specifically is one of the most economical and sustainable crops; it is one of the fastest-growing plants on earth, requires very small amounts of water and releases 35% more oxygen into the air than hardwoods. But bamboo actually requires an unbelievable amount of energy and chemicals to process into wearable fibers. The process of turning the plant into fabric is actually the exact same process as creating rayon, which uses petroleum, a non-renewable, toxic substance.
What is it?: These are natural fabrics made without the use of chemicals, pesticides or harmful dyes.
Pros: Not using toxic and harmful chemicals in production can not only be good for the environment, but good for all workers involved in the manufacturing process.
Cons: Requires far more water and land to produce the same yields as conventional fibers. Arguably, organic cotton uses more resources and can be overall more harmful to the environment than conventional cotton.
What is it? This is any fabric made from breaking down existing product or materials and may be one of the most promising paths forward for sustainable textiles. The most popular example of this is fabric made from used water bottles. But it doesn’t necessarily need to be synthetic. There are also recycled natural fabrics such as cotton and wool.
Pros: Fabrics using existing materials are a solution for items that would otherwise be waste or pollution. If this process were perfect, it could theoretically create a closed loop, turning waste back into usable products.
Cons: The technology for recycled fabrics is still quite poor. Fibers such as cotton and wool are broken and damaged in the aggressive recycling process, which creates a lower quality finished product, both in appearance and durability. Think about a typical cotton fiber: it is about 2.5” in length and is twisted together with other fibers to make a yarn, which is then woven or knitted to create a fabric. When this fiber is recycled, the fabric is broken down, breaking most of those cotton fibers, which are then twisted back together to make a new yarn. But the shorter fibers don’t hold together as easily, and are often a lot rougher in hand feel. Many companies are working toward incorporating recycled fabrics into their designs. However, they are fairly limited because these fabrics do not pass durability testing. As an example, recycled cotton fibers must be mixed with 80% non-recycled fibers in order to pass some industry-standard quality tests.
Other Info: Even with these downsides, this is one of the more promising paths to sustainable fabric as the technology improves. Some companies are even pursuing ownership of the fibers in garments after the product has been purchased by the consumer. The reason being, once the garment has reached the end of its lifecycle, it must be returned to the same company, taken apart and made into something entirely new.
What is it?: Any garment that has been previously purchased or owned and then discarded, donated or sold.
Pros: This is a great option for many consumers, maybe even the most sustainable and ethical when looking at a single garment. We are purchasing garments that have already been made. The carbon footprint of any garment purchased second hand is small and limited only to that in transportation, cleaning and packaging.
Cons: In order to maintain a supply chain, quality and durable clothing will still need to be produced to sustain a resale market. Every garment has a lifespan, and none of the garments that currently exist are going to last forever, especially if they are getting used. Also, donation practices are still very imperfect. Somewhat thanks to fast fashion and low-quality garments, only a small percentage of donated garments make it to new homes. Many donated textile products are destroyed or shipped back overseas, wasting fuel in transportation and ultimately polluting the environments of some of the most needy. Our resale market is often skimming the good product off the top, while leaving behind a lot of waste that must be dealt with.
What is it?: This is a relatively new option derived from the shared market place similar to Airbnb or Uber. It’s the idea that clothing can simply be rented instead of owned.
Pros: You don’t actually need to own many of your clothing items, or at least the type of garments you wouldn’t wear very often (ex. formal wear). This can save money and stress. Garments that may get used one time in your own closet, can be rented at a fraction of the price of purchasing and can be sent back to a company in order to be cleaned and worn again by another individual.
Cons: This too is quite imperfect. Most of the garment rental companies own the product that is available for rent. Meaning, we haven’t necessarily solved the problem of unworn garments. These companies’ closets end up being an inflated version of our own, where some items are in continual rotation, while others sit and wait and often never get any use. While it still saves your wallet, it hasn’t entirely solved the problem of unworn garments; it’s just hidden it.
What is it?: Small scale businesses which can be somewhat ambiguous, but often means privately and independently owned.
Pros: Your individual purchases make a more significant impact for a single business. You may have already heard or seen this saying: every time you purchase a garment from a small business, an actual person does a happy dance. Also, it may be much easier to inquire about business practices and sources by talking to owners directly.
Cons: Shop small doesn’t always mean sustainable or ethical. It is still necessary to determine whether best practices are being used.
What is it? Any item that is being sold and/or has been manufactured in your community and/or the United States.
Pros: More of the money you spend stays in your own community. Sales taxes contribute directly to your own streets, schools, parks and government facilities. Employees are likely from your own community as well, meaning you are helping support your neighbors. Also, the cost and footprint of transportation from business to your home is drastically reduced, lowering the overall carbon footprint. And, when considering USA-made, it is more likely garments are being produced under the strict USA codes for health and safety, and legal minimum wages.
Cons: Similar to shop small, this doesn’t automatically mean that the businesses use ethical or sustainable practices. And, it’s hard for us to remember in a city like Minneapolis but many sustainable or ethical options are not often local for many people.
What is it?: Companies that have a specific, stated and practiced social effort.
Pros: You know that the company is supporting a cause and you know what that cause is, which helps you to have a better understanding of where your money is going.
Cons: Not necessarily sustainably focused. Many of these companies are still international, contributing to waste in packaging and transportation.
What Now?
Having the knowledge of what all of these things mean, how do we even begin to make our decisions?
As a business owner, sustainability advocate and fellow consumer, these are my personal suggestions and beliefs: Let’s do the best we can.
Let’s consider the entire system, not just the garment itself. Where did it come from? How long will it be used? How we will ultimately dispose of it? Where will it go? What will it become? Does it still have value? Ultimate sustainability will be the result of a closed loop system. We can strive to extend the lifespan of the things we own and close that gap to the best of our ability.
Let’s consider that significant and meaningful improvement and change in the fashion industry is going to be an evolution. While many of us can stand to own less, our answers are likely not as easy or simple as not buying things. We live in a society that depends on our economy and we are all connected. Instead of just throwing on the brakes, let’s consider advancement and how we can all contribute to steering that advancement in the direction of sustainability and better quality of life for all.
Let’s support the businesses doing the most good. If a company is doing the absolute best they can now, with the resources and technology available, it is likely they will keep doing the best they can as their resources expand and technology improves.
Let’s advocate for change outside of the clothing industry. Renewable energy and manufacturing regulations impact our industry and are necessary for a sustainable future. Learn, vote, advocate and educate.
If you are interested in learning more or would like to get more involved, here are helpful resources:
Fashion Revolution
The True Cost
Green America
Responsibility in Fashion
Know the Chain |
Tynamite's story writing lessons > #1 Children's Short Stories | Writing Courses | WritersCafe.org | The Online Writing Community
#1 Children's Stories
A Lesson by tynamite
Learn the structure and tone of children's stories, the most basic of all stories.
It has come to my attention, that a minute few writers on this website only know how to write children's stories.
And by that, I mean stories written in a style for children.
I was reading a first person story on this website in the viewpoint of a child, about how she hated how her mother was a victim to domestic abuse because of her father. I told my friend about it and we laughed about it mocking it.
Examples of children's books with traditional children's style narration are Ms Wiz and Bill's New Frock.
I recommend that you buy Ms Wiz In Jail, Bill's New Frock, Molly Moon Book number 1.
Here is an example extracts from audiobooks for those who don't get it
<-- Bill's New Frock
<-- Molly Moon 1
<-- Ms Wiz Adventures
In children's stories, there is always a bad person (the antagonist) (he's antagonising me! stop it!), and the good person troubled by the bad person who wins in the end (the protagonist).
A common trait of children's books is that they tell the reader how they should feel for the protagonist. They do not let the reader come to their own conclusions about whether they should be empathetic or not.
They also have a common structure.
Introduction (to characters, scenery) --> Problem (is introduced) --> Build up (of situation)--> Conflict (happens) --> Resolution (of the problem) --> Ending (usually happy)
If you read the kind of stories that primary school (elementary) children come out with, it will most likely be in that structure because that's all they know.
For an example of a common children's story, read my story called The Black Portal and the tale of The Trampy Cat and the Food.
The Trampy Cat and the Food follows the format well and 100%, and it's a moral story slash fairy tale.
I have also studied how children's stories start.
99% of them start out with something bad happening to the protagonist in chapter 1.
This is the first lesson, because children's stories are
1> Accessible. Anyone can do it, given a prompt.
2> Widespread
3> The Foundation. Lots of writers try to write above their weight. I started out on children's stories, they didn't. They fail, I don't. We had to write one in primary and secondary school for our SAT tests (when I was 11) and GCSE English qualifications (when I was 16). (I'm from the UK.) I must have written loads of them at school.
I'm being a writing mentor for Aliciah on this site, and I gave her 5 prompts for childrens stories and I asked her to write one, so I can find out what her writing style is. If you're going to do so, make sure that it's 3-4 sides of A4 handwritten, and that there is no more than 3 characters.
Here's a message I sent Aliciah, who I'm the writing mentor of thanks to Chelsea. She chose prompt number 1, and she's still editing her story after 2 months! -_-
I've thought of 5 prompts for you for the children's short story, to give you some ideas.
Remember it has to be written on lined A4 paper, with 3 characters maximum, and to aim for 2 or 3 pages, no more than 4 pages. Don't spend more than 45 minutes on it, as it's only practice.
Aliens come down to planet earth, and they can only say English words that begin with the same letter like a or t. In future they plan on learning words of other letters, but now they can't.
Write a story about a child who shoplifts, feels guilty and then puts what she stole back. The security man thanks the child.
Here's what I wrote at 14. Write a story about a boy who catches his younger sister sneaking their Dad's pie, and who plans on getting her in trouble for it.
Write a story about a spoilt kid who always wins games by cheating, and they get what they deserve, when someone else starts a new game of their own.
Write a story about a fox in the woods that manages to escape being shot when hunters show up on their horses. The dilemma for this story will be whether the fox decides to save its child: the fox could gamble its own life by going back for the cub, or take the risk that the hunter coming up from behind won't see the fox child.
Added on August 26, 2018
Last Updated on August 26, 2018
Birmingham, England, United Kingdom
(a) The pressure inside an alveolus with a $2.00 \times 10^{-4} \textrm{ m}$ radius is $1.40 \times 10^3 \textrm{ Pa}$, due to its fluid-lined walls. Assuming the alveolus acts like a spherical bubble, what is the surface tension of the fluid? (b) Identify the likely fluid. (You may need to extrapolate between values in Table 11.3.)
Question by OpenStax is licensed under CC BY 4.0.
Final Answer
1. $0.0700 \textrm{ N/m}$
2. This fluid is probably water at $37^\circ \textrm{C}$
Solution Video
OpenStax College Physics Solution, Chapter 11, Problem 55 (Problems & Exercises) (1:14)
Video Transcript
This is College Physics Answers with Shaun Dychko. We have this formula for the pressure inside a spherical bubble: it's four times the surface tension divided by the radius of the bubble. We can solve this for gamma by multiplying both sides by r over four. Then we'll switch the sides around and we get gamma is pressure times radius over four. So that's 1.4 times ten to the three Pascals, times two times ten to the minus four meters, the radius of the alveolus, divided by four, and you get 0.0700 Newtons per meter as the surface tension. We're told that the surface tension of water at 20 degrees Celsius is 0.0728 Newtons per meter and at 100 degrees Celsius it's 0.0589. Our answer is pretty close to the 20-degree value but slightly less, tending towards the 100-degree value, so it corresponds to some temperature between 20 and 100 degrees. We know that the temperature of the body is 37 degrees, so this fluid is probably water at 37 degrees Celsius.
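For readers who want to check the arithmetic, here is a minimal Python sketch of the same calculation (the variable names are my own, not from the solution):

```python
# Gauge pressure inside a spherical bubble: P = 4 * gamma / r,
# so the surface tension is gamma = P * r / 4.
P = 1.40e3   # pressure inside the alveolus, in Pa
r = 2.00e-4  # radius of the alveolus, in m

gamma = P * r / 4
print(gamma)  # approximately 0.0700 N/m, just under water's 0.0728 N/m at 20 degrees C
```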
You have typed in a few commands now, and some of those commands took additional information such as a file name. The additional pieces of information you type after a command name are called arguments, or synonymously in most contexts, parameters.
In Command Prompt, arguments are separated by spaces. When you type cd .., for example, Command Prompt splits the text you typed by spaces. The first piece of text is always taken as the command name. Every piece of text after that is taken as an argument. The arguments are passed as a list to the internal programming of the command. If you ever write software or scripts, you will eventually learn how to get this list of arguments too.
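For instance, here is a Python sketch of the shape in which that list reaches a program (Python is used only as an illustration; Command Prompt does this splitting for you, and real Python programs read the list from sys.argv rather than building it by hand):

```python
# Sketch of how a split command line is seen by the program it invokes:
# the first piece of text is the command name, the rest are arguments.
def split_command_line(pieces):
    command = pieces[0]       # e.g. "cd"
    arguments = pieces[1:]    # e.g. ["/D", "X:\\Travel Files"]
    return command, arguments

command, arguments = split_command_line(["cd", "/D", "X:\\Travel Files"])
print(command)    # the command name: cd
print(arguments)  # the two arguments: the /D switch and the quoted path
```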
Try typing help cd again. You will see something like:
Displays the name of or changes the current directory.
CHDIR [/D] [drive:][path]
CHDIR [..]
CD [/D] [drive:][path]
CD [..]
The above is documentation that Command Prompt users learn how to read. The CHDIR lines are telling you that CHDIR is another name for CD. More importantly, notice the argument [..]. The brackets usually mean the parameter is optional. But what is ..? If you read down, you’ll see:
..   Specifies that you want to change to the parent directory.
This help documentation is telling you that you can optionally type .. as an argument and it will work.
Now take note of the optional /D argument. What does that do? Well read down and you’ll see:
Use the /D switch to change current drive in addition to changing current directory for a drive.
Now we know! You can specify /D as an argument to switch from C:\ to D:\ or E:\ or wherever you may need to. This is useful if you want to change your working directory to a USB flash drive. Example: cd /D "X:\Travel Files".
To force Command Prompt to ignore spaces and treat text as a single argument, put quotes around the argument. That is why you need quotes around file and folder names containing spaces.
Arguments such as /D above are often called switches. A switch can be thought of as switching an option on or off. File name arguments are not switches. You'll often see switches start with a forward slash /, but sometimes a single hyphen - or a double hyphen -- too.
Switches are typically case insensitive. For simple commands, a good programmer will have remembered to check for both capitalized and lowercase arguments while looking through the argument list. Some very complicated programs have so many switches that capital and lowercase letters need to mean different things. Some programs are complicated enough to abandon the one-letter convention and use full words for switches.
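A case-insensitive switch check of the kind described above might look like this in Python (a sketch only; the /D switch name is borrowed from the cd example earlier):

```python
# Check whether a switch appears in the argument list, ignoring case,
# so that /D and /d are treated as the same switch.
def has_switch(arguments, switch):
    return any(arg.lower() == switch.lower() for arg in arguments)

print(has_switch(["/d", "X:\\Travel Files"], "/D"))  # True
print(has_switch(["X:\\Travel Files"], "/D"))        # False
```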
Don’t be surprised to see variation. If you glance at the documentation you’ll be okay.
Most commands will not have help text with HELP. Programmers typically include documentation inside their command, which you can access with the right arguments.
If you are stuck and want to see documentation, try specifying /? as an argument. For example, enter cd /? and you will see the same text help cd gave you. Some programmers use /h or -h to give help text instead.
Feel free to try these arguments on new commands. If you're worried about what might happen when they don't work, check online first. Testing unknown arguments in cd or dir might be harmless, but you would be smart to think twice before seeing how shutdown handles /?. For the record, shutdown /? does give you help documentation instead of shutting down your computer.
Wikipedia logo This page uses content from Wikipedia. The original article was at IRT Sixth Avenue Line. The list of authors can be seen in the page history. As with Metro Wiki, the text of Wikipedia is available under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC-BY-SA).
The IRT Sixth Avenue Line, often called the Sixth Avenue Elevated or Sixth Avenue El, was the second elevated railway in Manhattan in New York City, following the Ninth Avenue Elevated. In addition to its transportation role, it also captured the imagination of artists and poets.
The line ran south of Central Park, mainly along Sixth Avenue. Beyond the park, trains continued north on the Ninth Avenue Line.
The elevated line was constructed during the 1870s by the Gilbert Elevated Railway, subsequently reorganized as the Metropolitan Elevated Railway. By June 1878, it served Trinity Place, Church Street, and West Broadway, and then ran along Sixth Avenue from Rector Street in Lower Manhattan to 59th Street. The following year, ownership passed to the Manhattan Railway Company, which also controlled the other elevated railways in Manhattan. In 1881, the line was connected to the largely rebuilt Ninth Avenue Elevated, in the south at Battery Place and in the north on a spur between 53rd Street and 59th Street.
Due to its central location in Manhattan and the inversion of the usual relationship between street noise and height, the Sixth Avenue El attracted artists; in addition to a well-known work by John French Sloan, it was also painted by Francis Criss and others. [1]
As with all elevated railways, the Sixth Avenue El made life for those nearby difficult. It was noisy, it made buildings shake, and it bombarded pedestrians underneath with dropping ash, oil, and cinders. Eventually, a coalition of commercial establishments and building owners along Sixth Avenue campaigned to have the El removed, on the grounds that it was depressing business and property values. The Sixth Avenue El was closed on December 4, 1938 and razed during 1939, paving the way for the replacement underground IND Sixth Avenue Line, which opened between 1936 and 1940.
When the El was taken down, much of the scrap metal was sold to the Japanese. It was commonly believed during World War II that some of this metal was being used in armaments against Americans, as remarked upon in E. E. Cummings' well-known 1944 poem "plato told":
plato told
him:he couldn't
believe it [...]
(he didn't believe it, no
sir) it took
a nipponized bit of
the old sixth
el:in the top of his head: to tell
The footings for the El were rediscovered in the early 1990s during a Sixth Avenue renovation project. [2]
• Jackson, Kenneth T. (ed.), The Encyclopedia of New York City, "Elevated Railways", Yale University Press, 1995. ISBN 0-300-05536-6.
Category Archives: Storytelling, Art, and Craft
IOU a story that works: Narrative Debt
Which is a tad prescriptive.
However, a gun can hang on the wall for an entire novel and still not go off, because the protagonist can’t reach it. He/she might be trying to reach it, they might even get their fingers on it, but the antagonist might stop them pulling the trigger, or they might have unloaded it at some point, or the protagonist might die just as they reach the gun.
The writer’s job is to not forget that the gun is there, because the reader won’t. ‘Hang about, they’re in the library. There’s a .357 Magnum in that drawer, in the desk that he/she is standing behind. They put it there, you muppet, just shoot the bugger.’ It is best not to make readers think of your hero as an idiot, unless you intend them to think of the hero as an idiot — which is a difficult trick to pull off.
Narrative debt means, to me, ‘Don’t cheat the reader’. [Caveats apply]
Don’t neglect to tell them something of importance that the POV character would know. (Jack Reacher novels: ‘Echo Burning’: Reacher takes a phone call, gets a one-word answer to a question, but the reader doesn’t know what the answer is until much later.)
Don’t drop in something out of nowhere to fix a plot problem and just leave it there without going back and working it into the plot earlier. (His Dark Materials ‘Amber Spyglass’ too many to mention)
Don’t let a plot, sub-plot, or character just fizzle out and disappear without some kind of closure. (Jason and the Argonauts: Heracles just wanders off halfway through the story and never returns.)
A writer can of course get away with all these things from time to time, (Child and Pullman are very good writers, and Jason and the Argonauts is a couple of thousand years old as a story) but they have to know what they are doing (not entirely sure what the hell Pullman was doing if I am completely honest, very irritating book that). They can’t just do it because it is easier than building a story that works. A writer owes a reader a story that works, that is the contract between the two: ‘Give me your time and I will give you a story worth reading.’
Narrative debt sometimes makes the writing process a lot harder. Tough. That’s the job you sign up to when you decide to become a writer. If you want to just make stuff up that makes no sense, then become a politician (and even they need Spin Doctors to make their nonsense sound reasonable).
A story that works is satisfying. It doesn’t have to tie off every single plot thread in a neat little bow at the end, but it does have to keep its promises to the reader.
I build stories via characters, so most of my narrative debts accrue from interactions between characters and from what I do to them in the process of telling the story.
If I have a character that hates another character and at some point they have to make a decision as to whether or not they save that hated person from some jeopardy, then they have to think about it. They won’t suddenly overturn their entire dynamic with that character just because the plot requires it of them. To be fair, in a first draft they might, but then I will go back and fix it in the second. It is what second drafts are for, fixing plot holes like that.
And usually it is already there in the character, because I know my characters. I treat them as real people. What do you mean you don’t? Oh right, you worked on ‘Lost’ and ‘Heroes’.
In Kinless, I have a character called Kihan. He turns up in the story and makes a decision to do something for this land that he does not know and has no connection with, which will probably result in his death. Several beta readers pointed out that he had no reason to do this in the first draft. However the fix was already there, he had a perfectly valid reason for doing this, it was in the narrative debt relating to the character. He did it because of who he was, what he had been through, and what he wanted to be. And all this leads to what he becomes.
But narrative debt is also a structural thing.
Lovers have to love. Enemies have to fight. Stories have to make sense. A story is a construct. The writer is choosing what to put in and what to leave out. The writer is making choices all the time. The writer’s choices are the story.
Let’s go back to ‘Lord of the Rings’.
Gollum, as a character, had to get his hands on the ring. Aragorn had to become the king. Saruman had to get his come-uppance. Frodo had to be utterly destroyed by his quest to destroy the ring. Those things had to happen because that is the nature of storytelling.
Gollum gets his hands on the ring and in the process destroys it (still the best damn scene in the book). Aragorn had to face up to his fears and surmount them. Saruman betrayed everything he stood for and lost everything because of this betrayal. Frodo had to suffer to get the ring to Mount Doom and such suffering remains with a person. And all the other characters had their own journeys to complete too.
That is narrative debt.
If the ring was destroyed without Gollum getting his hands on it, then he would just be an ineffectual monster who was easily defeated. If Aragorn did not grow a pair and step up, then he would be an ineffectual hero. If Saruman did all that he did and got off scot-free, then what is the cost of evil? And if Frodo did all that he did and returned to his previous life without a care, then what is the cost of heroism?
It’s a debt.
It’s a contract with the reader.
The writer makes the deal, ‘Read my story and I won’t let you down, I won’t treat you — the reader — as an idiot, I’ll pay off on the debts my story accrues.’
Otherwise, the reader might as well read Hansard.
So who the hell’s head are we in now?
Transitions between scenes, between acts, between storylines, are an important part of storytelling. They ease the reader from one place to another without making their eyes stop as they pause to try to puzzle out where the hell they are.
But in multiple POV storytelling, transitions are vital.
Who? What? Where? When?
Those four questions have to be answered at every transition from one POV to another.
Because transitions orientate the reader. They tell them: WHOSE head they are in, WHAT is going on, WHERE it is happening, and WHEN it is happening.
The biggest problem with multiple POV storytelling, and the reason why such stories tend to be longer than those using a single POV, is the transitions. You always have to reset the scene if you are using scene-breaks between POVs.
You can’t just switch and hope the reader keeps up. Clarity, always clarity, who/what/where/when needs to be absolutely clear at all times to the reader.
You don’t really want to confuse your reader with the simple stuff, do you?
If you are using POV shifting or Omniscient-head-hopping then the Where, When, and What are taken care of by the initial Transition into the scene, but you still have to be absolutely clear about Whose eyes the reader is behind.
Think of it like speech attributions. When writing dialogue you have to attribute the lines to a character. You can sometimes do away with the attribution if it is obvious who is speaking, but it does have to be obvious.
It is the same with POV transitions, you can avoid the attribution of a POV but only if it is obvious and it is rarely that obvious. Err on the side of caution and let your editor tell you if it is unneeded.
Don’t go commando if you’ve forgotten to put your jeans on too.
With the more usual (these days) technique of using scene-breaks between POVs, you have to reset the entire scene, every time.
Even when using fast-cutting techniques, in a battle-sequence for instance, the reader still has to be told where on the battlefield the character stands, what is happening immediately around the character, and when all this is occurring.
There are dodges and tricks you can use to avoid too much set-up. It is a fast cut after all. You can avoid the When portion by having a scene every so often that orientates the reader in time, which avoids having to say, ‘three minutes later’ and other clunky phrases.
But clarity is everything. Clarity is the only real rule in writing. Be clear, don't leave the reader guessing; unless of course you intend to leave the reader guessing, but don't be ambiguous by mistake. Readers really don't like that, and your book might well make a nice dent in their wall if you irritate them too much. (Ah, eBooks: the joy of throwing a crap book across the room will soon be gone from human experience. Shame that; I think it's good for the wallpaper, and it's certainly good for the soul.)
Transitions should ideally take place in the first paragraph of a new scene, or as close as possible to it. And the first thing the reader needs to know is WHO. That is a vital bit of information because it orientates readers to the plot. If they know whose head they are in, then they know the back-story, they know that character’s (apparent) role in the story, and therefore they don’t have to think about this stuff.
They can think about all the other good stuff you are putting into the story scene instead.
After Whose head, the reader needs to know Where and When. If the scene is taking place in the same location as the previous scene, then you just have to make sure the reader knows it is the same location. If it is happening immediately after the previous scene then, again, you just have to make sure the reader knows this.
But if the location has changed or the time has changed then you have to orientate the reader. You have to tell them Where and When. This is like the establishing shot in a film. Is it a room, a moor, a bridge? Is it dawn or night, or day? This stuff is why multiple POV stories are longer, because this is description and no matter how efficient you are at description it takes up words.
Then there is What. This is not about what happens during the scene, because that is the purpose of the scene. It is about what is happening when the scene opens. Is it the middle of a fight, a love-scene, somebody having a cup of tea? The scene will play out from there, but there is always an initial What that helps to set the scene.
The easiest way to do this, and this is about as clunky as it gets because it’s an example, so it is deliberately obvious, is:
David thought about Mary while he walked across Blackfriars Bridge in the moonlight.
David thought = David's POV, or Who
David is thinking about Mary = What
Blackfriars Bridge = Where
Moonlight = When
Then you can slot all the descriptions and so forth into the scene as usual. The reason that multiple POV takes up space is because you have to describe it all as the POV character sees it. You can't rely on what another POV character has seen, because the reader doesn't know whether this POV character has seen the same thing.
Another reason multiple POVs take up room is because you can’t keep using lines like the one above. Some writers do, some successful writers do, but I’d hardly call it craft. That’s like nailing four roughly equal lengths of wood to another wider piece of wood and calling it a table. It’ll do the job, but it is hardly crafted with loving care.
The glimmer of the moonlight shone into David’s eyes, reflected from the surface of the Thames, but he hardly noticed. Mary. What should he do about Mary? He turned right onto Blackfriars, the steel cold beneath his hands, when he stopped and stared out over the glimmering river. Did he have the right? Should he do this? Mary. What was he to do about Mary?
Neither is that to be honest, I just knocked it up for this blog. But after an editor has got hold of it, after I have revised it a few times, then it will be crafted with loving care and then — if I’m lucky — it will sing.
Transitions are incredibly important, so make them sing from the page, make the reader barely notice that they are reading a scene-shift.
Because another thing about scene-breaks is that they are where a reader will put the book down and go off to do something else before returning to the book. Scene-breaks and chapter-breaks are like opening lines, so make them sing, but make them into transitions too.
(If you are writing in the Literary Genre, where stories are supposed to be ‘difficult’ and hard to read, just ignore all this. This ain’t art; this is craft. Leaving the reader constantly guessing about all this stuff might well win you a Booker, so go for it — just don’t think writing crafted novels is easy in comparison, it’s the exact opposite.)
Feel The Fear
First posted to ‘of Altered States’: 6th 2012
My first book has just been published by Firedance books.
It’s a novella, or short novel (the differentiation between the two is grey as shaded erotica), of forty thousand words.
But it’s my first published long form story. In much the same way that ‘A Posturing Fool‘, published in the first volume of of Altered States, was my first published short form story.
So the dreaming is over, I am now a published author. It’s a strange thing to realise a dream, even if it is only part of a larger dream, which is to be a ‘successful’ published author—though how I will measure success is unknown to me. (Pantser in life as well as in writing. I make this stuff up as I go along).
'A Posturing Fool' was a story I feared, a story I had to write despite the fear. Semi-autobiographical, liable to upset people I care about, and difficult to edit. It is a scary thing to write something like that, but all writers must write those stories; it's the only way we grow as writers.
The ‘Tales of the Shonri‘, which can be found here and here, started in a similar way, though not quite so ferociously fearful. Simply put: I was challenged to write erotica. The story A Warrior’s Goodbye was the result. Not sure if it is particularly good erotica, but it is certainly a good story and it led to the ‘Tales of the Shonri‘, which led to ‘The City of Lights‘ which is my first published eBook.
I was scared of writing erotica. Battle scenes are easy, but sex scenes (which are also action scenes) are scary as all hell. Mainly because you have a punch-up in public but you generally get your kit off in private. So, for a writer, the only knowledge you have is your own. You don’t get to watch real people having a delightful time in bed in the same way that you get to see people having a rambunctious time outside the pub every Saturday night.
But still. I wrote it.
And lost my fear of sex scenes. To be honest, I am still more inclined to show curtains blowing in the wind than go all slot A into tab B, because showing sex is pretty unimportant to me, showing the results of sex, how it changes the dynamics of a relationship, how it changes the path of a story, that is much more important. Fight scenes change character dynamics in a much more direct way, particularly if one of the characters ends up dead, and even then you don’t go all parry, thrust, move a foot, on the edge, on the flat, move the other foot, either. Action works best if you don’t describe it too completely.
However, if I need to write a sex scene, and with Medina as a character I pretty much do, then I know I can write a sex scene.
Then again, ‘A Posturing Fool‘ has a sex scene in it, but that was different because when I wrote that story I never intended to publish it. It was catharsis not storytelling. Whereas ‘A Warrior’s Goodbye‘ was written to be read by others, it was meant to be published.
A step-change that.
So I feared to write ‘A Posturing Fool‘ and did it anyway, which led me to a place where I could write another story I feared to write ‘A Warrior’s Goodbye‘, which led me—because that world had to be explored—to write ‘The Tales of the Shonri‘ on Writerlot, which led me to create ‘Tales of the Shonri: The City of Lights‘ as a long-form story constructed from several of the Writerlot stories, which led me to sub it to Firedance Books (albeit as part of the collective already, so foot already through the door), which led to my first published eBook.
Fear see, it just gets in the way, but it’s a hell of a spur to good work. Feel it, push past it, and see what opens up before you.
Writing into the Void
I used to be arrogant about my writing. I knew I could write a good line. I knew I could write believable dialogue. I knew I could create a solid plot out of thin air.
But I was writing into the void. I'm not sure who called it this, who used the word void to describe it. I read it somewhere but I am not sure who wrote it. Sorry about that, but it was the word that resonated, not the name attached to it. I used to call it writing into the vacuum, but void is better, void is more precise: it describes the process exactly as it happens.
Writing with nobody to read your work, nobody to see the flaws, nobody to show you the little things you have to know. Friends? Family? They are good for “Can I write?” Not because of what they say, after all they are unlikely to tell you you’re crap, but for the look in their eye as they say it. You can see the surprise, the respect; they know that the story works and they show that to you in their reaction. What they can’t do, however, is read your work as a writer would.
So you teach yourself, on your own, bit by bit, sphere by sphere, move by move. Sitting there writing away, learning how things work on the page the hard way, educating yourself to write.
I used to call close-third-multiple 'viewpoint writing', because I had never heard of close third and needed something to describe what I was trying to do. Struggling with keeping the viewpoint firmly fixed in a single head in a single scene. Why? Because it felt right. It felt like that is the way it should be. Writing, reading, revising, rereading, revising, rereading... every time spotting another instance where I had let a line slip, where I had fallen out of the character's head. Learning that the best way to learn how to write close third is to write first person.
Not knowing why this worked, just groping towards a style. I already had a voice. I've never had a problem with voice (I started writing at 11; obsessive teenager scribbling is very good for letting your voice through), but style, now that was a fish of a different genus.
And so it went with passive sentences too. Using the grammar checker — remember when grammar checkers talked about clause splicing and so forth? No readability stats, no way of knowing which sentences were passive and which were not. (Computers still aren't to be trusted on that score, not completely; they're machines: they don't know the meanings of the words you're using. So always be careful, but they are useful — just don't turn the green lines on, because those things are irritating, distracting, and utterly worthless.)
So I’d do a grammar check. 3% passive sentences. Then I would go page by page. If it flagged up a passive sentence on that page then I’d go paragraph by paragraph. Zeroing in on the sentence. Finding the right paragraph and going sentence by sentence through that paragraph until I found it. Then altering it. Switching it around. Until it was not flagged as passive any more. Learning how to write sentences first time out of the box so my grammar check always says 1% (0% happens very occasionally. Some sentences have to be passive — it’s not a mortal sin, only a clumsy one).
And so on, with story structure, with character scenes vs plot scenes, with action vs reflection, with pacing. All the time on my own, writing into the void arrogantly sure that I could write.
And then I found writing sites.
I’d done nanowrimo and been on those forums and I think I managed to help some people and upset a whole lot more. Not much changes there. What can you do? You are who you are.
But on other writing sites I started seeing the wood for the trees. I started seeing the little things that make all the difference. I started learning the lingo. And I started to talk to other writers for the very first time. And I wrote and I wrote and I wrote, all the time, every day, bit by bit, and I posted to threads, and I asked the questions, and my confidence grew.
Especially once I started giving and receiving critiques; that is where I started making the hard choices, the writer's choices. Working for the story, not my ego.
Arrogance is based on your own fear that maybe you can’t do this. Arrogance will make you give fixed answers to questions of style and pace and voice. Arrogance will blind you to the way forward.
Confidence is based on knowledge. Confidence allows you to see that there are many answers to any question about the craft, the art, the truth, and they are all correct. Confidence will show you the way forward.
Arrogance is false and confidence is real.
Hacking at high speed: How safe is the automated vehicle?
How can cyber attacks on connected cars be prevented? (Photo: Fotolia)
Angelo Rychel
Automated cars are destined to make our roads safer. But if computers take over driving, will car hacking become the new danger? An expert explains how hackers could infiltrate a vehicle, how the industry should react – and why terrorists do not even pose the biggest threat.
Imagine you are driving 70 miles per hour on the freeway. Then, as if by magic, strange things start to happen. The air conditioning begins to blast, the windshield wipers turn on, and loud music roars from the speakers. Still at high speed, your car suddenly shifts into neutral gear, and then the engine shuts off.
What sounds like a nightmare actually happened to WIRED journalist Andy Greenberg in July 2015. Using a loophole in the car's infotainment system, two hackers, Charlie Miller and Chris Valasek, demonstrated to the journalist that it is possible to remotely hack a car moving at high speed. With fully automated driving relying on the connectivity of vehicles, the question arises just how immune self-driving cars will be to hacks. To find out, we spoke to Professor Christof Paar of the Horst Görtz Institute for IT Security in Bochum, Germany.
Professor Paar, will cars become increasingly vulnerable to cyber-attacks as they get more and more connected?
If you take a 1984 Volkswagen Beetle, there are precisely zero access points for hackers. Automated vehicles will at least become potentially more vulnerable because the number of gateways for hackers grows.
Will the automated car ever be one hundred percent hacking-proof?
They will never be one hundred percent secure. But in the field of IT security it always comes down to one question: are there enough incentives for the attacker to justify the effort? This fact sometimes gets lost in the public discussion: every car hack involves considerable complexity and cost.
Who might be interested in hacking a car – in spite of the efforts?
We can distinguish between two groups of actors: criminals and state-sponsored hackers. Criminals only act if it pays off financially. At the moment, experts still wonder what a business model for car hacking could look like. One phenomenon we know from the web is ransomware: malware that restricts access to your computer. The user has to pay ransom to get his data back. If we apply this to cars, it is imaginable that hackers paralyze the electronic system. The difference is, you could just have your vehicle towed to the repair shop, where they reset it to factory settings. Problem solved. If your computer's hard disk is encrypted, there is often no such easy solution – because you might lose your personal data.
That sounds like good news for customers and car manufacturers alike.
It is still too early to give the all-clear. Sometimes it takes years to develop criminal business ideas. The more applications are installed in the car, the more potential for abuse arises.
How likely is it that terrorists will use car hacking for assaults?
Terrorists lack the resources for complex hacking attacks. We have not seen any hacking assaults by terrorists yet – even though such attacks had been predicted after September 11.
You named a second group of perpetrators. What kind of danger do state-sponsored hackers pose?
They could be the real threat because they have the financial and personal resources. I can imagine scenarios where intelligence services use car hacking for political or industrial espionage.
How might a state-sponsored attack take place?
I can only speculate. But we are already experiencing that on the internet, nation states are less wary of fueling conflicts than in the analog world. If intelligence services succeed in hacking thousands of cars on a nation’s highway system at the same time, they could bring them to a halt on the road and very effectively cripple a sizable part of our transportation infrastructure. At the moment I see this as the biggest possible threat.
IT security expert Christof Paar
What would a worst case scenario look like? Can hackers win total control over a car?
For self-driving cars, the risk is definitely higher. If computers steer the whole vehicle, a complete takeover cannot be ruled out entirely. But this is extremely complex. For the next generations of vehicles, I suspect that hackers could be more successful in switching off parts of the vehicle, like the motor or the accelerator pedal. This is what the Jeep hackers succeeded in doing. Of course this could already cause severe damage.
Which interfaces can hackers use to infiltrate a vehicle?
There are several of them in a connected car. One is the wireless interface that networked vehicles will use to communicate with other cars and the infrastructure. A second interface comes with the infotainment systems that modern cars are equipped with. In theory, they are supposed to be completely isolated from a car's driving systems. But there are always loopholes. Take for example your car's airbag system. If it does not work properly, a flashing warning light signals this on your dashboard. A complete separation of the airbag electronics from the dashboard is simply not achievable.
How can car manufacturers ensure the best possible security?
That is the pivotal question. In one sentence: they have to do their homework. There is no magical solution but with appropriate effort, OEMs can develop systems that are secure in practice. They have to look at the state of the art in security engineering for the entire system. This includes secure software on all levels, hardware security, secure implementation and much more. On top of that, OEMs have to get used to the idea that they will play a game of cat-and-mouse with the hackers. That means continually identifying weak points and eliminating them.
OEMs increasingly hire external hackers to test and penetrate the security system of a vehicle before it is put on the market. Not long ago, they would have sued hackers instead of encouraging them. Is a rethinking process taking place in the industry?
Tesla already offers a reward to so-called white hat hackers who detect vulnerabilities and report the problem to Tesla. Culturally speaking, this must seem strange to car manufacturers. But sooner or later, there will be no way around it. White hat hacking is no silver bullet but it helps to reach a higher level of security.
In the US, Congress intends to pass a law that is designed to tighten vehicles’ protections against hackers. Is this a problem that policy-makers are able to solve?
The American approach places the responsibility on the manufacturers and imposes fines in cases of misconduct. I do not know if a legislative framework can create more security. But what I support is requiring OEMs to report every car hacking incident. We need full transparency in order to reach system security.
If OEMs do their homework: Will they reach a level of cyber security that the public will accept?
I am optimistic that the risk of cyber-attacks will be tolerable in the end. How high the public acceptance will be is hard to predict. Spectacular hacks causing highway accidents could undermine public trust. But at the same time, we must also look at the opportunities that networked cars will bring: in the European Union alone, 25,000 people die each year in road accidents. Electronic driver assistance could save many if not most of those lives!
About our expert:
Professor Christof Paar has the Chair for Embedded Security at the Horst Görtz Institute for IT Security at Ruhr University Bochum, Germany. One of his research focus points is IT security in cars. In 2003 he founded ESCAR, the leading international conference for electronic car security.
|
What is the Relationship between the UK Prime Minister and the Monarch
The Queen has a special relationship with the Prime Minister, the senior political figure in the British Government, regardless of their political party.
Although she is a constitutional monarch who remains politically neutral, The Queen retains the ability to give a regular audience to a Prime Minister during his or her term of office, and plays a role in the mechanics of calling a general election.
If either The Queen or the Prime Minister is not available to meet, they will speak by telephone.
These meetings, as with all communications between The Queen and her Government, remain strictly confidential.
Having expressed her views, The Queen abides by the advice of her ministers.
The Queen also plays a part in the calling of a general election.
The Prime Minister of the day may request the Sovereign to grant a dissolution of Parliament at any time.
In normal circumstances, when a single-party government enjoys a majority in the House of Commons, the Sovereign would not refuse, for the government would then resign and the Sovereign would be unable to find an alternative government capable of commanding the confidence of the Commons.
After a general election, the appointment of a Prime Minister is also the prerogative of the Sovereign.
When a potential Prime Minister is called to Buckingham Palace, The Queen will ask him or her whether he or she will form a government.
To this question, two responses are realistically possible. The most usual is acceptance.
If the situation is uncertain, as it was with Sir Alec Douglas-Home in 1963, a potential Prime Minister can accept an exploratory commission, returning later to report either failure or, as occurred in 1963, success.
After a new Prime Minister has been appointed, the Court Circular will record that "the Prime Minister Kissed Hands on Appointment".
This is not literally the case. In fact, the actual kissing of hands will take place later, in Council.
|
Kronberg im Taunus PDF
The History of the Kronberg Painter’s Colony
The painters' colony in Kronberg was one of the earliest of the German painters' colonies of the 19th century. Altogether 60 artists worked, over several years, in the Kronberg painters' colony, including such well-known names as Wilhelm Trübner, Jakob Fürchtegott Dielmann, Hans Thoma and Carl Morgenstern. Anton Burger's move to Kronberg in 1858 is associated today with the foundation of the Kronberg artists' colony.
The history of this development is closely associated with the nearby city of Frankfurt, where many of the artists whom we count as part of the colony were born. To name only a few: Anton Burger, Philipp Rumpf, Karl Theodor Reiffenstein and Otto Scholderer. Beyond their place of birth, these artists were connected in particular through their common studies under Jakob Becker, professor for genre and landscape painting at the Städelsches Art Institute from 1842 to 1872.
By moving their homes to the rustic surroundings of the Taunus village, the artists reacted to increasing industrialisation and the technical transformation of big-city culture. The countryside served them as a projection space for a still intact, unspoiled "healthy world". The often simple motifs, once found, served as images of a personally experienced reality that they captured in painterly values.
From the beginning the guest house “Zum Adler” played a special role. It offered the new arrivals not only accommodation but also acted as a meeting place for the artists.
From Rustic Idyll to Noble Villa Settlement
Towards the end of the 19th century, wealthy Frankfurt citizens discovered the small Taunus town as a health and holiday resort and built their summer villas there. After Victoria of Prussia settled there in 1888, the romantic seclusion and rustic peace came to an end for the resident painters. The town's development received considerable stimulus from Empress Friedrich, the widow of Emperor Friedrich III. A society-oriented group of artists, such as Norbert Schrödl and Ferdinand Brütt, who were well off and close to the Empress, joined the "Back to Nature" movement. They turned their attention mainly to portraiture and the historical painting of society events.
The Disintegration of the Colony
As impressionism triumphed in Germany, the artists' colony gradually disintegrated. After the death of Anton Burger in 1905, the growth of the colony was severely restricted, and the younger generation of artists (Nelson G. Kinsley, Philipp Franck and Fritz Wucherer), despite adopting impressionistic tendencies in their painting, could not revive the colony.
The Artists' Colony
Tourist Information
Berliner Platz 3-5
61476 Kronberg im Taunus
+49 6173 7030
Klaus E. Temmen
Katharinenstraße 7
61476 Kronberg im Taunus
+49 6173 7030
Site Plan Kronberg im Taunus |
In the great city of Bologna, hidden away in a tiny piazza behind the church of San Petronio, is a statue of Galvani, an 18th century physicist. What, you might wonder, has this scientist got to do with Gothic horror stories?
Look closely at the statue and you will observe that he is holding a tray onto which is pinned a dead frog. Galvani had discovered in his researches that a spark of electricity applied to the muscle of a dead frog made it jerk back to life. In doing so he introduced a new verb to the English language: to galvanise, to spring into action.
Curiously, Mary Shelley took his scientific treatise explaining the twitching frogs' legs as holiday reading when she travelled through Italy and Switzerland with Percy Bysshe Shelley. I have no idea whether they collected any frogs for experimentation on their stay in the Euganean Hills, although frogs and toads are there in abundance. In fact, on a couple of roads there are warning signs to drive carefully during the breeding season for fear of squashing them, and there are also two local frog festivals each year in the hills, where frog risotto followed by deep-fried frogs' legs with polenta are on the menu for the communal feast.
What we can readily surmise, however, is that this idea of reawakening from the dead insinuated itself deep into her unconscious thoughts, and Mary Shelley eventually came to write Frankenstein, one of the most famous Gothic novels ever written. I like to think that only now are the Euganean Hills being given due credit for this masterpiece of horror. Then again, perhaps the hills had nothing to do with it.
Frankenstein Junior |
Standard 8: World History and Geography: Medieval and Early Modern Times - The Renaissance
Students analyze the origins, accomplishments and diffusion of the
Renaissance, in terms of:
1. The way in which the revival of classical learning and the arts affected a new interest in "humanism" (i.e., a balance between the intellect and religious faith)
2. The importance of Florence in the early stages of the Renaissance and the growth of
independent trading cities (e.g., Venice) with emphasis on their importance in the spread of Renaissance ideas.
3. The effects of re-opening of the ancient "Silk Road" between Europe and China, including Marco Polo's travels and the location of his routes.
4. The growth and effect of ways of disseminating information (e.g., the ability to manufacture paper, translation of the Bible into the vernacular, printing)
5. Advances in literature, the arts, science, mathematics, cartography, engineering, and the understanding of human anatomy and astronomy (e.g., biographies of Dante, da Vinci,
Michelangelo, Gutenberg, Shakespeare)
Learning Registry Activity
Topics and Grades
Grade: 7
Topics: History-Social Science |
Four examples of organisms classified under the Kingdom Monera are bacteria, mycoplasmas, cyanobacteria, and archaebacteria. Organisms that meet the qualifications of the kingdom Monera are called monerans.
Examples of monerans include bacteria and blue-green algae. Bacteria are the most populous of all living organisms and critical to life on this planet. As decomposers, they break down organic matter and return the nutrients to the environment.
The two domains of prokaryotes, archaea and bacteria, contain such common organisms as cyanobacteria, halophiles and hyperthermophiles. Prokaryotes occur in many forms, and some species are more common than others.
Purple and green bacteria and cyanobacteria are photosynthetic. Photosynthetic bacteria are able to produce energy from the sun's rays in a process similar to that used by plants, though with different pigments capturing the light.
Autotrophic bacteria include cyanobacteria, green sulfur bacteria, purple bacteria, halophiles and methanogens. These bacteria, along with several types of plants and fungi, have the ability to produce their own food.
All organisms belong to one of five kingdoms: animal, plant, moneran, fungi and protist. All living organisms are classified using taxonomy, which groups living things based on biological characteristics. |
Masonic Biographies
Alice Bailey
Born: Wednesday, 16 June 1880
Died: Thursday, 15 December 1949
Alice Ann Bailey was a writer, Theosophist, and Freemason who used her pen to share occult knowledge and promote goodwill for all mankind.
Alice Ann Bailey was a Theosophist and medium who wrote 27 books on esoteric and theosophical subjects. She is widely considered one of the most prominent theosophical writers of the 20th century. She took many of the themes of Blavatsky's Secret Doctrine and deepened them, adding a new angle and interpretation to many ideas held by the Theosophists.
She was born in 1880 in Manchester England, and spent her youth and adolescence deeply enmeshed in upper class British culture. She would end up moving to the United States in 1907 to marry an Episcopalian minister. It was there she began to write, under the inspiration of an entity she called The Tibetan, a Master of the Wisdom. She joined Co-Masonry around this time as well, becoming a member of Verulam Lodge #525 in New York, one of the lodges of the American Federation of Human Rights at the time.
Her interest in Masonry deepened her occult studies, and references to the Craft and the esoteric meaning and significance of its rituals and ceremonies lace her works. She would eventually break away and found her own Masonic order in 1930, furthering her own Masonic interpretations. She founded several organizations based on promoting goodwill among men and a Great World Religion that would unite all faiths.
|
Heart Murmurs
It is not uncommon for a veterinarian to discover a heart murmur in an otherwise healthy canine patient aged 8 years or older. The news of the murmur often comes as a surprise to the pet owner, and leads to the question: what should be done?
The most common type of murmur in an aging patient is caused by MMVD, which stands for myxomatous mitral valve disease. This is primarily a wear-and-tear disease of the mitral valve between the left atrium and left ventricle. Essentially the valve leaflets become thickened and irregular over time and no longer provide a tight seal when closing. The absence of a tight seal leads to backflow of blood through the valve, which creates the turbulence that we hear with the stethoscope and call a murmur.
This type of mitral valve disease often eventually leads to congestive heart failure in the patient. However, MMVD is characterized by a long preclinical phase, meaning a long time between when the murmur is first noticed and the onset of clinical signs. Until recently, no therapies had been proven to prolong this time before the onset of congestive heart failure (CHF).
The question for the doctor was always: "Is there a benefit to the patient in beginning a heart medication prior to any clinical signs of CHF?" In other words, was there anything that could be done about this murmur?
A recent study demonstrated the benefit of pimobendan (a heart medication), in significantly prolonging the time of onset of CHF or cardiac related death in dogs with MMVD and heart enlargement.
In this study it was determined that dogs with MMVD and heart enlargement, proven by radiology and echocardiogram, would benefit from starting the drug pimobendan early on, prior to any clinical symptoms.
This information is a big leap forward for the canine patient with a heart murmur. Starting medication early will delay the onset of CHF and cardiac-related death. For middle-aged patients with MMVD and proven heart enlargement, there is now a clear recommendation from the veterinary profession.
In answer to the earlier question “what can be done for a non-clinical patient with an established heart murmur?”
Chronic oral therapy with pimobendan should be initiated every 12 hours in these patients. The medication is safe and well tolerated by dogs, and once started, long-term prognosis is substantially improved. In fact, the study showed the preclinical time period was doubled!
Pimobendan is not a new drug; it has long been used to treat patients already in CHF. It is the idea of starting the medication early that is new and quite beneficial. This is another step forward for pets and veterinary medicine.
Dr. Alan Main is the owner of West Suburban Veterinary Practice and his dedicated team have been providing services to the West Suburban Humane Society for over 10 years.
|
How are paints made for various painting mediums?
Paints are made by adding pigment (color) to something that will help it stick to the surface of a canvas (or anything else it can be painted on).
Color, also referred to as pigment, is a powder with a specific color. For artist grade and premium grade paints this is, in most cases, a ground-up mineral or metal compound. Cobalt blue, for example, is made by mixing cobalt pigment into a medium.
For student grade paints, the pigment is usually obtained by mixing a colored dye with chalk (the kind you write on a blackboard with) or marble dust. This helps manufacturers reduce the cost of the paint.
Oil Paints
Oil paints are generally made by grinding the pigment powder with linseed oil. The grinding can be done manually (with a grinding stone, etc.) or using a machine that can do it much faster.
Linseed oil forms the most flexible film of the drying oils when it dries. Its disadvantage is that it tends to yellow more than other oils over time. The other oil commonly used for making oil paints is walnut oil, which doesn't yellow as much but forms a less flexible film than linseed oil does. It is sometimes mixed with linseed oil so the resulting paint will have a strong film and yellow less.
Cheaper oil paints tend to have less pigment and oil and more filler (marble dust, chalk or another filler). Artist grade paints have less filler, but they do contain some. Premium grade paints have no filler, though they may have stabilizers for certain pigments.
Acrylic paints
Acrylic paints are made by mixing pigment into an acrylic (plastic) emulsion. Like oil paints, artist grade acrylic paints have more pigment than student grade paints.
Watercolor paints
The medium for watercolor paints is not water; it is gum arabic. Pigment is added to gum arabic to make watercolors. Gum arabic is soluble in water, which is why water can be used with watercolors (to move the paint around, etc.).
Egg Tempera
This is not as popular as the other three mediums, but it has been around for a long time. Egg tempera paints are made by mixing pigment into the egg yolk (not the egg white).
Encaustic paints
Encaustic paints are wax sticks that are heated, and the molten wax is used for painting while it is hot. Once the wax cools, it cannot be manipulated. Encaustic paints are made by adding pigment to heated beeswax.
|
Tuesday, April 28, 2015
ZX Transmissions
The ZX Spectrum was the first computer I had. It stored data as analog audio that would be recorded to tape, which could sound interesting at times. Recently I've been discussing the possibility of using such data audio files in noise/glitch music. In the end, I created an audio data transmission that could be used for such purposes. It sounds like this:
It is free to grab and use in your music, right here: a 48kHz, 16-bit, mono WAV file. Take it, if you like.
If you'd like to make such noises yourself, it's really quite simple. Here's the procedure I used. First I created some data, I used Photoshop to make a picture, I made it 256x192 pixels, which is ZX Spectrum native resolution, filled it with geometrical figures in different shades of gray. Converting the picture to bitmap made gray areas fill with repeating patterns (which make more interesting sounds than just random noise). Then I saved the picture to BMP format, but anything without compression would do. Actually you can use any data file, just (for this example) make it no longer than 6KB (bigger files will crash emulated computer when loaded, where I intend to load it).
I have a Spectrum in a dusty box, but it is way easier to use an emulator. I chose ZX Spin, as it lets you load any data file right into emulated memory and saves WAV files out of the box. The home page seems to be down, but you can find the program here. The Spectrum ROM has been cleared for free distribution, so you can use an emulator even if you don't own the actual machine.
I decided to use the video memory region, so I could watch the picture loading. First I wrote a command to save the region. To skip the trouble of learning how, you can use the Z80 snapshot included in the download; just load the file into the emulator. You should see a message, "Start tape, then press any key". Now load the data using the menu "File/Load binary file". Pick the saved picture file and enter 16384 as the address. The screen should now be filled with the scrambled picture from the input file. Pick menu "Recording/Audio/Start recording", enter a file name to save to and press any key to start the transmission. You should see moving stripes and hear transmission noises. When done, pick "Stop recording". This will create a fun, modem-like noise transmission. Try pitching it down for some extra flavor. Have fun.
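For the curious, the character of the transmission sound can be approximated without the emulator at all. This is a rough sketch of the Spectrum's tape scheme, not the exact ROM routine: the real format uses square-wave pulses of about 2168 T-states for the pilot tone and 855 / 1710 T-states for 0 / 1 bits at a 3.5 MHz clock; the demo payload and file name here are my own placeholders.

```python
import struct
import wave

RATE = 48000                    # output sample rate, Hz
CLOCK = 3_500_000               # ZX Spectrum CPU clock, Hz

def pulse(out, tstates, level):
    """Append one flat pulse of the given duration in T-states."""
    out.extend([level] * max(1, round(tstates / CLOCK * RATE)))

def encode(data):
    out, level = [], 20000
    for _ in range(2000):       # pilot tone (shortened for this demo)
        pulse(out, 2168, level)
        level = -level
    for byte in data:
        for bit in range(7, -1, -1):
            t = 1710 if (byte >> bit) & 1 else 855
            pulse(out, t, level); level = -level   # each bit is two pulses
            pulse(out, t, level); level = -level
    return out

samples = encode(bytes(range(256)) * 4)            # demo payload
with wave.open("transmission.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Feeding it the patterned data file from earlier instead of the demo payload gives the same kind of rhythmic, modem-like noise the emulator records.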
Wednesday, April 8, 2015
Game Bot
You can see the machine in action here:
|
Music glossary - Letter B
Backbeat: The accentuation of beats 2 and 4; generally found in blues and rhythm and blues.
Bagpipe: Wind instrument popular in Western and Eastern Europe that has many tubes, one of which plays the melody while the others sound the drones, or sustained notes; a windbag is filled by either a mouth pipe or a set of bellows (uilleann pipes).
Ballad opera: English comic opera, normally featuring spoken dialogue alternating with songs set to popular tunes; also called dialogue opera.
Ballata: A form of Italian 14th-century poetry and music.
Bar: Also called a measure, a bar is a segment of written music in which there is a designated number of beats.
Basso continuo: Music that is played by one or more bass instruments and a keyboard instrument; it is one of the most distinctive features of the Baroque era.
Bass Note: Lowest note of a chord.
Bass viol: See double bass.
Beat: The unit of musical rhythm.
Bebop: Complex jazz style developed in the 1940s.
Bent pitch: See blue note.
Blue note: A slight drop of pitch on the third, fifth or seventh tone of the scale, common in blues and jazz.
Blues: A style of jazz, both vocal and instrumental, introduced in the first decade of the 20th century. The most persistent characteristic of the blues is a 12-measure pattern, instead of the 8-measure and 16-measure patterns of ragtime.
Bomba: A style of Puerto Rican folk music derived primarily from African music and dominated by percussion instruments as well as call and response vocals.
Bongo: A pair of small drums of differing pitches, held between the legs and struck with both hands, of Afro-Cuban origin.
Bridge: Transitional passage connecting two sections of a composition. |
Switch Vs Route
This gives the following topics to study:
Redistribution and path control
And each is covered in some detail.
A partial list of topics covered in SWITCH:
Switch operation (CAM, TCAM and other switch tables)
STP (all modes)
STP enhancements like BPDU guard and UDLD
Ether channels and port channels
Multilayer switches
High availability (redundant routers and redundant supervisors)
IP telephony
Securing switch devices
Port security
Vlan ACL’s
Private VLANS
and the list goes on….
Hope that will also mean more posts as well. 🙂
Take care all
Spanning Tree enhancements (Backbone Fast)
Last time I looked at the spanning tree enhancements I covered UplinkFast, which detects when a directly connected root port fails and switches over to a backup in the shortest time possible. But what happens if the link that fails is not directly connected, and a switch loses its link back to the root and needs to find an alternate path? In the diagram below, Switch B is blocking its port to Switch A to prevent loops.
The question is: what happens if the link between Switch A and the Root fails? Well, without BackboneFast the following sequence takes place.
When the link fails, Switch A will no longer be receiving BPDUs from the root; the direct link is down and the port on Switch B is blocking, so it is not forwarding BPDUs.
Switch A will assume it is the new root and start to send BPDUs towards Switch B declaring itself the root. However, Switch B will see these are inferior BPDUs compared to the one it has stored for the port connected to Switch A, and ignore them.
This will continue to happen until the stored BPDU on the port times out, after which the port will go into the listening and learning states before starting to forward. This is 20 seconds (the max age timer) plus 2 x 15 seconds for the listening and learning stages, so a total of 50 seconds.
The idea behind BackboneFast is to cut this by 20 seconds by bypassing the max age timer. If Switch B can confirm it still has a link back to its currently known root switch, then it can ignore the max age timer and start the listening and learning process on a port immediately when it receives an inferior BPDU.
Once BackboneFast is enabled, when a switch receives an inferior BPDU on one of its ports, it will send an RLQ (Root Link Query) packet out all its non-designated ports, including its root port (so all ports that lead back to the root). If it receives an RLQ response (these are sent by the root bridge), then it knows it still has a link to the root. It can then age out the port it is receiving the inferior BPDUs on and start the listening and learning stages. If it does not receive any responses, then the switch has lost connectivity to the rest of the network and needs to start recomputing the whole STP topology.
Either way, the max age timer has been eliminated and 20 seconds have been shaved off the reconvergence / failover time.
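The timer arithmetic above is easy to sanity-check. A quick sketch, using the default STP timer values quoted in the text:

```python
MAX_AGE = 20          # seconds before a stored BPDU expires
FORWARD_DELAY = 15    # seconds each for the listening and learning stages

standard_stp = MAX_AGE + 2 * FORWARD_DELAY   # wait out the BPDU, then listen + learn
backbonefast = 2 * FORWARD_DELAY             # max age bypassed via the RLQ exchange

print(standard_stp)                 # 50 seconds
print(backbonefast)                 # 30 seconds
print(standard_stp - backbonefast)  # the 20 seconds saved
```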
Just like UplinkFast, BackboneFast is configured at the switch level with the following command:
Switch(config)#spanning-tree backbonefast
and it needs to be configured on all switches on the network.
Cisco's documentation HERE explains it in much more detail and with more examples.
Spanning Tree enhancements (Uplink fast)
In my last job I jumped straight into configuring Rapid Spanning Tree. I mean, what is the point of running standard STP with its 50-second failover times when you can enable Rapid-STP and gain sub-second failover?
Well, if you want to pass your CCNP SWITCH you need to know it, and you need to know how to configure the enhancements. Actually, having read through them and labbed them up, they do help in understanding how STP works and how the original protocol was improved in a number of ways before Cisco took all the enhancements and came up with Rapid-STP.
Over the next few posts I will be covering all of the basic enhancements, including UplinkFast, BackboneFast, PortFast, Loop Guard, etc.
UplinkFast is normally configured on access switches that have two links back to the root. In these cases, after the initial STP algorithm has run, one of the ports (the lowest-cost path back to the root bridge) will be designated as the root port, while the other will be blocked. See the diagram below.
Now, with standard STP, if the active link fails, the switch sees that the root port link has failed, and as it is receiving root BPDUs on the backup (blocked) port it starts to bring that port up. However, without UplinkFast enabled, this requires the port to go through the listening and learning stages. By default this means 30 seconds of outage, and even with the best STP tuning it still results in a 14-second outage.
With UplinkFast configured, however, the switch keeps track of the blocked ports that point back to the root bridge and forms them into an “uplink group”. Now, if the primary link goes down, the switch can pick the next best root port and immediately place it in forwarding mode, as this will not create a loop. This gives an almost instant failover of the primary link. However, the CAM tables of the other switches will now be out of sync, which could result in frames being sent down the wrong links. To sort this out, the switch creates dummy frames with the source addresses from its CAM table and a multicast destination address; these update the other switches on the network.
Now, when the original link comes back up, the switch waits twice the forward delay plus 5 seconds (35 seconds with the default 15-second forward delay) before it switches back over. This allows the core switch at the other end of the link time to run through STP and start forwarding on the port.
And that’s UplinkFast: a method to allow near-instant failover of directly connected redundant links towards the root.
Configuration is very simple and is carried out in global config mode.
Switch(config)#spanning-tree uplinkfast
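One extra knob worth knowing: the rate at which UplinkFast sends those dummy multicast update frames can be tuned, and the feature can be verified from privileged EXEC mode. A sketch (the default rate and the output wording vary by IOS version):

```
! Optional: tune the dummy-frame update rate (packets per second)
Switch(config)# spanning-tree uplinkfast max-update-rate 150

! Verify UplinkFast status and see the uplink group
Switch# show spanning-tree uplinkfast
```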
CCNP SWITCH (retake)
Well no luck I’m afraid 🙁
I agree with many of the other complaints about this exam; there seem to be a large number of questions that are not covered in the course material. I say that having read the foundation guide, cert guide, flash cards, and quick reference sheets.
CISCO have now made a statement that, due to the high level of complaints, they will be reviewing the exam. So rather than waste time trying to pass it again, I will carry on my studies with the ROUTE course, which has had much better reviews, and come back to SWITCH. Hopefully by then CISCO will have sorted out the issues.
Those of you doing your SWITCH exam may be interested to read these: some updates to the CCNP SWITCH cert guide have been released. It looks like they cover some of the Planning topics, and there is also some SVI material in there.
There has been a lot of discussion over CISCO’s handling of the planning part of this exam, so hopefully this extra material will help clear it up. Having glanced through it I remain to be convinced, but I will reserve full judgement until later.
Enjoy the read, and I will be back later with a new CCNP topic to review.
What is the Difference between Dye Sublimation Ink and Pigment Ink?
Inkjet printers use either dye sublimation ink or pigment ink, and each has its own characteristics. Using the wrong type of ink can result in poor image quality or reduced print life.
Dye Sublimation Ink
Dye sublimation ink, also known as digital textile sublimation ink, is a special type of ink used in dye sublimation printers, which are commonly used to produce vivid, full-color, photo-quality images that resist peeling, cracking, and washing away on various types of substrates. The process begins with the application of heat and is then controlled using pressure and time. For hard items like metals, ceramics, fiberboard, and similar materials, a special coating on the printing surface is usually required so that the material will accept the dye sublimation ink. Without this coating, the print is likely to fade and crack easily.
Pigment Ink
In comparison to sublimation transfer ink, pigment inks use tiny particles of colored material to provide the ink color, rather than paper-staining dyes. Pigment inks are composed of tiny encapsulated particles that sit on top of the paper, instead of being absorbed into the paper's fibers, which is what happens with dyes. For T-shirt printing on dark-based or light-based cotton materials with transfer papers, pigment inks are a good choice because they are waterproof. Most pigment ink printers may use up to eight different ink colors: cyan, light cyan, magenta, light magenta, yellow, light gray, medium gray and black. Pigment ink is much more stable than dye-based inks and can last more than 200 years on some paper types under ideal (museum-quality lighting and framing) conditions.
Sublimation Printing
SUBLISTAR has been recognized for over 10 years as one of the highest-quality and largest suppliers of sublimation printing materials and equipment in China, including dye sublimation ink, sublimation transfer paper, and more. If you have any needs, please feel free to let us know; we will be waiting for you!
The most original equality is that of all beings and phenomena as identical manifestations, unrepeatable and unique though interconnected, from the bottomless background of reality and life. In this sense all are sacred and endowed with an intrinsic value, irreducible to their evaluation, mercantilization and instrumentalization by economic or other interests. The world and nature are immensely worthy of respect, in their entirety and in each interdependent entity that constitutes them. All beings, from the most elemental to the most complex, are equally important, for their incomparable uniqueness and because each contains within themselves the totality of the universe in which it is simultaneously contained, according to the holistic vision of the world where today the millenarian spiritual traditions and the contemporary science converge.
On the human stage, equality lies not only in all humans being identical as different manifestations of the same universal life, but in having all an unlimited potential of development and achievement that is not susceptible of comparisons and judgments according to external criteria. Every human being, like every living being and every phenomenon of the universe, is a splendorous and unique fulguration of the mystery of the world and as such must be recognized and respected, no matter how elemental, limited or negative it may seem to be to a superficial or hasty look. The evolution of societies, nations and cultures is measured by the degree to which they integrate in the public and private domain this recognition, respect and trust in the inexhaustible potentialities of each human being, as well as their capacity to stimulate and promote their full development.
Author: Paulo Borges. Copyrights iLIDH 2015-
In Defense of TI-Basic
Edsger Dijkstra, famed Computer Scientist, once said "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." Of course, Dijkstra didn't mean that all BASIC programmers are essentially bad, but that BASIC introduced bad practices into their style that were difficult to rectify later. Dijkstra also wrote the famous essay Go To Statement Considered Harmful, a criticism of a statement held dear to many BASIC programmers.
However, this post isn't meant to be about the merits or demerits of BASIC. I want to talk about a similar language: TI-BASIC, the programming language included on certain classes of TI graphing calculators. The language is very likely familiar to students who used a TI-83 or TI-84.
I must admit that this language is very special to me: it's essentially my first experience with real programming. I did have some experience with LOGO and Lego Robotics when I was a kid, but I wouldn't necessarily consider those "programming" in the sense of the word that most people are accustomed to. I even attempted to purchase "C++ For Dummies" in the first grade (of course, that was way over my head!). I really had my first taste of programming with TI-BASIC. I remember sitting in Algebra class in 8th grade (where I first was able to use a graphing calculator) and seeing the mysterious PRGM key. I was fascinated by the concept of writing programs that could do things like solve quadratic equations.
Programming on a TI-83 was an experience, to put it nicely. The device has a small screen and a user-unfriendly keyboard, and the TI-BASIC interpreter is slow. When I started using the language, I used GOTO statements almost exclusively. Dijkstra's belief must have seemed to be true.
However, one day I hit a strange bug when I used a GOTO to exit a for-loop. It seemed like the loop counter variable would retain its previous value the next time the loop was entered, which is the opposite of what I expected. I searched for the problem online and discovered a forum post where another programmer was having the exact same issue. There was a reply with a stern warning to never use GOTO, and to use more structured programming techniques instead.
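For anyone curious, the troublesome pattern looked roughly like this (a from-memory sketch, not my original program):

```
:For(I,1,10
:If I=5
:Goto A
:End
:Lbl A
:Disp I
```

Jumping out of the For( block with Goto means the interpreter never gets to close the loop properly, which (as I later learned) is also a classic way to slowly leak memory on these calculators. The structured fix is to let the loop terminate on its own, or guard the body with a condition.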
Another problem that I often ran into was the issue of creating a "friendly window", a graphing window where graph coordinates map exactly onto screen pixels. This was necessary because the functions used to draw to the screen are split into graph-based and pixel-based variants. By using a friendly window, a programmer can pass the same arguments to both kinds of functions. Since I often found myself writing the same setup code over and over, I looked for a way to save myself the typing. There are no user-defined functions in TI-BASIC, but a program can call other programs. Thus, I made a program that set a friendly graphing window, and then called it from within whatever program I wanted.
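A minimal version of that helper might look like this (the program name FRWIN is just illustrative; on the TI-83 graph screen these window values make one graph unit equal one pixel):

```
PROGRAM:FRWIN
:0→Xmin
:94→Xmax
:-62→Ymin
:0→Ymax
```

Any other program can then simply call prgmFRWIN before drawing, instead of repeating the window setup every time.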
There were more problems with TI-BASIC from a programmer's perspective. It only had 26 variables (A-Z; though it is possible to get more variables by using lists or other creative techniques, the alphabetic variables were easiest to use). Also, it was not possible to indent code, and lines often wrapped around due to the narrow screen. Therefore, it became rather obvious early on that I had to document my programs clearly and concisely. TI-BASIC itself has no comment syntax, but one may use a string on a line by itself as a sort of pseudo-comment.
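A pseudo-comment is just a string on a line by itself; the interpreter evaluates and discards it (it briefly lands in Ans), which makes it a cheap way to label a program. For example:

```
:"QUADRATIC SOLVER
:Prompt A,B,C
:Disp (-B+√(B²-4AC))/(2A)
```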
Though TI-BASIC is clearly a language with flaws, I think those flaws in turn taught me good program design principles, by making me see why such things are necessary. When I decided to move on to a more user-friendly language like Java, I already knew the importance of structured programming, abstraction, and documentation.
I still have a few of my old TI-BASIC games lying around somewhere. Maybe someday I'll put a link to them somewhere. I look back fondly on my time spent on the TI-83, and I hope that my code might serve as a resource for another budding programmer.
How to: Help Our Children With Self Control
One of the key things we focus on in our Citywise School Projects with the young people we mentor is Self-control.
Our impulses are closely related to our emotions; that is something parents know all too well. Sometimes children seem able to do very little about their impulses and emotions, even when the consequences of their actions are very clear!
But something is going on deeper inside, two parts of their brain are actually battling one another:
The amygdala (just above the ear) controls our impulses and emotions – for example, urging us to eat a whole box of chocolates, but the pre-frontal cortex (at the front of the brain) is thinking through the consequences, telling us for example that eating a whole box of chocolates may not be good for our diet! This all happens so quickly that often we can decide in seconds only to regret it later. Want to learn more? Click here.
It is the same for our children! Watch this video to see it in action for yourself!
So how can parents support their children when they are facing this battle for self-control?
1. Realistic Expectations
Well, it may be helpful to know that children tend to experience a significant growth in the ability to show self-control around the ages of 5 and 6. After this, their ability does not greatly increase without help and practice. So it is helpful to have realistic expectations for the age of the child and be willing to help them grow.
2. Change the Situation
It may seem obvious, but you can help your child change the situation to reduce temptation: encourage them to put their electronic device away during homework time, or to walk away from a situation where they feel angry. Allowing them to make this decision teaches them to identify temptations and empowers them to change the situation for themselves.
3. Some Activities to Try
The brain is like a muscle, and like any other muscle it needs training! Help your child train their self-control muscle with these 6 Citywise-recommended activities:
1. Teach kids to save money for something bigger they really want
2. Create an end of the day reward if kids complete all their chores
3. Play freeze tag
4. Teach your kids to put a toy they really want on their birthday or Christmas list rather than buying it right away
5. Have a staring contest
6. Use your other (non-dominant) hand to do tasks like brushing your teeth for a week
You can find more activities here.
So enjoy watching your child train their self-control muscles, and share their victories with us in the comments!
Using seven basic principles of sound landscaping practice, a homeowner can manage and enjoy an investment in a beautiful and drought-tolerant garden environment. The benefits to the gardener include the ease and pleasure of a healthy, natural environment and reduced water and sewer bills compared to conventional landscaping.
The community is served when natural ecosystems of the area are preserved, including habitat for a valuable wildlife population. Birds, bees, and butterflies, as well as the food chain that they are part of, contribute to the health and beauty of the neighborhood.
The local water utility is able to control its costs of operations and improve its service to all customers when water supply demands during peak hours of operation are reduced with efficient irrigation practices.
Xeriscape principles:
Planning and Design: A good design aims to produce the highest quality landscape at the least possible cost with limited maintenance and water requirements.
Limited Turf Areas: Formally maintained and irrigated turf areas create the highest water use in landscapes. Physically limiting the square footage of turf to areas of functional use or to areas near entryways or other locations with frequent visual contact is an easy, quick way to reduce water need without sacrificing important visual impact.
Efficient Irrigation: Irrigating turf areas separately from other plantings and separating high-water-use plantings from low-water-use plantings can be effective in saving water and in producing better quality plants.
Soil Improvements: Soil improvements, based on site-specific soil analysis, promote moisture penetration and retention and make maximum moisture available for plant intake.
Use of Mulches: Properly used, mulches benefit the landscape by reducing water needs, reducing weed growth, cooling the soil, preventing erosion, and providing visual interest.
Appropriate Maintenance: Routine maintenance keeps landscapes at peak attractiveness and helps reduce water use. Weeding, proper pruning, and irrigation-system adjustment are some maintenance practices that help reduce demands for water.
Plant Selection: Native and naturalized plants will survive on natural precipitation or with minimal amounts of supplemental irrigation. They provide insurance against loss of plant material during drought or other water supply crises and reduce the need for water on an ongoing basis.
CTAWWA • 90 Sargent Drive • New Haven, CT 06511
INTRODUCTION TO INDUSTRIAL PSYCHOLOGY

Industrial psychology began as a branch of psychology in December 1901, when Dr. Walter Dill Scott in the U.S.A. spoke on the possibilities of applying psychological principles to the field of advertising. Industrial psychology is the third or fourth most popular branch of psychology in India. It tries to understand the human problems that have arisen as a result of the tremendous expansion of industry over the last few decades. People are the essential ingredient in all organizations. Industrial psychology has the potential to contribute to the productivity of industry and business on the one hand, and to greater effectiveness and fulfilment at work on the other.

Definition and Nature of Industrial Psychology

Industrial psychology is defined as the study of man and his behavior with the aid of scientific methodology: the science of behavior (what we do) and mental processes (sensations, perceptions, dreams, thoughts, beliefs, and feelings).

Scope of Industrial Psychology

Psychology is an extremely broad field, encompassing many different approaches to the study of mental processes and behavior. Personnel selection, personnel development, human engineering, productivity studies, management, accident prevention and safety measures, and labor relations all fall within the scope of industrial psychology. Industrial psychology is a branch of behavioral science that directs its research and courses of study to business. It is not a new science; in fact, one of the earlier books on the subject, Hugo Munsterberg's "The Psychology of Industrial Efficiency", was published by Houghton Mifflin in 1913. Departments of management, design, production, pricing, marketing and distribution all benefit from knowledge of industrial psychology.

1. Work Behavior
The psychology of work behavior is one form of industrial psychology. The relationship between employees' attitudes and their performance is a main theme. Variables in employee personalities and abilities are catalogued, and situational and background differences are studied. The industrial psychologist also studies human mental and physical abilities, administering tests, assessing values, and establishing job-related criteria. Human-error factors are also monitored, as are the costs and causes of accidents.

2. Management
Many management skills fall under the umbrella of industrial psychology. Managers must be educated in the area of employee supervision. Expertise in perception and assessment is required in order to make proper decisions on whether to promote or admonish. Determining training needs and resolving conflict are skills that managers would learn in their study of industrial psychology. Motivational tactics are imperative to the success of industry, so the industrial psychologist may also devise financial or other incentives.

3. Environmental Design
Environmental design is another area of industrial psychology. The psychology of the workspace concerns the environment of the worker. Performance can be affected adversely or positively by the employee's surroundings. The industrial psychologist recommends physical arrangements, colors, noise levels, lighting and ergonomics.

4. Product Design
Product design is another avenue of industrial psychology that is important to a successful business. A product that has been designed with safety, efficiency and desirability in mind has a higher chance of succeeding in the marketplace. The industrial psychologist can collect data and analyze buying trends to make recommendations for a feasible, salable design.

5. Organizational Studies
The overall functioning of the business may also be evaluated by the industrial psychologist. Data relating to job descriptions and hierarchy may be studied and recommendations put forth. The omnipresent tendency to resist change of any sort and maintain the status quo has been a great hurdle to the acceptance of industrial psychology by employees and managements all over the world, because the practice of industrial psychology often demands radical changes in the outlooks and attitudes of both employees and employers. Employers are also averse to change because they are often unsure about the efficacy of new ideas and are little inclined to take risks.

HISTORY OF INDUSTRIAL PSYCHOLOGY
Industrial psychology is almost as old as psychology itself. Psychology came about in 1879 in the laboratory of Wilhelm Wundt in Germany and of William James at Harvard. Both were philosophers and physicians fascinated with the mind-body debate. The older discipline of philosophy could not deal with this debate alone; more room and new tools were needed, giving way to psychology. Texts applying psychology to business first appeared in 1903; the first Industrial-Organizational (I/O) psychology text appeared in 1910 (Landy, 1997). It is believed that four men developed the tone and structure of I/O psychology: Hugo Munsterberg, James Cattell, Walter Dill Scott, and Walter Bingham (Landy, 1997). Moore and Hartmann (1931) stated that while psychology moved into the educational and clinical fields, no psychologist who respected his position dared venture into the office or workshop, and Hugo Munsterberg was the first man to break the ice (p. 4). It is speculated that Munsterberg was pushed towards the field of I/O psychology by conflict with his Harvard colleagues (Landy, 1997). His book, Psychology and Industrial Efficiency, is regarded as the first I/O psychology textbook, first published in 1910. Krumm (2001) states that formal training in industrial psychology began when the book was published, while Landy (1997) asserts: "This book was the bible for the application of differential psychology in industry... later publications did not replace the structure Munsterberg has put in place; they built on it" (p. 470). Munsterberg was primarily interested in personnel selection and the use of psychological tests in industry.

Career Management

Career management ensures others know about you and your value. Although career management is one of the five phases of career development planning in our model, it is deliberately front and center, since activities related to career management are relevant to all the other phases.
Also, career management, unlike the other phases, is a continuous process that occurs throughout one's career and not just at discrete times. It may be helpful to think of career management as a philosophy and set of habits that will enable you to achieve career goals and develop career resiliency. Successful career management is accomplished through regular habits of building relationships, engaging in career development conversations, updating your career development plan, and setting new goals as life and career needs change. Being proficient at career management also means possessing basic skills related to job searching and managing changes in a resilient manner.

Career Development

Career development is a process where employees strategically explore, plan, and create their future at work by designing a personal learning plan to achieve their potential and fulfill the organization's mission: learning, seeking opportunities, taking risks, and finding ways to contribute to the organization in a productive and motivated fashion. Goddard's Career Development Training Program is designed to help employees take responsibility for their careers by offering courses in the following three areas:

1. Career Planning (CP) - Training programs and services that assist employees in conducting individual assessments and establishing a professional career development plan that helps them reach their full potential and fulfill the organization's mission.

2. Career Enrichment (CE) - Training programs and services that enable employees to develop, expand, and fully utilize existing competencies in their current career field, participate in a rotational or developmental assignment, and engage in coaching and mentoring activities in order to propagate a motivated, productive, and resilient workforce.
3. Career Transition (CT) - Training programs and services that help employees assess, explore, and reality-test their potential for changing career fields, transitioning into management, transferring into other directorates, leaving the Center, Agency or the Federal Government, or phasing into partial or full retirement.

Company Closure, Relocation & Restructuring

PSM recently managed a large restructuring project for a client involving the closure and relocation of a major manufacturing and sales site, leading to redundancies, promotions, outplacement support, and the negotiation of temporary and permanent relocation packages. PSM were able to support our client at all stages throughout the restructuring programme, linking with both the existing and new site management teams, including the following:

Strategy - Meetings with senior management prior to initiation of the programme to discuss the HR and legal implications and produce an appropriate strategy plan, including a timetable, communication process and projected costings.

Communication - Development of the initial announcement to all employees of the proposed restructure and its impact on jobs, offering the right to elect appointed representatives and including the production of redundancy skills matrices where appropriate.

Consultation - Developing and co-ordinating with line management an effective consultation process, and attending individual meetings with affected employees to discuss the impact on jobs and receive their feedback and ideas on alternative solutions in moving forward.

Relocation - Working with line management and supporting individuals who transfer into alternative temporary or permanent employment, including developing relocation/information packs, visits to the new location, and proposing relocation benefit packages where appropriate.
Outplacement - The preparation and delivery of outplacement workshops to provide the necessary support and assistance to employees in their search for new employment, e.g. production of effective CVs, job search techniques, etc.

Exit Packages - Proposing appropriate exit packages for employees who leave the company.
Restructuring - In conjunction with the senior management team, reviewing the initial implementation of the new organisational structures and, as they develop, assessing their effectiveness in line with the future needs of the business.
Individual Differences

Individual differences are the facts that make people different from each other. We all know that we differ from each other in many ways, such as our physical aspects, our likes, dislikes, interests, values, and psychological makeup (and the list goes on); in other words, the whole "personality". That people differ from each other is obvious. How and why they differ is less clear, and is the subject of the study of individual differences (IDs). Although to study individual differences seems to be to study variance (how are people different?), it is also to study central tendency (how well can a person be described in terms of an overall within-person average?). Indeed, perhaps the most important question of individual differences is whether people are more similar to themselves over time and across situations than they are to others, and whether the variation within a single person across time and situation is less than the variation between people. A related question is that of similarity, for people differ in their similarities to each other. Questions of whether particular groups (e.g., groupings by sex, culture, age, or ethnicity) are more similar within than between groups are also questions of individual differences. Personality psychology addresses the questions of shared human nature, dimensions of individual differences, and unique patterns of individuals. Research in IDs ranges from analyses of genetic codes to the study of sexual, social, ethnic, and cultural differences, and includes research on cognitive abilities, interpersonal styles, and emotional reactivity. Methods range from laboratory experiments to longitudinal field studies and include data reduction techniques such as Factor Analysis and Principal Components Analysis, as well as Structural Modeling and Multi-Level Modeling procedures. The measurement issues of most importance are those of the reliability and stability of individual differences.
Research in individual differences addresses three broad questions: 1) developing an adequate descriptive taxonomy of how people differ; 2) applying differences in one situation to predict differences in other situations; and 3) testing theoretical explanations of the structure and dynamics of individual differences.
Employee Development

Definition: encouraging employees to acquire new or advanced skills, knowledge, and viewpoints by providing learning and training facilities, and avenues where such new ideas can be applied.

What is Employee Development?

Employee development is a joint initiative of the employee and the employer to upgrade the existing skills and knowledge of an individual. It is of utmost importance for employees to keep themselves abreast of the latest developments in the industry in order to survive fierce competition. Believe me, if you are not aware of what is happening around you, you will be out of the game before you realize it. As they say, there is really no age limit for education; upgrading knowledge is essential to keep up with the changes of the times. Employee development goes a long way in training, sharpening the skills of an employee, and upgrading his/her existing knowledge and abilities. In layman's language, employee development helps in developing and nurturing employees so that they become reliable resources and eventually benefit the organization. Employees also develop a sense of attachment towards the organization as a result of employee development activities.
Organizations must encourage their employees to participate in employee development activities, and employees must take skill enhancement and employee development activities seriously. Do not attend trainings or other employee development activities just because your boss has asked you to, and do not attend trainings merely to mark your attendance. You just cannot use the same ideas or concepts everywhere.
It is excellent if you know Microsoft Excel, or for that matter Microsoft Word. But remember, simply knowing a few basic functions of MS Excel will not help you in the long run; it might help you in the short run. Excel is not just for storing your data: there are many other formulas and advanced applications one should be aware of.
Enhance your skills with time. Employee development can also be defined as a process where the employee with the support of his/her employer undergoes various training programs to enhance his/her skills and acquire new knowledge and learnings. Every organization follows certain processes which not only help in the professional but also personal growth of an employee. Employee development activities help an employee to work hard and produce quality work. Examples of Employee Development Activities Professional Growth Employee development activities must be defined keeping in mind an employees current stage and desired stage. Knowing an employees current and desired stage helps you find the gaps and in which all genres h e/she needs to be trained on. Human resource professionals must encourage employees to participate in internal or external trainings, get enrolled in online courses to increase their professional knowledge and contribute effectively. Personal Growth Employees start taking their work as a burden only when an organization does not provide any added benefits or advantages which would help in their personal growth. Soft skills classes, fitness sessions, loans with lower interest rates are certain initiatives which not only motivate an employee to do quality work but also help in employee development. Employee development not only helps in enhancing knowledge of employees but also increases the productivity of organizations. Employees, as a result of employee development activities are better trained and equipped and work harder to yield higher profits. What Are the Big Five(TRAITS )Dimensions of Personality? Today, many researchers believe that they are five core personality traits. Evidence of this theory has been growing over the past 50 years, beginning with the research of D. W. 
Fiske (1949) and later expanded upon by other researchers including Norman (1967), Smith (1967), Goldberg (1981), and McCrae & Costa (1987). The "big five" are broad categories of personality traits. While there is a significant body of literature supporting this five-factor model of personality, researchers don't always agree on the exact labels for each dimension. However, these five categories are usually described as follows:
1. Extraversion: This trait includes characteristics such as excitability, sociability, talkativeness, assertiveness, and high amounts of emotional expressiveness.
2. Agreeableness: This personality dimension includes attributes such as trust, altruism, kindness, affection, and other prosocial behaviors.
3. Conscientiousness: Common features of this dimension include high levels of thoughtfulness, good impulse control, and goal-directed behaviors. Those high in conscientiousness tend to be organized and mindful of details.
4. Neuroticism: Individuals high in this trait tend to experience emotional instability, anxiety, moodiness, irritability, and sadness.
5. Openness: This trait features characteristics such as imagination and insight, and those high in this trait also tend to have a broad range of interests.
Full Personality Structures
One of the earliest -- and perhaps the most widely taught -- models of dividing personality was Freud's division of the mind into the id, ego, and superego. A good place to find out about the id, ego and superego is at Victorian Science; the essay there is by David B. Stevenson.
Another kind of personality structure involves divisions of personality based on the functions they carry out. One of the most promising of these functional divisions is the Systems Set, described in the special topic below.

The Systems Set: New Opportunities in Dividing Personality
The Systems Set is a division of personality into its functional areas. [Note: The Systems Set is not technically a part of the Systems Framework, but rather an offshoot of it. Nevertheless it is covered on this web site because it has proven useful in many contexts where a division of personality is called for.] The development of the Systems Set involved a several-step process. First, about 400 parts of personality were surveyed, collected from personality textbooks and other sources. Next, these were arranged in functional groups and defined (Mayer, 1995). A number of these functional clusters are shown below. Each cluster is itself made up of many subsidiary parts; for example, models of the self include one's own autobiographical story, the self-concept or self-concepts, self-esteem, and many other parts (see Mayer, 1995). These functional clusters, however, are a fairly comprehensive group, including most areas necessary to describe parts of personality. The problems involved in dividing personality are fairly apparent: there are no distinct boundaries between systems. Rather, they interpenetrate and blend into one another. In addition, multiple groupings are possible. That said, it is possible to identify distinctions from the past that have proven useful and apply them to personality function. The Systems Set employs several distinctions: that between the inner personality and its plans for outward expression, that between conscious and non-conscious systems, and that between cognition, on the one hand, and motivation and emotion on the other.
Applying these time-honored distinctions to a comprehensive collection of functions, one possible solution to the division issue is to identify four more-or-less discrete groups of function (Mayer, 2001):
The energy lattice: This functional group provides direction to the person, drawing on subsystems that both motivate the individual and qualify those motives with emotions that guide the individual's social behavior.
The knowledge works: This functional group involves the individual's models of the self and the world, along with the intelligences that operate to construct those models and to think with them.
The social actor: This functional group involves the individual's characteristic or preferred styles of social expression, including attachment patterns, as well as the person's social skills.
The conscious executive: This functional group involves the capacity to self-reflect and self-govern, as well as the conscious experience of those portions of personality to which the individual has access. (Most personality processes are regarded as non-conscious; that is, unconnected or disconnected from consciousness.)
An example of these divisions, applied to the overall functional clusters above, can be seen in the next figure. This division of mind has now been used in a number of studies and tends to outperform other structural models. For example, in one study, knowledgeable judges were asked to sort traits according to the functions of personality the traits described. Some judges used the functions described by the Trilogy-of-Mind division (motivation -- emotion -- cognition); other judges employed the Systems Set. Judges using the Systems Set were able to include far more traits, and to assign them with greater inter-judge reliability.
The Systems Set has also been used to classify clinical change techniques and the psychiatric disorders of DSM-IV-TR according to the areas of personality influenced.

Emotional Intelligence
Emotional intelligence (EI) refers to the ability to perceive, control, and evaluate emotions. Some researchers suggest that emotional intelligence can be learned and strengthened, while others claim it is an inborn characteristic. A number of testing instruments have been developed to measure emotional intelligence, although the content and approach of each test varies. The following quiz presents a mix of self-report and situational questions related to various aspects of emotional intelligence. What is your emotional intelligence quotient? Take the quiz to learn more.
Visual Skills - Definition
The term 'visual skills' includes all neuro-muscular and perceptual elements that together give rise to reflexive-passive and volitional-active vision. These are not limited to the neuromuscular and neurosensory elements of the eye and retina, but integrate inputs from and outputs to other sensory modalities and neurocognitive functions. Visual skills involve the combined efforts of the eyes, eyelids, extra- and intra-ocular muscles, several cranial nerves, cortical and subcortical pathways, brainstem and spinal connections, various cortical loci and subcortical nuclei, audition, kinesthesia and proprioception, and balance. The functional elements of visual skills include vergence and duction movements, binocular coordination, saccades, pursuits, accommodation, target acquisition and fixation, and a number of distinct perceptual elements including "spatial organization, object perception, visual memory, visual thinking, allocation of visual attention, and the ability to integrate visual information with other sensory and output modalities".

Psychomotor Learning
Psychomotor learning is the relationship between cognitive functions and physical movement. It is demonstrated by physical skills such as movement, coordination, manipulation, dexterity, grace, strength, and speed; actions which demonstrate fine motor skills, such as the use of precision instruments or tools; or actions which evidence gross motor skills, such as the use of the body in dance, musical or athletic performance. Behavioral examples include driving a car, throwing a ball, and playing a musical instrument. In psychomotor learning research, attention is given to the learning of coordinated activity involving the arms, hands, fingers, and feet, while verbal processes are not emphasized.

Career Management
The word career refers to all types of employment, ranging from semi-skilled through skilled and semi-professional to professional.
The term career has often been restricted to suggest an employment commitment to a single trade, skill, profession or business firm for the entire working life of a person. In recent years, however, career has come to refer to changes or modifications in employment during the foreseeable future. There are many definitions by management scholars of the stages in the career management process. The following classification system, with minor variations, is widely used:
1. Development of overall goals and objectives;
2. Development of a strategy (a general means to accomplish the selected goals/objectives);
3. Development of the specific means (policies, rules, procedures and activities) to implement the strategy; and
4. Systematic evaluation of the progress toward the achievement of the selected goals/objectives, to modify the strategy if necessary.

Employee Welfare
Employee welfare includes anything that is done for the comfort and improvement of employees and is provided over and above the wages. Welfare helps in keeping the morale and motivation of the employees high so as to retain them for a longer duration. The welfare measures need not be in monetary terms only but may take any form. Employee welfare includes monitoring of working conditions, creation of industrial harmony through infrastructure for health and industrial relations, and insurance against disease, accident and unemployment for the workers and their families. Labour welfare entails all those activities of the employer which are directed towards providing the employees with certain facilities and services in addition to wages or salaries. Labour welfare has the following objectives:
1. To provide better life and health to the workers
2. To make the workers happy and satisfied
3. To relieve workers from industrial fatigue and to improve the intellectual, cultural and material conditions of living of the workers.
The basic features of labour welfare measures are as follows:
1. Labour welfare includes various facilities, services and amenities provided to workers for improving their health, efficiency, economic betterment and social status.
2. Welfare measures are in addition to regular wages and other economic benefits available to workers due to legal provisions and collective bargaining.
3. Labour welfare schemes are flexible and ever-changing. New welfare measures are added to the existing ones from time to time.
4. Welfare measures may be introduced by the employers, government, employees or by any social or charitable agency.
5. The purpose of labour welfare is to bring about the development of the whole personality of the workers to make a better workforce.
Succession Planning
Succession planning is a process for identifying and developing internal people with the potential to fill key business leadership positions in the company. Succession planning increases the availability of experienced and capable employees who are prepared to assume these roles as they become available. Taken narrowly, "replacement planning" for key roles is the heart of succession planning. Effective succession or talent-pool management concerns itself with building a series of feeder groups up and down the entire leadership pipeline or progression (Charan, Drotter, Noel, 2001). In contrast, replacement planning is focused narrowly on identifying specific back-up candidates for given senior management positions. For the most part, position-driven replacement planning (often referred to as the "truck scenario") is a forecast, which research indicates does not have substantial impact on outcomes. Fundamental to the succession-management process is an underlying philosophy that top talent in the corporation must be managed for the greater good of the enterprise. Merck and other companies argue that a "talent mindset" must be part of the leadership culture for these practices to be effective. Succession planning is not a new phenomenon. Companies have been wrestling with ways to identify, develop, and retain their talent for decades. So why is succession planning suddenly popping up on every company's radar screen? Today's organizations are facing higher demands in a global market, with the retirement of the Baby Boomers and a widening talent gap. The home-grown, paper-based succession planning that companies relied on in the past no longer meets the needs of today's workforce. In order to achieve results, companies need to start with the basics, create a strong process, and then invest in the tools and technology to instill a talent-development mindset in their organization.
This report highlights research findings on succession planning efforts in Best in Class organizations across multiple industries. Succession planning is a process whereby an organization ensures that employees are recruited and developed to fill each key role within the company. Through your succession planning process, you recruit superior employees, develop their knowledge, skills, and abilities, and prepare them for advancement or promotion into ever more challenging roles. Actively pursuing succession planning ensures that employees are constantly developed to fill each needed role. As your organization expands, loses key employees, provides promotional opportunities, and increases sales, your succession planning guarantees that you have employees on hand ready and waiting to fill new roles.
#8 - While studying a large colony of macaque monkeys
PowerScore Staff
Complete Question Explanation
"Most Strongly Supported" questions are a subset of Must Be True questions. The correct answers do not always have to be true, but they are the answers that have the most support from the stimulus. These can be looked at as reverse strengthen questions, where the stimulus provides the most support for one of the answers, and less (or no) support for the four incorrect answers. As with strengthen questions, we do not have to prove one of the answers, just help it more than we help the others. Students who treat these as strict Must Be True questions may find themselves arguing with the correct answer because it doesn't need to be true. You should always avoid arguing with the answers, regardless of the question type, because you are never looking for perfect answers but always and only the "best" answer from among those provided. This is especially true in the case of Most Strongly Supported questions like this one.
The stimulus sets out that baby macaques imitate humans who make the same sort of gestures that adult macaques make, and they don't imitate those gestures made by humans but not made by adult macaques. We only know about four types of gestures (lip smacking, sticking out tongues, opening and closing mouths, and making hand gestures), so there may be other types of gestures missing from our data set (like nodding or shaking the head, for example). For that reason, we have to be careful not to overstate our case, but since this is Most Strongly Supported and not pure Must Be True, we could still accept an answer that is broader than our evidence may actually prove.
Answer A: This is an opposite answer, as it appears that it is the least supported choice and may even be a good Cannot Be True answer. Baby macaques don't mimic whatever they see, because they don't mimic humans making hand gestures or opening and closing their mouths.
Answer B: This answer brings in outside information about what baby macaques cannot do and about their muscle coordination. As with any question in the "Prove" family, new information is not allowed, so this answer must be rejected.
Answer C: As with answer B, this answer brings in new information. The stimulus tells us nothing about why or how adult macaques use these gestures, only that the babies imitate humans who do gestures also done by adult macaques and do not imitate humans doing certain gestures not done by adult macaques. For all we know, it has nothing to do with entertainment and may instead be a form of symbolic communication.
Answer D: Probably the most attractive wrong answer, this one fails for being too speculative about what the baby macaques are thinking. We cannot know, and there is no support for the claim, that the babies are making any mistakes. They may be perfectly aware that humans are not adult macaques, or they may be getting us mixed up with their elders, but the stimulus offers no guidance either way.
Answer E: This is the correct answer. As mentioned previously, we cannot prove this answer is absolutely true, because there may be other gestures that haven't been tested, but at least based on the data we have so far this is the answer with the most support. Out of the four types of gestures tested, the baby macaques only imitated humans when they did gestures also done by adult macaques. Since this is the answer that has the most support from the stimulus, it is the best answer of the bunch, and it is the credited response.
The first step towards avoiding internet censorship and control (alternative DNS Roots, opennic and why you should care)
As governments and corporations look to exert more control over the internet, avoiding internet censorship and promoting freedom of speech has become a central issue in shaping the internet of the future. To ensure that information is both free and uncensored, it is imperative that political and economic forces are not able to unfairly modify the internet architecture for their own purposes. At the centre of this issue is the Domain Name System (DNS).
DNS is a directory of computers and their associated names, much like a phone book. When you type an address into your browser, your computer asks the DNS service to find the IP address associated with that name, so your computer knows where to connect to get the page you have requested. The DNS is a hierarchical structure, made up of a number of Top Level Domains (TLDs). These TLDs are the right-most part of the address, like the .com, .net, etc. that we all know.
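You can watch this name-to-address lookup happen from any program, because it goes through the operating system's configured resolver. A minimal Python sketch (the hostname and port here are just illustrative inputs; the answer depends on whichever DNS servers your machine is configured to use):

```python
import socket

def resolve(hostname):
    """Ask the system's configured DNS resolver for the IPv4
    addresses associated with a hostname, much as a browser does
    before it can connect anywhere."""
    results = socket.getaddrinfo(hostname, 80,
                                 family=socket.AF_INET,
                                 type=socket.SOCK_STREAM)
    # Each result is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({sockaddr[0] for *_, sockaddr in results})

print(resolve("localhost"))  # e.g. ['127.0.0.1']
```

Swap in any public hostname to see the addresses your current resolver hands back; if that resolver is logging or blocking, this is exactly the request it sees.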
Anyone can run a DNS server. However, to resolve the domains we all know, your server needs to talk to the top-level or root servers. These servers are run by various organisations and are distributed around the world. The overall administration of DNS and IP addressing falls to an organisation called the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is a non-profit organisation which was set up by the US federal government to control DNS, which was previously within US federal remit. The US federal government has retained influence over ICANN, not least because ICANN operates within US jurisdiction. ICANN charges a large amount of money for the privilege of setting up a TLD or being a reseller for domains within a TLD, something which was free when the internet was first created.
DNS can also be used to track your internet access. This is because every site you visit generates a DNS request, which can be logged, leaving a record of all of the hosts on the internet that you connect to. DNS can also be used to censor your access; if a domain is removed or blocked from DNS, you cannot resolve the domain name to the IP address on which it is hosted, thus stopping access to the domain. Censorship using DNS blocking has already been implemented in many countries.
However, there is a solution to this invasion of your privacy. Alternative DNS root systems, which do not impose such censorship, can be used instead. These also provide an added bonus: free-to-register domains and TLDs, making DNS free, open and globally distributed, as it was always intended to be.
One such alternative root provider is OpenNIC. OpenNIC allows you to resolve a host of new TLDs whilst still allowing access to the existing, ICANN-administered domains. It's easy to use: a simple configuration change on your PC is all it takes to benefit. Click this link for more discussion on why this is a good idea and to find out how to make the simple change.
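On a typical Linux system, that configuration change amounts to listing different nameservers for the system resolver. A hedged sketch of what /etc/resolv.conf might look like (the addresses below are placeholders from the IP documentation range, not real OpenNIC servers; substitute the current server list published by your chosen provider):

```
# /etc/resolv.conf -- point the system resolver at alternative
# DNS servers. 192.0.2.x is the TEST-NET-1 documentation range,
# used here purely as a placeholder.
nameserver 192.0.2.1
nameserver 192.0.2.2
```

Note that many distributions generate this file automatically (for example via NetworkManager or systemd-resolved), in which case the change is made in that tool's settings rather than by editing the file directly.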
So there we are. Object to censorship, control and artificial costs. Join me in using OpenNIC now and help keep internet freedom alive.
Honey Bees
Honey Bees are actually very beneficial “pests” due to their important role as pollinators of many agricultural crops. They are kept in hives by beekeepers and are rented to growers for crop pollination. Honey and beeswax are both useful commodities and are extracted from the hives.
However, when their stings become a threat in or around a home or other structure, eliminating the bees becomes a top priority. When we see continual honey bee flight to and from a hole in a building, we know there is a nest. We can also confirm a nest's existence by listening for bees buzzing inside. Destroying the nest is the only option if the bees appear overly aggressive, which may indicate they are Africanized honey bees. Oftentimes, however, bees live in building walls or attics, or are tucked away where they are impossible to reach. Either way, they are a threat to people.
Eliminating or removing bee nests is a delicate operation that requires special equipment, protective clothing, and skill to prevent stings. Killing bees in a wall void with an insecticide can come with serious consequences as well; we therefore recommend using an experienced beekeeper if the bees are located in an easily accessible place such as a hollow tree.
Africanized honey bees have now been in Florida for quite some time. They are physically indistinguishable from European honey bees but exhibit a much more aggressive nature, especially when their nest is disturbed. Tests have shown that Africanized honey bees become alert to disturbances and prepare for colony defense much more quickly than Europeans. Africanized honey bees also inflict 6 to 10 times more stings than Europeans and will continue to attack for longer periods of time and at much greater distances from the nest or hive. The Africanized honey bee nest is generally smaller than its European counterpart, and their nests are often built in exposed areas such as drainage culverts and under highway overpasses.
Lastly, dead bees and their dead brood will decay and produce strong odors. Stored honey can absorb moisture and ferment, or overheat without adult bees to tend it. This can result in burst cappings, with honey leaking from the combs and penetrating ceilings or walls, causing stains, sticky puddles around doors or windows, and softening of drywall. Other pests, such as ants, carpet beetles, moths, and cockroaches, could become a problem, as they will be attracted to the odor of the honey and the dead bees.
To avoid these problems it is very important to remove all traces of the nest after the bees are treated. This may require a contractor to open a wall void or ceiling area. X Terminator Pest Control can recommend a contractor or professional cleaning service that can remove remaining honey, comb, and dead bees. It is also essential to plug all holes where bees were entering the structure as old nesting sites are extremely attractive to new swarms of honey bees as well as other insects.
Please call X Terminator Pest Control for more information about honey bees.
Indigenous geraniums.
Geranium (Wayside garden centre} Pelargonium (Davenport garden centre)
Around the world these perennials thrive as wildflowers. Famous for their delicate, jewel-toned flowers, attractive foliage and low mounding habit, they make ideal plants for a wildflower garden.
Plants from this family, especially the geraniums and pelargoniums, have been hybridized and are widely cultivated the world over for their spectacular displays of flowers and striking colours.
Geranium is a genus of 422 species of flowering annual, biennial, and perennial plants that are commonly known as the cranesbills. There are 33 species in southern Africa, of which 4 occur in the Cape. (I'm discounting the introduced species which have naturalised in many parts of our area.)
In the surrounding areas of Knysna we have 3 indigenous species: Geranium incanum var. incanum, Geranium incanum var. multifidum and Geranium ornithopodon.
The name Geranium is derived from the ancient Greek word geranos, a crane, referring to the similarity of the long-beaked fruit (seed capsule) to the bill of the crane; incanum = hairy, hoary, grey or silver coloured; multifidum = many-divided (referring to the leaves); ornithopodon = "bird feet", from the Greek ornithos ("bird") and pous ("feet"); …
Geranium incanum (Wayside garden centre)
Geranium incanum Burm.f. var. incanum
Family: Geraniaceae
Common names: Carpet Geranium; Horlosies, Vrouetee, Bergtee, (Afrikaans); ngope-sethsoha, tlako (Sotho).
It carpets the verges of national roads and covers large patches of grassland at the coast. The leaves are deeply divided with the odd leaf turning shades of yellow, orange and red.
Distribution and habitat
It occurs naturally in the southwestern and eastern parts of the country where it can be found scrambling about through natural vegetation. They are plentiful in Steenbok Nature Reserve, Leisure Isle and can also be seen along the N2 to George.
This plant is traditionally used by both Africans and Europeans to make a medicinal tea from the leaves, which is used to offer relief from certain complaints such as bladder infections, venereal diseases, and conditions relating to menstruation.
Growing Geranium incanum
Planted near walkways, they soften the edges, in rockeries they can be tucked into crevices creating a softness in a hard landscape.
Geranium incanum is easily propagated from both seed and cuttings. Selected forms, such as those with darker coloured flowers, are best grown from cuttings. Fresh seed sown in spring or autumn is easily germinated and will produce a variety of darker and paler forms. Seed can be sown directly onto a well prepared seedling medium in trays and lightly covered. Once watering has been commenced the trays should never be allowed to dry out completely. Seedlings can be transplanted into separate containers once they are large enough to handle.
The beautiful Carpet geranium flowering on a road verge
Geranium ornithopodon
Geranium ornithopodon Eckl. & Zeyh. Eastern Cape, Western Cape
Family: Geraniaceae
It differs from G. incanum by having lobed, velvety leaves with pretty dark-veined pink flowers. It is commonly found at forest margins near the coast and on the lower mountain slopes of the western and eastern Cape. There is a thriving colony growing along the Phantom Pass road, just beyond Lightly's.
This plant is not commercially available, but it can be grown easily from seed or cuttings, just like G. incanum.
Cringe Definition. What does Cringe mean?
Cringe has two meanings. It can mean that someone moves or bends their body in a way to indicate that they are afraid. This usually includes folding the body inward and slightly turning away from the source of the fear. When someone cringes they do not move or run away.
Cringing is something that is done while standing still (or at most only moving back a few steps). Although it is possible for someone to cringe in fear, and then run away from the perceived danger. This definition is often used in past tense, as it describes something that has already or just happened.
• The boy cringed in fear as he heard the dog’s loud bark.
• The woman cringed in fear when the villain suddenly appeared on the movie screen.
Synonyms for Cringe
There are many synonyms for this definition of cringe. Here are some of the most commonly used.
Cower- refers to when someone physically shrinks because of fear. When someone cowers in fear they tend to get low and hide from the perceived threat.
• The family cowered in fear when they heard the burglar break into their house.
• He cowered in fear when he saw the bear.
Shrink- can be used in different ways.
As a synonym for cringe, it refers to when someone physically makes themselves smaller, or moves backwards, due to fear or to feeling overwhelmed or overpowered by someone else. Shrinking doesn’t always have to be a physical movement; it can be a backing down. A weaker person can shrink to the wishes of a stronger person.
• Even though John thought his idea was better, he didn’t have the courage to fight harder for himself, and instead shrank in fear before his louder colleague.
• She would shrink in fear every time the man came into the room.
Recoil- refers to someone quickly or suddenly flinching or springing backwards in fear or in disgust. Oftentimes when someone recoils, it’s because they are reacting to something a person is doing at the moment.
If someone says something or does something that makes someone afraid or disgusted, or have another negative reaction, that person may recoil. When someone recoils they tend to move away from that person.
• After learning the truth she recoiled when he tried to touch her.
• He recoiled in surprise after being shocked by the metal bar.
Cringe can also be a feeling of embarrassment or disgust. In this case cringe is more of an internal reaction than the first definition. One may cringe at something that happens to them, or they may cringe at watching something happening to someone else.
• I cringed out of sympathy when the student was laughed at for answering the question incorrectly.
• I cringed out of disgust when I realized that she wasn’t my friend, but had just been using me the whole time.
Synonyms for Cringe
Here are some synonyms for this usage of cringe.
Wince- refers to someone making a grimace or perhaps shrinking inward in reaction to something. Winces are usually made as a reaction to pain and the movement is usually involuntary.
• He winced in pain after he was hit.
• She tried not to wince as the doctor stuck the needle in her arm.
Shudder- refers to when someone shakes, or trembles in fear. While shudder is often used to describe how one reacts to fear, one can also shudder in disgust.
• The abused dog shuddered when his rescuer first touched him.
• I shudder to think about how he is going to react to this.
Antonyms for Cringe
Here are two common antonyms for cringe. They can be used for both usages of the word.
Advance- means to move forward in a meaningful, purposeful way.
• The boy confidently advanced across the room towards the stranger.
• The team advanced the ball forward.
Approach- means to come near or to come close to someone or something.
• The dog cautiously approached the front door to his new house.
• When meeting someone for the first time she always approaches them with confidence.
Bonus Word
Cringeworthy- means that something is embarrassingly bad. This word has nothing to do with fear; it’s simply a reaction to something being so awful it’s embarrassing, for example a bad joke or a really bad performance. When something is cringeworthy someone will usually grimace or have some sort of involuntary physical reaction.
• The movie was unwatchable, it was absolutely cringeworthy.
• Watching her yell and make a fool of herself was cringeworthy.
What Are Nutrients and Why Do You Need Them?
Tablescape with fish, vegetables, and smoothies
The dictionary definition of "nutrient" is something that provides nourishment, which is a broad definition. But in the field of nutrition and diet, nutrients are more specific. In fact, there are six specific categories of nutrients, all of which are necessary to sustain life: carbohydrates, proteins, fats, vitamins, minerals, and water.
Grouping the Nutrients
Humans like to put things into categories because categories make it easy to remember what things do and to compare and contrast them with other things. In nutrition, we often group nutrients by size or by what they do in the body. We start with two groups, micronutrients and macronutrients (water is usually left alone in its own group).
Carbohydrates, proteins, and fats are called macronutrients because they're large, and energy nutrients because they provide the fuel your body needs to do things. Vitamins and minerals are called micronutrients because they're much smaller in comparison. That doesn't mean they're less important; they're still essential nutrients, but you only need little bits.
Micronutrients can be classified by whether they're soluble in fat or soluble in water. Vitamins A, D, E, and K are fat-soluble, and the B-complex vitamins and vitamin C are water-soluble. Minerals are grouped as major minerals or trace minerals, depending upon how much of each mineral is necessary.
You can also group nutrients by whether or not they are organic, by which we mean organic chemistry, not organic farming or food production. Water and minerals are inorganic while all the rest are organic because they contain carbon atoms.
Three Reasons Why Nutrients Are Important
1. They provide energy. Carbohydrates, fats, and proteins provide the energy your body needs to carry out all the biochemical reactions that occur throughout the day (and night). The energy is measured in calories (kilocalories, technically, but we usually just call them calories). Gram for gram, fat has more calories than either carbohydrates or protein: one gram of fat has nine calories, while carbohydrates and protein each have four calories per gram.
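The 4/4/9 calories-per-gram rule is simple enough to work out by hand or in a few lines of code. Here is a minimal sketch; the meal values are made-up illustrative numbers, not taken from any food database:

```python
# Calories contributed by each macronutrient, per gram (the 4/4/9 rule).
CALORIES_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def total_calories(grams_by_macro):
    """Sum the calories from grams of carbohydrate, protein, and fat."""
    return sum(CALORIES_PER_GRAM[macro] * grams
               for macro, grams in grams_by_macro.items())

# Hypothetical meal: 50 g carbohydrate, 20 g protein, 10 g fat.
print(total_calories({"carbohydrate": 50, "protein": 20, "fat": 10}))  # -> 370
```

Notice that the 10 grams of fat contribute 90 of the 370 calories, even though they are only an eighth of the meal's weight.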
2. They're necessary for body structures. Fats, proteins, and minerals are used as raw materials to build and maintain tissues, organs and other structures such as bones and teeth. Carbohydrates aren't on this list, but your body can take any extra carbohydrates and convert them into fat, which can be stored in adipose tissue.
3. They help regulate body functions. All six classes are involved in regulating various body functions such as sweating, temperature, metabolism, blood pressure, and thyroid function, along with many others. When all of these different functions are in balance, your body is said to be in homeostasis.
Not Quite Nutrients, But Still Important
You might have read about phytonutrients, which aren't included in the major classes. That's probably because they're fairly new in the world of nutrition research and aren't essential for survival.
Phytonutrients are chemical compounds found in plants that offer potential health benefits. Since they typically occur in foods that are also nutritious, it can be difficult to know how much of the health benefit is due to the regular nutrients or the phytonutrients. Some better-known phytonutrients include polyphenols and carotenoids.
Fiber is a type of carbohydrate that your body can't digest so it doesn't provide energy or structure. Fiber is necessary for digestive system function because it adds bulk to stool, so it is easier to eliminate. There are two types of fiber: soluble fiber that dissolves in water and insoluble fiber that doesn't dissolve.
|
5 Must-Know SAT Math Tips
Your SAT Test Day is approaching, and you need SAT Math tips and strategies to maximize your score. Remember that your SAT Math subscores will reflect how you perform on specific questions tied to The Heart of Algebra, Passport to Advanced Math, and Problem Solving and Data Analysis concepts.
SAT Math Tip #1: Use this approach to answer every SAT math question
Step 1: Read the question carefully and identify what you know
• What information am I given?
• Separate the question from the context.
• How are the answer choices different?
• Should I label or draw a diagram?
Step 2: Choose the best strategy to answer the SAT Math question
• Look for Patterns.
• Pick numbers or use straightforward math.
Step 3: Check that you answered the right question
• Review the question stem.
• Check units of measurement.
• Double-check your work.
SAT Math Tip #2: Use this method for multi-part Math questions
Step 1: Read the first question in the set, looking for clues.
Step 2: Identify and organize the information you need.
Step 3: Based on what you know, plan your steps to navigate the first question.
Step 4: Solve, step-by-step, checking units as you go.
Step 5: Did I answer the right question?
Step 6: Repeat for remaining questions, incorporating results from the previous question if possible.
SAT Math Tip #3: Translate words into math
Translate the words in the question into math so you can solve.
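As a quick illustration, take a made-up stem such as "Five less than twice a number is 13." Each phrase maps onto a piece of the equation 2x − 5 = 13, which you can then undo step by step (this is a sketch of the translation habit, not an official SAT item):

```python
# "Five less than twice a number is 13."
#   "twice a number"  -> 2 * x
#   "five less than"  -> 2 * x - 5
#   "is"              -> =
# So the equation is 2x - 5 = 13. Undo the operations in reverse order:
x = (13 + 5) / 2   # add 5 to both sides, then divide by 2
print(x)  # -> 9.0
```

The point is the mapping: once every phrase has become a symbol, the algebra is routine.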
SAT Math Tip #4: Review number properties and SAT math relationships
Recognizing number properties will save you time on Test Day. Number properties rules include odds and evens, prime numbers, and the order of operations. You can pick numbers to help you remember the rules.
For SAT math relationships, knowing the difference between ratios, proportions, and percents can save valuable time. Being able to move easily among percents, fractions, and decimals will also save time.
SAT Math Tip #5: Make sure your calculator is allowed and bring it with you on Test Day.
Check the official SAT website to make sure the calculator you plan to use on the SAT math section is allowed.
Contact us to learn how we can help increase your SAT math score!
One-To-One ACT Prep & SAT Prep: Charlotte NC
Why does one-to-one tutoring work so well? We often get asked this question. It's because we focus on the individual student and not a classroom of students. The largest ACT/SAT score increases are seen when ACT/SAT prep is customized for the individual student. This is also why a baseline test is so effective. Our ACT/SAT practice test score reports are used as a student diagnostic.
The ACT/SAT tutor will home in on the specific needs of the individual student. The diagnostic report reveals trends such as pacing and themes among certain question types. On the SAT reading section, the student may have the most trouble with command of evidence. For the ACT math section, it might be quadratic equations that need the most attention. Our ACT/SAT tutors will come to students' homes on weekdays or weekends.
Contact us to get started with your customized ACT/SAT prep program. |
WILL 75th Anniversary Celebration
During the earliest years of radio, you would have listened to something like that. That is, if you were a radio hobbyist who constructed your own crude receiver from wire and an oatmeal box, you would have picked up any number of signals in Morse code from other amateurs, U.S. military operators, and commercial point-to-point radio operators. Voices, music, and Morning Edition came much later. In those early years, radio was all dots and dashes. But as University of Michigan media scholar Susan Douglas explains, then came the Audion tube, and that changed everything. The Audion tube, which most people haven't heard of, was the precursor to the vacuum tube, which of course revolutionized both radio reception and, later, radio transmission, when it was discovered in the teens that the vacuum tube could oscillate as well as receive radio signals. So it really was the foundational technology, first for radio reception and then for radio transmission. Tubes such as the Audion could transmit and receive voices and music.
But tubes were expensive and unreliable. Fortunately, at about the same time, someone discovered that a simple crystal could serve as an effective and rugged receiver. There was an explosion of ham operators using these crystals and building their own sets beginning in 1907. Their model for reception, conceptually, was the Audion tube, but very few kids could afford it, and so they were really using crystal detectors. And what they began to do was build radio clubs around the country. They were especially prevalent in port cities: New York, Chicago, Detroit, Washington, D.C., and Baltimore. And by 1910 there were more of these ham operators on the air than there were either commercial operators or military operators. More and more people had radios, but the idea of broadcasting had not yet taken hold. Rather, radio was viewed as an improved version of point-to-point wired communication. According to University of Wisconsin media scholar Robert McChesney, it was replacing wired communication.
Typically, in those days, that meant ship-to-shore communication. That's why the Navy played such a central role in radio during its first 20 or 30 years of existence: ships used it to stay in touch when they were out in the middle of the ocean, and that was a revolutionary development. In these very early days of radio, the electromagnetic spectrum on which it operates was a vast, unexplored territory. Following the example of Guglielmo Marconi, the self-proclaimed inventor of wireless, the Navy and business users thought of radio as a point-to-point medium, says University of Illinois journalism professor Jerry Landay. Marconi's vision was that radio would be used to provide information traffic from ship to shore and back, namely by very wealthy people who liked to cruise to Europe and back and who wanted access to other people, so that there would be point-to-point messages. He also envisioned sending news bulletins so that while they were at sea they would have a sense of the events of the day.
So he was really one of the first to anticipate radio as a carrier of news and information, but on a point-to-point basis. Secondly, it would be used by ships when they were in trouble. And that's where his thinking stopped. Meanwhile, the growing number of ham radio operators had a different idea. It suddenly dawned on a number of people, mostly citizen hams or amateurs, that they could build their own transmitters. They could pick up their fiddles and bows and play, or sing, or invite the neighbors in to do the same thing over their homemade radios while other people listened on their homemade receivers. At the time, no laws were in place to regulate who could operate a radio transmitter or what they could do with it. While hams were on the cutting edge of what became known as broadcasting, they also soon brought about the first wave of radio regulation, says Susan Douglas. They shared their work, they shared technical information, and some of them, anticipating the computer hackers of today, especially sought to taunt the U.S. Navy, because they didn't like the military's control of the airwaves and they also liked to defy authority. So they would do things like deliberately interfere with naval operators at various naval bases in New York, Newport, Rhode Island, et cetera. They would also pose as admirals and announce that some naval ship was foundering at sea, and when it sailed into port safe and sound, of course, military officials were furious. Some of this pranksterism had, by 1910, led military officials to go to Congress and ask that the ham operators be driven off the airwaves. In April 1912, the ocean liner Titanic hit an iceberg, and public perceptions of radio were changed forever. In the sinking of the Titanic, radio played the roles of both hero and villain. If not for the wireless operator on board who broadcast a distress signal, none would have survived. But from the tangle of wireless traffic in the wake of the disaster, someone had assembled the message "all Titanic passengers safe, towing to Halifax," which was printed in The New York Times. Unfortunately, says Susan Douglas, the message was a pastiche of at least two other messages, and a great deal of blame was placed on ham operators, who were thought to have maliciously put together this message. That seems not to have been the case, but that, combined with the enormous amount of interference the rescue ships confronted when they were trying to signal back and forth to get messages to land about who had survived and what was the status of various survivors, prompted Congress to believe that the hams were now constituting a menace to the airwaves, because there were too many of them and there was too much interference. So what happened was that radio regulation was passed in 1912 that required everybody to get a license if they were going to transmit. But it kicked the hams out of the main part of the spectrum, which was basically a portion of the EM spectrum that's unused today, and they were kicked down to the short waves, which were then thought to be completely useless.
But the hams were not about to be shut down, and it turned out that short waves traveled even farther than longer waves on the AM band. Susan Douglas: What had also begun to happen was experimental transmission of voice and music between 1912 and 1917. Lee de Forest, the inventor of the Audion, covered the 1916 elections from his experimental station north of New York City, where he announced that Charles Evans Hughes had won the election and went to bed. De Forest also covered the 1916 Harvard-Yale football game. Some music was being sent out experimentally, so the hams were beginning to test the airwaves for the transmission of voice and music, as well as to send signals back. Obviously there was enormous concern about how the airwaves were being used during World War One, and due to security concerns the U.S. government banned all amateur broadcasting, shut down all ham operations during this period, and confiscated many transmitters. But radio expertise was badly needed in the military, and thousands of ham operators were given work in the war effort. Thousands more were trained, and these young men, and they were mostly young men, formed what would soon become the first audience for broadcasting. In 1919 the ban on private broadcasting was lifted, and with more people trained in radio technology and thousands of war-surplus vacuum tubes available, amateur radio experienced its biggest boom. For hams, DXing became a favorite pastime: the object was to tune in stations as far away as possible. According to Brown University historian Susan Smulyan, the best thing was that they would say their call letters and where they were broadcasting from, so you could hear them, and then you could go downstairs, if you were in the house, or you could go to school the next day and brag that you'd heard something from far away. The stage was set for a major shift in thinking about radio. Radio began as a new and improved point-to-point medium. It was then used by amateurs to reach out to distant places, to connect with other radio enthusiasts. But now the broadcast era was about to begin. What happens is these guys take this interest in talking across long distances into being radio listeners, into being members of a radio audience, which is just beginning to coalesce, people who think of themselves as radio listeners. In the early twenties, if any single place and time marked the start of the broadcasting boom, it was the Pittsburgh radio station KDKA on November 2, 1920. Susan Douglas: What happened is that this guy Frank Conrad, who was a ham and who had gotten some of the surplus tubes from World War One, began sending out what were called wireless concerts from his garage outside of Pittsburgh. The wireless concerts consisted of the following extremely high-tech approach: take a pretty crude microphone, stick it in front of a pretty junky phonograph, and broadcast music out into the air. Well, this became something of a sensation, and a department store in Pittsburgh began advertising components to make your own receiver so that you could listen to Conrad's broadcasts. Well, the head of Westinghouse, not being a fool, and Conrad was a Westinghouse employee, realized he'd better get Conrad out of the garage and onto the roof of the Westinghouse plant and bring him into the fold, which indeed they did. And the inaugural broadcast for what became KDKA, one of the most famous stations in America, was the coverage of the 1920 presidential election returns. And the explosion was on. Robert Mayer, now a retired University of Illinois professor of economics, recalls listening to the election returns on KDKA: I remember listening to the national election returns in 1920. There were very few radios around then, and the only radio that I knew of was run by a fellow out of his garage, who set up folding chairs for a whole bunch of us. That was 1920. The radio craze had begun. Suddenly, says Robert McChesney, everyone had to have a radio station. You've got a lot of commercial enterprises starting radio stations: newspapers, car dealers, utility companies. And they would use the radio station not as a commercial enterprise; they would use it almost entirely to do what we call public relations today, to spread favorable publicity about their core enterprise, and as a rule it would not have advertising on it as we know it today. Department stores and car dealers, libraries, schools, and churches all quickly got in on the radio act. Robert Mayer recalls one of his first radio encounters.
There was a bank there that had a small table radio. The bank would set the radio on a kind of table outside their door, and people from town would go and gather on the steps. Susan Douglas says educational institutions quickly joined the broadcasting boom: Colleges and universities got involved. There was a lot of eager thought that they would be the universities of the air, and people wouldn't necessarily have to go to a university anywhere; they could sit in the comfort of their homes, listen to the great lecturers of America, and get their courses that way. So colleges and universities began to set up radio stations, and this just really exploded between 1920 and 1922. In fact, many of the advances in the science and the art of broadcasting came from the educational sector, says Robert McChesney: Educational radio, that was the pioneering force of radio broadcasting in the United States. In the first few years, that's where the action was. There were other things, but the people who really ran with it were the people at places like here in Madison and Champaign-Urbana, stations associated largely with the major land-grant universities, the universities that saw their mission as serving the entire population of the state. And that's who ran the big stations, the major stations started in the early twenties by the colleges. But there were literally hundreds of licenses granted to colleges and universities to do radio broadcasting in the first half of the 1920s. Well before 1921, University of Illinois faculty and students had been experimenting with radio technology. Electrical engineering professor Hugh Brown supervised construction of a small spark transmitter in his laboratory, and in October 1921 he received an experimental license with the call letters 9XJ. Spark transmitters could only broadcast in Morse code, and 9XJ was used mostly for engineering experiments. University of Illinois Professor Emeritus Robert Mayer: It was as experimental as the computer when it started. It grew up in the same way: it began in somebody's laboratory, and then the operation got too big for that laboratory and they had to provide a space for that sort of thing.
Students and faculty in electrical engineering had been tuning in Friday night broadcasts from KDKA, and that inspired them to build a vacuum tube transmitter of their own. On March 28, 1922, the University of Illinois received a license for the new transmitter with the call letters WRM; Hugh Brown told people this stood for "we reach millions." With a single 50-watt vacuum tube, it's unlikely the signal traveled very far, but WRM was quickly seen as a way to extend the university's educational mission, says University of Illinois Professor George Douglas, author of the book The Early Days of Radio Broadcasting: They tried to provide a mix of things, and occasionally they would get into broadcasting the home football games and so forth. But almost all of them would try to provide things relating to the kinds of courses they had. They would have agricultural programming; they would have grain prices and things like that. In August 1926, University of Illinois alumnus Boetius Sullivan donated money for a new radio station in honor of his late father. He said the station was to be used "as a means of educational aid to the boys and girls of the state of Illinois, to whom my father was so endeared." Mr. Sullivan's gift included a new 1,000-watt transmitter and enough money for a new radio building and tower. In December 1926, the station moved into the Roger Sullivan Memorial Station at the west end of Illini Field on Wright Street in Urbana. The University of Illinois was ready for the educational radio boom. George Douglas: Now, it just so happens that the University of Illinois station had planned for this in a big way, in that they had a lot of places on campus that were wired for radio, lecture halls and Smith Music Hall. The university installed underground lines to 27 campus locations. These lines allowed for live broadcasts of classroom lectures by university professors and concerts by university bands, and the station became the university of the air. The mission of the station became to provide education in the broadest sense, a mission that carries through today. The one thing that was different in those days from the mission of WILL today is that they did try for a while to do something like formal instruction: they would have a professor come in and read a lecture at a certain time. Robert Mayer remembers visiting the station for a live broadcast of the men's glee club: The first time that I joined a group coming over, and I think it was probably the first time they had come over, was in the spring of 1929. Radio was something a lot bigger then than it is today. I mean, later it was more or less routine to have lots and lots of things on the radio, but that was really quite something. We looked forward to that; we were very proud of that.
Meanwhile, the number of radio stations was growing rapidly. Robert McChesney says thousands of radio transmitters began to literally jam the airwaves: There were 90 or 96 channels on the AM band, the relevant band in the 1920s for radio broadcasting. And if people used very low power, you could have more than one person using a channel in the country at the same time; you could have several, in fact. But the technology wasn't as advanced as it is today, or would become just a couple of decades later. So it did put distinct limits on the number of people who could broadcast, and that limit was quickly reached by the mid-1920s. This soon led to what became known as the chaos of the air. That's the official historical term: the period between 1926 and early 1927 is the chaos-of-the-air period. What happened, in effect, is that the Secretary of Commerce, or the Commerce Department, which had sort of been the de facto regulatory body of radio, stopped issuing licenses or trying to regulate the airwaves, saying it was hopeless, in order to force Congress to basically pass a law to set something up to do the job and do it right. In 1927 Congress passed the Radio Act of 1927, which established the Federal Radio Commission as a temporary body, basically to come in and clean up the airwaves and assign licenses so that people would not have static on their radio receivers from stations fighting over signals, but would get one station for each frequency they were able to get. The Radio Act of 1927 created the Federal Radio Commission and empowered the FRC to reallocate the airwaves, to decide who would be allowed to broadcast. Congress wanted to quiet the chaos and to devise a rational process for issuing licenses. But Jerry Landay says the rationale was never quite clear: The airwaves, by statute, are deemed a public resource, to be operated in the public interest. Now, nobody, however, ever defined what that really meant. The FRC would have to deal with competing claims from thousands of nonprofit and commercial broadcasters who wanted their own licenses, but added to their claims were those of the newly forming commercial networks, says Susan Douglas: One of the things that you have happening is NBC is formed in 1926 and CBS is formed in 1927, and they're networks, and they're starting to battle for affiliates. What they're trying to do is get as many local and regional stations as possible to be affiliates of the commercial networks. So you have that element of competition, and of course NBC and CBS have money. When you have money it means you have good lawyers, and when you have good lawyers and experts it means that when you go before the commission to battle for a license, you have an enormous advantage. Directed by general counsel Louis Caldwell, the Federal Radio Commission resolved the competing claims by adopting a particular definition of the public interest, says Jerry Landay: What was done was to call the noncommercial stations "propaganda stations." The line was that the International Order of Odd Fellows, or the local library, or the unions that ran some radio stations, or the municipal stations, were feeding their listeners propaganda. On the other hand, the commission called those stations which tended to be commercial, like WGN Chicago, "general public interest" stations, meaning they were good. Since broadcasting was beginning to be professionalized, it may come as no surprise that licenses were given to those deemed most professional. But Susan Douglas says the reallocation had an unfortunate effect on nonprofit broadcasters: The FRC had a, not surprisingly, corporate bias and gave the best slots on the spectrum to those stations that had the biggest power transmitters and the most money invested in them. What they made labor stations and college stations do was share wavelengths with each other, to do timesharing, and some of the stations were not allowed to transmit at night, so as to leave room for some of the commercial stations. So you see an enormous drop in the number of educational and labor stations between 1927 and 1934. George Douglas has researched the decline in educational broadcasting during this period.
The educational stations, of which there were, I think, about 200 by 1928, had to take a big cut, and by the mid-thirties there were only about 35 left. The University of Illinois radio station WRM had been assigned to 1100 kilocycles since 1925. With the 1928 reallocation plan, WRM became WILL and was moved to 620 kilocycles, ordered into timesharing with WCFL, the Voice of Labor in Chicago, and WJJD, run by the Order of Moose. Before station manager Joseph Wright could arrange a time division with those stations, he received another telegram from the commission saying WILL was reassigned again, this time to 570 kilocycles, and would share time with WHA at the University of Wisconsin and WPCC of the Chicago North Side Congregational Church. Immediately, three commercial stations challenged WILL and applied with the FRC to be placed on the same frequency, so WILL was reassigned yet again, this time to a frequency of 890 kilocycles, just 20 kilocycles away from the very powerful WLS. Again WILL was to share time with two other stations, KFNF in Shenandoah, Iowa, and KUSD in Vermillion, South Dakota. Jerry Landay: You were supposed to share time with, usually, a commercial rival. The commercial rivals took a certain portion of the day, and then the noncommercial station went on the air, and usually the hours assigned to the commercial stations were the hours in which most people listened. And ultimately, for those stations who lost this game, the noncommercial stations, the commission was able to say: sorry, you're not serving the public interest; nobody listens to you.
So you had one of these wonderful bureaucratic catch-22s. The FRC also limited WILL to 1,000 watts of power during the day, reduced to 250 watts at night. The battle for a usable frequency on which to broadcast began to take a toll on WILL's management. Among other things, they couldn't afford the process of sending lawyers or legal teams or station managers to spend long hours in Washington, D.C., cooling their heels at the FRC and filing document upon document to establish their right to a channel. In fact, Joseph Wright, who was the manager at the time, lamented in letters the onerous burden of having to spend money on these battles, which a lot of stations could avoid. WILL's archives contain a number of personal statements from the time, including this one from station manager Joseph Wright: Those of us responsible for the direction of station WILL have at various times seriously contemplated retirement from the broadcasting game.
This is due solely to the fact that our power limitations so restricted our audience that we felt we could not justify the expenditure of tax money to go ahead with the work, even though the total yearly cost of operating the station is absurdly low. Our only problem has been that of making it possible to carry to a sufficient number of our citizens the endless supply and constant stream of educational material and of agricultural, commercial, and industrial information that is ours to give. Joseph Wright continued to press for a better frequency for WILL. Finally, in 1935, the Federal Communications Commission, which succeeded the FRC, granted a license for WILL to broadcast on 580 kilocycles using a directional antenna. This latter requirement was to protect the signal of station WIBW in Topeka, Kansas, which already occupied the 580 channel. WILL remains on 580 kilocycles today, broadcasting at 5,000 watts during the day but only 100 watts at night.
One of the amazing chapters in this station's story is that the leadership of the radio station was able to swim through this morass and the loaded deck, get 580, and win a permanent place on the air. WILL can now celebrate 75 years of educational broadcasting and public service. It was never guaranteed, but those who have worked here, those who have supported the station, and those who have listened throughout the years have kept it alive. Here's hoping for the next 75. [Montage of station identifications:] This is MORNING EDITION... from National Public Radio News in Washington, I'm Carl Kasell. This is ALL THINGS CONSIDERED, I'm Robert Siegel. And I'm Linda Wertheimer. This is FRESH AIR... This is AM 580's MORNING EDITION, I'm Craig Cohen. Good afternoon and welcome... This is ALL THINGS CONSIDERED on AM 580...
This record is featured in “Documenting and Celebrating Public Broadcasting Station Histories.”
WILL 75th Anniversary Celebration
Contributing Organization
WILL Illinois Public Media (Urbana, Illinois)
If you have more information about this item than what is given here, we want to know! Contact us, indicating the AAPB ID (cpb-aacip/16-79v15q57).
Jack Brighton hosts this program, which reviews the history of radio broadcasting and the beginning of WILL's history, on the station's 75th anniversary. Brighton reviews the beginning of radio technology, its acceptance into popular culture, and the University of Illinois's first ventures into the broadcasting world. The program includes commentary from many experts.
No copyright statement in content.
Host: Brighton, Jack
Interviewee: Douglas, Susan
Interviewee: Mayer, Robert
Interviewee: McChesney, Robert
Interviewee: Landay, Jerry
AAPB Contributor Holdings
Illinois Public Media (WILL)
Identifier: will_am_971019_75th_anniversary_celebration_dat (Illinois Public Media)
Format: DAT
Generation: Master
Duration: 02:00:00
The nuclei of certain naturally occurring isotopes, and of others produced artificially, contain excess energy, i.e., they are unstable. To attain stability, nuclei with excess energy emit that energy in the form of nuclear, ionizing radiation and, in that process, frequently change into different elements. (See paragraph 215e.) (Ionizing radiation is defined as radiation capable of removing an electron from a target atom or molecule, forming an ion pair.) Isotopes, the nuclei of which emit ionizing radiations to achieve stability, are termed radioactive. Radioactive isotopes are referred to as radioisotopes or radionuclides.
a. Radioactive Decay. The process wherein radionuclides emit ionizing radiation is also termed radioactive decay. Each radioisotope has its own characteristic decay scheme. A decay scheme identifies the type or types of ionizing radiation emitted; the range of energies of the radiation emitted; and the decaying radioisotope’s half-life.
b. Half-Life. Half-life is defined as the time required for half of the atoms of a given sample of radioisotope to decay. Half-life values range from fractions of a millionth of a second to billions of years. Theoretically, no matter how many half-lives have passed, some small number of nuclei would remain. However, since any given sample of radioactive material contains a finite number of atoms, it is possible for all of the atoms eventually to decay.
c. Data Plotting. Radioactive decay may be plotted in a linear form as shown in Figure 2-X or in a semilogarithmic form as in Figure 2-XI. The latter has the advantage of being a straight-line plot. The straight-line form is used extensively in radiation physics, particularly when dealing with isotopes with short half-lives, since it allows direct determination, by simple inspection, of the activity at any given time with a precision adequate for most purposes.
214. Measurement of Radioactivity.
a. The international system of units is based on the meter, kilogram, and second as units of length, mass, and time, and is known as the Système International (SI). The amount of radioactivity in a given sample of radioisotope is expressed by the new SI unit of the becquerel (Bq). The old unit was the curie (Ci). One becquerel of a radioisotope is the exact quantity that produces one disintegration per second. The curie is 3.7 x 10^10 disintegrations per second. Thus 1 Bq = 2.7 x 10^-11 Ci and 1 Ci = 3.7 x 10^10 Bq. As the becquerel is inconveniently small for many uses, just as the curie was inconveniently large, prefixes such as micro (µ) (10^-6), milli (m) (10^-3), kilo (k) (10^3), mega (M) (10^6), and giga (G) (10^9) are routinely used. Following nuclear detonations, the amounts of radioactive material produced are very large, and the terms petabecquerel (PBq) (10^15 Bq) and exabecquerel (EBq) (10^18 Bq) may be used. The term megacurie (MCi) (10^6 Ci) was formerly used.
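The conversion factors in this paragraph can be checked with a short script (a minimal sketch; the numeric constants come straight from the definitions above, and the function names are my own):

```python
BQ_PER_CI = 3.7e10  # 1 Ci = 3.7 x 10^10 disintegrations per second

def ci_to_bq(ci):
    """Convert activity in curies to becquerels."""
    return ci * BQ_PER_CI

def bq_to_ci(bq):
    """Convert activity in becquerels to curies."""
    return bq / BQ_PER_CI

# 1 Bq is a tiny fraction of a curie: about 2.7 x 10^-11 Ci
print(bq_to_ci(1.0))
# A megacurie (fallout-scale amounts) expressed in petabecquerels:
print(ci_to_bq(1e6) / 1e15)  # 1 MCi = 37 PBq
```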
b. The amount of radioactive material available at any time can be calculated by using a specific mathematical formula:
At = A0 x e^(-λt)
from which the following can be derived:
λ = 0.693 / T1/2
c. The terms in these formulae are as follows:
(1) At = activity remaining after a time interval, t.
(2) Ao = activity of sample at some original time.
(3) e = base of natural logarithms (2.718…).
(4) λ = decay constant of the particular isotope, derived from the half-life.
(5) t = elapsed time.
(6) T1/2 = half-life of the particular isotope.
d. This formula can be used to calculate the activity (A) of an isotope after a specific time interval (t) if the half-life (T1/2) and the original activity (Ao) are known.
(1) Example: If 3.7 x 10^10 Bq (= 1.0 Ci) of 60Co (cobalt) is the original amount of radioactive material at time t0, what will be the activity of the 60Co remaining 1 month later?
A1 month = activity remaining after 1 month (t)
A0 = 3.7 x 10^10 Bq (original activity)
T1/2 = 5.27 years (half-life of 60Co is 5.27 years)
t = 1 month (time elapsed since the original time).
(2) Substituting in the formula gives the following:
A1 month = 3.7 x 10^10 Bq x e^-(0.693/5.27 years)(1 month)
(3) All values have to be converted to the same time units, in this case, years. Therefore:
t = 1 month = 1/12 year = 0.083 year
A1 month = 3.7 x 10^10 Bq x e^-(0.693/5.27)(0.083) = 3.7 x 10^10 Bq x 0.989 = 3.66 x 10^10 Bq
(4) In other words, the activity of 60Co after 1 month is 0.99 of its original activity, a reduction of only 1%. This could not be determined with precision from a graphic plot of activity versus time.
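The worked 60Co example can be reproduced in a few lines of code (a sketch that applies the decay law At = A0 e^(-λt) with λ = ln 2 / T1/2, as given in this section; the function name is my own):

```python
import math

def activity_remaining(a0, half_life, elapsed):
    """Return activity after `elapsed` time, given original activity `a0`.

    `half_life` and `elapsed` must use the same time units.
    """
    decay_constant = math.log(2) / half_life  # λ = 0.693 / T½
    return a0 * math.exp(-decay_constant * elapsed)

# 60Co example: A0 = 3.7 x 10^10 Bq, T½ = 5.27 years, t = 1 month = 1/12 year
a0 = 3.7e10
a_month = activity_remaining(a0, half_life=5.27, elapsed=1 / 12)
print(f"fraction remaining: {a_month / a0:.3f}")  # ~0.989, i.e. about a 1% drop
```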
Friday, May 21, 2010
Ancient Mexico
We stood on the spot where Hernán Cortés watched a ceremonial ball game almost 500 years ago and gazed at the terraced stonework rising from the edges of the ancient I-shaped court. By 1520, just 2 years later, the entire civilization whose ancestors built the magnificent city around him fell to the Spanish conquerors, their people reduced to slaves. It would be another 20 years before Pope Paul III declared these people to be human, despite the advanced technology they employed in their construction techniques, the fine artwork that decorated their buildings and their clothing, their skills in weaving, pottery, and jewelry making, their sophisticated knowledge of astronomy and much more we hardly know anything about.
We were visiting the ruins of the Zapotec capital at Monte Alban, founded in 500 BC and expanding to a city of 25,000 people over the next 1200 years. For some unknown reason, the place was then abandoned for several hundred years, before being taken over by Mixtecs, a tribe from the north, around 950 AD. These newcomers used the old tombs to bury their own kings, but the Spaniards later ransacked them for their gold, silver, pearls and precious stones. Only one tomb has been discovered with the burial artifacts intact, the treasure displayed at the local museum.
The sheer size of the ruins was impressive, a central plaza surrounded by pyramid style buildings. Each building has their own function, and all but one are aligned perfectly to north, south, east and west. The exception is a diagonally aligned observatory, which is covered in hieroglyphics, and the tip of the arrow shaped building points to the Southern Cross constellation at solstice. The ancient culture had 5 compass points, which included zenith. We were a few days short of seeing proof of their cleverness, when a shaft of sunlight would reflect briefly through a slot in one of the pyramids to signal the onset of rain and time to plant the corn. The high priests orchestrated such demonstrations to show the common people that they were in communication with the gods. They also had underground tunnels that enabled them to disappear from the central podium and reappear high on an outlying pyramid without being seen by the throng that was gathered to witness their magic.
Monte Alban, however, was not the place where the blood sacrifices were performed. These took place at other villages a few miles away, where the strongest young men, the winners of the games, conquering warriors and other successful citizens would be given the honor of having their still beating hearts torn out to appease the gods. Perhaps this had something to do with the doubts about their humanity.
We visited two of these outer villages. Mitla is an archaeologist’s worst nightmare. First, the Spanish built their San Pablo church right on top of a large section of the site. Then the town of Mitla sprang up, so that today pieces of the ruins are being excavated from under washing lines in someone’s back yard, or walls tumble into disrepair in a parking lot. The tombs and buildings that remain have been subjected to vandalism. This is a real shame because the finely painted frescos that recorded the history of the ancient village are all but lost. These must have been lovely set next to the intricate designs of the stone mosaics this site is famous for. Each wall is made up of millions of tiny, precisely cut stones, some with additional carving on their protruding side. Put together, they form the classic design patterns such as the Quetzalcoatl, a feathered serpent, or the eternal life square wave pattern. These and other designs are still used in their hand woven rugs and blankets. Despite the degradation, there were still plenty of halls, palaces, tombs and plazas for us to poke around.
Our third ruins in the Oaxaca area were at Yagul. Set at the top of a hill, well off the beaten track, the views of the valley below were stunning. A unique feature of this site was a labyrinth of interconnecting rooms. We wandered through this maze-like area until we came upon a bunch of workmen mixing the cement for the ‘original’ plaster on the walls. As with the other places, there were old men trying to sell us ‘genuine’ Aztec clay figurines, kids selling made-in-china looking toys, and women selling everything from local woven clothing to not so local shell necklaces.
The modern city of Oaxaca, nestled between these ancient neighbors, provided us with 4 days of markets, street fairs, music, museums, elaborate churches that rivaled any we saw in Europe, restaurants and food carts. The area is known for coffee, chocolate, mescal (a type of tequila), mole (a spicy chocolate and chili sauce for meats), embroidered clothing, black pottery, dot art wooden animals, as well as the lovely hand dyed and woven rugs. Many of the colors for the dye come from an insect that lives on the local cactus; the insects are dried and crushed, then mixed with various compounds (lime juice, baking soda and others) to produce a surprising range of colors. Our ramblings took us to a ‘petrified’ waterfall, the minerals in the water calcifying on the rocks high in the cliffs to create the effect of frozen falls. We also visited the largest tree in the world here, a 2000-year-old cypress whose trunk is 14 meters in diameter – that’s the length of our boat!
This was to be our last side trip before our Tehuantepec crossing to El Salvador. It only whetted my appetite for inland Mexico, and her ancient people.
How to apply behavioural economics to the design process
6 min read
Hyper Island…that magical wonderland where design unicorns play, collaboration flourishes, and UX design skills grow on trees… As an alumna myself I miss the hyper vibes and was excited to be invited with fellow-alumna Rita Cervetto to share insights from our design practice at Common Good. We embraced the opportunity to give back and ran a behavioural design workshop with the Digital Experience Design Master students.
Behavioural Design is all about creating the right environment for people to make a decision or take action towards their goals. It can be applied to encourage a desired behaviour, to stop unwanted behaviours as well as to form habitual routines.
Behavioural design is often linked to the concept of nudging, which “proposes positive reinforcement and indirect suggestions as ways to influence the behaviour and decision making of groups or individuals” (thanks Wikipedia). The term was coined by Richard Thaler, who built on the “fast and slow thinking” theories of Nobel laureate Daniel Kahneman and introduced nudging to public policy. Demonstrating the positive effects of applying behavioural economics in practice over the last 10 years earned him a well-deserved Nobel Prize in 2017. So with all this Nobel recognition, the field of behavioural economics is getting more traction and gaining importance in the design field.
To me behavioural design is a combination of the design process and behavioural economics. Design is a creative, generative, explorative approach to problem solving — it’s the crafty cool kid on the block. Behavioural economics is the nerdy one — scientific, measurable, analysing how people make decisions. The two don’t play together easily, but when they do — great things can happen.
The role of the design process is to explore the whole user experience and go wide on the end-to-end journey, while behavioural economics zooms in on the specific moments of decision making and applies small nudges strategically to cause big impact. That’s why behavioural design can also be very well applied to the human-centred design process to make products and services more intuitive, effective, and easy to use.
Back to Hyper… What did the workshop cover
Everything at Hyper Island is hands-on, fast-paced and real-world ready so we translated all theory into a practical 101 talk and a workshop applicable to the briefs that the students are currently working on.
The first step of the process is to do explorative research to understand exactly what people do in context. It is best to combine different research methods to get the complete picture.
Unlike traditional economic theories based on the idea that humans always act in their best interest, behavioural economics argues that people don’t make rational choices and are influenced by their environment, social norms and an array of cognitive biases.
While doing the research it is important to capture all factors affecting the user’s decision making — apart from individual cognitive biases, people are massively affected by their surroundings in terms of how choices are presented to them, how are they framed, and what is visible to them at any given time. As social animals, humans are evolutionary conditioned to behave in accordance with tribe norms and do what others are doing — particularly role models of authority and liking.
Considering all these factors we can synthesise our knowledge about users by using the behavioural persona tool. While traditional personas represent one fictional ideal customer to target, behavioural personas are based on real people involved in the service and focus on what they do, how they do it and why. They are archetypes of users with similar ways of thinking and acting in relation to a service. I wrote more about them here.
The journey map is the tool to synthesise our understanding about the context of the persona and visualise their current experience regardless of the service or product they use. There is no one way to do journey mapping, and it makes sense to experiment and adapt the format for each project. The basic structure of a journey map consists of steps users take and the touch points plotted on a timeline. For behavioural design it is helpful to illustrate each step as a job story to clarify what users need to get done and what outcome they are seeking to get out of it.
By having the holistic view of the context, you can identify behavioural principles that explain the current behaviour and the ones to help you ideate how to nudge your persona towards a desired action. A great tool for this is the Persuasive Patterns deck of cards to plot applicable principles against the journey map. The selected patterns are relevant to product and service design and we enjoy using them at Common Good. To learn more about the plethora of cognitive biases and how to play with them — a good starting point is the awesome cheatsheet by Buster Benson.
Once you’ve identified some opportunities, you can define the ideal future state — capture it with a “How Might We” design challenge and introduce a new job story with the intended behaviour and envisioned outcomes.
Behaviour = Motivation + Ability + Prompt
The Fogg Behavior Model (B=MAP) by BJ Fogg, PhD is a useful guidance to identify what stops people from performing behaviours or how to support desired behaviours. Here’s a plain explanation — for a person to do something, they have to be sufficiently motivated and able to do it as well as something has to prompt them to act at the right time. Below is a non-exhaustive list of factors affecting people’s motivation and ability as well as prompt examples. These can be used like lego blocks to design behavioural interventions.
In behavioural design it’s important to design the environment in which the action is taking place as much as the touchpoint of interaction. A design idea for intervention may be brilliant, however if either motivation, ability or a prompt is missing – it won’t work. Hence, we need to analyse the context in which the key behaviour is happening and which factor needs to be adjusted for the intervention idea to be successful.
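As a toy illustration of the B=MAP reasoning above (a sketch of my own, not from Fogg's materials; the function, scales, and threshold are hypothetical), the model can be expressed as a simple predicate: a behaviour fires only when a prompt arrives while motivation and ability jointly clear the action threshold:

```python
def behaviour_occurs(motivation, ability, prompt, threshold=0.5):
    """Fogg-style check (toy model): behaviour happens when a prompt
    arrives while motivation x ability clears the action line.
    `motivation` and `ability` are hypothetical 0-1 scales."""
    return bool(prompt) and (motivation * ability) >= threshold

# High motivation but low ability: even a well-timed prompt fails...
print(behaviour_occurs(motivation=0.9, ability=0.3, prompt=True))   # False
# ...so the fix is to boost ability (make the action easier), not to prompt harder.
print(behaviour_occurs(motivation=0.9, ability=0.6, prompt=True))   # True
# And without a prompt, nothing happens no matter how motivated people are.
print(behaviour_occurs(motivation=1.0, ability=1.0, prompt=False))  # False
```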
Test & Measure
Testing the effect of the behavioural intervention is very important to get actual evidence that the design works in the real world. During ideation we get very creative and might plug in assumptions to the mix, so we need to validate that we’re doing the right thing to get the job done.
One way to go about this is with hypothesis-driven experiments. Start with a design hypothesis that outlines the idea and how it will cause the desired behaviour. Then describe how you’ll go about testing it and what you will measure. It’s also important to define what result will be convincing enough to show that it works.
The key premise of running experiments is that correlation does not equal causation. This means that even if we observe the behaviour happening, it doesn’t necessarily mean that it is because of our intervention.
To check if this is true, we do an A/B test — getting two groups of similar participants, giving one group the design intervention and the other not, and seeing what happens over a period of time. At the end you compare the results and see if there are any differences.
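To make the comparison concrete, here is a hedged sketch of one standard way to judge an A/B result: a two-proportion z-test on the conversion counts of the two groups (all numbers below are hypothetical, and the function name is my own):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of control (A) and intervention (B) groups.

    Returns the z statistic; |z| above ~1.96 suggests the difference is
    unlikely to be chance alone at the 5% significance level.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis of "no real difference"
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 120/1000 control users act vs. 150/1000 nudged users
z = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z = {z:.2f}")  # comes out just above 1.96, right at the 5% line
```

The design choice here is the pooled standard error: under the null hypothesis both groups share one conversion rate, so the variance is estimated from the combined data rather than from each group separately.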
A/B testing and controlled experiments are the most common methods for validating behavioural design, however there are many more ways to test effectiveness. For example, the Validation Patterns card deck is a collection of many more lean testing approaches to validate risky assumptions.
Even if the numbers show good results, we still need to make sure that users will like the change and can adapt to it by using qualitative testing methods to obtain feedback for improvement.
Workshop outputs
The Hyper Island students went through the process of creating behavioural personas for their project, picking the most crucial action on the journey maps to address and defining a behavioural challenge. After using the Mental Notes cards to identify key behavioural factors that apply in the context of their challenge, we ran a crazy eights ideation session to solidify the intervention concepts. Some amazing ideas came out of it — Uber for emergency contraception anyone?!
Applying the lens of behavioural economic principles can be helpful to identify new opportunities and invisible barriers for users. However, similar psychological factors can also be used for not-so-good purposes — it is a tool that can be (and is being) misused by ill-intentioned businesses to manipulate people into buying their products or “hooking” them on to addictive apps. We encouraged the DXD students to have critical thinking and always ask themselves what are the consequences of what they’re designing.
As designers we sometimes get too focused on the design goals and forget to think about the consequences of our solutions. Don’t just figure How Might We lift the plane into the air, but also make sure that it lands safely.
Sense-check at every step: is what we’re designing in the best interest of the people using it?
Please, don’t use behavioural economics to make people buy more things they don’t need or become trapped and unable to delete their account (see: dark patterns). Instead, use behavioural design to create meaningful change in people’s lives. That would be ace! |
Peaches and Plumbs Booksellers
“He who cherishes an individual beyond his homeland… is nothing…”
Towering over the rest of Greek tragedy, the three plays that tell the story of the ill-fated Theban royal family—Antigone, Oedipus the King and Oedipus at Colonus—are among the most enduring and timeless dramas ever written.
Antigone, perhaps Sophocles' finest play, pits the power of the state against traditional values and common decency.
Antigone stands up to Creon by simply stating that her honor, and the honor of her family, compelled her to do what she had done, and that it was also the law of the gods.
Creon says that she must pay the ultimate price for her "treachery." He sentences her to death, and orders that she be walled up inside a tomb.
Here we find all sides of Sophocles’ genius on display: the verse gorgeous and the characters brilliantly drawn.
It is the classic example of someone standing up and doing the morally correct thing, knowing full well that what they do may cost them their life.
SOPHOCLES (c. 496–406 BC) is one of three ancient Greek tragedians (along with Aeschylus and Euripides) whose plays have survived down through the centuries. His most famous plays feature Oedipus and Antigone: they are generally known as the Theban plays. Sophocles is best remembered for having influenced the development of the drama, most importantly by adding a third actor, thereby reducing the importance of the chorus in the exposition of the plot.
Economic Development: People Vs. Place Strategies (Part 1)
The goal of economic development is complex and multifaceted. It involves the revitalization of places that are rundown as well as the enhancement of social and economic mobility for people in the area. The big question is whether economic development is about creating vibrant places to enhance the lives of the people living there or whether it is about improving the well-being of the people who live there to create a sense of place and community.
Cities often take three approaches in urban revitalization. A people-oriented strategy helps people without regard to where they live. A place-based people strategy uses place-specific strategies to enhance the well-being of the people who live there. A pure place-based strategy enhances the physical landscape and architectural design to improve the economic potential of a place without regard to the people who live there.
People-Oriented Strategy
Grounded in the idea that poor people with low skills need assistance regardless of where they live, the goal is moving people from welfare to work. The assumptions are that people need to have access to jobs and that people vote with their feet: if they are not happy where they are, they should move somewhere better.
This strategy is focused on human capital and improving access and mobility for individuals. Examples include: education assistance, job training, housing assistance, relocation assistance and addressing skills gap and education gap.
Place-Based People Strategy
With the premise that people cannot be separated from place, and strategies to combat poverty must treat individuals in the context of their community, the goal is to strengthen community institutions and enhance the standard of living for residents. This strategy assumes that jobs need to be accessible to people and that community and place play an important role in people’s well-being.
This strategy is similar to people-based strategies, but the goal is to help people within a defined, targeted geography. Examples include: job training for only city residents, housing assistance for a specific neighborhood and career counseling for university students.
Written by Jeff Khau who is an Analyst at RSG |
Famous French Poets
Early Influences
If you do not speak a particular language it is perhaps difficult to appreciate the significance of any of the literary works that have come from a certain country or culture. For most of us such is the case with the work of French poets, which may not be familiar to us in and of themselves, but their influence may well have filtered into more recognizable avenues.
Such is the case of young Arthur Rimbaud, born in 1854, whose short life was packed with adventure and innovation. Born in Charleville-Mézières, France, he began writing at an early age and, although an exceptional student, ran away to Paris as a teenager and produced most of his work before the age of 21. His career, though short-lived, is said to have greatly influenced the Surrealist movement and that of the Dadaists, and he is credited with creating a symbolist style which would be adopted later by many other writers.
Regarded as one of the most influential lyrical poets of the 20th century, Paul Eluard was born in 1895 and was initially influenced by his associations with the Surrealist poets of that time, producing some of that genre's most noteworthy examples. His later connections with the political world influenced the direction of his work, and the Second World War encouraged him to write beyond the obvious tragedies and focus on a message of hope.
Victor Hugo, born in 1802, will perhaps be more recognizable as a name from the literary world that has transcended the barriers of language and culture. Best known for his novel Les Misérables, which now appears regularly as a musical, he started his literary journey writing poetry and by the end of his illustrious career was considered to be at the forefront of the Romantic literary movement in France.
Moving On
Following the First World War many artists and writers became disillusioned and cynical about the way the world was heading and this was reflected in their work at that time. André Breton a poet and writer was born in 1896 and quickly established himself as an anti-fascist. His contact with likeminded and even more extreme individuals during his time serving in the war shaped much of his work and he is credited with being the founder of the Surrealist movement that would be influential for some time to come.
Eugène Guillevic, more commonly known professionally as simply Guillevic, was born in Carnac France in 1907 and died in 1997. He was one of the more familiar French poets of the latter part of the 20th century and his straightforward approach to writing was reflected in his no frills use of language, preferring to forgo the use of metaphor which he found to be misleading. Like many French writers he was passionate about politics and became a communist sympathizer during the Spanish civil war joining the party officially in 1942. Later he was recognized for his poetic works when he received several prizes and accolades and some of his engaging work can be read in translation though much of it remains in his mother tongue. Maybe it’s time to learn French so these and many other wonderful poets can be truly appreciated. |
Arthur Miller’s Death of A Salesman – Controversial Tragedy Assignment
Arthur Miller’s Death of A Salesman – Controversial Tragedy Assignment Words: 1385
Tragedy was a very controversial issue in literature until recent years. Recent figures in literature have set a clearer definition for tragedy, and Arthur Miller is one of these figures. Plays and novels have refined the definition of tragedy. According to the Merriam-Webster Dictionary, tragedy is a serious piece of literature typically describing a conflict between the protagonist and a superior force and having a sorrowful or disastrous conclusion that excites pity or terror.

Miller explains that a tragic hero does not always have to be a monarch or a man of a higher status. A tragic hero can be a common person. A tragedy does not always have to end pessimistically: it could have an optimistic ending. The play Death of a Salesman, by Arthur Miller, is a tragedy because its hero, Willy Loman, is a tragic figure who faces a superior force, being the American dream and the struggle for success. Willy also excites pity in the reader because of his defeat and his inability to become a success or to teach his children how to make their lives successful.
Miller defines a flaw as “an inherent unwillingness to remain passive in the face of what one conceives to be a challenge to one’s dignity…” Willy fulfills many of the requirements of being a tragic hero. Willy is not “flawless” in his actions, which by Miller’s standards makes him a tragic hero. It is not wrong for Willy to have flaws, and it does not make him a weaker man but a tragic figure. Miller designed the play so that Willy could be a tragic hero, and for this he needs to have a flaw. Willy’s flaw is that he is unable to see things in a more realistic perspective.

Charley says something in the play that sums up Willy’s whole life. He asks him, “When the hell are you going to grow up?” Willy spends his entire life in an illusion. He sees himself as a great man who is popular and successful. Willy exhibits many childlike qualities. Many of these qualities have an impact on his family. His two sons, Biff and Happy, pick up this behavior from their father. He is idealistic, stubborn, and he has a false sense of his importance in the world. The extreme to which he followed the dream brought him to disillusionment and a distorted sense of reality.

Willy created a reality for himself where he “knocked ’em cold in Providence” and “slaughtered ’em in Boston.” (p. 33) “Five hundred gross in Providence” becomes “roughly two hundred gross on the whole trip.” The ultimate result of his disillusionment is his suicide. It is ironic that he dies for his ideals although they are misconstrued. Another of Willy’s flaws is his disloyalty to Linda. Willy is unable to hold strong against temptations such as the woman he slept with in Boston. Biff’s faith in his father is lost after he encounters the situation.
This may have been the cause of Biff’s failure in life. Another of Miller’s guidelines for a tragic hero is that a common man can be a tragic hero. Willy matches many of the characteristics described in Arthur Miller’s article. Willy awakes each day to face the hard struggle of work. Although Willy is not very successful as a businessman, he still goes to work every day because he must support his family. Willy placed a great deal of importance on the success of Biff. Willy believed that the best way to achieve success was the fast way.

Willy’s dreams for his children to become successful show his role as a common man. Willy went to extremes to try to reach his goal of Biff becoming successful. Biff is the most important thing in Willy’s life because he is Willy’s last shot at success. If Biff doesn’t want to be successful and doesn’t love him, then Willy would be more satisfied in killing himself in order to try to show Biff that he really is a success. If Biff does love him and wants to become a success, then Willy is satisfied in killing himself in order to give Biff a better shot at success with his life insurance money.

Willy’s actions and his desire for Biff to become a success and live happily make him a common man. Miller says a tragedy usually deals with a greater power that is taking the freedoms of a lesser power. The lesser power deals with this and fights back against the greater power, while putting something of importance on the line, making him or her a tragic hero. Willy is unable to become a success because he is not able to reach the American dream and work for this success. Although he fights for this success, he fails. Willy has wasted his life on trying to become a success.

Willy puts his final stride toward success in Biff. Willy has spent his life raising Biff and trying to teach him how to become successful. The problem is that Willy doesn’t know how to reach success, and he teaches Biff that success is fast and easy when it’s not. Willy always believes he can achieve that kind of success. He never lets go of his wasted life. He dreams of being the man who does all of his business out of his house and dying a rich and successful man. Furthermore, Willy also dreams of moving to Alaska, where he could work with his hands and be a real man.
Biff and Happy follow in their father’s footsteps in their lofty dreams and unrealistic goals. Biff wastes his life being a thief and a loner; furthermore, Biff, along with happy try to conjure up a crazy idea of putting on a sporting goods exhibition. Biff really knows that Wily has never been successful and he looks down upon Wily for teaching him the wrong ideal. Biff does realize that Wily has wasted his life in order to make Biffs better. “Miss Forsyth, you’ve Just seen a price walk by. A fine, troubled prince. A hardworking, unappreciated prince. A pal, you understand? A good companion.
Always for his boys. ” (p. 114) Another idea that supports the fact that Death of a Salesman is a tragedy is that there is a possibility of victory. Miller speaks about the things that make a piece of literature a tragedy is his article “Tragedy and the Common Man. ” Miller says that for a piece to be truly tragic an author can not hesitate to leave anything out and must put in all the information they have “to secure their rightful place in their world. ” Although it does not happen in this play and Wily is unable to overcome the greater Orca, he is able to make an impact on it.
Will’s failure sets an example that Biff understands. Wily could have still been successful if he was able to see the flaws in his ways and teach Biff the right way to be a success, which is in hard work. If Wily successful then Biff may have reached success for himself and make Wily a successful father as well. The reader must look at Will’s suicide through Will’s eyes. He killed himself in order to give Biff a better shot at being a success. Wily doesn’t understand that killing himself is wrong and he is not looking for any pity.
Wily has sacrificed his own life so that Biff could have a better life. This truly does make him a tragic hero. Wily Loan is a tragic figure in the play Death of a Salesman. Wily faces a superior source in the play and puts his life on the line for his beliefs and the beliefs of others. He meets the requirements of Miller’s article for a tragic hero. Death of a Salesman also meets Miller’s requirements for a tragic play because of Will’s role in the novel along with the other standards that Miller sets for a tragedy. The exploration of tragedy by people such as Miller helps to define it more clearly.
Arthur Miller's Death of A Salesman - Controversial Tragedy Assignment. (2019, Mar 14). Retrieved May 22, 2019.
Luke 2:1-5 – Birth of Jesus: Background
Luke begins the story of Jesus’ birth by situating it against the background of a Roman census (Lk 2:1-5), an event in the secular history of the Roman empire. The setting is indeed solemn. A similar background is also provided for the call of John the Baptist and his prophetic ministry in Luke 3:1-2. By referring to an enrolment of “all of the world” i.e., the Roman empire, Luke suggests the world-wide significance of the birth of Jesus. Caesar Augustus ruled over the Roman empire from 27 B.C.E. to 14 C.E. Palestine was then a territory under Roman rule. At the mention of Augustus, Luke’s contemporaries would recall a major event in the Roman empire, namely, the end to the long civil war in the empire and the consequent peace that was brought about by Augustus. The altar of Pax-Augusta (Augustan peace) in Rome testifies to the Augustan era of peace. He was hailed in some parts of his kingdom as “saviour of the whole world”, and even ‘god’, as many Greek inscriptions of the time testify. A particular inscription, known as the ‘Priene inscription’, has the following regarding the celebration of Augustus’ birthday: “the birthday of the god has marked the beginning of the good news through him for the whole world”. This political and religious background is in sharp contrast with the picture Luke paints of Jesus’ birth. For Luke, the real Saviour of the world and the real bringer of peace is none other than Jesus (Cf. Lk 2:11, 14, 38) whose birth story he is about to narrate. In contrast to the majesty and opulence of the Roman emperor who was regarded as saviour, the circumstances that attend Jesus’ birth are totally different; Jesus, the Saviour and King of Peace is born in lowly surroundings.
Luke’s statement about the census in Luke 2:1-3 is beset with many difficulties. During the time of Caesar Augustus censuses for the sake of taxation-assessment were held in certain provinces of the empire. But there is no evidence, apart from Luke 2:1-2, of a census that covered the entire Roman empire. Similarly, there is no evidence of a decree by Caesar Augustus requiring a census of the entire empire. Besides, the requirement to register oneself in one’s own ancestral town is also not attested by any other source. It is, therefore, impossible to solve the chronological and other difficulties raised by the Lukan reference to the census. For Luke, however, the Roman census is a providential reason for the movement of Joseph and Mary to Bethlehem, the city of David and Joseph’s ancestral town. (Bethlehem is about 90 miles from Nazareth). Luke wants to show that Jesus comes from the Davidic lineage through Joseph (Cf. Lk 1:32-33). The census, thus, serves an important function for Luke. It also helps the evangelist to situate Jesus’ birth in the context of world events. What is important for Luke is that Jesus would be born in Bethlehem, the city of David (Cf. Ps 87:6). David was first a shepherd when God called him. He was grazing the sheep of his father Jesse in the vicinity of Bethlehem (1 Sam 17:14-15, 34-35). Also, Micah 5:2 speaks of Bethlehem as the place of the origin of the ‘ruler of Israel’ (Cf. Mt 2:6). This text of Micah is quoted in Matthew 2:6: “And you, O Bethlehem … from you shall come a ruler who will govern my people Israel”.
What do you think?
Algorithms 101 Roads & Town Centers
Alright, there’s this one algorithm that I’ve solved before. I’ve always found it to be a rather fun little exercise to work out. It popped into my head recently and I wanted to recollect how the logic of it went, but my Google-fu wasn’t so great. In the end, I didn’t find the algorithm problem statement, but I’ve recollected it as best as I could from memory. If you know the name of this challenge I’d love to know what it’s called, or whether I’ve put it back together correctly. Ping me @Adron.

So the story goes something like this. There once were some nations with a number of cities in each nation. Every citizen has access to every city, and every city has a town center for all the citizens to enjoy. Recently the roads were damaged from a lack of maintenance work, ya know, like in real life. Meanwhile there was a revolution that led to a catastrophic war that destroyed all the town centers in the nations. So now none of the cities have reachable or functional town centers anymore! The citizens of the world are angry at the nations and demand immediate fixes to their roads and town centers, with a priority on the town centers! The leaders have decided that the roads shall be repaired and have hired me (you) to assist!

The nation has n cities, numbered 1 to n. The cities are connected by two-way roads, totalling m roads. A citizen has access to a town center if their own city contains one, or if there is a path of repaired roads from their city to a city that contains one.
The following is a map of one great nation of cities with currently impassable roads that must be repaired.
The cost of road repair is c_road and the cost to build a town center is c_center. To start off, you’re given q queries, where each query consists of a map of the nation and the values of c_road and c_center. For each query, the minimum cost of making a town center accessible to all the citizens should be printed on its own line.
The Input
The first line is an integer, q, denoting the number of queries. The subsequent lines describe the queries in a particular format. Each query is basically formatted like this:
• The first line of each query will have four integers: the number of cities n, the number of roads m, the cost to build a town center c_center, and the cost to repair a road c_road. Each integer is space separated.
• Each of the m lines following that will have two integers describing a road between cities ui and vi. Each integer is space separated.
The Output
Each query should have a result displayed that is the integer denoting the minimum cost of making town centers accessible to all the citizens.
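Before solving anything, the input format above has to be parsed. One straightforward way (the function name and return shape here are my own choices, not part of the problem statement) is to split the whole input into tokens:

```python
import io

def read_queries(stream):
    """Parse: q, then per query a header 'n m c_center c_road'
    followed by m lines of 'u v' road pairs."""
    tokens = iter(stream.read().split())
    q = int(next(tokens))
    queries = []
    for _ in range(q):
        n, m, c_center, c_road = (int(next(tokens)) for _ in range(4))
        roads = [(int(next(tokens)), int(next(tokens))) for _ in range(m)]
        queries.append((n, m, c_center, c_road, roads))
    return queries

# Example with the first sample query only:
sample = "1\n6 4 2 1\n1 2\n2 3\n4 5\n5 6\n"
queries = read_queries(io.StringIO(sample))
```

Token-splitting like this sidesteps any line-by-line bookkeeping, since the format is purely whitespace-delimited.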
The following example includes three queries (denoted by the first line containing a 3). Each subsequent query has a definition line of [6,4,2,1], [7,6,2,1], and [9,7,3,4] respectively. Below each query definition line are the road definitions between the cities.
Sample Input
3
6 4 2 1
1 2
2 3
4 5
5 6
7 6 2 1
1 2
2 3
3 4
2 4
5 6
6 7
9 7 3 4
1 2
1 5
2 4
2 3
3 4
3 9
6 8
Alright, this one has a number of elements to resolve. Here’s how I’ve worked through these three samples to get answers. I’ve drawn out the three examples so that there is a visual understanding of where the cities are.
The first nation described by the section [6 4 2 1] would look something like this based on the road data of [1,2], [2,3], [4,5], and [5,6].
The next nation in section [7,6,2,1] would look like this with the road data of [1,2], [2,3], [3,4], [2,4], [5,6], and [6,7].
The last nation defined as [9,7,3,4] with road data of [1,2], [1,5], [2,4], [2,3], [3,4], [3,9], and [6,8] looks like this.
Observations and Solutions
Dataset 1
Here’s how I worked these out. In the first example there are n=6 cities and m=4 roads connecting those cities, with the price of building a town center set at c_center = 2 and the price to repair a road set at c_road = 1. To meet the requirement of having access to a town center in every city, I could achieve this through several options, and then pick the cheapest of those options.
Rebuild the roads and build one town center in each connected part of the nation. In this scenario that would include 4 roads at 1 cost each, and one town center in each island of the nation at 2 cost each. The total would equal
roads x c_road = total
4 x 1 = 4 to rebuild the roads.
centers x c_center = total
2 x 2 = 4 to build the town centers.
total roads + total centers = combined total.
4 + 4 = 8
The next option would be to possibly just build town centers in each city and not rebuild the roads. This is quick to determine the cost of, as it just requires calculating the town center cost.
cities x c_center = total
6 x 2 = 12
Out of the two options, the cheapest is option one: rebuilding the roads and building one town center for access in each of the island segments of the nation, for a total of 8.
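Those two options for Dataset 1 boil down to a couple of lines of Python (variable names are mine, just mirroring the arithmetic above):

```python
c_road, c_center = 1, 2
islands = [3, 3]  # two disconnected islands of three cities each

# Option 1: repair a spanning tree of roads per island, plus one center each.
option_1 = sum((k - 1) * c_road + c_center for k in islands)

# Option 2: skip the roads entirely and build a center in every city.
option_2 = sum(islands) * c_center

print(min(option_1, option_2))  # option_1 = 8 beats option_2 = 12
```

Note that a connected island of k cities only ever needs k - 1 repaired roads (a spanning tree), which for these two 3-city islands happens to be all 4 listed roads.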
Dataset 2
Alright, the next data set shows a nation of 7 cities, split among two islands, with 6 roads connecting those cities. At a quick look, I’m guessing this one will work out the same way as the first. Working through it, I could build town centers in each city.
cities x c_center = total
7 x 2 = 14
But I could also rebuild roads [1,2], [2,4], and [2,3] on the left island with a single town center built, which would mean one town center and three roads for island one. For the second island I could build one town center in Tupelo and then rebuild roads [5,6] and [6,7]. The total is two town centers and five roads.
Town centers would cost 2 x 2 = 4. The roads would be 5 x 1 = 5; adding 5 and 4 gives a total of 9 to rebuild enough so that all cities regain access to a town center.
Dataset 3
This third dataset has three island areas. The first has 6 cities with 6 roads. The next island has 2 cities and 1 road, and the final island has one city with no roads. This last city, Solace, just needs a town center built to gain access to one.
The island with two cities will need at least one town center. But with the prices changed, let’s take a look to be sure. If I build one town center, we’ll say in Tupelo, and rebuild the road, that would be a c_center of 3 plus a road cost of 4, giving a total cost of 7. But if I build a town center in both San Francisco and Tupelo, that would only be 6. The previous city plus this solution gives me three town centers at a cost of 9.
Now considering the last island segment I have 6 cities. For the stretch that connects Lenia, Legoland, and York I could rebuild a town center in York, then roads [1,5] and [1,2], which would give all three direct access to a town center for 3 + 4 + 4 = 11. But notice that the prices have flipped in this dataset: a road (4) now costs more than a town center (3), so three separate town centers in those same cities would cost only 9. Repairing a road to share a center spends 4 to save 3, so it never pays off here. The cheapest plan for this island is simply a town center in every city: 6 x 3 = 18.
Adding that together for the entire nation of Dataset 3, that’s 18 + 6 + 3, giving a total cost for access to town centers in every city of 27.
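Worth double-checking those hand counts. For an island of k cities there are really only two plans to compare: a town center in every city (k x c_center), or one center plus a spanning tree of k - 1 repaired roads (c_center + (k - 1) x c_road). A quick Python sanity check over Dataset 3's three islands (variable names are mine):

```python
# Dataset 3: c_center = 3, c_road = 4, islands of 6, 2, and 1 cities.
c_center, c_road = 3, 4
island_sizes = [6, 2, 1]

# Per island: all-centers plan vs. one-center-plus-spanning-tree plan.
total = sum(min(k * c_center, c_center + (k - 1) * c_road)
            for k in island_sizes)

print(total)  # 18 + 6 + 3 = 27, since roads cost more than centers here
```

Whenever c_road >= c_center the min always picks the all-centers plan, which is why no road gets repaired in this dataset.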
The question now is: what’s the best algorithm to solve all of this? I’m going to make this a two-parter, because this is such a whopper of a problem to lay out that I’ll give readers a chance to work out a solution first.
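For the part-two shape of a solution, the usual approach is a disjoint-set (union-find) pass to discover the islands, then the cheaper of the two per-island plans from the worked examples. This is my own sketch, not pulled from any reference solution:

```python
from collections import Counter

def min_total_cost(n, c_center, c_road, roads):
    """Minimum cost to give every city access to a town center."""
    parent = list(range(n + 1))  # union-find over cities 1..n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in roads:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two islands

    # Size of each island (connected component).
    sizes = Counter(find(city) for city in range(1, n + 1))

    # Per island of k cities: either a center everywhere (k * c_center)
    # or one center plus k-1 repaired spanning-tree roads.
    return sum(min(k * c_center, c_center + (k - 1) * c_road)
               for k in sizes.values())
```

Running it over the three sample queries yields 8, 9, and 27 respectively.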
Sample Output
With each of these resolved, the output of the algorithm should print to stdout with the following results.
So… what’s your take on this one? If you build a solution, I’ll add it to my blog post in a few days. Let me know and we’ll sync up on it and talk about what an ideal solution might look like. Ping me @Adron.
Also, with any involvement in coding & hacks I’ll send a Thrashing Code sticker, kudos, and next beer is on me! |
Question description
Crash Course World History, episode 1.38: World War II; Crash Course World History, episode 2.20: World War II, A War for Resources. Using any of the videos you are to watch for homework this week, write about 100 words on World War II and the beginning of the Cold War. In particular, address how those two major events related to one another and, in a preview of your Group Paper, how they related to your Group Paper topic. You can use external sources to answer the question, but only in 100 words.
Popular Content
1. 2 points
Increasing the clock frequency to 260 MHz
2. 2 points
3. 1 point
Hi @aliff saad, Please attach a screen shot of the Arduino IDE errors. Please attach the path of where you have install the LSM9DS1 library. best regards, Jon
4. 1 point
Maybe one comment: In the ASIC world, "floorplanning" is an essential design phase, where you slice and dice the predicted silicon area and give each design team their own little box. The blocks are designed and simulated independently, and come together only at a fairly late phase. ASIC differs from FPGA in some major ways: - ASIC IOs have a physical placement, e.g. along the pad ring. We don't want to run sensitive signals across the chip, RF may follow some voodoo rules to minimize coupling, etc. In comparison, the FPGA IOs are physically routed en masse to the center of the die (this is more complex for large devices, but the first restrictions I'll run into are logical, e.g. which clock is available where, not geometrical). - For ASICs, we need the floorplan to design the power distribution network as its own sub-project (and many a bright-eyed startup has learned electromigration the hard way). - In the ASIC world, we need to worry about wide and fast data paths both regarding power and area - transistors are tiny but metal wires are not. You might have a look at "partial reconfiguration"; here the geometry of the layout plays some role.
5. 1 point
Waveforms SDK calibration
Hi @Thore The calibration is implicitly used in the SDK. The voltage values set or read with the API functions are always corrected based on the device calibration parameters.
6. 1 point
Axi DMA from all memory
Hi @Rickdegier, Welcome to the Digilent forums! I am not the most confident on this topic, but I have used the DMA some. The most important facet here is to make sure that your buffer is actually contained in the DDR memory. Different parts of the program can be placed in different memories using the linker script in your application project's src folder (lscript.ld). You should check that file to make sure that your global arrays are placed in the DDR. Second, if the data cache is enabled (likely), you should make sure to flush and invalidate the buffer memory area around your SimpleTransfer calls (functions to do this are in xil_cache.h). Lastly, I personally have had more success using malloc to create my buffers than using global or local arrays - I'm not sure why this is, from a cursory google search, it looks like the DMA will allow transfers into program memory when you aren't careful. You may want to reach out to Xilinx on their forums. Thanks, Arthur
7. 1 point
That's amazing to hear! I appreciate all of your help. I will update the board files now. Take care, Justen
8. 1 point
Bluetooth stack in verilog
Hi @Aamirnagra, Unfortunately, I have not worked with implementing the BLE protocol with HDL nor have i found any examples. We do have a Pmod BLE along with the Pmod BLE IP Core usable with microblaze that facilitates the uart communication. best regards, Jon
9. 1 point
power supply
a fuse?
10. 1 point
hdmi ip clocking error
Hi @askhunter, I did a little more searching and found a forum thread here where the customer is having a similar issue. A community member also posted a pass through zynq project that should be useful for your project. best regards, Jon
11. 1 point
12. 1 point
Hi, I indeed have a sd card with the correct files in. Anyway, i found a way to make it work without the LVLSHFT here : https://github.com/NicholsKyle/ECE387_SimonSays/wiki/My-Design#schematic-drawing. Now I'm struggling to display correctly on the MTDs, it's certainly my code the problem. But thanks for your help @jpeyron.
13. 1 point
hdmi ip clocking error
Hi @askhunter, Please attach a screen shot of your vivado block design. Have you tried changing the MMCM to PLL in the DVI2RGB IP Core? best regards, Jon
14. 1 point
JTAG-HS2 firmware erased by accident
Hi Jon Many thanks - cable now working Many Thanks Martin
15. 1 point
Hi @askhunter, I believe that you would only need the more recent DVI2RGB IP Core and the IF folder in the Vivado library. best regards, Jon
16. 1 point
Hi @askhunter, Here is the newest version of the DVI2RGB IP Core. Please add the full vivado library folder in the ip repository. There is mandatory files that need to be included for the DVI2RGB IP Core to work. What development board are you using? best regards, Jon
17. 1 point
How to connect an external FIFO to FPGA
I have a few random thoughts on the subject ( is anyone surprised? ) I looked over an old project where just for fun I used a 128Kx32 single clock FIFO built with BRAM. It was for the Nexys Video Artix device which has the same 36Kb BRAMs. It used 116 BRAMs and worked at 100 MHz with a mid-range speed part. 36Kb/9 = 4096 bytes plus parity, 131072/4096 = 32 9-bit BRAMs, 4x32 = 128 BRAMs to implement a 128Kx32 FIFO so Vivado must have found some way to save 12 BRAMs. If you have 18-bit data that's fine as the BRAMs can be organized as 2Kx18 where the extra bits are meant for parity. From experience I can tell you that using the parity bit for data can get tricky but is entirely possible. If you need a dual clock FIFO then expect to use more BRAMs. If there isn't much else in your design timing won't be a problem. If you are trying to place 116 BRAMs into a complicated high speed design then you will find yourself needing to learn about timing closure strategies. If I don't have to worry about resource usage or timing issues I'd use an HDL to implement RAM or FIFO structures as it's portable, more or less. I tend to just bite the bullet and use the vendors' tools to implement resources like block memory, PLLs, and such as these resources aren't really that compatible between vendors and I usually do care about resource usage and timing. Also, vendor IP 'wizards' sometimes create constraints for them and take care of a lot of little details that ultimately save time.
18. 1 point
Thank you Dan. Your answer helps clarify things greatly for me on this subject. I appreciate you taking the time to help me out. -Sean
19. 1 point
Yes, you can combine more than one block RAM. There is more than one way to implement a FIFO. If I had to do it for myself, I'd write it in plain Verilog, it's about two or three screen lengths of code if the interface requirements are "clean" (such as, one clock and freedom to leave a few clock cycles of latency, before the first input appears at the output). I didn't check but I think there is an "IP block wizard" for FIFOs in Vivado that may do what you need. With "expensive" I meant just that, it costs a lot of money to use half an FPGA just for memory.
20. 1 point
21. 1 point
22. 1 point
I'm going to echo @xc6lx45 and suggest that you reconsider. Does your Kintex have insufficient BRAM for an on-chip FIFO? Using BRAM would be so much easier. If you choose to use the external FIFO, you'll have to adapt the logic that you use to interface to the FIFO to use the signaling of this other FIFO. Sometimes it helps to post more context: what do you want the system to be able to do that it does not now?
23. 1 point
Hi @Esti.A, The first error that you get is the following one: ERROR: [Common 17-179] Fork failed: Cannot allocate memory This kind of error is generated when your machine does not have enough RAM memory. Please post here the configuration of your system (CPU, OS, RAM, ..). Do you have swap enabled? Also, have a look on this post.
24. 1 point
Hi @jpeyron, Thankyou for your kind reply. I will check the code that you have sent. The board that am using is Zynq ZC702, evalutaion kit. NO, I didn't try any auxillary channel. Today i will try and update you. Thankyou once again
25. 1 point
Thanks much, Jon. Best Cuikun
26. 1 point
Dynamic voltage and frequency scaling
Here are two additional articles I have read on the technique being applied to a zynq. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18842065/Zynq-7000+AP+SoC+Low+Power+Techniques+part+5+-+Linux+Application+Control+of+Processing+System+-+Frequency+Scaling+More+Tech+Tip https://github.com/tulipp-eu/tulipp-guidelines/wiki/Dynamic-voltage-and-frequency-scaling-(DVFS)-on-ZC702
27. 1 point
Dynamic voltage and frequency scaling
https://www.xilinx.com/support/documentation/white_papers/wp389_Lowering_Power_at_28nm.pdf page 3
28. 1 point
Dynamic voltage and frequency scaling
>> is it possible to present DVFS on it. >> For now I now about clock wizard, DCM, PLL for different clock generation (frequency) but this is not frequency scaling mi right? you may have your own answer there. This is some university project? Have you done your own research? For example, this has all the right keywords: https://highlevel-synthesis.com/2017/04/12/voltage-scaling-on-xilinx-zynq
29. 1 point
30. 1 point
Vivado is complaining that there are active (not commented) pins in the constraint file that do not have matching port names in your design. Open your design_1_wrapper.v file and reconcile the port names specified there with the constraints file. It is not uncommon to have to change the name of a pin in the constraints file to match the port name in the wrapper. This could happen for example if you used "Make external" on an i/o pin from an IP block. One thing that has helped in the past was to delete the top level wrapper and regenerate it. Sometimes when you make pins external after generating the wrapper, there can be inconsistencies between port and pin naming. Xilinx UG903, page 42 and following elaborates on the scoping mechanism Vivado uses.
31. 1 point
Stuck in SDK
Kris, Printing via xil_printf is using STDIO by default and could be reconfigured in Vivado and SDK. With the little info you provided, a lot is left for guessing. My last recommendation is to pay attention to the configuration of your build in SDK. The interrupt service routine (ISR) might not work if the GCC compiler has optimization ON. To make it work, the variables used in the ISR should be declared volatile. A Debug build usually has the flag -O0, that is, no optimization. Good luck!
32. 1 point
Understand Resource Usage for FIR Compiler
@aabbas02, What's the data rate of the filter compared to the number of taps? As in, are you going to need to accept one sample per clock into this filter, or will you have reliable idle clock periods to work with? I'm wondering, if you have to build this, how easy/hard it might be. Dan
33. 1 point
So, if I am getting the point of the previous two posts from xc6lx45 and Dan, make your own filter in Verilog or VHDL and figure out the details ( signed, unsigned, fixed point, etc. ). You can instantiate DSP48E primitives (difficult) or just let the synthesis tool do it from your HDL code (easier). Debating how things should be versus how they are when using third party scripts to generate unreadable code seems like a waste of time to me... If you don't like what you get then design what you want. If you can make the time to write your own IP ( it would be nice to not depend on a particular vendor's DSP architecture ) you'll learn a lot and save a lot of time later. If a vendor's IP doesn't make timing closure for your design a nightmare and you don't have the time to figure out all the details just let the IP handle it. I suspect that trying to optimize the FIR Compiler will be frustrating at best. I once had to come up with pipelined code to calculate signal strength in dB at a minimum sustained data rate. My approach to converting 32-bit integers to logarithms used a combination of Taylor Series expansion and look-up tables. I had a few versions**. One was straight VHDL so that I could compare Altera and Xilinx DSP tiles. One instantiated DSP48 tile primitives for a particular Xilinx device. These were fixed point designs. There's theory and there's practical experience... they are usually not the same. ** I played with a number of approaches based on extremely limited specifications so there were quite a few versions. Every time I presented one the requirements changed and so did the complexity and resource requirements. I should mention that my intent in mentioning this experience is not to denigrate the information presented by others or to claim superiority in any way. When getting advice it's important to put that into context. A lot of times facts aren't necessarily relevant to solving a particular problem.
If I haven't made this clear I've never had the experience that vendor IP optimizes resource usage... in fact quite the opposite. This is why in a commercial setting companies are willing to pay to develop their own IP. Sometimes FPGA silicon costs overshadow development costs.
34. 1 point
Understand Resource Usage for FIR Compiler
@aabbas02, So let's start at the top. An N-point FIR filter, such as this one, requires N multiplies and N-1 adds. Let's just count multiplies, though, for now. Let's now look at some optimizations you might apply. A complex multiply usually requires 4 multiplies. If you have complex input and taps, that's 4N real multiplies. There's a trick you can use to drop this to 3N multiplies. If the filter is real, and the incoming signal is complex, you can drop this to 2N multiplies by filtering each of the real and imaginary channels separately. If your filter taps are fixed, a good optimizer should be able to replace the zeros with a constant output, the ones with data pass through, etc. Implementing taps with two or three bits this way is possible. This only works, though, if the filter taps are fixed. Many of the common FIR filter developments generate linear phase filters. These filters are symmetric about a common point. With a little bit of work, you can use that to reduce the number of multiplies (for dynamic taps) down to (N-1)/2. There is a very significant class of filters called half band filters. In a half band filter, every other tap (other than the center tap) is zero. With a bit of work, you can then drop your usage down to (N-1)/4 multiplies. This optimization applies to Hilbert transforms as well. In the given list above, I've assumed that you need to operate at one sample in and one sample out per clock. In that case, there's no time or room for memory, since all of the stages need their data on every clock. I should also point out that I'm counting multiplies, not DSP slices. If your multiplies are smaller than 18x18 bits, you may be able to use a single DSP slice per multiply. If they are larger, you might find yourself using many DSP slices per multiply. It depends. Let's now consider the case where you have an N sample filter but you are only going to provide it with one sample of data every N+ samples (some number more than N). 
You could then multiplex this multiply and implement an N-point filter with 1 multiply and 2^(ceil(log_2(N)) RAM elements. If your filter was symmetric, you could process a filter roughly twice as long, or a sample rate roughly twice as fast while still using a single multiply. The half-band and hilbert tricks can apply to (roughly) double your filter size or your data rate again. In these cases, however, you can't spare the multiply at all, since it is used to implement every coefficient. That should give you both ends of the spectrum. Now, while I don't understand how Xilinx's FIR compiler works, I can say that if I were to make one I would allow a user to design crosses between these two ends of the spectrum. In the case of a such a cascaded filter, however, you may find it difficult to continue to implement the optimizations we've just discussed, simply because the cascaded structure becomes more difficult to deal with. (By cascade, I mean cascade in implementation and not filtering a stream and then filtering it again.) Looking at your two designs, none of the optimizations I've mentioned would apply. In the high speed filters we started out with, sure, you might manage to remove a multiply or two by multiplying by zero. On the other hand, you can't share multiplies across elements if you do so. For example, you mentioned [1 2 3 4 0 1 2 3 4]. Sure, it's got repeating coefficients, but this is actually a really obscure filtering structure, and not likely one that I (were I Xilinx) would pay to support. Try [ 1 2 3 4 0 4 3 2 1] instead and see if things change. Similarly for [ 1 0 0 0 0 0 0 0 1]. Yeah, it's symmetric about the mid-point, but in all of my symmetric filtering applications I'm assuming the mid point has a value of 2^N-1 or similar so that it doesn't need a multiply. Your choice of filters offers no such optimization, so who knows how it might work. Hope this helps to shed some light on things, Dan
35. 1 point
Amen. And that's a good view for all Xilinx IP. The structure for a simple digital filter is not that complex; you can implement them in HDL. I've done that. Xilinx IP is convenient but usually not the best approach when you have concerns about using up limited resources. The issue with using very fast resources like BRAM and DSP slices is that they are placed in particular locations throughout the device with limited routing resources for the signals between them or other logic. You can let Xilinx balance throughput, resource usage, and logic placement, or you can try to do that yourself. Trying to use 100% of every BRAM or DSP resource in order to minimize the number of BRAM or DSP resources used is not easy. In my experience FPGA vendors are content to have their IP wizards make the customer think that he needs a larger and more expensive device. So that's the trade-off; let the vendors' tools do the work to save time, or write your own IP and be responsible for taking care of all the little details that the IP hides from you. I've spent some time experimenting with DSP resources from various FPGA vendors. They are complicated, with a lot of modes, and depending on how you use them throughput can decline substantially from the ideal. Just read the user's guide and switching specs in the datasheet to get the idea. Generally the DSP slices are arranged to perform optimally with certain topologies but not all. Implementing designs that are iterative or have feedback can get ugly; especially when you try and fit that into a larger design using most of the device's resources. As a general rule, in my experience, use vendor IP and don't ask a lot of questions, or design your own IP and be prepared to learn how to handle a lot of details that aren't obvious. Time versus convenience.
36. 1 point
Hi, [1 2 3 4 0 1 2 3 4] is not symmetric in a linear-phase sense. That would be e.g. [1 2 3 4 0 4 3 2 1]. You could exploit the shared coefficients manually, see e.g. Figure 3 for the general concept. But this case is so unusual that I doubt tools will take it into account. The tool does nothing magical. If performance matters more than design time, you'll always get better results for one specific problem with manual design. One performance metric is multiplier utilization (e.g. assuming you design for a 200 MHz clock, one DSP delivering 200M operations/second performs at 100%; reaching 50+% is a realistic goal for a simple / single-rate structure). For example, do I want to use an expensive BRAM at all, when I could use ring shift registers for the delay line and coefficients? Then you only need a small controlling state machine around it that does a full circular shift for each sample, muxing in the new input sample every full cycle (the BRAM makes more sense when the filter serves many channels in parallel, where FF count becomes an issue).
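The linear-phase distinction being made here is easy to check mechanically. A small illustrative helper (real-coefficient FIR filters have linear phase exactly when the impulse response is symmetric or antisymmetric about its midpoint):

```python
def is_linear_phase(h):
    """True iff a real FIR impulse response is symmetric or antisymmetric
    about its midpoint, i.e. the filter has linear phase."""
    return h == h[::-1] or h == [-c for c in h[::-1]]

# Repeating coefficients alone don't give linear phase:
assert not is_linear_phase([1, 2, 3, 4, 0, 1, 2, 3, 4])
# The mirrored version does, so its taps can share multipliers:
assert is_linear_phase([1, 2, 3, 4, 0, 4, 3, 2, 1])
```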
37. 1 point
I'm really not too interested in spending a lot of time fixing people's code or teaching HDLs... but... Why did you comment out the enable from your port and make it a local signal? I don't think that you quite understand the concept of enables. What do you suppose is going on with your concurrent assignment to i_en? Try to figure out what it is that your code is doing. What's being clocked and what's not? Look at where you assign values to the signal counter (that's where the answer to your question will be found, if your grasp of VHDL for synthesis is sufficient). Why is counter of type integer? What do you suppose happens when the synthesis tool tries to use an unconstrained integer to implement a counter? Don't try and stuff all of your logic into one process; put your counter into its own process. Look around for some examples of implementing a counter in VHDL. My impression is that you haven't quite grasped the basic concepts of the VHDL that you are trying to use. Here's my suggestion: Create a standalone LFSR entity, a standalone counter entity, and a toplevel entity that instantiates both the LFSR and counter components. Don't use type integer anywhere in your code. Only use if..elsif..else statements in your code. See if you can shift your LFSR only when the counter reaches a certain value. Write a testbench to exercise your toplevel source file. You may not get exactly what you want at first, but you will have a nice little project that, along with some help from the simulator, can help you learn VHDL.
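The structure being suggested (a free-running counter whose terminal count enables one shift of the LFSR) can be modeled behaviorally before writing any VHDL. This sketch is Python rather than HDL, and the tap set and divide ratio are arbitrary examples, not values from the original post:

```python
def lfsr_step(state, taps=(16, 14, 13, 11), width=16):
    """One Fibonacci LFSR shift; taps are an illustrative 16-bit polynomial."""
    fb = 0
    for t in taps:
        fb ^= (state >> (t - 1)) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def run(cycles, divide=4, seed=0xACE1):
    """Counter counts every clock; the LFSR shifts only on the terminal
    count, i.e. once every `divide` clocks. The terminal count plays the
    role of the enable signal discussed above."""
    state, counter, shifts = seed, 0, 0
    for _ in range(cycles):
        if counter == divide - 1:   # terminal count: assert enable
            state = lfsr_step(state)
            shifts += 1
            counter = 0
        else:
            counter += 1
    return state, shifts

state, shifts = run(40, divide=4)
assert shifts == 10                 # 40 clocks / divide-by-4 = 10 shifts
```

Once the model behaves as expected, each function maps naturally onto its own VHDL entity with a clocked process, which is exactly the partitioning recommended above.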
38. 1 point
RISC-V on Nexys A7?
@Dan The viewpoint that I presented wasn't meant to be the only reasonable one. True, being able to say that you implemented a RISC-V processor on your Nexys A7 doesn't involve life-threatening feats of derring-do. Though it might sound like I'm trying to discourage FPGA beginners from following recipes to accomplish what they aren't capable of accomplishing on their own, that's not my intent. I'm merely suggesting that aeon20 listen to what he's said and re-evaluate his goals. I'd point out that the Wright brothers had no recipe and little in the way of tutorials (they were not the first ones to fly, or even to attempt to fly), but they did have considerable practical experience in the mechanics of the parts of their experimental planes. My point is that they were leveraging their expertise in one area to try and accomplish a goal in another area. So I view your example as supporting my point. By the way, building airplanes from a kit is a real thing. I've used recipes from others in building software applications for a particular brand of Linux that I want to use. I really don't want to figure out how all of the libraries, frameworks, scripts and tools used to build the application work; I just want to use the application on a particular version of a particular distribution of Linux. Sometimes this doesn't work out, as in order to get my application I need to build the framework or tool from scratch, and it ends up being more work than I want to put into it. When it does succeed, I still don't know how all of the dependencies (and there can be a LOT of dependencies) work, and I don't care. If someone wants to play around with RISC-V, there are development boards with silicon implementations of the processor that will be much higher performance than anything implemented in a low-end FPGA. So the motivation must be different. Some will see using a recipe to build an application as the same thing as using a recipe to build a soft-processor. I would disagree.
I'm not questioning the validity of anyone's motivation. I'm suggesting that there might be a more rewarding path.
39. 1 point
@askhunter It's not clear from your pictures what it is that you are referring to, since the time scales are different. The purpose of post place-and-route timing simulation is to show the relative signal path delays in your implemented design, as well as possible inferred latches or registers hidden by IP. The RTL simulation merely indicates whether your behavioural logic is performing as you intended (assuming that the testbench is well designed). It is merely a simplified (no delays, no setup, no hold times) idealistic representation of simple logic. If the timing simulation doesn't give the same results as the RTL simulation, then it's unlikely that your hardware will behave as you intend either. In the typical professional setting, a lot of people are working on parts of a large design effort simultaneously. No one can afford to schedule a design effort where everything is done sequentially. In such a case, timing simulations become a very important indicator of the risk of projects not making deadlines. It simply isn't possible to create a lot of hardware, software, test protocols etc. sequentially or even in parallel, then two weeks before shipment throw all that stuff together for the first time and figure out why things don't work. So we have a lot of ways to do simulation that offer increasingly more accurate, and hence reliable, views of how our design (after it's been optimized, re-worked and reduced to LUT equations) might actually work in a system before having to run it in hardware. When there are 10 engineers doing parts of 1 large FPGA design and all of those parts are integrated, it's not uncommon for some of them to start failing due to limited routing resources and clock line limitations.
40. 1 point
Hi @Ciprian, After some time, I managed to solve this issue. In fact, it was a problem in the hardware and device tree configuration. I discovered it when probing with another example project named Zybo-hdmi-out (https://github.com/Digilent/Zybo-hdmi-out). However, as this project is for a previous version of Vivado, I tested with Vivado 2017.4. Surprisingly, it worked fine, but with another pixel format in the device tree. The Zybo-base-linux project which I used has the pixel format in the DRM device tree configuration set to "rgb888"; however, Zybo-hdmi-out displayed correctly with pixel format "xrgb8888". If I use other pixel formats, no output is displayed in either case. Going deeper into the configuration of both projects, I discovered that there are some differences in the VDMA and subset converter settings, which, when changed to the configuration in Zybo-hdmi-out, solve the problem of colors and rendering, considering also a pixel format in the device tree equal to "xrgb8888". I attached the images of both configurations. In addition to this, I managed to update the design for the Vivado version I use (2018.2) with no more differences than the AXI memory interconnect being replaced by the AXI SmartConnect in the newer version, which is added automatically when using the Vivado autoconnect tool for the VDMA block. Hope this information can help others who run into the same issue. Thanks for your help. Luighi Vitón
41. 1 point
If you can bring it up once in Vivado HW manager (maybe with the help of an external +5 V supply), you might be able to erase the flash. If not, you may be able to prevent loading the bitstream from flash, e.g. by pulling INIT_B low (R32 is on the bottom of the board, its label slightly below the CE mark). See https://www.xilinx.com/support/documentation/user_guides/ug470_7Series_Config.pdf: "INIT_B can externally be held Low during power-up to stall the power-on configuration sequence at the end of the initialization process. When a High is detected at the INIT_B input after the initialization process, the FPGA proceeds with the remainder of the configuration sequence dictated by the M[2:0] pin settings."
42. 1 point
A UART Based Debugger Tool
43. 1 point
OpenLogger ADC resolution + exporting
Hi @sgrobler, Our design engineer who designed the OpenLogger did an end-to-end analysis to determine the end number of bits of the OpenLogger. This is what they ended up doing, in summarized fashion: <start> They sampled a 3 AAA battery input to the SD card at 250 kS/s, set the OpenLogger sample rate to 125 kS/s, and took 4096 samples; they then took the raw data stored on the SD card, converted it to a CSV file, and exported the data for processing. Their Agilent scope read the battery pack at 4.61538 V and, as they later found from FFT results, the OpenLogger read 4.616605445 V, a 0.001225445 V or ~1.2mV difference, which presumes the Agilent is perfect (it is not), but it was nice to see that the values worked out so closely. They calculated the RMS value of the full 4096 samples in both the time domain and, using Parseval's theorem, in the frequency domain as well, both of which came up with the same RMS value of 4616.606689 mV, which is very close to the DC battery voltage of 4616 mV. Because the RMS value essentially equals the DC voltage, this gives the previously mentioned DC value of 4.616605445 V. They can then remove the DC component from the total RMS value to find the remaining energy (the total noise, including analog, sampling, and quantization noise) of the OpenLogger from end to end. With the +/- 10V input range, this produces an RMS noise of 1.5mV. At the ADC input, there is a 3V reference and the analog input front end attenuates the input by a factor of 0.1392, so the 1.5mV noise on the OpenLogger is 0.2088mV at the ADC. With 16 bits (65536 LSBs) over 3V, 0.0002088V translates to ~4.56 LSBs of noise. LSB counts convert to bits as powers of 2, so log(4.56)/log(2) gives 2.189 bits of noise, leaving a final ENOB of 16 - 2.189 = ~13.8 bits. Note though that this ENOB of 13.8 bits is based on system noise and not dynamic range, so for non-DC inputs (which will likely be measured at some point) the end number of bits is not easily determined.
The datasheet for the ADC used in the OpenLogger (link) shows that the ADC itself gives an ENOB of about 14.5 bits at DC voltage (so the 13.8 bits is within that range), but at high frequencies, this of course rolls off to lower ENOB at higher frequency inputs. Thus, they cannot fully predict what the compound ENOB would be over the dynamic range, but they suspect it all mixes together and is 1 or 1.5 bits lower than the ADC ENOB response. </end> Let me know if you have questions or would like to see the non-abbreviated version of his analysis. Thanks, JColvin
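The arithmetic in the analysis above can be replayed in a few lines (all values are taken directly from the post; the variable names are mine):

```python
import math

noise_rms = 1.5e-3        # end-to-end RMS noise at the +/-10 V input, from the post
attenuation = 0.1392      # front-end attenuation factor down to the ADC pin
vref = 3.0                # ADC reference voltage
bits = 16                 # nominal ADC resolution

noise_at_adc = noise_rms * attenuation   # 0.2088 mV of noise at the ADC input
lsb = vref / 2**bits                     # ~45.8 uV per LSB
noise_lsbs = noise_at_adc / lsb          # ~4.56 LSBs of noise
lost_bits = math.log2(noise_lsbs)        # ~2.19 bits lost to noise
enob = bits - lost_bits                  # ~13.8 effective bits

print(round(noise_lsbs, 2), round(enob, 1))
```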
44. 1 point
OpenCV and Pcam5-c
Hi @Esti.A, If you clone the repo, you obtain the "source code" for the platform and have to generate the platform yourself. This is a time-consuming and complicated task and is not recommended if you do not understand SDSoC very well. I advise you to download the latest SDSoC platform release from here. You will obtain a zip file that contains the SDSoC platform already built. After that, you can follow these steps to create your first project.
45. 1 point
OpenCV and Pcam5-c
46. 1 point
Hi @Sandrine, In Sync mode the trigger is not available. The I2S interpreter needs to see the transitions on the clock signal, so if you use Sync mode, select the Edge option (sample on both edges) for the Clock signal. Repeated captures for the Logic Analyzer can be done from the Script tool like this:

for(var c = 0; c < 10 && wait(); c++){
    print(c)
    Logic.run()
    Logic.wait()
}
47. 1 point
Zybo Z7-10 audio passthrough
Hi @jpeyron The DMA audio demo uses the d_axi_i2s_audio IP core, which has an S2MM output and MM2S input. The first thing I tried was to route the output directly into the input, which didn't work. In addition, the way the C code handles recording is by configuring the DMA block to record, then telling the I2S core to store N bytes from the input into a register. The HDMI demo works by reading video data into a series of video buffers, and displaying image data from a series of frame buffers. I can make an HDMI passthrough by pointing the display output buffer to the video input buffer. I'm wondering if there's a similar solution for audio. I've had some trouble getting that Instructables project to work. The I2S controller looks fine, but the SerialEffects block doesn't seem to match the block diagram (which is really blurry). I'll try it again and see if I missed something. Does that d_axi_i2s_audio IP core have any documentation?
48. 1 point
MMCM dynamic clocking
@rangaraj, It's a shame you only know VHDL coding, since the Verilog code I posted above would give you the ability to generate an arbitrary clock--unencumbered by the constraints of the PLL, with frequency resolution in the milli-Hertz range (100MHz/2^32). Perhaps you want to take another look at it? Sure, it would have some phase noise, but ... that could be beaten now if necessary by using an OSERDESE2 component. I've got an example design I'm working on that does just that and should knock the phase noise down to 1.25ns or better. Dan
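The phase-accumulator clock generator Dan is referring to can be modeled in a few lines. This is a Python sketch of a hypothetical 32-bit accumulator clocked at 100 MHz (the figures mentioned above), where the "output clock" is simply the accumulator's MSB; it is not his posted Verilog:

```python
FCLK = 100e6      # system clock from the post
ACC_BITS = 32     # accumulator width from the post

def nco_increment(f_out):
    """Phase step per clock for the accumulator."""
    return round(f_out / FCLK * 2**ACC_BITS)

def toggles(f_out, n_clocks):
    """Count MSB transitions of the accumulator over n_clocks cycles."""
    inc, phase, msb, count = nco_increment(f_out), 0, 0, 0
    for _ in range(n_clocks):
        phase = (phase + inc) & (2**ACC_BITS - 1)
        new_msb = phase >> (ACC_BITS - 1)
        count += new_msb != msb
        msb = new_msb
    return count

# Frequency resolution is one LSB of the increment:
print(FCLK / 2**ACC_BITS)   # ~0.0233 Hz, the milli-Hertz range quoted above

# 1 MHz from a 100 MHz clock: one period per 100 clocks, two MSB
# transitions per period, so about 200 transitions in 10,000 clocks:
n = toggles(1e6, 10_000)
```

The jitter Dan mentions comes from the MSB only toggling on clock edges, so each output edge can be up to one system-clock period early or late; the OSERDESE2 trick subdivides that period further.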
49. 1 point
MMCM dynamic clocking
50. 1 point
We have a few demo projects using the Basys2 in RF communications that we haven't had time to document yet. Although it's different, there are a lot of similarities that you may be able to use. Take a look at the attached Zip file. The demo project uses the Pmod RF1 to communicate keyboard presses to another Basys2 that is controlling a speaker. PmodRF1 Basys demo project by digilentinc, on Flickr Here is a link to get the files for the project: https://www.dropbox.com/s/83winp3zyio7or7/Basys2AudioRfPmodHID.zip?dl=0 Hope that helps!