Encyclopedia of Behavioral Medicine
Living Edition
Editors: Marc Gellman
Mortality Rates
G. David Batty
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-6439-6_475-2
A mortality rate is an estimate of the proportion of a population group dying during a specific period of time. Mortality rates can be based on how many people die of any cause (“total mortality”) or can be used to describe the death rate of a certain illness or condition, such as dementia or avian influenza.
Mortality rate is calculated as the number of people dying (numerator) divided by the number of people at risk of dying (denominator). The latter estimate is typically based on midyear population data.
In order to produce readily understandable results, the mortality rate – which is typically a small fraction – is often scaled up and expressed as deaths per 100 or 1,000 individuals. For instance, in a town of 10,000 residents, if ten people die of a heart attack over a given period of time, the mortality rate due to this condition would be one per 1,000 persons.
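As a quick illustration of the arithmetic (a minimal sketch, not part of the encyclopedia entry; the function name is an assumption and the figures simply restate the example above):

def mortality_rate(deaths, population_at_risk, per=1000):
    # Deaths divided by the number of people at risk, scaled to a convenient base.
    return deaths / population_at_risk * per

# Ten heart-attack deaths in a town of 10,000 residents over the period:
print(mortality_rate(deaths=10, population_at_risk=10_000))  # 1.0 per 1,000 persons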
Authors and Affiliations
1. Department of Epidemiology and Public Health, University College London, London, UK
Forests in a Changing Climate
Climate changes are likely to affect important ecological processes that will, in turn, affect key natural resources. For example, temperature and precipitation changes could mean that certain ecological disturbances become more frequent in some areas of the country. The emissions that cause climate change also lead to problems that put additional stress on trees.
Coupled with altered hydrology and increased disturbance and stress, climate change will affect forests within the U.S. and will cause changes in ecosystems and soils. How these resources are affected will have broad implications for maintaining healthy forests, including the capabilities of forests to provide the benefits people depend on. Each impact on one aspect of an ecosystem can affect a variety of others, producing a series of cumulative effects that can make it difficult for ecosystems to adapt. These effects and many other issues are considered in the management plans being created across the country.
Responding to Climate Change
Meeting the diverse challenges that climate change is imposing on Earth's environments requires many approaches, and specific responses will depend heavily on the management goals for a particular resource. Scientists are currently working to understand the risks posed to ecosystems, examining ecosystem characteristics and changes and conducting research on impacts and vulnerabilities. Public lands and other ownerships will all be affected, and each will require different management considerations. Specific management tools can be valuable for helping forests respond to a changing climate.
For those charged with managing ecosystems, climate change can seem like a daunting challenge. Fortunately, a range of management options exist to help ecosystems adapt to climate changes, and to contribute to climate change mitigation by reducing the amount of greenhouse gases in the atmosphere. These options are often complementary to actions that land managers employ regularly.
Principles for Managing Lands Under Climate Change
Although land managers already have many tools available to begin to address climate change, management thinking may need to reconsider issues like spatial scales, timing, and prioritization of efforts. The following principles can serve as a starting point:
Managing multiple stressors: Impacts from a changing climate are often first felt through their effect on ecological disturbance (wildfire, flood, insects, and disease). Managing ecosystems for resilience to these forces is a wise place to focus resource action.
This page features information from the Climate Change Resource Center (CCRC). The majority of the CCRC is dedicated to describing ecosystem responses to climate change and how natural resource management may be able to respond to those changes. Please follow the links in the text, or explore the CCRC for further information.
How To Use a Hydrometer
Many starter kits contain a hydrometer and many beer, wine and cider kits refer to this device, so what is it and what does it do? A hydrometer measures the density of a liquid (called the specific gravity). Water has a specific gravity of 1 (or 1.000 on a hydrometer). Liquids that are more dense than water will have a higher reading, while liquids that are less dense than water will have a lower reading.
When you add sugar to water, it increases the density. Unfermented beer, wine, cider and mead contain a lot of sugar, therefore the specific gravity is higher than 1.000. As fermentation proceeds, yeast convert the sugar to alcohol, which is less dense than water. The result is that as fermentation progresses, the specific gravity will gradually decrease.
So, why is gravity important and why do you need to use a hydrometer? There are two reasons. Firstly and most importantly, taking a hydrometer reading is the only sure way to show that fermentation has stopped. Secondly, by taking hydrometer readings both before fermentation has started and after it has finished, we can calculate the alcohol level of our beer, wine, cider or mead.
First of all though, let's look at how you read the hydrometer. The hydrometer is a weighted glass instrument, which needs to be floated in a liquid. Ideally you should use a trial jar for this, as it means you don't put your hydrometer into your brew, which carries the risk of introducing an infection. To fill the trial jar, use a sterilised wine pipette or turkey baster. Float the hydrometer in the trial jar and it should look like this:
In the example pictured, the reading is above 1.000; in fact it is measuring 1.036. You need to take the reading from the surface of the liquid, rather than where it curves up the glass of the hydrometer. This hydrometer is sitting in unfermented beer; if it were in wine, the reading would be much higher, in the region of 1.075 to 1.085, depending on the expected end alcohol content.
So, how do you tell if fermentation has stopped? By taking a hydrometer reading, then waiting for 24-48 hours and taking a second reading. If the gravity has decreased, then fermentation has not stopped. If the gravity reading remains the same, then fermentation has completed. Let's look at another beer sample:
In this second example, the gravity reading is now 1.014. Although this is a little high for beer, many kits will finish around this point; again, check the reading over several days to make sure it hasn't dropped any further. Once you are sure that fermentation has stopped, you can proceed to bottling.
Calculating the alcohol content of beer, wine, cider or mead
If you took a hydrometer reading before you added the yeast (Original Gravity or OG), and another one at the end of fermentation (Finishing Gravity or FG), you can calculate the alcohol content of your wine, beer, cider or mead. To do this, you use the following equation:
(OG-FG) x 131.25 = ABV
So for a typical kit beer, we might have an OG of 1.038 and a FG of 1.010. This would give us 0.028 x 131.25, which equals 3.7% ABV.
For a wine, the OG might be 1.086 and the FG 0.995. This would give us 0.091 x 131.25 = 11.9% ABV.
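If you prefer to let a script do the arithmetic, here is a minimal sketch of the same OG/FG formula; the function name is an invention and the sample readings are just the examples from above, not figures from any particular kit.

def abv_from_gravity(og, fg):
    # Approximate alcohol by volume from the original and finishing gravity.
    return (og - fg) * 131.25

print(round(abv_from_gravity(1.038, 1.010), 1))  # typical kit beer: 3.7
print(round(abv_from_gravity(1.086, 0.995), 1))  # example wine: 11.9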
Typical hydrometer readings for beer, wine, cider and mead
Each brew will be different and depending on the types of sugars present, hydrometer readings may vary, but here are some typical finishing gravities for beer, wine, cider and mead. If your FG is substantially higher, then your fermentation may have stopped prematurely and will need to be restarted using a restart yeast.
Wine and mead have a higher range of final gravities, depending on whether you want a dry or sweet finish:
Hydrometer dos and don'ts
Do use a trial jar to contain your sample in which the hydrometer floats.
Do sterilise your pipette or turkey baster before taking the sample.
Do keep your hydrometer in its protective case; it is fragile and will always break when you need it most!
Do drink the sample you take after fermentation has stopped - this will tell you how well your brew has fermented.
Do not float the hydrometer in your demijohn or fermenting vessel. It could infect your brew and you may not be able to retrieve it, especially from a demijohn.
Do not put the sample back into the brew - you could contaminate it.
Do not bottle your brew if the hydrometer reading is higher than expected or still changing - you could end up with exploding bottles.
Cnidarians and Ctenophores: next step in evolution!
Phylum CNIDARIA (jelly fish, anemones, corals)
Cnidarians and Ctenophores represent the next step in evolution: radial symmetry. After the primitive cell aggregation of sponges, these are the first two Phyla of animals presenting this new structure.
Apart from radial symmetry, they present a body with two layers of tissue, tentacles bearing stinging cells called cnidocytes, and a digestive cavity with a single opening (mouth and anus at the same time! Blech!)
It is an important phylum, with more than 9,000 species living in the marine environment. Jellyfish, sea anemones, corals and sea pens belong to this Phylum.
Compared with sponges, cnidarians also present a primitive nervous system. There is no proper brain, but several nerve cells respond to tactile and chemical stimuli. Cnidaria are nevertheless quite simple in construction: neither respiratory nor excretory systems are present.
A cross section of the body wall shows two layers: an ectoderm on the outside, bearing the cnidocytes, and an endoderm on the inside. In between, a jelly-like material called mesoglea is present. The mesoglea can be very thin, as in anemones, or very thick, as in medusae.
Cnidocytes are present in all cnidaria and are located on the tentacles. They contain threads called nematocysts that can be ejected by tactile or chemical stimulation for defence or prey capture.
The cnidarian body comes in two basic forms: polyp and medusa. The polyp is sessile and has the mouth end facing up. The medusa is planktonic and has the mouth and tentacles hanging down. One is the upside-down form of the other.
In some classes, cnidarians alternate a planktonic medusa stage, during their larval phase, with a sessile polyp stage, during their adult phase. Other cnidarians have only one form for all their life, either medusa or polyp. In addition, many species are individual animals, while a great number are colonial, presenting morphological and functional specialization among the individuals.
Cnidarians are divided into 4 Classes, based on their structure.
Anthozoa (sea anemones and corals). This is the largest of the 4 classes, with about 6,000 species. It occurs only in the polyp form and can be either individual (anemones) or colonial (corals). The animals present numerous septa throughout the body, which increase the surface area of the digestive tissue. Most of them belong to the group of the Hexacorals and present 6, or a multiple of 6, tentacles.
Scyphozoa (jellyfish). It is present with about 200 species of large planktonic cnidarians. The medusa is the dominant form, with big, solitary individuals. Several species also have a sessile polyp stage. Many species are highly poisonous, even for human beings.
Hydrozoa (hydroids). It presents about 3,000 species of small colonial animals, in which a colonial polyp stage prevails and alternates with a solitary medusa stage. In this class the life cycle is very complex and alternation of generations is the rule. A particular group is represented by the Siphonophora, which are complex colonial pelagic medusae.
Cubozoa (cubomedusae). It presents a few species of cubic jellyfish with 4 long tentacles used for prey capture. The polyp stage is much reduced. They are highly poisonous and can even cause death in cases of multiple stings. They are found in shallow tropical seas.
Phylum CTENOPHORA (sea gooseberry, sea walnut)
Like Cnidaria, Ctenophora have a single digestive cavity, two body layers with mesoglea in between, and tentacles with specialized cells for prey capture.
It is a small phylum with only about 90 marine species; most of them are pelagic and form part of the zooplankton. The majority are spherical and measure a few centimetres in diameter. A few species have a flat body up to 1 m long, like the belt of Venus.
Unlike Cnidaria, Ctenophores present a peculiar locomotion system based on cilia. The cilia are particular structures organized into combs (ctena in ancient Greek) arranged in rows, which overlap each other. There are 8 rows of ciliary combs running from the top to the bottom of the animal. Thanks to the uniform beating of the cilia, ctenophores can swim.
Moreover, ctenophores present, on their tentacles, specialized cells called colloblasts. These cells contain a coiled thread that is discharged under stimulation, capturing and immobilizing prey from the zooplankton.
Even though ctenophores are transparent, light reflection on the cilia gives the impression that they have rainbow colours on their comb rows.
… it continues! …
Surgical Ablation of Atrial Fibrillation (AFib)
Heart & Vascular Care
Surgical Care
Featured Physician:
Erick L. Montero, MD
There are various methods available today for treating atrial fibrillation (AFib). AFib is the most common type of arrhythmia. It occurs when the electrical signals in the heart’s two upper chambers miscommunicate and the chambers begin to fibrillate, or contract faster and irregularly. The result is a heartbeat that is too fast or too slow, with an irregular rhythm. AFib also causes the blood to pump irregularly, which makes the upper and lower chambers work inefficiently. If left untreated, AFib could lead to stroke or heart failure.
Surgical ablation of atrial fibrillation is a procedure also referred to as the “maze” procedure because the incisions made have maze-like patterns. The surgeon creates a number of incisions to cause scar tissue on the left and right atria. The scar tissue disrupts the abnormal electrical impulses that cause atrial fibrillation and prevents further erratic electrical signals from forming, restoring a normal heart rhythm. This procedure is typically an open-chest procedure with a very high success rate.
Doctor Who
Doctor Who (1963)
1 corrected entry in The Three Doctors
The Three Doctors - S10-E1
Corrected entry: It's stated that Omega caused the black hole when he created a supernova. Supernovas don't cause black holes, they're created by huge stars imploding.
Correction: Basic lifecycle of extremely massive stars: they're born, they live for several million years, they die in a large supernova. There are two potential outcomes of a supernova: the creation of a pulsar or neutron star, or the creation of a black hole, depending on how massive the original star was. So yes, black holes are indeed created in supernovas.
Investigation of binary numbers
1. Sep 5, 2012 #1
Hi :-)
I have questions regarding the binary properties of numbers.
I would like to discuss some very specific attributes of "scalar" values.
IF the goal is to compile a pattern recognition algorithm instead of training it with test sets,
Then I am investigating a method for the compilation of neural nets/matrices and attempting induction instead of deduction as in "Curve Fitting".
This attempt has introduced speculation that requires more information outside of the box "standard mathematics" in order to determine the validity of this direction of research.
To discuss this, I need the proper forum and the proper individual(s) who can step outside of the box. Please point me to where I can learn how to articulate the following, in order to be able to ask the right questions.
Inside the box that is standard mathematics, a scalar has magnitude and no direction.
x = 10 in decimal, or 1010 in binary, is a count when using the base-10 algorithm and iterating from the rightmost digit to the leftmost digit,
effectively counting.
Outside of the box, every scalar has direction when reversing the direction of the counting algorithm and using a measuring algorithm starting from the leftmost digit to the rightmost. In other words, treating the scalar as a measure instead of a count results in a "path" or measurement (not a count, as in the previous example).
1010 from left to right indicates the 1st half of the 2nd half of the 1st half of the 2nd half.
This behaviour fascinates me.
Has anyone any books, papers, or references on this strange topic in relation to curve fitting? A name, anything?
Last edited: Sep 5, 2012
3. Sep 5, 2012 #2
Staff: Mentor
Some scalars (such as real numbers) can be considered to have a direction, if you consider negative vs. positive scalars. Integers are used to count things, but rationals and real numbers are used for measurements.
Could you elaborate on what you're doing here? How do you get "1st half of the 2nd half of the 1st half of the 2nd half" out of 1010?
If I'm following what you're saying, you aren't working with the properties of numbers - you are just encoding something in a string of numeric digits.
4. Sep 6, 2012 #3
Science Advisor
I really have no clue what you are talking about. If you are going to work "outside" the standard "box", you will have to define, very carefully, exactly what you mean by "measurement", "direction of a scalar" and other words that you are not using in the standard way.
5. Sep 6, 2012 #4
My apologies, Halls of Ivy... you are correct, and I am... a little sloppy. ;-) Thanks. I'll do better. When you correct, I shall implement to the best of my ability.
Dead on, Mark... except for the encoding statement. Encoding implies action; I took no action except on an existing property.
For the sake of argument, let us suppose the proposition that the subject is a property is true.
Then encoding takes advantage of this property I am attempting to isolate for observation. (After this, I can begin to use the scientific method.)
The path is from discrete math and is something I found when experimenting with Morton Location Codes, compression, and hashing algorithms.
In this example, I am using the same principle, but with only one dimension and trying to keep it simple (no interleaving).
Counting: in binary, the decimal number 10 is represented as 1010 (lower limit = 0000, upper limit = 1111).
The set A is the set of all integers from the lower limit 0 to the upper limit 15 (1111), or A = {a : a = 0..15}.
Then, the 10th element is 9.
Measuring: in binary, the decimal number 10 is represented as 1010. Consider this a binary path into a binary space when using the divide-and-conquer algorithm (fewer steps: only 5 instead of the 10 required when counting or iterating).
Let a = 10 - 1 = 9 (binary 1001). (You have to subtract one because you are measuring and not counting.)
The set D is the set of all digits in the path we wish to consider, or D = {d : d = 1, 0, 0, 1}.
Let set B be the upper or lower half of set A, i.e. B = {0,1,2,3,4,5,6,7} OR {8,9,10,11,12,13,14,15}, depending on the value of the current digit.
Consider d[0] of 1010 = 1. If d[x] = 0, use the lower half of set B; when d[x] = 1, use the upper half of set B. The cardinality of set B, now 16, will be halved after this step.
Our 1st digit is a 1, so let B = {8,9,10,11,12,13,14,15}, and we now have only 8 elements to consider.
Our 2nd digit is a 0, so let B = {8,9,10,11}.
Our 3rd digit is a 0, so let B = {8,9}.
Our 4th digit is a 1, so let B = {9}.
When the cardinality of set B is reduced to 1 using the divide-and-conquer algorithm, we have our answer, and it is the same as when counting.
I hope I got close…. This is not easy to describe.
I want to know more about this if possible. I'm breaking my brain on how training a matrix allows it to recall patterns... I can't figure out how to reverse the process and compile a matrix. It should have been simple function composition but... sigh. Lost again.
CLARIFICATION: if U is a vector space using the counting coordinate system and V is a vector space using the measuring system, then the operator I am looking for maps U to V. (It almost sounds right... need help... over my head ;-)
Last edited: Sep 6, 2012
6. Sep 7, 2012 #5
Correction to: Consider d[0] of 1001 = 1. If d[x] = 0, use the lower half of set B; when d[x] = 1, use the upper half of set B. The cardinality of set B, now 16, will be halved after this step.
7. Sep 7, 2012 #6
Staff: Mentor
I do not see anything new, "outside the box" or whatever here.
In a similar way, you can consider 0.1001₂ = 1/2 + 0/4 + 0/8 + 1/16 = 9/16, where the subscript 2 denotes a binary number and the other parts are decimal.
8. Sep 7, 2012 #7
Yes sir...it is not out of the box, more appropriately worded as.... "over my head"
MFB!!!! Sir, what reference can I find on the expression you just used... it has that DARN harmonic series flavor... again.
That is SWEET!!!! But it sucks as well, because it hints that wherever you point me is going to put me back in imaginary space, dealing with complex numbers and banging my head against the Riemann Hypothesis.
I have to learn to articulate this base proposition or whatever it is before I can present the argument.
I would like to have an area of study, or a reference that would give me the starting point I need to choose the right forum to continue my line of questioning and research.
I need education before I can continue. just so that I can find the right words to communicate with you.
I'm asking here for that... direction.
Once I know the name, I can google papers or textbooks that can continue my education.
I can't make a speculative statement like:
"The consequence of mixing counting and measuring algorithms without regard for unit of measure is that the -1 can behave as 1/2 under very specific operations resulting in a deviation similar to that of Li(x) and x/Log(x) from the zeta function"
because it has no base proposition that I can articulate and you will immediately cease contact.
I'm having to use a bit of geometry to get there and am having difficulties explaining a geometrical problem in mathematical terms.
I want to learn, and am unable to continue my education, and am hoping you might donate some of your time to .... show me the way.
Last edited: Sep 7, 2012
9. Sep 7, 2012 #8
Stephen Tashi
Science Advisor
I'll make a guess about what Hermy is asking.
He wants to know a symbolic expression for a function f(S) whose domain is a string of N binary digits and whose range is a subset of the non-negative integers.
The function is defined by an algorithm that amounts to interpreting the string as a process that cuts out pieces of a vector of numbers until only one number is left. (This is in contrast to the usual way of interpreting a string of binary digits, which amounts to adding up various powers of 2.)
The algorithm for computing the integer y as a function of the string S is as follows:
Form a vector of integers V[0] that lists the first 2^N non-negative integers.
(For example, if N = 4, V[0] = [0,1,2,...,15].)
Set i = 0 and iterate the following steps until the process defines a vector V[i+1] that contains a single integer:
Examine the (i+1)th digit in the string.
If the digit is 0, form a new vector V[i+1] that consists of the integers listed in the first half of V[i].
If the digit is 1, form a new vector V[i+1] that consists of the integers listed in the second half of V[i].
(For example, if the digit is 0 and V[i] = [0,1,2,...,7,8,...,15], then V[i+1] = [0,1,2,...,7].)
If V[i+1] contains only a single integer y, the function returns y.
Otherwise set i = i+1 and repeat the above steps.
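To make the procedure concrete, here is a rough Python sketch of the function described above; the function name and the printed checks are illustrative additions of mine, not something posted in the thread.

def halving_value(bits):
    # Interpret a binary string by repeatedly keeping the lower or upper half
    # of [0, 2**N - 1], as described above, until one integer remains.
    values = list(range(2 ** len(bits)))
    for digit in bits:
        half = len(values) // 2
        # 0 keeps the first (lower) half, 1 keeps the second (upper) half.
        values = values[:half] if digit == "0" else values[half:]
    return values[0]

print(halving_value("1001"))  # 9, matching the worked example earlier in the thread
print(halving_value("1010"))  # 10, i.e. the ordinary binary value of the string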
10. Sep 7, 2012 #9
Yes Sir... "In contrast" as in I just changed the direction of the algorithm, from one that has to ask more questions than the other.
Almost an inversion... but with a twist I am unable to identify.
It seems to appear in the strangest spaces, and mostly around the operators -1, 1/2, and the square root of -1.
Mr. Tashi, what do I call the expression you used, and what topic should I begin studying in order to express myself as you do? I'm at the limit of my experience.... sniff... ;-)
The only patent I've ever seen that took advantage is the "Bus-Switch Encoding" patent I found a few years back.
It takes advantage of the fact that, for example, a 32-bit bus can send 4 channels of 8-bit operands at once.
By using this path thing, I can sort my operations into a subspace such that all of my 8-bit, 16-bit, and 32-bit calculations can be... non-deterministic... I don't need to ask the question "carry?" as when counting. I just... navigate to the correct space... perform the operation encoded in the element there.
Effectively, the encoding is a lookup into a set of 8-, 16-, and 32-bit ORs without the hidden carry cost associated with most ALUs.
It should speed up my ALU simulator... but again, having problems describing what I'm doing.
By placing my simulated objects in an octree... I can easily do more adds per frame than most by taking advantage of this property that is built into the numbers of my x, y, and z members.
I do not have to ask the question "how small a bit vector is required for the operation?"... it's what I couldn't express to the sweng@gamedev guys several years ago, and I have been trying to learn how to express ever since.
I embarrassed myself so badly with my barbarian grammar that I can't go back there until I can explain myself. I HAVE to be able to communicate with people like Mr Crosbie Fitch, whom I admire.
I rely on you for input.
Last edited: Sep 7, 2012
11. Sep 7, 2012 #10
It might be helpful to note that the number of digits in the path is ALWAYS the number of operations required to reduce the set to 1 element.
Also... I intentionally mapped the example vector spaces so that you could see the relation between counting and measuring that I am researching.
And... it appears, until I learn more, that the ratio of energy consumed between the two types of work, counting and measuring, is 1/2.
WARNING: for your own health and sanity... do not play with a formula to observe the effects of randomly reversing the order of digits. It nearly drove me nuts.
Also... you may not want to consider this in base 8, 10, or 16, as it gets really hairy because of the addition of base and limit parameters.
Last edited: Sep 7, 2012
12. Sep 7, 2012 #11
Staff: Mentor
What mfb showed has nothing to do with the harmonic series. It is how floating point numbers that aren't integers can be represented. The representation mfb showed is exactly the same as a decimal fraction, except that the numbers to the right of the "binary point" (not decimal point) are coefficients of (negative) powers of 2. In a decimal fraction, the digits are coefficients of negative powers of 10.
An example:
3/8 = 0.375₁₀ = 3 x 10⁻¹ + 7 x 10⁻² + 5 x 10⁻³
3/8 = 0.011₂ = 0 x 2⁻¹ + 1 x 2⁻² + 1 x 2⁻³
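Purely as an illustration of the same positional idea (an added sketch, not something from the thread), a fraction can be expanded into its leading binary digits by repeated doubling:

def binary_fraction_digits(x, n_digits=8):
    # Return the first n_digits after the binary point of x, where 0 <= x < 1.
    digits = []
    for _ in range(n_digits):
        x *= 2
        bit = int(x)      # 1 if the doubled value reached 1, else 0
        digits.append(bit)
        x -= bit
    return digits

print(binary_fraction_digits(3 / 8, 4))   # [0, 1, 1, 0]  -> 0.011 in binary
print(binary_fraction_digits(9 / 16, 4))  # [1, 0, 0, 1]  -> 0.1001 in binary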
Imaginary space?
Complex numbers?
Riemann hypothesis?
It seems to me you are throwing around terms that you don't understand.
13. Sep 7, 2012 #12
Stephen Tashi
Science Advisor
I don't know if you are referring to a specific expression. My advice is that you study how to write. In order to write clearly, your language should be precise and it should not use terminology that you yourself have invented unless that terminology is explained. There are a small number of incoherent writers who know what they are doing, but have a particular disability when it comes to writing. A far larger number of incoherent writers can't express themselves clearly because they haven't figured out what they are doing.
If you are dealing with a problem of designing digital electronic circuits, you should ask about it in another section of the forum where the readers are likely to be familiar with NAND gates, multiplexing, etc. You can't assume readers in the mathematics section understand this technology.
Technical writing should be concise if you expect it to attract readers. It shouldn't have all sorts of side-remarks and personal expressions of emotion. For what you have been able to express, your posts have been unnecessarily long.
If you need the answer to a mathematical question, you should study mathematics and learn its standard terminology. If you have question about algorithms, you should study computer programming and how algorithms can be represented in pseudo-code.
14. Sep 7, 2012 #13
Thank you Mr. Tashi. I have the opposite of dyslexia... an output handicap instead of input. I have to find opportunities to repeatedly train myself in technical writing or I'll never develop the skill. Hard to do at 44 with a job and mired in with... the uneducated.
verblexia ;-) I have the programming and need to learn the math as well as how to express myself.
I've read discrete math 3 times, Pattern Recognition and Machine Learning twice, self-taught. Comprehension without the training leaves me in a bind. I couldn't afford college in the 80s, so I'm trying to get "free aid" from you.
You give me a book... I'll devour it faster than you can spit... ask me to communicate and... sigh.
MFB and Mr Tashi hit the nail on the head... can we just focus on what that is in purely mathematical terms, and not on my numerous encounters with my gripe at Riemann, rounding, and -1, or my very poor grammar? I desire to learn. I apologize profusely for speculating that this was even related to our inability to prove the hypothesis.
Yes sir, Mr. Mark. I am using the closest words to concepts I do not have the vocabulary for. I'm counting on you to guide me in. Not just throwing, not for pride as I have none... adjusting fire... heading to a goal with an incomplete plan and not paralyzed by indecision and pride, knowing that no battle plan survives contact with the enemy. I can't let myself be stopped worrying about making a grammatical error, a social mistake, or miswording a concept I have no words for... I must learn and change, or fail.
It's faster to learn on my own when I can rely on synergy... aide from others.
Sometimes friction, but still faster than repeated training. Divide and conquer!!! My favorite algorithm.
Mark, the sum of the increasing powers-of-two series or whatever it is... what is it if not harmonic? It was in MFB's denominators AND it's in the zeta function. HECK, middle C vibrates at 256; each octave up and down increases or decreases by a power of two. The perfect series. I just want to know what to call it, how to express it in symbolic language, and where I need to go to learn more.
I learned in discrete math that counting and measuring are indeed two different things.
I simply speculated that the reason we can't prove the hypothesis is because we mix counting and measuring without regard for unit of measure. I sure as heck can't prove it... I'm not alone in THAT!!!
I'm also not the first person to beat his head repeatedly on papers that begin "If the Riemann hypothesis is true..."
It's everywhere, in every area of science. and it ticks me off that something I don't know is causing it. I have a burning desire to figure it out.
I will not make that mistake again, sir. Again, I am so sorry I even mentioned it. Again... back to the operator MFB and Tashi mention, so that I can continue my work on compiling pattern recognition (a big darn polynomial) instead of training it.
No code, I just want to learn how to build the polynomial/matrix without training. Pure math I hope?
It can't be that hard right?
What is it and what do I need to learn to understand?
Last edited: Sep 7, 2012
15. Sep 7, 2012 #14
It appears I'll have to go back to school to get the education and repetition I need, as well as the training on technical writing. Unless you can get me an ordered list of the next few books I need to read... that should keep me out of your hair for another couple of years. Sigh.
Anybody know of any scholarships for old guys that have been driven crazy? ;-)
Thanks again for your assistance... I'm a lot further along than I was last week, with your help.
16. Sep 7, 2012 #15
Wouldn't the same be true of decimal left to right?
Only instead of halves(1/2) you have tenths(1/10) of the set.
So with a 2-digit number, such as 27, you:
divide into 10 groups,
0-9, 10-19, 20-29, 30-39, ..., 80-89, 90-99, and take the (2+1)th (add 1 because we don't say the 0th set for 0-9);
20-29, divide into 10 groups,
20, 21, 22, 23, ..., 28, 29, and take the (7+1)th;
and voila, 27!
In fact, you could generalize it to any base relatively easily.
If we have a number in base b whose length is n:
Define a set A that has each digit as an element,
A = {a1, a2, a3, ..., an-1, an}.
Define another set B to have all integers less than b^n,
B = {0, 1, 2, ..., (b^n) - 1}.
You then divide B into b subsets of equal length and take the (a1+1)th one as your new set; continue until |B| = 1.
That should work for any number in any base?
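A rough Python version of that base-b generalization might look like the sketch below; the function name and the printed checks are mine, added only to test the idea on the examples already used in the thread.

def digits_as_selector(digits, base):
    # Read the digits left to right, each time keeping one of `base` equal
    # slices of the remaining candidates from [0, base**len(digits) - 1].
    candidates = list(range(base ** len(digits)))
    for d in digits:
        slice_size = len(candidates) // base
        candidates = candidates[d * slice_size:(d + 1) * slice_size]
    return candidates[0]

print(digits_as_selector([2, 7], 10))       # 27, the decimal example above
print(digits_as_selector([1, 0, 0, 1], 2))  # 9, the binary example earlier in the thread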
17. Sep 7, 2012 #16
Staff: Mentor
The series that mfb and I wrote are power series, which generally look like this:
$$ \sum_{n = 0}^{\infty} a_nx^n~=~a_0 + a_1x + a_2x^2 + ... + a_nx^n + ...$$
The examples that mfb and I wrote were finite series, where the base was 1/2 (for binary fractions) or 1/10 (for decimal fractions).
The harmonic series, which is one of many kinds of series, looks like this:
$$ \sum_{n = 1}^{\infty} \frac{1}{n} = 1 + 1/2 + 1/3 + ... + 1/n + ...$$
Series are usually studied in engineering-level calculus (as opposed to "calculus for poets") in the 2nd semester or 3rd quarter. There are lots of calculus books out there, all available on Amazon. Some that come to mind are the ones by Thomas/Finney (maybe just Thomas is writing now), Larson, Anton, Stewart, and a bunch of others.
18. Sep 10, 2012 #17
THANKS a million, literally. I KNEW once we got past my... excitement at having been understood... and my mouth overflowing... that I'd score!!!
The two series are exactly what I need to learn.
Advanced Calculus and Series. On it!
Mark44, I'll have to read Thomas for technical writing? And another for engineering calculus? Is that right? Thanks. Any particular favorite? I prefer the writers to be "long winded".
Zula, yes, it appears so, for any base... I'm just particularly in love with it expressed in binary. How can I figure out how to take the next step and... name "whatever it is" so that I can research it? I need the ability to then express the "principle" in both propositional and symbolic form, to allow for the use of multiple "fixed points" (without forgetting that it describes a location in a subspace).
Does anyone have an idea of where I would start on that line?
This axis I defined, which all have noted and identified, seems to run up and down the scalar's digits with the two different traversal methods, and appears to be some function of... scale? I am wondering what area of study will allow me to express these concepts symbolically?
Last edited: Sep 10, 2012
19. Sep 10, 2012 #18
If I could have just one more book, gentlemen, it would be something that describes...
a three-tiered architecture, or way of looking at a math system, such that the math simulation is in layer 2, operators and operands are in layer 1, and meta-operators would be in layer 0?
Last edited: Sep 10, 2012
20. Sep 10, 2012 #19
Staff: Mentor
No, Thomas writes calculus books.
21. Sep 10, 2012 #20
Staff: Mentor
I'm not aware of any such book.
How Do You Spell THEORY?
Correct spelling for the English word "theory" is [θ_ˈiə_ɹ_ɪ], [θˈi͡əɹɪ], [θˈiəɹɪ] (IPA phonetic alphabet).
Definition of THEORY
1. A plan or scheme subsisting in the mind only; abstract knowledge of any art; a proposed explanation: speculation.
Common Misspellings for THEORY
Below is the list of 612 misspellings for the word "theory".
Usage Examples for THEORY
1. Nothing could have seemed easier in theory, but in practice unexpected difficulties presented themselves. - "A College Girl" by Mrs. George de Horne Vaizey
2. With these I have no quarrel, nor with the religion they teach- in its theory. - "The Gun-Brand" by James B. Hendryx
3. I had a complete theory about her. - "The Diary of a Man of Fifty" by Henry James
4. It's all theory, then? - "Unexplored!" by Allen Chaffee
Tid Bits of Info
• VMO stands for vastus medialis oblique.
• The VMO is most active in the final 30 degrees of knee extension.
• The patella is used as a pulley by the vasti musculature to generate force and extend the knee.
• The VMO is the muscle that resembles a “tear drop” above the knee and on the medial side of the quadriceps muscle group.
• Seek the advice and treatment of a Physical Therapist if you develop anterior knee pain.
Young athletes and the overweight sometimes suffer from discomfort in the front of the knee. There can be many sources of anterior knee pain, but the most common cause is a patella that does not remain in the groove of the thigh bone as it courses up and down during knee motion. Over the past quarter century, healthcare providers have debated the importance of the VMO (one of the quadriceps muscles) in preventing anterior knee pain. Studies now indicate that effective treatment addresses the whole system including the VMO, the core, and the entire leg.
The Vastus Medialis Oblique, or VMO, is one of the quadriceps muscles; it is located above (proximal to) and medial to the patella, or kneecap. The knee joint consists of three bones: the tibia, femur, and patella. These bones move in unison during the knee motions of flexion and extension. At the distal end of the femur there is a groove (the trochlear groove) that houses the patella during motions of the knee. The ideal position of the patella is somewhat centered in the groove, which reduces the amount of compressive and shear forces on the patella and groove.
The patella moves up and down in the groove during knee flexion and extension, and the vasti muscles (quadriceps) in the thigh are thought to help control the patella motion. The VMO was thought to be the primary controlling dynamic force from the medial side of the patella, but some studies indicate that there are numerous variables that can affect the tracking of the patella.
The VMO is most active in the last 20 degrees of knee extension. When this muscle does not function properly, the patella can track laterally and “rub” against the lateral side of the trochlear groove. If this occurs repeatedly the groove or the patella begins to experience damage to the articular cartilage. The lining within the joint, synovial lining, becomes inflamed and the knee begins to hurt.
Recent studies indicate that the VMO and vasti muscles play a role in controlling the patella and the way it moves within the groove. The method used during these studies involved using a nerve block to eliminate the motor function of the VMO therefore disabling its ability to “pull” the patella medially when it tracks within the groove. In the study the non-symptomatic test subjects experienced more lateral tracking of their patella at 15 degrees of knee flexion than symptomatic test subjects experienced at the same flexion angle. The lateral tracking of both groups is indicative of a non-functioning VMO.
Many healthcare professionals are hesitant to have their patients/clients perform a leg extension exercise due to the increased patellofemoral compression force that occurs as the knee moves from flexion to extension. The patella is used as a pulley for the vasti muscles, and the patella moves deeper into the groove when the knee is flexed to 90 degrees or more. In most knee extension exercises, the patient/client sits with their knee flexed to 90 degrees and then extends it against resistance that is applied at the ankle level. The increased compressive force on the patella can become detrimental over a period of time.
The results of the recent studies indicate that isolated exercises for the vasti muscles, and particularly the VMO, are needed for a healthy patellofemoral joint. Several exercises can be performed safely that isolate the VMO and will enhance its capability to control the patella from the medial side (e.g., knee extension exercises, squats, and lunges that limit knee flexion to 30 degrees isolate the VMO and can strengthen it).
If you develop anterior knee pain, you should seek advice and treatment from a Physical Therapist. These licensed healthcare professionals can be visited without a doctor’s prescription, but your insurance might require that you secure a referral from your primary care physician. They will evaluate you and prescribe specific exercises that will address the imbalances that exist throughout your lower extremities. Strength development of the vasti muscles is only one part of the treatment protocol that will be implemented to address the symptoms of anterior knee pain.
During the past 25 years the treatment protocols for anterior knee pain have changed quite a bit. The focus has transitioned to include the entire body and not just the muscle structure around the involved knee joint. The VMO and vasti muscles have a role in controlling the patella, but the core and entire leg have to be included to fully rehabilitate a knee that has anterior knee pain as a primary symptom.
5 Ways To Keep Your Dog's Teeth Healthy
26 September 2016
Categories: Blog
Canine teeth both look and function differently from human teeth, but the steps that you would take to care for them aren't that dissimilar. Unlike humans, dogs can't tell you when they're experiencing a toothache or even when they feel a piece of food stuck between their teeth. If you read up on these five simple ways to keep your dog's teeth clean, you will be able to protect your dog's oral health with ease.
1. Consult With Your Veterinarian About Full Cleanings Like humans, some dogs need to have their teeth cleaned more often than others. If you properly clean your dog's teeth on a regular basis, you probably won't need to take him or her in for a full cleaning more than once a year, or perhaps even less often than that. Some dog breeds are more prone to plaque buildup than others, so talk to your veterinarian about what will work best in your particular case.
2. Make Sure Your Dog Gets Enough Water Giving your dog plenty of fresh, clean drinking water throughout the day will help to prevent your pet from overeating and seeking out other foods to snack on. In addition, you can actually add vet-approved oral care additives to your pet's water to help reduce bacteria and promote fresh breath, all while keeping your dog happy and hydrated.
3. Brush Your Dog's Teeth Regularly Brushing your pet's teeth on a daily basis is one of the most important things you can do to help ward off disease and keep veterinarian visits to a minimum. You will need to get your dog accustomed to sitting down and staying still long enough for you to brush all of his or her teeth, but the end results will definitely pay off with time.
4. Feed Your Dog The Right Food Your veterinarian may have given you a list of reasons why you should avoid feeding your dog table scraps, but a lot of people fail to heed this advice because they aren't aware of the full extent of the potential consequences. Feeding your dog dry foods isn't just good for digestion and bone health—dry dog food is simply best for your pet's teeth.
5. Choose Treats That Are Best For Canine Teeth Some dogs behave better when they know that a tasty treat is coming. While you don't want to give your dog too many treats, choosing the right kind of doggie snack is even more important. Choose hard dog treats instead of the kind that is soft to the touch so that your dog's teeth stay strong and healthy.
For more information, contact local professionals like Kenmore Veterinary Hospital.
TechRead: A System for Deriving Braille and Spoken Output from LaTeX Documents
Donal Fitzpatrick and Alex Monaghan
Computer Applications, Dublin City University
One of the most difficult aspects of research for a blind student is the unavailability of technical material in a format accessible to them. To date, much of the effort of transforming documents into either Braille or spoken output has been in the literary rather than in the technical or scientific areas. For example, the majority of the spoken text produced by existing screen access technology does not harness the capabilities of synthetic speech devices but instead outputs the material using a monotone. TechRead, on the other hand, is a system which, it is hoped, will render technical documents more accessible to blind people. This software will take LaTeX documents as input and produce Braille or spoken output from them.
The main aims of TechRead are as follows:
This paper discusses the fundamental principles underlying this system. It aims to show how the LaTeX document is transformed into an internal representation of the document, and from this to either Braille or spoken output. The final section discusses how the system will be expanded to cater for mathematical material, and our beliefs that the ideas contained in the system can be used to improve screen access technology.
Keywords: LaTeX, Spoken Documents, Braille
1 Introduction
For many years, the focus of those writing software to translate material into Braille has been on non-technical documents. Much of the effort has gone into producing software which will translate literary material into Braille while ignoring the more technical documents. Therefore, as can be imagined, the procurement of technical data for those who cannot read the printed version is both a time-consuming and tedious affair.
TechRead aims to solve this problem. The purpose of the system is to take a file in LaTeX [2,3], and to produce an output file in some medium accessible to blind people. A two-fold approach is taken here. Firstly, the system will take the input source file and derive a Braille file from it. The user will then be able to obtain a hard-copy of this file using a standard Braille Embosser. The reason for using Braille is simple. Many people think that it is an out-dated, archaic system, which has no use in the modern age of advanced technology. However, it is a fact that for many people Braille has been their standard means of reading for most of their lives, and in our view they must be catered for by TechRead. The translator will, as closely as possible, represent the structure of the document i.e., the Brailled document will be as close a replica of the printed version as it is possible or sensible to be. There will, by the very nature of Braille, be some discrepancy here particularly in the area of lay-out.
While Braille to many is the only means of reading a document, it is limited. By its very nature as a tactile system, Braille cannot convey many aspects of documents which sighted people find so important, and which they take for granted. For example, it is only possible to show emphasis in Braille in one way. In order to do this, the emphasised text is italicised no matter what the printed font might be. Speech on the other hand allows for a far wider scope. It is possible for example to have different voice characteristics for emboldened or italicised text. Another major advantage of speech is its ability to convey both document structure and layout characteristics of a document by the use of prosodic characteristics.
TechRead's second mode therefore combines existing speech synthesis technology with an analysis of document markup information to produce a "speaking document browser". Our current strategy is to take LaTeX as an input source and to produce an off-screen model of the document from this. Using this model, the blind person will be able to read a document in as similar a manner to their sighted colleagues as possible. An example of how this system will work is as follows.
Let us assume that the document being browsed is a newspaper, with sections, headlines and articles. The sighted person will read the section if it interests them. However, there might be headlines in this section that they wish to skip, or paragraphs in the articles which they do not wish to read. The document browser aims to provide the blind user with exactly the same functionality. The browser will allow the blind user to skip sections, paragraphs etc. Also, to return to the analogy of the newspaper, if the sighted person wishes to find the next headline, then they can simply scan down the page to see it. The document browser will allow the blind user to go directly to the next/previous section or sub-section of the document.
The advantages of such a system would be many. Unlike the current situation the blind reader would not have to read superfluous and extraneous information. They could "scan" the document using the browser, until the relevant material has been reached and then read it.
One of the key underlying ideas of the TechRead system is that of conveying the structure of a document to the blind user. In order to achieve this goal, an off-screen model (OSM) of the document must be constructed. The strategy employed in producing the accessible documents is based on a three level architecture as shown below (Fig. 1). As can be seen, it consists of an input or source file (LaTeX) which is passed into a pre-processor. This pre-processor will then convert this raw LaTeX material into an internal representation of the document, which can then be passed on to either the translator or the system for producing the documents used by the Browser. Before embarking on this discussion however, it would be useful to outline the reasons for selecting this type of architecture. Firstly, such a system lends itself to a very modular design. The layered structure means that one component can be changed without altering the overall structure or logic of the entire system. Secondly, though at the time of writing the input source file will be in LaTeX, there is no reason why other file formats cannot be added at a later date. All that will be necessary will be to write the conversion routines to transform the input file into the internal format used by the translator and the generator for browsable documents.
2. Representing structure.
The off-screen model of the document is constructed at the input stage. At this point the structure of the document is encoded. For example, to return to the analogy of the newspaper, it is at the input stage of translation that the structural information such as the whereabouts of the starting points of sections or headlines would be deduced. This system will enable the blind user to browse a document in as close a manner to a sighted colleague as possible. The off-screen model will enable the reader to do this. The following paragraphs will describe the model used by TechRead, and the manner in which this will influence the design of an interface to the document.
Figure 1: The Three Layer Architecture.
Previous systems [7] used a tree based architecture to represent the structure of a document. TechRead uses a complex hierarchical structure to represent this. We assume two types of node in the system; one being a terminal while the other is an internal node. The terminal nodes will be used to hold the actual text of the document, coupled with any formatting information associated with that text, while the internal nodes will be used to hold the material relating to headings, sub-headings etc. The architecture can be best described as a cross-linked tree. The root is a node containing all global formatting for the document. Below this are the first level headings (if they are present), or simple terminal nodes containing the text of the document otherwise. At all levels below the root, the nodes are inter-linked both downwards and across the same level. For example, each section node is linked to the preceding and following sections as well as dominating the sub-sections contained within it: this allows the user to browse any chosen level of the document. In addition, the left-most terminal node on any branch of the tree is linked to the left-most terminals on the preceding and following branches: this allows the user to skip forward or back a paragraph. All terminal nodes on any given branch are linked to each other in the form of a list. Finally, the right-most or final terminal on any given branch is linked to the left-most terminal on the next branch for smooth continuous reading. This combination of links directly models a range of different reading strategies.
During construction of the OSM, any formatting changes are passed up the hierarchy to enable rapid processing of the document. For example, if a portion of emphasised text appeared in paragraph 4 of section 3, a flag would be set in both the section and paragraph nodes. Thereafter, if the browser encounters that section it will examine the paragraph nodes to find the one which contains a formatting command. Similarly, browsing the paragraph level would lead to an examination of the terminal nodes to discover where the formatting change occurs. The algorithm which ultimately produces the spoken version of the document would then alter the characteristics of the voice appropriately, instead of simply outputting the text in the normal reading voice. As a consequence of this model, the interface to the document can be very flexible.
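As an illustration only (the paper gives no code, so the class and attribute names below are assumptions), the cross-linked tree and the formatting flags propagated up the hierarchy could be sketched roughly as follows:

class Node:
    # One node of the off-screen model (OSM): part of a cross-linked tree.
    def __init__(self, text=None, formatting=None):
        self.text = text                  # None for internal (section/paragraph) nodes
        self.formatting = formatting or {}
        self.children = []                # dominated sections, paragraphs or terminals
        self.prev_sibling = None          # link to the preceding node at the same level
        self.next_sibling = None          # link to the following node at the same level
        self.parent = None
        self.has_format_change = False    # flag set when any descendant changes format

    def add_child(self, child):
        child.parent = self
        if self.children:
            child.prev_sibling = self.children[-1]
            self.children[-1].next_sibling = child
        self.children.append(child)

    def flag_format_change(self):
        # Propagate the flag up to paragraph and section ancestors,
        # so the browser can skip levels that contain no formatting changes.
        node = self
        while node is not None:
            node.has_format_change = True
            node = node.parent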
It was initially decided to display the LaTeX on screen in an un-interpreted form and to design the interface to the spoken version of the document such that it was based around the numeric keypad on a standard IBM-compatible computer. This is in keeping with trends in the design of modern screen access technology, where developers attempt to ensure that the time taken by users to learn the system is kept to a minimum. However, use of the numeric keyboard also has an inherent logical basis. Firstly, navigation through the document is intuitively related to the directional keys (up = 8, down = 2, etc.). Secondly, the use of meta-keys in combination with the numeric keypad allows functions at different levels of the document. Let us assume, for example, that the "5" key on this keypad reads the current character. When pressed in conjunction with the "shift" key it could read the current word, and in conjunction with the "control" key could read the current paragraph. The flexibility of such an interface leaves a wide scope for expansion or customisation. The number of overlays which can be placed on the numeric keypad is (theoretically) infinite, while the fact that only a small number of keys are at the core of the system means that should the user desire to alter the key mappings it will be relatively straightforward to do so.
3. Translation Algorithms.
We have seen how the TechRead system takes the raw LaTeX documents and produces an off-screen model from them. Then next phase of producing accessible documents is to transform this model into either speech or Braille. The following sections detail how this will be achieved.
3.1 Braille Translation
For many years, the means for producing Braille material from various types of document have been known. However, as was stated in section 1, the material produced to date has been of a literary rather than a technical nature. There are still very few translation packages which can take technical documents with embedded mathematical formulae and render accurate Braille. This is one of TechRead's main aims.
The translation process simply involves a character substitution algorithm, consisting of a rule-based engine which replaces patterns of characters found in the input document with their grade II Braille equivalents. The rules are of the form:
"input_string" => "braille_symbol"
"input_string" => "braille_symbol" + "remaining_input"
Either the entire string has a Braille equivalent, or the first part has a Braille equivalent.
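Purely to illustrate this style of rule-based substitution (the handful of rules below is a tiny, simplified subset invented for the example, not TechRead's actual grade II tables), such an engine could proceed by longest match on the front of the remaining input:

RULES = {
    "and": "⠯", "the": "⠮", "er": "⠻",     # sample contractions
    "a": "⠁", "d": "⠙", "e": "⠑", "h": "⠓",
    "n": "⠝", "o": "⠕", "r": "⠗", "t": "⠞", " ": " ",
}

def to_braille(text):
    # Repeatedly replace the longest matching pattern at the start of the input.
    output = []
    while text:
        for length in range(len(text), 0, -1):
            if text[:length] in RULES:
                output.append(RULES[text[:length]])
                text = text[length:]
                break
        else:
            output.append(text[0])  # no rule matched: pass the character through
            text = text[1:]
    return "".join(output)

print(to_braille("and the other"))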
However, more important than the translation process is the actual material which is translated. How, for example, should emphasis be conveyed to the Braille reader? Traditionally Braille has used only one form of emphasis, namely italics, and this has been used to denote emphasis irrespective of the form of visual enhancement of the document. This notion of "what" rather than "how" to translate is particularly important when dealing with mathematical material. Unlike the spoken version of the document, where an almost infinitely varied set of alterations in the characteristics of a voice can convey much of the semantics of the formulae, Braille has only one way to translate mathematical material. One of the ways in which it is hoped that this translator will improve on others currently in existence is its use of the spatial location of formulae on a page. For example, as any student using the British mathematical notation will know, it is customary simply to write equations in a line across the page (as though they were literary text) instead of using the conventions adopted by typesetters for displaying printed mathematics. While it is true that not all of these conventions will have relevance to Braille mathematics, some of them, such as the use of vertical as well as horizontal orientation to display formulae, will improve readability.
3.2 Producing Spoken Documents
By far the more interesting portion of the system is that concerned with the production of spoken documents. While the derivation of Braille from the LaTeX input may indeed be very useful, there is far more variety in the output which can be obtained by actually using alterations in the characteristics of the voice to convey the material to the user.
To begin with, let us note that the many formatting commands which are available in LaTeX and similar document preparation systems actually correspond to a much smaller number of linguistic categories which can be realised prosodically. These categories include subordination, aside, change of topic, list, and emphasis. For example, an aside may be encoded in a document as a footnote, a parenthesised passage, a margin note or simply some text between commas: all of these might receive the same prosodic treatment in a spoken rendition. Similarly, in the spoken version it may not be desirable to distinguish between bold, italic, underlined, capitalised and quoted text: it seems unrealistic to expect the listener to keep all these emphasis types distinct. Moreover, TechRead is limited by the possibilities of the synthesis devices which will produce the spoken version: not all synthesisers offer the same degree of control over the prosodic realisation, and the granularity of control also varies. However, previous work [4-6] has shown that a small number of prosodic categories allows the construction of quite complex hierarchies which should be sufficient to express all the relations which sighted users extract from formatted documents.
Starting with a core set of LaTeX commands, we will derive a model of the possible functions performed by different formatting commands. Each of these functions will relate a set of formatting commands to a unique combination of prosodic symbols. These symbols will be given a translation in terms of the control sequences for each output device. The acoustic phonetic details of the spoken output will therefore depend on the particular synthesiser in use. The core set of LaTeX commands will then be expanded to include almost all the standard commands [3], although this will always be a subset of full LaTeX: we cannot cope with user defined macros.
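One possible shape for this two-stage mapping is sketched below in Python. The LaTeX command names are standard, but the prosodic categories and the control strings are invented placeholders for the purpose of the example; they are not the actual TechRead tables or real DECtalk control codes.

# Stage 1: many formatting commands collapse onto a few prosodic categories.
COMMAND_TO_CATEGORY = {
    "\\emph": "emphasis",
    "\\textbf": "emphasis",      # bold and italic need not be distinguished
    "\\textit": "emphasis",
    "\\footnote": "aside",
    "\\section": "topic_change",
    "\\item": "list_item",
}

# Stage 2: each category maps to a control sequence for a given synthesiser.
CATEGORY_TO_CONTROL = {
    "example_synthesiser": {
        "emphasis": "[voice 2]",
        "aside": "[pitch -10]",
        "topic_change": "[pause 800]",
        "list_item": "[pause 300]",
        "plain": "",
    },
}

def control_for(command, device="example_synthesiser"):
    # Resolve a formatting command to the control code for one output device.
    category = COMMAND_TO_CATEGORY.get(command, "plain")
    return CATEGORY_TO_CONTROL[device][category]

print(control_for("\\textbf"))   # -> "[voice 2]"

Supporting a new synthesiser then means adding one more entry to the second table, leaving the command-to-category mapping untouched.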
It is hoped to construct a formal model for the alterations in the voice characteristics to show proper semantic interpretation of mathematical equations in particular. It should be noted that to date much of the work relating to this portion of the system has gone in the direction of enhancing the spoken text, as opposed to mathematical equations. (For a discussion of our future work in the area of mathematics see section 4.)
In order to translate efficiently from the format used in the OSM to a synthesiser-independent format for spoken output, we have devised an algorithm based on the off-screen model generated for each specific document. As was said in section 2, flags are stored as part of the internal nodes which indicate whether changes of formatting occur at a lower level. The algorithm simply checks the formatting of each level as it goes, and, if no change occurs at that level, the text at the levels below this is output to the browser with no additional control codes. However, if a formatting change is detected, the translator drops down a level and goes through the same process, until a point is reached when the text contained in the terminal nodes is found. At this juncture the algorithm simply scrutinises the formatting of the text and, where necessary, computes the prosodic changes required to convey the visual appearance of the material to the blind user. An example will suffice to explain this algorithm.
Let us assume that a default reading voice (V1) has been chosen on a DECtalk [1] synthesiser, and that we are translating a document of two sections with no sub-sections. The browser encounters the starting point for "section 1" in the OSM, and checks the information stored to determine whether there is a need to alter the voice characteristics within that section. Let us assume that in section one there is no such alteration needed. The material contained in this section can now be simply output for use by the document browser. However, if in "section 2" there is some emphasised text in the first paragraph, the translator will not simply output the text: it will discover that a change occurs in the section, and so will examine the paragraph level. Here it will become apparent that there is a change in the first paragraph, so the translator will now examine each of the words within this paragraph to deduce where the alteration in voice occurs. When the start of the emphasised text is found, V2 replaces V1 as the reading voice, until the attributes change again, when the voice is returned to V1 for normal reading.
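A compact sketch of this traversal, again in Python, is shown below. The node representation (a change_below flag on internal nodes, and text plus formatting on terminal nodes) and the bracketed voice codes are simplifying assumptions made for the example, not the actual off-screen model or DECtalk syntax.

# Sketch of the level-by-level output algorithm described above.
class Node:
    def __init__(self, text=None, fmt="plain", children=(), change_below=False):
        self.text = text                  # set on terminal (leaf) nodes only
        self.fmt = fmt
        self.children = list(children)
        self.change_below = change_below  # does formatting vary beneath this node?

def leaf_text(node):
    if node.text is not None:
        yield node.text
    for child in node.children:
        yield from leaf_text(child)

def emit(node, default_voice="V1", emphasis_voice="V2"):
    # Yield text chunks, switching voice only where formatting demands it.
    if node.text is not None:                    # terminal node reached
        voice = emphasis_voice if node.fmt == "emphasis" else default_voice
        yield "[" + voice + "] " + node.text
    elif not node.change_below:
        # No change anywhere below: flatten the subtree with no extra codes.
        yield "[" + default_voice + "] " + " ".join(leaf_text(node))
    else:
        # A change occurs somewhere below: drop down a level and repeat.
        for child in node.children:
            yield from emit(child, default_voice, emphasis_voice)

section2 = Node(change_below=True, children=[
    Node(change_below=True, children=[
        Node(text="Some"),
        Node(text="emphasised", fmt="emphasis"),
        Node(text="text."),
    ]),
])
print(list(emit(section2)))
# -> ['[V1] Some', '[V2] emphasised', '[V1] text.']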
4. Discussion
Though it is intended to incorporate mathematical translation into the TechRead system, the majority of the work done thus far has been in the realms of conveying the structure and visual enhancement of a document to the listener. Therefore, as can be seen from previous sections, the algorithms devised to date have been for the production of both Braille and spoken text. However, these algorithms have been designed with mathematics in mind, and it is our belief that minor modifications will ensure that this type of material will be translated successfully.
It is our intention to conduct a study over the coming months to determine what information sighted people extract when reading equations or other formula-based material. We intend to show them a series of mathematical expressions for various fixed lengths of time, and get them to write down what they recall. The use of varied lengths of time is intended to simulate different types of reader. For example, it is hoped that the replies we get after the subjects have seen the equations for the shortest time will indicate what a sighted person sees when they glance at an equation, while those we obtain after the longest period of observation will indicate what they recall after examining the mathematical material in depth. It is then hoped to analyse these results to determine the best and most effective way to give a listener a "glance" at an equation.
Much work has been done in this area to date. The method used in The Maths Project [8], for instance, was to convey the "glance" using musical tones. It is hoped to find a more natural alternative to this.
We currently envisage several reading modes for equations: verbose, overview and glance. In "glance" mode the system will announce the presence of each equation, followed by an indication of the components of the equation, e.g. "Equation: summation followed by integral followed by fraction". This information should be available from the formatting commands. TechRead is intended to process equations of the type and complexity encountered in pre-university examinations.
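By way of illustration, a "glance" string of this kind could be assembled from the top-level commands of an equation roughly as follows; the token list stands in for whatever the off-screen model records for the equation, and the spoken names are illustrative choices rather than TechRead's actual vocabulary.

# Sketch of "glance" mode: name only the top-level components of an equation.
SPOKEN_NAME = {
    "\\sum": "summation",
    "\\int": "integral",
    "\\frac": "fraction",
    "\\sqrt": "square root",
}

def glance(top_level_commands):
    parts = [SPOKEN_NAME.get(cmd, "expression") for cmd in top_level_commands]
    return "Equation: " + " followed by ".join(parts)

print(glance(["\\sum", "\\int", "\\frac"]))
# -> "Equation: summation followed by integral followed by fraction"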
We also foresee uses for this system in the realms of screen access technology. The traditional approach to designing screen reading software has been to start from the operating system, and then design add-ins which will cope with various types of package. It is our belief that TechRead could be adapted to cope with many different types of documents. For example, a spreadsheet is simply a table, and a document produced by a word processor is simply a document marked up in different ways. Accordingly, we believe that instead of designing screenreaders to cope with various operating systems, it may be possible to incorporate "style sheets" into the TechRead system, thus rendering many different types of document accessible. Finally, though LaTeX is being used at present, there is no reason why the input source could not be amended to SGML at a later date.
[1] DECtalk, a trademark of Digital Equipment Corporation.
[2] KNUTH, Donald E., The TeXbook, Addison-Wesley, 1993.
[3] LAMPORT, Leslie, LaTeX: A Document Preparation System, Addison-Wesley, 1986.
[4] MONAGHAN, A. I. C., Intonation in a Text-to-Speech Conversion System, Edinburgh University Ph.D. thesis, 1991.
[5] MONAGHAN, A. I. C., Intonation Accent Placement in a Concept-to-Dialogue System, Proceedings of the AAAI/ESCA/IEEE Conference on Speech Synthesis, New York, September 1994, pp. 171-174.
[6] MONAGHAN, A. I. C. & LADD, D. R., Manipulating Synthetic Intonation for Speaker Characterisation, ICASSP 1991, vol. 1, pp. 453-456.
[7] RAMAN, T. V., AsTeR: Audio System for Technical Readings, Cornell University Ph.D. thesis, 1994.
[8] The Maths Project
|
Cascading Water Supply Challenges
What is it?
Former Chinese prime minister Wen Jiabao once remarked that water shortages "threaten the very survival of the Chinese nation."1
Increasing water scarcity in Asia is attributed to worsening pollution, unsustainable consumption due to changing lifestyles, droughts and climate change. The construction of dams, nuclear and coal power plants, and other megaprojects is diverting the natural flow of rivers, thereby exacerbating the problem. The problem is especially acute in densely populated areas in Asia, where water scarcity could create "water refugees," increase black market water sales and pose serious institutional challenges.
The region's three major river systems — the Indus, the Ganges and the Brahmaputra — sustain India and Pakistan's breadbasket states and many of their major cities including New Delhi and Islamabad, as well as Bangladesh. Rapid and continuing urbanization will only exacerbate pollution and resource strain, particularly on surface water such as rivers. A 2012 U.S. intelligence report warns that fresh water supplies are unlikely to keep up with global demand by 2040.
The prevailing assumption is that people will co-operate over water rather than fight; however, past water conflicts and the lack of cooperative arrangements indicate that the risk of conflict around water issues is on the rise in Asia.
Why is it important?
Water supply issues could trigger or escalate geopolitical tensions, especially in the case of trans-boundary rivers, such as the Mekong and the Brahmaputra. Demand for fresh water that outstrips the available surface and groundwater may also lead to increased water trading, and increased use of desalination and filtration technologies. The trade-offs both between agricultural and industrial usage, and between water and energy needs, will present policy dilemmas for the economic growth objectives of many emerging economies in Asia.
With more reliance on water to generate energy, an excessive number of dams are being constructed in Asia. Out of the 57 trans-boundary river basins, only 4 have a co-operative or water-sharing treaty. Increasing autonomous action, such as the building of new dams, is leading to heightened tensions between neighboring countries.2 The future of the world's most famous mountain range – the Himalayas – is endangered by this regional race. China and India together have plans to build over 400 hydroelectric dams that will generate 160,000 megawatts of electricity. In the next 20 years, the Himalayas will be the most dammed region in the world.
Indian farmers will increase their use of ground water for irrigation as rivers become more polluted and river levels decline. The increased use of ground water may have a negative impact on salinity and soil quality, leading to a decline in agricultural output and further exacerbating food security concerns. According to the Asian Water Development Outlook 2013, saline soils are already estimated to affect almost 50% of irrigated areas in Turkmenistan, 23% in China, and 20% in Pakistan.
Investment in desalination and filtration technologies could offset some of the impact. Water technology and management companies will find Asia a highly receptive market over the next 15 years. For example, Singapore recently opened the region's largest seawater desalination plant and is hoping to reduce its reliance on Malaysia for water supply.
1. "Desperate measures." The Economist. October 2013.
2. Chellaney, B. "From Arms Racing to 'Dam Racing' in Asia: How to contain the geopolitical risks of the dam-building competition." Transatlantic Academy. May 2012. |
Arundells: The former home of the Prime Minister who brought Britain into the European Community
The historic house of Arundells in the Cathedral Close, Salisbury is a special place for anyone interested in Europe’s history, as it was once the home of Sir Edward Heath (1916 – 2005), the Prime Minister who brought Britain into the European Community. Sir Edward lived there from 1985 until his death in 2005 and the house contains his personal effects including antiques, an art collection, memorabilia from his political career and items associated with his hobbies of music and sailing.
The house also has a much older historical context dating back to the Middle Ages, having originally been a canonry and the home of Henry of Blunston, Archdeacon of Dorset, who lived there between 1291 and 1316. The house has been much altered over the centuries; its classical frontage dates from the time it was occupied by John Wyndham, who acquired a lease for the property in 1718. The house is called Arundells because it later became the home of James Everard Arundell, who married John Wyndham’s daughter Ann in 1752.
The contents of the house tell the story of Sir Edward Heath’s life. There is a Steinway Grand piano in the Drawing Room, on top of which are a number of photographs of world leaders and royalty. One of these photographs is of Georges Pompidou, who in 1971 was the President of France with whom Edward Heath negotiated the United Kingdom’s entry into the EEC (European Economic Community). While Britain’s entry into the EEC – which is today known as the European Union or EU – would have been one of the most controversial policy decisions of Edward Heath’s premiership during the early 1970s, he should also be remembered for the many other achievements in his life.
Edward Heath was born at Broadstairs in Kent in 1916 and later attended Chatham House grammar school in Ramsgate. One of the paintings in Sir Edward’s art collection is called ‘Broadstairs’ by Sir Robert Ponsonby-Staples, where three women and a man are depicted looking out to sea towards the distant coastline of Belgium and France. Perhaps he bought this painting because it reminded him of his childhood. The painting poses a question about its former owner: did Edward Heath develop a fascination for Continental Europe as a boy because of his awareness of its closeness to his childhood home? In another painting in Sir Edward’s collection called ‘Girl on a Jetty’ by Antoin Plee, a young woman is depicted standing on a jetty looking out to sea through a pair of binoculars. In this painting the white cliffs of a distant coastline are visible on the horizon. Perhaps the girl in this painting is in France looking across the Channel towards England?
During the 1930s Edward Heath attended Balliol College, Oxford as an organ scholar. While a student at Oxford, he recognised the danger Adolf Hitler and the Nazis posed to Europe. Although Heath was a Conservative, in October 1938 he campaigned on behalf of the anti-Munich candidate, A. D. Lindsay, Master of Balliol College, against the official Conservative candidate Quintin Hogg in the Oxford by-election. Hogg supported Neville Chamberlain and the Munich Agreement. The Munich Agreement was seen as a capitulation to Nazi aggression, so Heath campaigned under the slogan: “A vote for Hogg is a vote for Hitler”. (See Chronicle of the 20th Century, 1989, p.502)
During the Second World War he served as an army officer in the Royal Artillery. After the war he became a civil servant with the Ministry of Aviation, before resigning from his job in order to stand as a candidate for Parliament for Bexley. He served Bexley as an MP from 1950 until his retirement in 2001. By the time of his retirement from the House of Commons, his constituency of Bexley had become Old Bexley and Sidcup.
During the 1950s and 1960s Heath served the governments of Sir Winston Churchill, Sir Anthony Eden, Harold Macmillan, and Sir Alec Douglas-Home. His political career advanced from serving the government as a Parliamentary whip through to becoming the Minister of Labour in 1959. A Cabinet reshuffle by Harold Macmillan in July 1960 gave Heath the job of Lord Privy Seal, which meant he was the spokesman in the House of Commons for the Foreign Secretary Sir Alec Douglas-Home. Foreign policy under the Macmillan government was marked by the process of decolonisation, the Cold War, and an attempt to join the EEC.
In 1961 Macmillan gave Heath the task of negotiating Britain’s entry into the EEC, but by January 1963 President Charles de Gaulle of France had blocked Britain’s entry to that organisation. Throughout the negotiations of bringing Britain into Europe, Heath had the almost impossible task of trying to reconcile the trading interests of the Commonwealth with those of the European Common Market. However, Heath recognised that Britain was no longer an imperial power, and the country would have to forge new relationships with Europe as well as former colonies. He saw that the new supranational organisation of the EEC would create common economic interests in Europe, which would make war less likely between the member states of the EEC. It would not be until Heath became Prime Minister in 1970 that he would once again get the opportunity to negotiate Britain’s entry into the EEC; this time he would be more successful.
The signing of Britain’s entry into the EEC by Edward Heath in January 1972 could be viewed by history as the high point of his premiership. Britain’s entry into the EEC is something that went well for a government which had to adapt itself to changing events and a number of setbacks. Before coming to power in 1970, Heath had a fairly right wing free market policy agenda for government. The Tory historian Robert Blake in his book ‘The Conservative Party from Peel to Major’ described Heath’s manifesto as follows: “The main themes were: lower direct taxation; less government interference; reduction in public expenditure; selectivity in the social services and a shift of the burden from the Treasury to the employers; legislation to restrain the power of the unions; and entry into the EEC. Negatively its message was no less important; it said little or nothing about an incomes policy or a national economic plan.”
The first setback to Heath’s government was the death of Iain Macleod just a month after the Conservatives had won the general election. Macleod had just been appointed Chancellor, and was the man who would have put Heath’s economic plans into practice. Another problem for Heath’s government came when Rolls-Royce went bankrupt in February 1971. Ideologically the Tories did not believe in the government saving companies that were in difficulty, but Heath was a pragmatist, recognising that putting the workforce of Rolls-Royce on the dole – along with the loss of skills and talent to the British economy – would be far more expensive for the taxpayer than government intervention. The aero-engine division of the company was therefore nationalised by the Heath government, while the luxury car manufacturer was sold to a private investor.
An energy crisis which began in October 1973 would ultimately bring down Heath’s government. When the OPEC nations cut oil production and raised the price of oil as a result of the Yom Kippur War, petrol prices in the United Kingdom rose while petrol shortages forced the British government to plan for petrol rationing. For the government and the country the problem of the oil crisis was exacerbated by a coal miners’ strike in December 1973, which caused power cuts across the country as most power stations were coal fired in those days. Edward Heath called a general election in February 1974 under the slogan “Who runs Britain?” He probably hoped the electorate would blame the miners for taking advantage of the oil crisis as a bargaining lever to get a better pay deal. However, the Tories did not gain a majority in the election of 28th February 1974. Heath was unable to form a coalition with the Liberals and had to concede defeat to Labour’s Harold Wilson a few days later.
After the election defeat of February 1974 Heath would never serve in government again. Many commentators have written about the antipathy between Heath and Margaret Thatcher, who would succeed him as leader of the Conservative Party in 1975 and as Prime Minister in 1979. When Heath moved to Arundells in Salisbury in 1985, he would have been aware that Thatcher had no intention of offering him a ministerial post in her government. However, in 1992 his lifetime service to the country was recognised when the Queen appointed him a Knight of the Garter and he became Sir Edward Heath.
Sir Edward Heath’s Will states that Arundells should be open to the public, for which purpose the Sir Edward Heath Charitable Foundation was set up. The first charitable object of the Trust as recognised by the Charity Commission is “the preservation and conservation of Arundells and its associated amenities as a building both of special architectural and historical interest being the home of Sir Edward Heath”. The second charitable object of the Trust is “the preservation of the furniture pictures memorabilia and chattels ordinarily kept at Arundells (excepting such items the trustees consider inappropriate) and such of any other furniture pictures memorabilia and chattels as shall form part of the Charitable Trust and which the Trustees consider appropriate to preserve”. The third charitable object specifically mentions that Sir Edward Heath’s papers should be administered, maintained and preserved. Sir Edward would have realised that his papers not only told the story of his own life, but also that they were an important source for future historians of Britain’s post war history. This may have also been the reason why he made the fourth charitable object of the Trust “the advancement of education by the facilitation and access to and the study and appreciation of Arundells and its contents by the general public”.
If Sir Edward had not pursued a political career, he could have easily made his living as a concert pianist or the conductor of an orchestra. Sir Edward’s love of music is remembered in the fifth charitable object of the Trust, which is “the advancement of education of the public in the artistic appreciation of music by the promotion development or improvement whether at Arundells or elsewhere of the knowledge understanding and practice of all forms of music and the performance recording study composition instruction or training in all forms of music”.
Unfortunately since Sir Edward’s death in 2005, not all of the trustees have been committed to upholding Sir Edward’s wishes and the objects of the Sir Edward Heath Charitable Foundation. Some of the trustees did not want Arundells open to the public. The trustees applied to the Charity Commission for a scheme to sell Arundells and the contents of the house. The trustees claimed that the cost of maintaining the house is greater than the income it can generate from paying visitors. However on 26th September 2011 the Charity Commission refused the trustees’ scheme to sell off Arundells and its contents. A report of the Charity Commission’s decision was published on the website of the Friends of Arundells which said: “the reviewer is not satisfied that the trustees have properly identified and explored the range of alternative ways of generating income”.
Arundells as an historic house would not be open to the public today without the tireless dedication of The Friends of Arundells, many of whom volunteer as guides, stewards and gardeners. The Friends of Arundells have also fought to keep Arundells open to the public and the contents of the house intact. They drew up a business plan to show how Arundells and its garden could increase revenue by drawing more paying visitors. According to the plan, revenue could be increased by extending Arundells opening season and extending the opening days each week from four to five, as well as hiring out the grounds to film makers and hosting corporate events.
It is difficult to understand why the trustees are so keen to close Arundells when the main purpose of the Sir Edward Heath Charitable Foundation is to keep the house open for the nation. It appears that some of the trustees would like to have the memory of the former Prime Minister, Sir Edward Heath erased from the public consciousness. The story of Sir Edward Heath’s life is not just an important part of the history of the British Isles, but also belongs to the heritage of Europe and the world. Arundells as a living museum in Salisbury could attract many visitors both from within the UK and overseas, who would want to understand more about the man who brought Britain into Europe.
Details of visiting times to Arundells can be seen at while further details of the Friends of Arundells can be seen at .
This article can also be read at
©Jolyon Gumbrell 2012
|
The Ancestors Of The Burns Paiute Tribe, Surviving The Early Days
The name Wadatika, when translated, means Waada eaters; these were the ancestors of the Burns Paiute Tribe. They lived in central and southern Oregon, and, it has to be said, not under the best of circumstances, as was prevalent at the time for Native Americans.
The Burns Paiute Tribe had a reservation north of Burns in Harney County. This area lies within the Great Basin, an arid expanse of land shared by several states, and to meet their food needs this indigenous people had to be particularly in tune with the seasons if they were to endure the perils of winter.
Different seasons would bring different opportunities for food, as any able farmer of today would attest. The Wadatika followed their seasons religiously. They would gather edible plants from streams, marshes and lakes when and where they had the chance, and they would also fish and hunt.
In the spring they would do their root gathering along with their fishing. Amazingly, not all of this was eaten fresh, even though one might imagine fresh food to be far tastier. These were days of survival, and so their pursuits were carried out with wintertime in mind.
The fish and the roots would be dried, preserved and prepared for storage through the long months ahead. Winter would eventually come, and they would need to have this part of their staple diet taken care of well ahead of time.
During the summer the Wadatika would roam throughout their lands. This would be their time to collect seeds and to take any unsuspecting game, a big prize that could be stretched a long way when it came to food.
Through some months of the year, they would hold community hunting events. Here they would be on the lookout for wild antelope and rabbits. In areas near marshes and lakes, they would hold drives for coots, also known as mud hens. The hunters would drive the birds out from the marshes to dry land, where they would capture their prey.
Such was the way of the world for these Native Americans. All was in a day's work to keep their families and tribe members sustained and only part of a very remarkable history of the Burns Paiute Tribe, from whence they came and how they survived in their very early days. |
How Smart Technology Can’t Put People Out of Work
Image credit: Christopher Michel [CC-BY-2.0], via Wikimedia Commons
Smart technology is the rage these days. You have smartphones, smart cameras, smart watches, smart cars, and smart TVs. There’s even a smart fridge powered by Android, capable of running various apps. Technology has brought a lot of great things to mankind but isn’t it ironic how a lot of people fear it because of the belief that smart machines can take away jobs from humans?
A recent article in the Technology section of The New Zealand Herald presents a dreary outlook for professionals with the advent of smart technology. The article’s title attracts attention as it paints a rather gloomy picture of technological advancement – which is unlikely to be the case. To allay such worries, the following arguments should be helpful.
1. Law of Supply and Demand
This is a basic concept in economics that can be a good way of giving those who predict hopelessness in the job market a comeback. No matter how smart technology gets, the economy will react in incremental terms. There will never be a change so sudden that governments are unable to do something to cushion the impact. Everything will happen with enough “slowness” to enable adaptation.
Here’s how the law of supply and demand comes into play. If companies decide to automate various manufacturing jobs with the help of 3D printing technology, people will obviously lose jobs. Without jobs, populations will lose purchasing power. This consequently leads to lower demand. Lower demand will mean unprofitability for businesses. So why would businesses bother automating if there’s no demand to meet? What do you expect would happen to the economy if every company decides to automate?
As mentioned earlier, the adoption of technological advancements will happen at a tolerable pace. A company may have to slash jobs but those who have lost employment can still find jobs somewhere else. Yes, this is easier said than done but this kind of scenario is way better than the idea that professionals will no longer have work because smart technology is hogging the jobs. It can even be said that the bigger threat is the cheap labor market overseas.
2. Creation of New Jobs or the Shift to New Jobs
The job shift consequence of smart technology and automation can be exemplified by what happened to banks. Before ATMs became commonplace, many expressed hesitations (even rejection from some) because of the impact on the job opportunities in the banking industry. However, it has been proven that ATMs have not taken jobs away from bankers, or the tellers in particular.
What happened was that tellers or bankers in general had been relegated to more complex tasks as the ATMs took over the simpler transactions. Also, interestingly, because of the effectiveness of ATMs, many banks found it cheaper to operate a branch, allowing banks to build branches in more locations to serve more customers. This is certainly not a disadvantageous consequence of adopting smart technology.
Image courtesy of adamr /
3. The Creation of New Opportunities
To illustrate how technology creates new opportunities to make up for the jobs lost, consider the world of smart devices and apps. After automation, electronic devices have become cheaper. This allowed more people to afford smart devices. In turn, the popularity of smart devices, smartphones and tablets in particular, allowed more people to make money by buying and selling these smart electronic devices.
Additionally, the prominence of smart electronic devices paved the way for the popularity of apps, which eventually created a money-making market for app developers. With Internet penetration also increasing, many have found opportunities to work as online telemarketers, social media marketers, or self publishers who make money through online ads. All of these opportunities have been achieved because manufacturing processes were automated so the prices of electronics have gone down.
Imagine a world where smartphones still cost a year’s worth of salaries. Imagine the Internet still being limited to the few elite people who can earn enough to pay for a decent bandwidth. Without smart technology and automation, progress stalls and new opportunities will remain unseen.
Image courtesy of Victor Habbick /
The key here is adaptation. The advancement of machines is inevitable. Machines can very possibly displace people who already hold regular jobs that can be automated. However, it’s farfetched to claim that people will be out of work simply because smart technology has taken over. This kind of perception is shortsighted.
It’s not right to blame technology for unemployment problems. Technology isn’t taking jobs away from humans. It simply enables efficiency, taking humans away from doing tedious and monotonous tasks. Well, this means losing a traditional job but it also means opportunities to do something else more suitable for a thinking brain. |
Monday, July 31, 2017
What Is Heatstroke?
I got heatstroke once when I was hiking in Arizona. As I was coming down the mountain in the final mile, I was overwhelmed with a feeling of nausea and I started dry heaving. At the time I was confused about what was happening, because I thought I had been drinking enough water during the hike, and although I had run out a little while back, I didn't have any sweat on my body, nor did I feel overwhelmingly hot.
But dry, desert conditions can cause sweat to evaporate so quickly, you don't even know you've been sweating. The risk for heat stroke increases with outdoor temperatures above 100 degrees Fahrenheit and with physical activity. Outdoor running is not recommended when it is above 85 degrees. Luckily for me, I wasn't too far from my car when I became sick.
How do you know if you might have heatstroke?
The symptoms include mental status changes such as confusion, agitation, slurred speech, irritability, delirium, seizures and coma. Physical symptoms include nausea and vomiting, flushed skin, rapid breathing, racing heart, and headache.
How do you treat heatstroke?
Rapid cooling techniques include cold water or ice baths, cooling blankets, getting into an air conditioned space, drinking water, getting out of the direct sunlight, and cooling the skin with water and fanning. Severe cases of heatstroke require emergency intervention.
Fortunately, I got to my air conditioned car and found some extra water inside. If I had been hiking even 30 minutes longer, I wonder if I might have collapsed. It was a sobering experience, and one that taught me not to repeat it.
This is what I've done to avoid future heatstroke:
1. Do not do strenuous exercise outdoors when the temperature is >85 degrees.
2. Do not stay outdoors for prolonged periods of time when the temperature is > 100 degrees.
3. Drink lots of water in the heat. Two to four glasses per hour may be necessary.
4. Have a plan to escape the heat if necessary (a body of water, or air conditioned or shady space nearby).
5. Wear sunscreen, wide brimmed hats, and breathable clothing in light layers.
For more information about heatstroke, check out the Mayo Clinic website:
If you have any questions about heatstroke, please log into your account and send us your question. We are here to help. If you'd like to ask Dr. Val for a health tip, email your request to us at:
Dr. Val Jones MD - Health Tip Content Editor
Friday, July 21, 2017
Health Tip: C. Diff Diarrhea And Stool Transplantation?
Although you may shudder at the prospect of stool banks, they may be life-savers for those with chronic or resistant C. Diff infections. Until then, I wash my hands with soap and water and eat yogurt (with live cultures) every day - to keep the C. Diff at bay!
If you have any questions about bacteria, please log into your account and send us your question. We are here to help.
Dr. Val Jones MD - Health Tip Content Editor
Friday, July 14, 2017
Health Tip Reader Question: What is Bursitis?
Did you know that our bodies have mini "air bags," located around our joints (the largest ones are in the shoulder, elbow, and hip) for protection? They are not actually filled with air - they are filled with fluid and are called bursae (plural). But they are designed to keep the joints buffered from impact. Sometimes these sacs become inflamed - usually from trauma or overuse.
When a bursa (singular) becomes inflamed, it's called "bursitis" - and it can look swollen due to expansion with extra fluid. So long as it doesn't also become infected, the treatment is rest and anti-inflammatory medications such as naproxen or ibuprofen. In chronic cases (longstanding bursitis), doctors will sometimes give a stronger anti-inflammatory - such as a steroid injection.
When a joint is infected, it usually becomes very red, warm to the touch, and the pain can be disabling. The infection can get into the blood stream and cause a fever. If you think your joint may be infected, it's important to see a doctor right away.
Otherwise, if your "air bag" is acting up, the best thing to do is rest it and give it time to heal. Bursitis can be confused with arthritis (inflammation of the joint surface itself, not its air bag) but bursitis is more likely to resolve, while large joint arthritis doesn't improve much without intervention (such as surgery) because the cartilage is permanently damaged.
For more information about how to tell the difference between bursitis and arthritis, check out the Mayo Clinic website.
If you have any questions about bursitis, please log into your account and send us your question. We are here to help. If you'd like to ask Dr. Val for a health tip, email your request to us at:
Friday, July 7, 2017
Sunglasses: Fashion or Function?
Sun safety describes a range of behaviors that include wearing wide brimmed hats that cover the face and neck; the correct use of sunscreen of at least sun protection factor (SPF) 15; and limiting sun exposure during the hours of peak sun intensity, 10:00 AM to 4:00 PM. One other important "sun safety" issue is wearing sunglasses that filter out ultraviolet B (UVB) and ultraviolet A (UVA) light. Wearing sunglasses is so important, in fact, that it can be considered the "suntan lotion for the eyes." Let's look at some of the detrimental aspects of sun exposure to the eyes and important features in sunglasses for eye protection.
Dangers of sun exposure to the eyes
Ultraviolet light from the sun has been linked to the formation of cataracts, macular degeneration, skin cancer on the lids and pterygium, an abnormal growth on the eye's surface. Cataracts develop when proteins in the lens of the eye clump together, clouding the vision. Even though cataracts appear to different degrees in most individuals as they age, their development appears to be enhanced by exposure to UVB. Macular degeneration is the leading cause of blindness in the center portion of the visual field in the U.S. There is conflicting evidence as to whether exposure to sunlight contributes to the development of macular degeneration, but at least one study has shown that people who stay outside in the summer sun for more than 5 hours a day in their teens through their 30's are twice as likely to develop macular degeneration later in life. Just as in other areas of the body in which sun exposure increases the risk of skin cancer, melanoma, basal cell and squamous skin cancers can develop on the skin around the eye if not protected. A pterygium is a growth that develops on the white portion of the eye and can extend over the pupil and obscure vision. These appear to develop as a result of prolonged UV exposure also.
Factors that increase your UV exposure, also increasing your likelihood of developing sun-related problems include: spending time on snow, sand or on the water, being outside at higher elevations or closer to the equator, staying outside between the hours of 10 AM and 4 PM and prolonged sun exposure, particularly in the Spring and Summer.
Will sunglasses prevent these problems?
Sunglasses serve two major functions. They decrease the amount of sunlight reaching your eye for comfort and protect your eye and surrounding structures from the devastating damage of ultraviolet light. By wearing sunglasses regularly, you can decrease your risk of sun-related damage significantly.
The following features will help you to select a pair of sunglasses that are both protective and appropriate for your needs.
• Be sure that your sunglasses block 99-100% of UV light. Both plastic and glass lenses can be coated so that they block essentially 100% of UV rays. This information should be available on the label.
• The color or darkness of the lens has nothing to do with its ability to block UV rays, as the UV coating is colorless. Color choice is a personal decision. Green and gray lenses produce minimal color distortion and are probably best for all-round use. Brown offers high contrast and depth perception but does distort color. Vermillion lenses are best for defining water from other objects but distort color badly. Many skiers prefer "blue blocking" sunglasses (typically amber in color), which provide the best contrast in snow and haze.
• Polarized lenses are the best for reducing reflected glare such as sunlight that bounces off of snow or water. Polarization, however, does not have any relation to blocking UV radiation. Wrap around sunglasses will help keep light from shining into your eyes from around the frames. During active sports, they provide some added degree of protection.
• Photochromatic glasses will change from light to dark, depending on the amount of UV radiation that they receive. While most of them offer good UV protection, it can take time for them to "adjust" to different light conditions.
Final considerations
1. Remember that sunglasses are necessary even on cloudy days. Clouds might provide shade, but they are no barrier for UV light.
2. You need sunglasses even if your contact lenses offer UV protection. A high quality lens can only protect the area it covers, and the entire surface of your eye needs protection.
3. Children who cannot tolerate sunglasses should wear a wide-brimmed hat, which will provide some UV protection.
5. With sunglasses, you can have it both ways. An attractive pair of sunglasses can also provide adequate sun protection. Be a "label reader" and make sure that your lenses offer the protective features mentioned above.
If you have any questions about sun safety, please log into your account and send us your question. We are here to help.
Monday, July 3, 2017
Reader Question - What is Anemia?
The term "anemia" is quite broad, and simply means that there is a problem with a person's red blood cells (sometimes there are too few, sometimes they are malformed). Red blood cells are manufactured in the bone marrow and are basically tiny sacs of oxygen-carrying hemoglobin molecules (centered around an iron atom) that circulate through the blood stream and release the oxygen gas wherever needed along the path back to the heart and lungs. Extra red blood cells are stored in the spleen. Red blood cells make up 25% of all cells in the body, and are constantly being replenished.
When there isn't enough oxygen being carried to the tissues, all the typical symptoms of anemia can occur: fatigue, dizziness, rapid heart rate (the heart is trying to compensate for the low number of red blood cells by pumping the ones it has around faster – this can lead to irregular heart rates or arrhythmias), chest pain, shortness of breath, and pale skin.
What causes anemia?
The most common cause of anemia is bleeding. If your red blood cells are leaking out of your arteries or veins, then there are fewer of them being transported inside the blood vessels. Sudden losses of blood (that can occur with gunshot wounds or traumatic accidents) cause anemia, but slow, chronic losses of blood (such as a stomach ulcer, or heavy menstrual periods) can do the same but go undetected. The other causes of anemia are less intuitive, and can be difficult to diagnose. Hematologists are specialists in anemia and blood disorders, and are often consulted in cases where the primary care physician hasn't uncovered the cause of anemia. Blood tests that determine the size and shape of the red blood cells, along with kidney function, liver tests, and iron, folate, and B12 levels can be helpful.
Red blood cell destruction: sometimes mechanical heart valves or devices inserted in the heart to help it pump can "chew up" red blood cells over time, creating lower levels of them. Some genetic diseases cause the bone marrow to produce weak or odd-shaped cells that break open easily and don't last as long as regular red blood cells (which survive about 4 months). Examples include thalassemia and sickle cell anemia.
Red blood cell production suppression: certain medicines (especially chemotherapy and antibiotics), toxins (such as alcohol), and infectious diseases (such as HIV), can cause "aplastic anemia" by suppressing their production in the bone marrow.
Iron deficiency is a common cause of anemia. Just as thyroid glands can't make thyroid hormone without iodine, bone marrows can't make hemoglobin without iron. Iron is found in certain foods (red meat, pork, poultry, seafood, beans and peas, dark green leafy vegetables, dried fruit, such as raisins and apricots) but unlike iodine (which is added to table salt to prevent deficiencies, and table salt is added to virtually everything we eat), iron is only added to a few foods, such as fortified cereal, bread and pasta. So it's easier to become deficient in iron than it is in iodine, at least in the United States.
Kidney disease: the kidneys actually produce a hormone (called erythropoietin) that stimulates red blood cell production in the bone marrow. When kidneys are injured or sick, anemia can occur.
So if you are diagnosed with anemia, it is very important to determine the true underlying cause or causes, so that it/they can be treated effectively. I've seen some of my physician colleagues treat all their anemic patients with iron supplements (since that is probably the second most common cause of anemia) rather than ordering further testing to make sure that this is the right solution. As you can see, anemia can be a sign of anything from a dietary deficiency, to internal bleeding, to medication toxicity, infection, alcoholism, kidney disease, or a genetic condition. If you have been diagnosed with anemia that has not responded to iron supplements (which, by the way, can cause some pretty nasty constipation), you might want to consider further testing or a second opinion from a hematologist.
The good news is that we have excellent treatments for anemia, and most people will feel much better once the right treatment is initiated.
If you have any questions about anemia, please log into your account and send us your question. We are here to help. |
Medicines and Dryness
Certain drugs can contribute to eye and mouth dryness. If you take any of the drugs listed below, ask your doctor whether they could be causing symptoms. However, don't stop taking them without asking your doctor--he or she may already have adjusted the dose to help protect you against drying side effects or chosen a drug that's least likely to cause dryness.
Drugs that can cause dryness include
• Antihistamines
• Decongestants
• Diuretics
• Some antidiarrhea drugs
• Some antipsychotic drugs
• Tranquilizers
• Some blood pressure medicines
• Antidepressants
When there is an inadequate production of saliva, an individual may notice difficulty talking or swallowing dry food. Additionally, the individual may have unusual sensitivity to acidic or spicy foods and notice burning of their mouth. There may be an increased rate of dental decay. The most common cause for a dry mouth is side effects of certain medications, including anti-depressants, anti-motility drugs (used for spastic colon or irritable bladder), and diuretics.
This site is not intended to diagnose, treat, cure, or prevent any disease. You must keep on seeking medical advice from a doctor or specialist to be diagnosed and treated. Please see disclaimer |
On an inclined plane, we lose in terms of distance but gain in
Options :
1. energy
2. work
3. efficiency
4. force
Answer and Explanation :-
Answer: Option 4
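Explanation (a sketch of the standard reasoning, assuming an ideal, frictionless incline): to raise a load of weight mg through a vertical height h by pushing it along a slope of length d, the work done must equal the work needed to lift it straight up:

W = F \cdot d = m g h, \qquad \text{so} \qquad F = \frac{m g h}{d} = m g \sin\theta .

Since d > h, the force F required along the plane is less than the full weight mg: we lose in terms of distance (d > h) but gain in terms of force (F < mg), which is why option 4 is correct.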
|
Accessibility Options
Definition of Accessibility Options in The Network Encyclopedia.
Accessibility Options (in computer networking)
A utility in Control Panel for most versions of Microsoft Windows that allows you to adjust the behavior of the keyboard, mouse, and display to suit the needs of individuals with impaired eyesight, hearing, or motor skills. Accessibility Options are part of Microsoft’s initiative to provide access to computer technology to all individuals, regardless of their physical impairments.
Settings for Accessibility Options include the following:
• StickyKeys:
Makes it possible to use a key combination such as Ctrl+Alt+Delete without pressing more than one key at a time
• FilterKeys:
Makes it more difficult to bounce keys by telling the system to ignore brief or repeated keystrokes
• ToggleKeys:
Generates sounds when certain toggle keys, such as Caps Lock, are turned on
• SoundSentry:
Flashes a specified part of the screen when the system generates sounds
• ShowSounds:
Displays icons or text captions to accompany the sounds that programs generate
• HighContrast:
Makes reading text easier
• MouseKeys:
Controls the pointer using the numeric keypad
• SerialKey:
Enables the use of alternative input devices connected to the computer’s serial port instead of the keyboard and mouse
Graphic Accessibility Options.
Windows 98 and Windows 2000 include an additional wizard called the Accessibility Wizard that allows you to configure accessibility options on your computer. Additional accessibility utilities include Magnifier, Narrator, and On-Screen Keyboard.
Microsoft product documentation and books from Microsoft Press are available in alternative formats from Recording for the Blind and Dyslexic and Microsoft Accessibility and Disabilities Group at the following Web sites. |
Diego Rivera & Frida Kahlo - Mexican Artists
Frida Kahlo was born in 1910, during the Mexican Revolution. Her mother rushed her and her sisters inside the house as gunfire echoed in the streets; revolutionaries would leap over the walls into their backyard and her mother would feed them. After being involved in a bus accident, she was recovering in a body cast. She painted to pass the time; her mother had a special easel made so she could paint in bed. Frida painted many self portraits; when asked why, she answered: "I paint self portraits because I am the person I know best." She married the famous Mexican muralist Diego Rivera. Kahlo's work was not widely recognized until decades after her death. Often she was popularly remembered only as Diego Rivera's wife. An exhibit of her paintings in the summer of 2007 at the Museum of the Fine Arts Palace in Mexico City, honoring the 100th anniversary of Kahlo's birth, broke all attendance records at the museum.
Simple information, biography, outline, photos of artist, gallery of art of Frida Kahlo.
FRIDA Gallery, bookstore,
FRIDA’S restaurant chain
Diego Rivera was a Mexican painter and muralist born in Guanajuato City, Guanajuato. He studied painting in Mexico before going to Europe in 1907.
While in Europe he took up cubism and had exhibitions in Paris and Madrid in 1913; he then had a show in New York City in 1916. In 1921 he returned to Mexico, where he undertook government-sponsored murals that reflected his communist politics in historical contexts.
He married Frida Kahlo in 1929, and their tempestuous marriage became as famous as their art. In the 1930s and '40s Rivera worked in the United States and Mexico, and many of his paintings drew controversy. His 1933 mural for the RCA Building at Rockefeller Center in Manhattan featured a portrait of Communist Party leader Lenin; the resulting uproar led to his dismissal and to the mural's official destruction in 1934.
His personal life was as dramatic as his artwork. In 1929, he married Kahlo who was roughly 20 years younger. The two had a passionate, but stormy relationship, divorcing once in 1939 only to remarry later. She died in 1954. He then married Emma Hurtado, his art dealer. Rivera died of heart failure on November 24, 1957, in Mexico City, Mexico.
PBS American Masters: Diego Rivera
RIVERA, gallery, biography, and commentary |
Testicular cancer
Trauma, infections and radiation do not increase the incidence of testicular tumors
Testicular tumors are extremely chemo sensitive and respond excellently to treatment
Testicles are the male gonads (glands) which are responsible for secretion of the majority of androgenic hormones in men and sperm production.
Testicular cancer accounts for 1% of all cancer diagnoses in men. There are many types of testicular cancer and most develop from germ cells (germ cell tumors) or spermatocytes (testicular tumors). This depends on the cell type from which the tumor develops. |
As artificial intelligence spreads its roots throughout all technology, Bolshoy Bhattacharya, LOC Group’s electrical and control systems specialist, looks at dynamic positioning and asks, ‘What makes the yachting industry different?’
The cons mostly come down to the cost of installing the necessary DP system, cost of maintenance and cost of running engines and thrusters when stationary.
Dynamic Positioning (DP) is a computer-controlled process that automatically maintains a vessel's position and heading by using its own propellers and/or thrusters. A combination of classical and modern control theory, the DP system monitors the vessel and its surroundings using a variety of sensors. These include position-reference systems to pinpoint a vessel's location with a fixed or relative reference, and gyrocompasses as a heading reference. As the concept matured, more environmental sensors, such as motion reference units (MRU) and wind sensors, were added to paint a more accurate picture of the environment.
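As a rough illustration of the classical-control core of such a system, the Python sketch below runs a one-axis proportional-derivative station-keeping loop; the vessel mass, gains, time step and disturbance are invented for the example and have no connection to LOC Group's work or to any real DP vendor's controller.

# Toy one-axis station-keeping loop: a PD controller commands thrust so the
# vessel holds its position setpoint against a steady environmental load.
def simulate(setpoint=0.0, steps=200, dt=0.5):
    mass, drag = 5.0e6, 2.0e5        # kg, N per (m/s) -- illustrative values
    kp, kd = 4.0e5, 3.0e6            # controller gains -- illustrative values
    current_force = 1.5e5            # steady current/wind load in newtons
    pos, vel = 10.0, 0.0             # start 10 m off station

    for _ in range(steps):
        error = setpoint - pos
        thrust = kp * error - kd * vel                  # PD law on position error
        accel = (thrust + current_force - drag * vel) / mass
        vel += accel * dt                               # simple Euler integration
        pos += vel * dt
    return pos

print("position after 100 s: %.2f m" % simulate())
# A small steady offset remains because there is no integral term; real DP
# controllers add one (plus model-based filtering) to remove such bias.

A production DP system layers far more on top of this loop, notably sensor fusion across the position-reference systems and gyrocompasses mentioned above, thrust allocation across multiple thrusters, and redundancy management.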
To read full technical article in ‘The Superyacht Refit Report’, please download here. |
Spasm of accommodation: A spasm of accommodation (also known as an accommodation, or accommodative, spasm) is a condition in which the ciliary muscle of the eye remains in a constant state of contraction. Normal accommodation allows the eye to "accommodate" for near vision.
Presbyopia
Convergence of measures: In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence in measure, consider a sequence of measures μn on a space, sharing a common collection of measurable sets.
List of optometry schools: The following list of optometry schools covers many countries, although the list is not exhaustive. Internationally, optometry as a profession includes different levels of education.
Tadpole pupil: The eye is made up of the sclera, the iris, and the pupil, a black hole located at the center of the eye with the main function of allowing light to pass to the retina. Due to certain muscle spasms in the eye, the pupil can resemble a tadpole, which consists of a circular body, no arms or legs, and a tail.
Autorefractor: An autorefractor or automated refractor is a computer-controlled machine used during an eye examination to provide an objective measurement of a person's refractive error and prescription for glasses or contact lenses. This is achieved by measuring how light is changed as it enters a person's eye.
Lens Controller: A Lens Controller is a device that controls motorized photographic lens functions such as zoom, focus, and iris or aperture. (Kruegle, Herman, 2007.)
Sustainability marketing myopia: Sustainability marketing myopia is a term used in sustainability marketing referring to a distortion stemming from the overlooking of socio-environmental attributes of a sustainable product or service at the expense of customer benefits and values. The idea of sustainability marketing myopia is rooted in conventional marketing myopia theory, as well as green marketing myopia.
Ciliary body: The ciliary body is a part of the eye that includes the ciliary muscle, which controls the shape of the lens, and the ciliary epithelium, which produces the aqueous humor. The ciliary body is part of the uvea, the layer of tissue that delivers oxygen and nutrients to the eye tissues.
Guiding Eyes for the Blind: Yorktown Heights, New York
Levobetaxolol
Emmetropia: Emmetropia (from Greek emmetros, "well-proportioned" or "fitting") describes the state of vision where an object at infinity is in sharp focus with the eye lens in a neutral or relaxed state. This condition of the normal eye is achieved when the refractive power of the cornea and the axial length of the eye balance out, which focuses rays exactly on the retina, resulting in perfect vision.
List of puddings: This list includes both sweet and savoury puddings that conform to one of two definitions:
Monocular estimate method: The monocular estimate method or monocular estimation method is a form of dynamic retinoscopy widely used to objectively measure accommodative response. (Tassinari JT.)
Binocular vision: Binocular vision is vision in which creatures having two eyes use them together. The word binocular comes from two Latin roots, bini for double, and oculus for eye.
Conjugate gaze palsy
Rimless eyeglasses: Rimless eyeglasses are a type of eyeglasses in which the lenses are mounted directly to the bridge and/or temples. The style is divided into two subtypes: three-piece glasses are composed of lenses mounted to a bridge and two separate temple arms, while rimways (also called cortlands) feature a supporting arch that connects the temples to the bridge and provides extra stability for the lenses.
Anterior segment mesenchymal dysgenesis: Anterior segment dysgenesis (ASD) is a failure of the normal development of the tissues of the anterior segment of the eye. It leads to anomalies in the structure of the mature anterior segment, associated with an increased risk of glaucoma and corneal opacity.
Iris dilator muscle
Ocular albinism
Stomach disease
Micromirror device: Micromirror devices are devices based on microscopically small mirrors. The mirrors are microelectromechanical systems (MEMS), which means that their states are controlled by applying a voltage between the two electrodes around the mirror arrays.
LogMAR chart: A LogMAR chart comprises rows of letters and is used by ophthalmologists and vision scientists to estimate visual acuity. This chart was developed at the National Vision Research Institute of Australia in 1976, and is designed to enable a more accurate estimate of acuity as compared to other charts.
Dilated fundus examination: Dilated fundus examination or dilated-pupil fundus examination (DFE) is a diagnostic procedure that employs the use of mydriatic eye drops (such as tropicamide) to dilate or enlarge the pupil in order to obtain a better view of the fundus of the eye. Once the pupil is dilated, examiners use ophthalmoscopy (funduscopy) to view the eye's interior, allowing assessment of the retina, optic nerve head, blood vessels, and other features.
Aniseikonia
Artificial tears
Asthenopia
Anisometropia
Ocular dominance: Ocular dominance, sometimes called eye preference or eyedness, is the tendency to prefer visual input from one eye to the other. It is somewhat analogous to the laterality of right- or left-handedness; however, the side of the dominant eye and the dominant hand do not always match.
Saal Greenstein syndrome: Saal Greenstein syndrome is a very rare autosomal recessive genetic disorder characterized by stunted growth, short limbs, microcephaly, and an anomalous cleavage of the anterior chamber of the eye. The disorder is similar to Robinow syndrome except for anterior chamber anomalies and, in one case, hydrocephalus.
Imbert-Fick law: The Imbert-Fick "law" was invented by Hans Goldmann (1899–1991) to give his newly marketed tonometer (with the help of the Haag-Streit Company) a quasi-scientific basis; it is mentioned in the ophthalmic and optometric literature, but not in any books of physics.
Intraocular lymphoma: Intraocular lymphoma is a rare malignant form of eye cancer. Intraocular lymphoma may affect the eye secondarily from a metastasis from a non-ocular tumor or may arise within the eye primarily (primary intraocular lymphoma, PIOL).
Landolt C
Aberrations of the eye: The eye, like any other optical system, suffers from a number of specific optical aberrations. The optical quality of the eye is limited by optical aberrations, diffraction and scatter.
Strabismus
Atomic force acoustic microscopy: Atomic force acoustic microscopy (AFAM) is a type of scanning probe microscopy (SPM). It is a combination of acoustics and atomic force microscopy.
List of infrared articles: This is a list of infrared topics.
Gastric distension: Gastric distension is bloating of the stomach when air is pumped into it. This may be done when someone is performing cardiopulmonary resuscitation and blowing air into the mouth of someone who is not breathing spontaneously.
Aqueous humour: The aqueous humour is a transparent, gelatinous fluid similar to plasma, but containing low protein concentrations. It is secreted from the ciliary epithelium, a structure supporting the lens.
Vitreous membrane: The vitreous membrane (or hyaloid membrane or vitreous cortex) is a layer of collagen separating the vitreous humour from the rest of the eye. At least two parts have been identified anatomically.
Inferior rectus muscle: The inferior rectus muscle is a muscle in the orbit.
Astigmatism: An optical system with astigmatism is one where rays that propagate in two perpendicular planes have different focus. If an optical system with astigmatism is used to form an image of a cross, the vertical and horizontal lines will be in sharp focus at two different distances.
Eye injury
A-scan ultrasound biometry: A-scan ultrasound biometry, commonly referred to as an A-scan, is a routine type of diagnostic test used in ophthalmology. The A-scan provides data on the length of the eye, which is a major determinant in common sight disorders.
Stereopsis: Stereopsis (from the Greek στερεο-, meaning "solid", and ὄψις (opsis), "appearance, sight") is a term that is most often used to refer to the perception of depth and three-dimensional structure obtained on the basis of visual information deriving from two eyes by individuals with normally developed binocular vision.
Intraocular pressure
Intraocular lens power calculation: The aim of an accurate intraocular lens power calculation is to provide an intraocular lens (IOL) that fits the specific needs and desires of the individual patient. The development of better instrumentation for measuring the eye's axial length (AL) and the use of more precise mathematical formulas to perform the appropriate calculations have significantly improved the accuracy with which the surgeon determines the IOL power.
Alachryma
Cyclopentolate
Refractometry: Refractometry is the method of measuring substances' refractive index (one of their fundamental physical properties) in order to, for example, assess their composition or purity. A refractometer is the instrument used to measure refractive index ("RI").
Oculomotor nucleus: The fibers of the oculomotor nerve arise from a nucleus in the midbrain, which lies in the gray substance of the floor of the cerebral aqueduct and extends in front of the aqueduct for a short distance into the floor of the third ventricle. From this nucleus the fibers pass forward through the tegmentum, the red nucleus, and the medial part of the substantia nigra, forming a series of curves with a lateral convexity, and emerge from the oculomotor sulcus on the medial side of the cerebral peduncle.
Nepean Hospital
List of soft contact lens materials: * Alphafilcon A
Capsulotomy
Operation Eyesight Universal: Operation Eyesight Universal is a Canada-based international development organisation, founded in 1963. It works to prevent avoidable blindness and to cure blindness that is treatable.
Retinal regeneration: Retinal regeneration deals with restoring retinal function to vertebrates so impaired.
Gastroparesis
Discharge coefficient: In a nozzle or other constriction, the discharge coefficient (also known as coefficient of discharge) is the ratio of the actual discharge to the theoretical discharge. (Sam Mannan, Frank P. Lee, Lee's Loss Prevention in the Process Industries: Hazard Identification, Assessment and Control, Volume 1, Elsevier Butterworth Heinemann, 2005.)
Timolol
Diffuse unilateral subacute neuroretinitis: Diffuse unilateral subacute neuroretinitis (DUSN) is a rare condition that occurs in otherwise healthy, often young patients and is due to the presence of a subretinal nematode.
International Deaf Education Association: The International Deaf Education Association (IDEA) is an organization focused on educating the deaf in Bohol, Philippines, initiated by the United States Peace Corps under the leadership of Dennis Drake. The organization is a non-profit establishment that provides education to the impoverished and neglected deaf and blind children in the Philippines.
Diplopia
Conjunctivochalasis
Spalding Method
Ligneous conjunctivitis: Ligneous conjunctivitis is a rare form of chronic conjunctivitis characterized by recurrent, fibrin-rich pseudomembranous lesions of wood-like consistency that develop mainly on the underside of the eyelid (tarsal conjunctiva). It is generally a systemic disease which may involve the periodontal tissue, the upper and lower respiratory tract, kidneys, middle ear, and female genitalia.
Robert Atkyns (topographer): Sir Robert Atkyns (1647–1711) was a topographer, antiquary, and Member of Parliament. He is best known for his county history, the Ancient and Present State of Gloucestershire.
Outline of photography: The following outline is provided as an overview of and topical guide to photography:
Echothiophate |
Fluid and Electrolyte Imbalances (Fundamentals Ch 57) Flashcards Preview
ATI-Review Modules > Fluid and Electrolyte Imbalances (Fundamentals Ch 57) > Flashcards
Flashcards in Fluid and Electrolyte Imbalances (Fundamentals Ch 57) Deck (64):
Electrolytes
-minerals present in all body fluids
-regulate fluid balance and hormone production
-strengthen skeletal structures
-act as catalysts in nerve response, muscle contraction, and metabolism of nutrients
Fluid volume deficits (FVDs)
Isotonic FVD (hypovolemia)
Isotonic FVD
-loss of water and electrolytes from the ECF
-referred to as hypovolemia because intravascular fluid is also lost
Dehydration
-loss of water from body w/o loss of electrolytes
-the resulting hemoconcentration causes increases in Hct, serum electrolytes, and urine specific gravity
Compensatory mechanisms for FVD
-sympathetic nervous system response of:
1) increased thirst
2) antidiuretic hormone (ADH) release
3) aldosterone release
Are older adults at increased risk for dehydration?
-yes, due to decreased total body mass
Causes of isotonic FVD (hypovolemia)
1) abnormal GI losses--vomiting, ng suctioning, diarrhea
2) abnormal skin losses--diaphoresis
3) abnormal renal losses--diuretic therapy, diabetes insipidus, kidney disease, adrenal insufficiency
4) third spacing--peritonitis, intestinal obstruction, ascites, burns
5) hemorrhage
6) altered intake--impaired swallowing, confusion, NPO
Causes of dehydration
1) hyperventilation
2) prolonged fever
3) diabetic ketoacidosis
4) enteral feeding w/o sufficient water intake
Subjective and objective data of FVD
1) vital signs--hypothermia, tachycardia, thready pulse, hypotension, orthostatic hypotension, decreased central venous pressure, tachypnea, hypoxia
2) neuromusculoskeletal--dizziness, syncope, confusion, weakness, fatigue
3) GI--thirst, dry mucous membranes, dry furrowed tongue, N/V, anorexia, acute weight loss
4) renal--oliguria (decreased production of urine)
5) other clinical findings--diminished cap. refill, cool clammy skin, diaphoresis, sunken eyeballs, flattened neck veins, absence of tears, decreased skin turgor
Lab findings associated with FVD
1) Hct--increased in both hypovolemia and dehydration (unless FVD is due to hemorrhage)
2) serum osmolarity--dehydration: increased due to hemoconcentration (greater than 300 mOsm/kg)--increased protein, BUN, electrolytes, glucose
3) urine sp. gravity and osmolarity--increased concentration (urine sp. gravity > 1.030)
4) serum sodium--dehydration--increased hemoconcentration
Nursing care for FVD
1) assess respiratory rate, symmetry, effort
2) monitor SOB and dyspnea
3) check urinalysis, SaO2, CBC, electrolytes
4) administer supplemental O2 as prescribed
5) measure client's weight daily at same time of day using same scale
6) observe for N&V
7) assess & monitor VS (check for hypotension & orthostatic hypotension)
8) check neurological status to determine LOC
9) assess heart rhythm (may be irregular, tachycardic)
10) initiate & maintain IV access
11) place client in shock position ( on back with legs elevated)
12) fluid replacement: administer IV fluids as prescribed (isotonic solutions--Lactated Ringer's or 0.9% NaCl; blood transfusion)
13) monitor I&O--encourage oral fluids as tolerated; notify provider of urine output less than 30 mL/hr
14) monitor LOC and ensure client safety
15) assess level of gait stability
16) encourage client to use call light for assistance
17) encourage client to change positions slowly
18) check cap refill
19) provide frequent oral care
20) prevent skin breakdown
Fluid volume excess (FVE)
-isotonic retention of water and sodium in abnormally high proportions
Overhydration (hypoosmolar fluid imbalance)
-gain of more water than electrolytes
-hemodilution results in decreases in Hct, serum electrolytes, and protein
Compensatory mechanisms for FVE
1) increased release of natriuretic peptides--result in increased excretion of sodium and water by kidneys
2) decreased release of aldosterone
Causes of hypervolemia
1) chronic stimulus to kidneys to conserve Na and water (heart failure, cirrhosis, increased glucocorticosteroids)
2) abnormal kidney function w/ reduced excretion of Na and water (kidney failure)
3) interstitial to plasma fluid shifts (hypertonic fluids, burns)
4) age-related changes in CV and kidney function
5) excessive Na intake from IV fluids, diet, medications (Na bicarbonate antacids, hypertonic enema solutions)
Causes of overhydration
1) water replacement w/o electrolyte replacement (strenuous exercises w/ diaphoresis)
2) SIADH--excess secretion of ADH
3) head injuries
4) barbiturates
5) anesthetics
Subjective and objective data of FVE
1) VS--tachycardia, bounding pulse, hypertension, tachypnea, increased central venous pressure
2) neuromusculoskeletal--confusion, muscle weakness
3) GI--weight gain, ascites
4) respiratory--dyspnea, orthopnea, crackles
5) other clinical findings--edema, distended neck veins
Lab findings associated w/ FVE
1) Hct--hypervolemia: decreased Hct; overhydration: decreased Hct = hemodilution
2) serum osmolarity--overhydration: < 280 mOsm/kg
3) serum Na--hypervolemia: Na within expected range (136-145 mEq/L)
4) electrolytes, BUN, creatinine--hypervolemia & overhydration: decreased
5) ABGs--respiratory alkalosis (decreased PaCO2, increased pH)
Nursing care for FVE
1) assess respiratory rate, symmetry, effort
2) assess breath sounds in all lung fields (may be diminished w/ crackles)
3) monitor for SOB and dyspnea
4) check ABGs, SaO2, CBC, chest X-ray results (may indicate pulmonary congestion)
5) position client in semi-Fowler's
6) measure client's weight daily
7) monitor and document edema (pretibial, sacral, periorbital)
8) monitor I&O
9) implement prescribed fluid and Na intake restrictions
10) administer supplemental O2 as needed
11) reduce IV flow rates
12) administer diuretics as prescribed (osmotic, loop)
13) monitor and document circulation to extremities
14) reposition client at least q 2 hr
15) support arms and legs to decrease dependent edema as appropriate
Clients at greatest risk for electrolyte imbalances
1) infants and children
2) older adults
3) clients who have cognitive disorders
4) chronically ill clients
Sodium (Na+)
-major electrolyte found in ECF
-present in most body fluids or secretions
-essential for maintenance of acid-base and fluid balance, active and passive transport mechanisms, and irritability and conduction of nerve and muscle tissue
-136-145 mEq/L
Hyponatremia
-serum Na+ level < 136 mEq/L
-a net gain of water or loss of sodium-rich fluids
-delays and slows depolarization of membranes
-water moves from ECF to ICF, causing cells to swell (cerebral edema)
Causes of hyponatremia
1) deficient ECF volume
2) abnormal GI losses--vomiting NG suctioning, diarrhea, tap water enemas
3) renal losses--diuretics, kidney disease, adrenal insufficiency, excessive sweating
4) skin losses--burns, wound drainage, GI obstruction, peripheral edema, ascites
5) increased or normal ECF volume--excessive oral water intake, SIADH
6) edematous states--heart failure, cirrhosis, nephrotic syndrome
7) excessive hypotonic IV fluids
8) inadequate Na+ intake (NPO status)
9) age-related risk factors--older adults greater risk due to incidences of chronic illnesses, use of diuretic medications, and risk for insufficient Na+ intake
Subjective and objective data of hyponatremia
1) physical assessment findings--vary with a normal, decreased, or increased ECF volume
2) VS--hypothermia, tachycardia, rapid thready pulse, hypotension, orthostatic hypotension
3) neuromusculoskeletal--headache, confusion, lethargy, muscle weakness w/ possible respiratory compromise, fatigue, decreased deep tendon reflexes (DTRs), seizures, coma
4) GI--increased motility, hyperactive bowel sounds, abdominal cramping, anorexia, nausea, vomiting
Lab findings associated with hyponatremia
1) serum Na+ -- decreased < 136 mEq/L
2) serum osmolality--decreased < 280 mOsm/kg
Hypernatremia
-serum sodium > 145 mEq/L
-serious electrolyte imbalance--can cause significant neurological, endocrine, and cardiac disturbances
-causes hypertonicity of the serum, which shifts water out of cells, making them dehydrated
Risk factors for hypernatremia
1) water deprivation (NPO)
2) heat stroke
3) excessive Na+ intake--dietary, hypertonic IV fluids, hypertonic tube feedings, bicarbonate intake
4) excessive Na+ retention--kidney failure, Cushing's syndrome, aldosteronism, some meds (glucocorticosteroids)
5) fluid losses--fever, diaphoresis, burns, respiratory infection, diabetes insipidus, hyperglycemia, watery diarrhea
6) age-related changes--decreased total body water content and inadequate fluid intake related to altered thirst mechanism
7) compensatory mechanisms--increased thirst and increased production of ADH
Subjective and objective data of hypernatremia
1) VS--hyperthermia, tachycardia, orthostatic hypotension
2) neuromusculoskeletal--restlessness, disorientation, irritability, muscle twitching, muscle weakness, seizures, decreased LOC, reduced to absent DTRs
3) GI--thirst, dry mucous membranes, dry and swollen red tongue, increased motility, hyperactive bowel sounds, abdominal cramping, nausea
4) other clinical findings--edema, warm flushed skin, oliguria
Lab findings associated with hypernatremia
1) serum Na+-- increased, > 145 mEq/L
2) serum osmolarity--increased, > 300 mOsm/kg
Nursing care for hypernatremia
1) report abnormal lab findings to provider
2) fluid loss--based on serum osmolarity (administer hypotonic IV fluids-0.225% NaCl)
3) Excess sodium--encourage water intake, discourage Na+ intake and administer diuretics (loop diuretics)
4) monitor LOC and ensure safety
5) provide oral hygiene and other comfort measures to decrease thirst
6) monitor I&O, alert provider if urinary output inadequate
Potassium (K+)
-major cation in ICF
-plays vital role in cell metabolism; transmission of nerve impulses; functioning of cardiac, lung, and muscle tissues; and acid-base balance
-reciprocal action w/ Na+
-Expected levels 3.5-5 mEq/L
Hypokalemia
-serum K+ < 3.5 mEq/L
-result of increased loss of K+ from the body or movement of K+ into the cells
Risk factors for hypokalemia
-abnormal GI losses--vomiting, NG suctioning, diarrhea, inappropriate laxative use
-renal losses--excessive use of K+-excreting diuretics (furosemide (Lasix)), corticosteroids
-skin losses--diaphoresis, wound losses
-insufficient K+
-inadequate dietary intake (rare)
-prolonged administration of non-electrolyte-containing IV solutions (5% dextrose in water)
-shift of K+ from ECF into ICF--metabolic alkalosis, after correction of acidosis (treatment of DKA), during periods of tissue repair (burns, trauma, starvation), total parenteral nutrition (TPN)
Subjective and objective data of hypokalemia
1) VS-hyperthermia, weak irregular pulse, hypotension, respiratory distress
2) neuromusculoskeletal-ascending bilateral muscle weakness w/ respiratory collapse and paralysis, muscle cramping, decreased muscle tone and hypoactive reflexes, paresthesias, mental confusion
3) ECG-PVCs, bradycardia, blocks, v-tach, flattening T waves, and ST depression
4) GI-decreased motility, hypoactive bowel sounds, abdominal distention, constipation, ileus, nausea, vomiting, anorexia
5) other clinical findings-polyuria (excretion of dilute urine)
Lab findings associated with hypokalemia
1) serum potassium < 3.5 mEq/L
2) ABGs--metabolic alkalosis, pH > 7.45
Diagnostic procedures associated with hypokalemia
ECG shows findings of dysrhythmias (PVCs, ventricular tachycardia, flattening T waves, ST depression)
Nursing care for hypokalemia
1) report abnormal findings to provider
2) treat underlying cause
3) replace potassium: provide dietary education and encourage foods high in K+ (avocados, dried fruit, cantaloupe, bananas, potatoes, spinach); provide oral K+ supplementation
4) IV K+ supplementation: mixed by pharmacist and checked by 2 nurses; max. rec. rate is 10-20 mEq/hr; NEVER IV bolus (high risk of cardiac arrest)
5) monitor for phlebitis (tissue irritant)
6) monitor for and maintain adequate urine output
7) monitor for shallow, ineffective respirations and diminished breath sounds
8) monitor cardiac rhythm and intervene promptly as needed
9) monitor clients receiving digoxin (hypokalemia increases risk of digoxin toxicity)
10) monitor LOC and ensure safety
11) monitor bowel sounds and abdominal distention and intervene as needed
Hyperkalemia
-serum K+ > 5 mEq/L
-results from increased intake of K+, movement of K+ out of the cells, or inadequate renal excretion
-uncommon in clients w/ adequate renal function
-potentially life-threatening due to risk of cardiac arrhythmias and cardiac arrest
Risk factors for hyperkalemia
1) increased total body K+ -- IV K+ admin., salt substitutes, blood transfusion
2) ECF shift--decreased insulin, acidosis (DKA), tissue catabolism (sepsis, trauma, surgery, fever, MI)
3) hypertonic states--uncontrolled diabetes mellitus
4) decreased excretion of K+ -- kidney failure, severe dehydration, K+ sparing diuretics, ACE inhibitors, adrenal insufficiency
5) older adult clients -- greater risk due to decreased kidney function and medical conditions resulting in use of salt substitutes, ACE inhibitors, and K+ sparing diuretics
Subjective and objective data of hyperkalemia
1) VS-slow, irregular pulse; hypotension
2) neuromusculoskeletal-irritability, confusion, weakness with ascending flaccid paralysis, paresthesias, lack of reflexes
3) ECG-ventricular fibrillation, peaked T waves, widened QRS, cardiac arrest
4) GI-increased motility, diarrhea, abdominal cramps, hyperactive bowel sounds
5) other clinical findings-oliguria
Lab findings associated with hyperkalemia
1) serum potassium -- increased > 5 mEq/L
2) ABGs -- metabolic acidosis pH < 7.35
Diagnostic procedures associated with hyperkalemia
1) ECG will show dysrhythmias (ventricular fibrillation, peaked T waves, widened QRS)
Nursing care for hyperkalemia
1) report abnormal findings to provider
2) decrease K+ intake--stop infusion of IV potassium, withhold oral K+, provide K+ restricted diet, dialysis may be required for extremely high levels
3) promote movement of K+ from ECF to ICF--administer IV fluids w/ dextrose and regular insulin
4) monitor cardiac rhythm and intervene promptly as needed
5) medications to increase K+ excretion--administer loop diuretics (furosemide-Lasix) if kidney function adequate; administer sodium polystyrene sulfonate (kayexalate) orally or as enema (increase excretion from GI tract)
6) maintain IV access
7) prepare client for dialysis if prescribed
Calcium (Ca2+)
-found in body's cells, bones, and teeth
-expected range 9-10.5 mg/dL
-balance essential for proper functioning of CV, neuromuscular, and endocrine systems, blood clotting and bone and teeth formation
Hypocalcemia
-serum Ca+ < 9 mg/dL
Risk factors for hypocalcemia
1) increased Ca+ output--chronic diarrhea, steatorrhea (as w/ pancreatitis--binding of Ca+ to undigested fat)
2) inadequate Ca+ intake or absorption--malabsorption syndrome (Crohn's disease); vitamin D deficiency (alcohol use disorder, kidney failure)
3) Ca+ shift from ECF into bone or to an inactive form--repeated blood transfusion; post-thyroidectomy; hypoparathyroidism
Subjective and objective data for hypocalcemia
1) muscle twitches/tetany--numbness and tingling (extremities, circumoral); frequent, painful muscle spasms at rest that can progress to tetany; hyperactive DTRs; + Chvostek's sign (tapping facial nerve triggering facial twitching); + Trousseau's sign (hand/finger spasms w/ sustained bp cuff inflation)
2) cardiovascular--decreased myocardial contractility (decreased HR and hypotension)
3) GI--hyperactive bowel sounds, diarrhea, abdominal cramping
4) CNS--seizures due to overstimulation of CNS
Lab findings associated with hypocalcemia
-Ca+ level < 9 mg/dL
Diagnostic procedures associated with hypocalcemia
-ECG--prolonged QT interval and ST segments
Nursing care for hypocalcemia
1) administer oral or IV Ca+ supplements (carefully monitor respiratory and CV status)
2) initiate seizure precautions
3) keep emergency equipment on standby
4) encourage foods high in Ca+ (dairy products, dark green vegetables)
Hypercalcemia
-serum Ca+ level > 10.5 mg/dL
Risk factors for hypercalcemia
1) decreased Ca+ output--thiazide diuretics
2) increased Ca+ intake and absorption
3) Ca+ shift from bone to ECF--hyperparathyroidism; bone cancer; Paget's disease; chronic immobility
Subjective and objective data for hypercalcemia
1) neuromuscular--decreased reflexes; bone pain; flank pain if renal calculi develop
2) CV--dysrhythmias
3) GI--anorexia, nausea, vomiting, constipation
4) CNS--weakness, lethargy; confusion, decreased LOC
Lab findings associated with hypercalcemia
-Ca+ level > 10.5 mg/dL
Diagnostic procedures associated with hypercalcemia
-ECG--shortened QT interval and ST segment
Nursing care for hypercalcemia
1) increase client activity level
2) limit dietary Ca+
3) encourage fluid to promote urinary excretion
4) encourage fiber to promote bowel elimination
5) implement safety precautions if client is confused
6) monitor for pathologic fractures
7) encourage acid-ash fluids (prune, cranberry juice) to decrease risk for renal Ca+ stone formation
Magnesium (Mg2+)
-most of body's Mg is found in bones
-smaller amounts found within body cells
-very small amount found in ECF
-expected range 1.3-2.1 mEq/L
Hypomagnesemia
-serum Mg level < 1.3 mEq/L
Risk factors for hypomagnesemia
1) increased Mg output--GI losses (diarrhea, NG suction); thiazide or loop diuretics
2) inadequate Mg intake or absorption--malnutrition; alcohol use disorder; laxative use
Subjective/objective data for hypomagnesemia
1) neuromuscular--increased nerve impulse transmission (hyperactive DTRs, paresthesias, muscle tetany); positive Chvostek's and Trousseau's signs
2) GI--hypoactive bowel sounds, constipation, abdominal distention, paralytic ileus
3) CV--dysrhythmias, tachycardia, hypertension
Nursing care for hypomagnesemia
1) discontinue Mg-losing medications
2) administer oral or IV MgSO4 following safety protocols (IV route used because IM can cause pain and tissue damage. Oral Mg can cause diarrhea and increase Mg depletion. Monitor closely)
3) encourage foods high in Mg (whole grains and dark green vegetables)
4) implement seizure precautions
Hypermagnesemia
-serum Mg level > 2.1 mEq/L
Risk factors for hypermagnesemia
1) decreased Mg output--kidney failure; adrenal insufficiency
2) increased Mg intake and absorption--laxatives or antacids containing Mg
Subjective/objective data for hypermagnesemia
1) neuromuscular--diminished DTRs; muscle paralysis; shallow respirations, decreased RR
2) CV--bradycardia, hypotension; dysrhythmias, cardiac arrest
3) CNS--lethargy |
High Heels: A symbol of gender discrimination
by Kristel Liakou
You may already know that what we wear says something about who we are. In capitalist societies, branding is the most important thing when it comes to communication. Apparel is something like a “badge of origin”: it represents who we are. Additionally, high-heeled shoes symbolize specific ideas, and for as long as we can trace them they have signaled discrimination.
Let’s take it from the Baroque period, in the 1600s, when aristocratic men wore heels to stand out from the crowd. From the very beginning, this kind of shoe was a symbol of prejudice. Then aristocratic women followed the fashion of wearing heels, with the distinctive feature of “skinny heels”: the ancestors of today’s “stilettos”.
Shot by Anna Tea | Models: Louis Flothmann, Rebecca Sigfridsson
With hopes of getting rich and becoming aristocrats themselves, lower-class people started to wear high heels too. Fashions typically filter down from the elite, which reminds us a lot of ourselves: we always want to look like a million bucks, even when we have no money to pay our rent. You see? Fashion does move in circles.
As heels became popular with the masses, aristocrats lost their interest in them and stopped wearing them. In the same fashion, everyone else eventually stopped wearing them. After all, it wasn’t a practical trend; who would have thought.
High heels made a comeback in the mid-nineteenth century, largely through pornography. Stiletto heels themselves were born around the time of WWII, when pornographers captured female nudes and photographers shot pin-ups wearing high heels. Many of those photographs accompanied men across battlegrounds in Europe.
In contrast with the past, millennial professional women are turning away from high-heeled shoes, which have long been misconceived as a symbol of female empowerment and sexual confidence. In light of the unmasking of powerful men as sexual harassers, some critics now consider high-heeled shoes a symbol of gender discrimination. As a result, towering stilettos are increasingly being replaced by athletic and comfort shoes.
|
Ofqual's reliability research
Research to investigate the reliability and validity of results in national tests, public examinations and other qualifications in England.
Reliability is a measure of the consistency of the results of qualifications and assessments. A reliable qualification would mean that a student would receive the same result if they took a different version of the exam, took the test on a different day, or if a different examiner had marked the paper.
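As a purely illustrative aside on what "consistency of results" can look like in numbers (this is not how Ofqual's studies are carried out, and the scores below are invented), one simple estimate is the correlation between the scores the same students achieve on two versions of the same assessment:

```python
# Minimal illustration only: a parallel-forms reliability estimate as the correlation
# between the same students' scores on two versions of a test. Scores are made up.
from statistics import correlation  # available in Python 3.10+

version_a = [54, 61, 47, 70, 65, 58, 49, 73]
version_b = [52, 64, 45, 68, 67, 55, 50, 71]

print(f"parallel-forms reliability estimate: {correlation(version_a, version_b):.2f}")
```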
See also Ofqual’s ‘Reliability of Assessment Compendium’, which brings together our research in one list of documents.
1. The internal reliability of some City and Guilds tests
2. Estimation of internal reliability: Pritchard and Hayes
3. Reliability of vocational assessment: level 3 tech qualifications
4. Estimation of inter-rater reliability
5. Marker effects and examination reliability
Published 9 October 2013
Last updated 9 October 2015
1. Added link to the reliability compendium.
2. First published. |
Use of Foreign Vessels to Transport Petroleum from the Virgin Islands to the United States Mainland
Under the Merchant Marine Act of 1920, the President is authorized to extend the coastwise laws of the United States to the Virgin Islands, and thus mandate the use of U.S. vessels for transportation of passengers and merchandise from the Virgin Islands to the U.S. mainland.
There is a strong argument that the President is empowered to make the coastwise laws applicable to the Virgin Islands solely for the carriage of petroleum and petroleum products.
Updated July 9, 2014 |
Social Scientists Find That Introverts See The World More Accurately Than Extroverts
No one can deny that introverts and extroverts are two very different types of people. Extroverts are favored in today’s society. Many people think that extroversion is normal, while introversion is abnormal. Introverts are typically ridiculed and often misunderstood by extroverts.
Introverts make up a third to half of the U.S. population, and personality tests, like Myers-Briggs, have shed light on several personality types, including various forms of introversion.
Susan Cain, TED speaker and author of The New York Times bestseller Quiet, has raised global awareness of the issues faced by introverts and has discussed the reasons why they are often misunderstood. Moreover, a new study has found that introverts have a more accurate perception of the social world than extroverts do. This study builds on the friendship paradox.
What Is The Friendship Paradox?
In 1991, SUNY’s Scott Feld observed a phenomenon that led him to the theory that most people have fewer friends than their friends have on average. The sociologist explained that due to this, it makes sense that people might interpret themselves as being inadequate in some way for seeming to have fewer friends than those around them; however, it is actually the norm for people to have friends who have more friends than them on average.
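The paradox is easy to see in a quick simulation. The sketch below is illustrative only: it builds a synthetic random friendship network (invented size and parameters, not data from Feld's study) and counts how many people have fewer friends than their friends have on average, which typically turns out to be a clear majority.

```python
# Illustrative sketch of the friendship paradox on a synthetic random network.
import random

random.seed(1)
people = range(200)
friends = {p: set() for p in people}
for _ in range(600):                       # add random friendships
    a, b = random.sample(people, 2)
    friends[a].add(b)
    friends[b].add(a)

fewer = sum(
    1
    for p in people
    if friends[p]
    and len(friends[p]) < sum(len(friends[f]) for f in friends[p]) / len(friends[p])
)
print(f"{fewer} of 200 people have fewer friends than their friends do on average")
```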
The Dartmouth Study
Two Dartmouth researchers, Daniel Feiler and Adam Kleinbaum, studied the interaction of two key factors among a group of 284 MBA students: extroversion and homophily. Homophily is the notion that people with similar levels of introversion or extroversion are more likely to be friends with people of the same group.
Their findings were quite predictable. Since extroverted people are likely to connect with other extroverts, their social networks often contain an overwhelming majority of extroverts. The same is true for introverts.
The data also showed that extroverts overestimated how extroverted other people were in general, a trick of perception due to the way that social networks form.
“If you’re more extroverted, you might really have a skewed view of how extroverted other people are in general,” Feiler says. “If you’re very introverted, you might actually have a pretty accurate idea.”
Introverts are likely to have networks that represent a fuller demographic of a society. Introverts utilize their reserved nature to enhance their ability to observe, analyze, and understand society.
Why Introverts’ Social Skills May Benefit Their Relationships, Self-Esteem, And Job Performance
Contrary to popular belief, introverts are not bad communicators. They just prefer to be among a small group of people rather than a large group. They value the quality of relationships over the quantity.
Would it surprise you to know that introverts are actually better managers than extroverts?
It has been scientifically shown that introverts are not just better managers of time, but also can be better managers in their approach to business.
Wharton research professor, Adam Grant, examined the profits of different pizza franchises along with their different management styles. He found that proactive employees performed better under an introverted manager than an extroverted manager. Grant explained this result by noting that: “introverted leaders are more likely to listen carefully to suggestions and support employees’ efforts to be proactive.”
In The End
While many people still consider extroversion to be the norm, and perceive introverts as not fitting in as well socially, these cultural preferences do not necessarily reflect reality or the capabilities that introverts possess. While introverts may be positioned as underdogs in society, as research has demonstrated, they have a lot to contribute to the world around them. In summary, introverts seem to perceive their social world more accurately than extroverts do, as demonstrated by the study by Daniel Feiler and Adam Kleinbaum.
Featured photo credit: Sodanie Chea via
|
Total Hip Replacement
Total hip joint replacement involves surgical removal of the diseased ball and socket and replacing them with a metal ball and stem inserted into the femur bone and an artificial plastic or metal cup socket. The metallic artificial ball and stem are referred to as the “prosthesis.” Upon inserting the prosthesis into the central core of the femur, it is fixed with a bone cement called methylmethacrylate. Alternatively, a “cementless” prosthesis is used, which has microscopic pores that allow bony ingrowth from the normal femur into the prosthesis stem. This “cementless” hip is felt to last longer and is considered especially for younger patients.
Who is a Candidate for Total Hip Replacement?
Total hip replacements are performed most commonly because of progressively severe arthritis in the hip joint. The most common type of arthritis leading to total hip replacement is degenerative arthritis (osteoarthritis) of the hip joint. This type of arthritis is generally seen with aging, congenital abnormality of the hip joint, or prior trauma to the hip joint. Other conditions leading to total hip replacement include bony fractures of the hip joint and death (necrosis) of the hip bone. Hip bone necrosis can be caused by fracture of the hip, drugs (such as alcohol or corticosteroids), diseases (such as systemic lupus erythematosus), and conditions (such as kidney transplantation).
The progressively intense, chronic pain — together with impairment of daily function including walking, climbing stairs and even rising from a sitting position — eventually become reasons to consider a total hip replacement. Because replaced hip joints can fail with time, whether and when to perform total hip replacement are not easy decisions, especially in younger patients. Replacement is generally considered after pain becomes so severe that it impedes normal function despite use of anti-inflammatory medications. A total hip joint replacement is an elective procedure, which means that it is an option selected among other alternatives. It is a decision which is made with an understanding of the potential risks and benefits. A thorough understanding of both the procedure and anticipated outcome is an important part of the decision-making process.
What are the Risks?
The risks of total hip replacement include blood clots in the lower extremities that can travel to the lungs (pulmonary embolism). Severe cases of pulmonary embolism are rare, but can cause respiratory failure and shock. Other problems include difficulty with urination, local skin or joint infection, fracture of the bone during and after surgery, scarring and limitation of motion of the hip, and loosening of the prosthesis which eventually leads to prosthesis failure. Because total hip joint replacement requires anesthesia, the usual risks of anesthesia apply and include heart arrhythmias, liver toxicity, and pneumonia.
|
Symptoms and Treatments for Torn Rotator Cuffs
The shoulder is one of the largest and most complex joints in our body.
The rotator cuff is an important structure within the shoulder, and it is comprised of a collection of muscles and tendons that support the joint and give it a wide range of motion. A torn rotator cuff is a relatively common injury that refers to a tear in one of the muscles or tendons.
What causes rotator cuff tears?
Rotator cuff tears are an injury that can occur for several reasons. It is most commonly associated with overuse and repetitive strain, and people who repeatedly perform overhead motions in their work or in their recreational time are more likely to develop a rotator cuff tear. This could include decorators, construction workers, tennis and baseball players. Over time, the repetitive motion being performed starts to damage the muscles and tendons in the rotator cuff, and eventually a tear can form.
In some cases, a rotator cuff tear may be the result of an accident or trauma, such as a car collision. In these instances, emergency medical care may be necessary.
Symptoms of a rotator cuff tear
Rotator cuff injuries are usually quite painful and can affect your ability to move your shoulder in certain ways. Some of the most common indicators that you have torn your rotator cuff include:
• A dull ache felt deep in the shoulder
• Inability to sleep, particularly if you are lying so that you put pressure on the affected shoulder
• Weakness in the arm
• Difficulty moving the arm in some directions. In particular, trying to comb your hair or reach behind your back.
To accurately diagnose your injury, we will perform a physical examination of your arm and shoulder. X-rays, ultrasound or even MRI scans may also be used to assess the extent of the damage to your rotator cuff.
Treating torn rotator cuffs
There are a range of treatment options for improving the symptoms associated with a torn rotator cuff. Initially, we might recommend that you try non-surgical solutions, such as heat/ice packs, pain relief and anti-inflammatory medications and steroid injections. Physical therapy can also help restore some of the movement in your shoulder.
Depending on the extent of your tear and the impact it has on your life, we may recommend that you consider a surgical repair. In small, minor tears this involves simply smoothing and trimming the damaged tissues. However, if you have a substantial tear in your rotator cuff, it may be necessary to suture the divided parts of the tendon together. This process can be done arthroscopically, a minimally invasive technique that requires a much smaller incision. This gives the patient benefits including:
• Lower risk of complications including post-operative infection
• Less bleeding
• Less post-operative pain
• Less scarring
• Faster recovery time
At New York Spine and Sports Surgery, we will be able to make a recommendation as to which procedure is needed and whether or not you are a good candidate for surgery. Following surgery, a physical therapy plan will support you as you heal and enable you to regain strength, control and range of motion in your arm. |
Sir Joseph Whitworth, English mechanical engineer, c 1860s.
actual image size: 26cm x 32cm
Joseph Whitworth (1803-1887) was one of the most important machine tool-makers of his time. Working at a time when the manufacture of machine tools was for the first time being seen as a distinct profession, Whitworth's success was in part down to a combination of technical expertise and personal ambition. Having worked for Henry Maudslay (1771-1831) and Joseph Clement in London, he set up business independently in Manchester. By the early 1850s he had become a leader in the British machine tool trade, and travelled to the USA to study developments in small arms manufacture. Later, he was largely absorbed in improving small arms and artillery production. Having given a great deal of money for educational foundations, he was created a baronet in 1869.
© Science Museum / Science & Society Picture Library
|
C.M. Corner: Respect – The Two Way Street
There are a million different factors that play into a class’s behavior. One of the biggest is respect. We often take a self-centric view of respect when we are dealing with difficult students and situations: their behavior is disrespectful to me. Respect is defined by dictionary.com as “esteem for or a sense of the worth or excellence of a person” or “to show regard or consideration for.” I like to think of it in terms of value. Because we are the staff member in the room, we expect students to value our authority and experience. Students, however, feel the same need to be valued. To be truly effective in our classrooms, we need to promote an environment of respect in which each person in the class makes an impact and adds to the experience of the day. It needs to be a two-way street. Some things may look different depending upon the position you are in: teaching for a day or two, on a long-term assignment, or working as a paraprofessional. But when our students don’t respect us, we become very aware of it through their behavior. Here are some core things that we can use to promote an environment of respect.
1. Know your relationship to the students and stick to those boundaries. You are the staff member in the classroom and they are the students. Your role is not their best friend, their parent or their counselor. You are there to help them continue their educational experience, which does require you to build relationships with your students. So, yes, get to know something about them and let them know about you, but make sure that the content and questions stay professional. It’s ok to share that you love animals or previously worked as an engineer. It’s also ok to ask them if they are having a good day or not. However, refrain from asking probing questions about their personal life. If they share or volunteer that information, that is the student’s choice, but don’t ask them to share it.
2. Establish a safe environment. In particular with a substitute, students feel a bit more apprehensive because this is a change to their classroom culture and can affect the students to varying degrees. Let your student(s) know the expectations for conduct that day and hold both yourself and the students to them. Whether stated in detail or through a brief statement, also make sure your topics of conversation are appropriate. Topics that make students uncomfortable or frightened, as well as offensive, derogatory comments and negative criticism, make the class unsafe. This will cause your students to misbehave. If those expectations are violated, then respond appropriately with the established consequences.
3. Focus on the positive actions and try to keep them “anonymous”. Praise statements are an effective way to manage various types of classroom misbehavior. It is most effective when it is not attached to a person or their value. Statements like “Johnny is a good boy because he is working quietly on his assignment instead of talking with his neighbors,” attach personal value to the behavior. If you are working quietly on your assignment you are good, but if not you are bad. It also can put the class in contention with each other as Johnny is good and I am not. Instead use “Thank you to those who are working quietly on their assignments.” You are still drawing attention to the positive behavior, but it is no longer drawing attention only to Johnny or assigning value to a person due to their behavior.
If we communicate to our students that we respect and value them, they will respect us more and classroom management will become much easier. And while these are important to maintain in any position, it is ever more important that we implement and continue this practice. With students, staff and our own stress, we need to intentionally build a culture of respect in our classrooms and positions. Do you have any ways that you establish this type of environment? We would love to hear it. Feel free to send along any tips or strategies to training@teachersoncall.com!
"respect". Dictionary.com Unabridged. Random House, Inc. 27 Apr. 2018. <Dictionary.com http://www.dictionary.com/browse/respect>. |
(redirected from macules)
(măk′əl) also mac·ule (măk′yo͞ol′)
A blurred or double impression in printing.
v. mack·led, mack·ling, mack·les also mac·uled or mac·ul·ing or mac·ules
To blur or double (a printed impression).
To become blurred.
[Middle English macule, spot, from Old French, from Latin macula.]
(ˈmækəl) or
(Printing, Lithography & Bookbinding) printing a double or blurred impression caused by shifting paper or type
[C16: via French from Latin macula spot, stain]
vb (tr) , mackles, mackled or mackling
dialect Midland English to mend hurriedly or in a makeshift way
Noun 1. mackle - a printed impression that is blurred or doubled
printing, impression - all the copies of a work printed at one time; "they ran off an initial printing of 2000 copies"
References in periodicals archive:
It consists of pale red macules or slightly raised nonpruritic papules.
The papules and macules on the periphery of the involved skin create a net-like shape.
3) NF1 is characterized by multiple cafe-au-lait macules, axillary freckling, neurofibromas, and Lisch nodules (pigmented iris hamartomas).
Some physical signs were found, including hypopigmented macules on her back, right forearm, and right calf, ungual fibromas on her left hand, and shagreen patch on dorsolumbar area of the back [Figure 1]d.
anomaly loss 1 CAL * Bilateral severe mixed loss 2 * -- Bilateral severe conductive 3 CAL; hyper Bilateral -pigmented moderate macules conductive 4 Hypopigmented -- macules 5 CAL; hypo -- -pigmented macules 6 CAL; hypo-and -- hyperpigmented macules 7 * CAL; hypo -- -pigmented macules 8 CAL; -- hyperpigmented macules FA = Fanconi anaemia; GIA = Gastrointestinal anomaly; M = male; DEB+= diepoxybutane positive (chromosome breakage); del = deletion; --= not observed;--= not observed; CAL = cafe au lait macule; del = deletion; F = female.
3 patients with SJS have epidermal detachment 30% BSA plus widespread purpuric macules or flat atypical targets and TEN without spots, detachment > 30% BSA with loss of large epidermal sheets without purpuric macules or target lesions.
She started noticing the hypopigmented macules after the 6th session; in addition, she noticed an increase in size and number of the lesions over a short period.
Bannayan Riley Ruvalcaba Syndrome (BRRS) is an autosomal dominant disorder characterized by macrocephaly, intestinal hamartomatous polyposis, hyperpigmented macules of the glans penis and developmental delay.
The cutaneous lesions are mainly flat warts, reddish-brown scaly macules resembling tinea versicolor or pityriasis rosea.
Tyring highlighted the Zika rash, which consists of "blanchable macules and papules."
Objective: Pigmented purpuric dermatosis (PPD) is a chronic skin disease characterized by petechial and pigmentary macules.
nodules (lepromas), macules, or plaques, but instead characterised by diffuse and massive infiltration of the skin, known as diffuse LL or Lucio-Latapi leprosy (5) which were found in this patient. |
How Cerebral Palsy Is Diagnosed
Self Checks
Labs and Tests
Clinical History and Physical Examination
An evaluation of a child’s abilities using a detailed neurological exam is between 90 and 98 percent accurate in diagnosing cerebral palsy. A few other methods of testing a child’s abilities include the Prechtl Qualitative Assessment of General Movements and the Hammersmith Infant Neurological Examination, both of which systematically assess and score a child’s physical and cognitive abilities on a scale.
Blood Tests
Blood tests are not expected to show abnormalities in cerebral palsy. Metabolic syndromes that are characterized by symptoms similar to those of cerebral palsy are expected to show blood test abnormalities, which can help in differentiating the conditions. A blood test may also be considered if a child with symptoms of cerebral palsy has symptoms of an illness, organ failure or an infection.
Genetic Tests
Genetic tests may help in identifying genetic abnormalities associated with cerebral palsy. Cerebral palsy is only rarely associated with verifiable genetic defects, and the greater value of genetic testing lies in the diagnosis of other conditions that are clinically similar to cerebral palsy and that have known genetic patterns.
Electroencephalogram (EEG)
Nerve Conduction Studies (NCV) and Electromyography (EMG)
Brain CT
Brain MRI
A brain MRI is a more detailed imaging study of the brain than a CT scan. The presence of some types of malformations, as well as abnormalities suggestive of prior ischemic injuries (lack of blood flow) to the white or gray matter of the brain, may support the diagnosis of cerebral palsy. Evidence of active inflammation may point to other conditions such as cerebral adrenoleukodystrophy.
Differential Diagnosis
Shaken Baby Syndrome
A condition caused by repeated trauma—shaken baby syndrome—can affect young children of all ages, and is more common in older babies than in newborns. Shaken baby syndrome is characterized by skull fractures, hemorrhage (bleeding) in the brain, and often trauma to other areas of the body.
Depending on when the trauma begins, shaken baby syndrome can result in a loss of cognitive skills that have already begun to emerge, while cerebral palsy is characterized by lack of emerging skills.
Rett Syndrome
A rare condition that affects girls, Rett syndrome may cause lack of motor control and cognitive deficits. The biggest differences between the conditions are that children with Rett syndrome develop normally for 6 to 12 months, and then show a decline in function, while children with cerebral palsy do not attain developmental milestones.
Autism Spectrum Disorder
A complicated disorder with symptoms that can manifest as cognitive and behavioral deficits, some children on the autism spectrum can display motor or speech deficits with characteristics that may be mistaken for cerebral palsy or the other way around.
Metabolic Syndromes
Conditions that interfere with metabolism or with protein production such as Tay-Sachs disease, Noonan syndrome, Lesch-Nyhan syndrome, and Niemann-Pick disease can all have features of motor weakness and cognitive deficits that may be mistaken for cerebral palsy, and cerebral palsy can be mistaken for these conditions.
Encephalitis
• Infectious: Infectious encephalitis is characterized by rapid onset, as well as by evidence of infection and inflammation on blood tests, brain CT, brain MRI, or in lumbar fluid.
• Inflammatory: Inflammatory encephalitis may be serious despite the lack of an infectious cause. There may be associated fevers and usually blood tests, brain CT, brain MRI and lumbar fluid show evidence of inflammation.
Spinal Muscular Atrophy
A disorder that can begin during infancy, childhood, or adulthood, the form of spinal muscular atrophy that begins during infancy can be devastating, causing paralysis or near paralysis. The motor weakness of early onset spinal muscular atrophy, also often referred to as SMA type 0, is more debilitating than that of cerebral palsy. The illness is expected to rapidly progress to a fatal outcome, which is not typical with cerebral palsy.
Cerebral Adrenoleukodystrophy
A rare disorder characterized by visual deficits and cognitive decline, cerebral adrenoleukodystrophy predominantly affects boys. The key differences between adrenoleukodystrophy and cerebral palsy are that children with cerebral adrenoleukodystrophy have white matter abnormalities on their brain MRI and the condition causes a decline in cognitive and motor function, not a lack of development of skills as in cerebral palsy.
Muscular Dystrophy
|
Monday, February 16, 2009
Average velocity from High School Physics and Agile Projects
Two Agile teams start working on Application A at the same time, developing exactly the same functionality. Team 1 delivers with a constant velocity…
While searching on the internet about AgileEVM (Agile Earned Value Management) I came across a formula from my high school physics: v=d/t, the average velocity formula. I really enjoyed high school math and physics, and their exams: Two trains leave Station A at the same time traveling in the same direction, Train 1 travels with a constant velocity…
Here is the formula that got me started:
High School Physics Average Velocity
The average velocity v of an object moving through a displacement (d) during a time interval (t) is described by the formula: v=d/t
v = Average Velocity
d = displacement
t = time
Exercise 1:
What is the average velocity for a car with a displacement of 120 km in the interval of 3 hours?
V = d/t, where d is 120 km and t is 3 h
V = 120 km / 3 h = 40 km/h
Average velocity: 40 km/h (the car average velocity is 40 kilometers per hour)
So I decided to compare high school physics average velocity with team average velocity as an Agile development concept.
Typical Agile projects use iterations fixed in time, so instead of t, team average velocity uses the number of iterations as the time variable.
Displacement in Agile is measured by means of story points completed.
In Agile, velocity (team velocity) means the number of story points the team completed in the iteration.
Agile Team Average Velocity
The average velocity v of a team delivering story points (s) during the iterations interval (i) is described by the formula: v=s/i
v = Average Velocity
s = story points completed
i = the number of iterations
Consider sp as the story point unit, it as the iteration unit, and sp/it as the velocity unit (story points per iteration)
Exercise 2:
What is the average velocity of a team with a completion of 120 sp in the interval of 3 it?
V = s/i, where s is 120 sp and i is 3 it
V= 120 sp / 3 it = 40 sp/it
Average velocity: 40 sp/it (the team average velocity is 40 story points per iteration)
Agile projects extensively rely on the concept of average velocity. Average velocity is very useful for agile team planning activities such as iteration and release planning. For example, my team is about to have an IPM (Iteration Planning Meeting) and we have to decide the number of story points to sign up for in the next iteration. The average velocity from the last 3 iterations is a good initial value for the team to use when signing up for work for the coming iteration.
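To make the planning arithmetic concrete, here is a minimal Python sketch of the same calculation; the iteration history is made-up sample data:

def average_velocity(completed_points):
    """Average velocity in story points per iteration (v = s / i)."""
    total_points = sum(completed_points)   # s: story points completed
    iterations = len(completed_points)     # i: number of iterations
    return float(total_points) / iterations

# Sample data: story points completed in the last three iterations.
history = [38, 42, 40]
print "Average velocity: %.1f sp/it" % average_velocity(history)

With the sample history above this prints 40.0 sp/it, matching Exercise 2.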
To be continued…
Shlomo said...
You have to be careful with average velocity. I've been on projects where people panic because the average velocity makes the project look like it is failing. The problem with average velocity is that there are always outliers, usually from the first few iterations, that skew the average. For example if you have the following velocities for the first 7 iterations (0,5,14,23,26,32,23) your average velocity is 17. But if you remove the outliers (0,5,14) your velocity is 26. Using an average without trimming is IMO a bad idea.
Paulo Caroli said...
I agree with you. I typically use the average Velocity as a basis for the future commitments. But as you said, using the average Velocity without trimming is not a good idea.
Jason said...
This is why people use other statistics for this sort of thing.
Some people like sum of squares/euclidian distance (for your example, 20), or median (23).
Another approach is to give more weight to later iterations. All else being equal (all else is never equal), the next iteration should look more like the last several than like the ones before it.
This I think is especially true because people revise their 'unit of work' over time, based on past experience, and, most importantly, expectations. It's very difficult to measure things when people know they're being measured. If you place too much emphasis on the measuring (measuring becomes judging), you get metric dysfunction. I don't think I have to tell you how much worse that situation is than flying blind, but trust me, it is very much worse.
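As a rough sketch of the alternatives discussed above (the velocities are the example figures from the first comment; the linear weighting scheme is an arbitrary choice for illustration):

velocities = [0, 5, 14, 23, 26, 32, 23]

def trimmed_average(values, drop=3):
    """Average after dropping the first few (outlier-prone) iterations."""
    kept = values[drop:]
    return float(sum(kept)) / len(kept)

def median(values):
    """Middle value of the sorted velocities."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return float(ordered[mid])
    return (ordered[mid - 1] + ordered[mid]) / 2.0

def weighted_average(values):
    """Later iterations get linearly more weight (1, 2, 3, ...)."""
    weights = range(1, len(values) + 1)
    return float(sum(w * v for w, v in zip(weights, values))) / sum(weights)

print trimmed_average(velocities)   # 26.0
print median(velocities)            # 23.0
print weighted_average(velocities)  # about 22.4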
Fred Tingey said...
I too have suffered from a variability in Velocity (commonly at the early and late stages of a project) and certainly find averages useful.
It is possible to alter the 'variability' in velocity by altering the size of your stories, the length of your iterations, the amount of time you work and of course your estimating ability. I posted some simple analysis on this and for example you would need to both double your iteration length and halve your story size to reduce variability by 50%.
Paulo Caroli said...
Hey Fred,
Thanks for the comment.
My next blog post (the team acceleration concept) provides a way to measure variability in velocity. I found agile acceleration useful for forecasting (or at least trying to forecast) the velocity of teams in the early stages of a project.
Your entry on "Volatility of Velocity" is very good. It is interesting to see statistics formulas applied to day-to-day Agile concepts such as iteration length and story size.
manoj singh said...
Average velocity and speed are both physics concepts used in day-to-day life, for example while driving.
Elliot Thomson said...
Hey, thank you. Very simply explained; my doubts are cleared.
elliot thomas
|
Diabetes Mellitus Glucose Levels Can Be Controlled With LCHF Diet [Video]
Diabetes Mellitus
Diabetes Mellitus blood glucose (BG) levels are a problem for most type 2 diabetics, but studies suggest that they can be controlled by following a LCHF diet plan. LCHF is an acronym for the Low Carb High Fat way of eating. This dieting plan has undergone a lot of scrutiny in recent years, and rightly so.
Until recently, there were only two accepted methods of weight loss: one by limiting caloric intake, and the other by limiting fat intake. Fat intake has been a cause for concern for doctors and scientists alike because of its presumed effect on serum cholesterol levels. It was thought that a diet high in dietary fat caused high serum cholesterol levels, and with that Coronary Artery Disease (CAD).
One of the many things to come from this debate about dietary fat is exactly what type of fat specifically might be bad for your overall heart health. Surprisingly enough, it has been determined that there are actually good fats that should be included in your daily diet regimen, along with a list of dietary fats to be avoided. Saturated and polyunsaturated fats have been targeted as the big offenders in the past, but that theory is changing.
According to a study published in May 2013 and indexed in PubMed, a service of the NIH, dietary fat has been shown to have a minimal effect on serum cholesterol levels because other contributing factors drive the adverse health effects related to CAD and atherosclerosis. This is good news for diabetes mellitus sufferers. The accepted train of thought was that the mechanism behind dietary fat leading to CAD was this: dietary fats, along with circulating cholesterol in the blood stream, raised a person's total cholesterol levels, along with their LDL cholesterol levels. These increased LDL levels were thought to become oxidized in the blood, attaching themselves to the intima of the artery, creating atherosclerosis. Atherosclerosis leads to CAD, and plaque deposits on the intima (the innermost lining of an artery) create occlusions in the arteries. These blockages restrict blood flow to the heart muscle, which in turn can lead to a Myocardial Infarction (MI), more commonly referred to as a heart attack.
Scientists had determined that a diet high in saturated fat had the effect of raising serum cholesterol levels, while the reverse was true of polyunsaturated fats. These polyunsaturated fats were realized to be the actual cause of fat oxidation in the blood, contrary to well established facts generally accepted for decades. So it would appear that saturated fats circulating in the blood stream do not cause plaque deposits, and polyunsaturated fats are the real culprit. There is evidence that the American Diabetes Association (ADA) has come around to this way of thinking as well.
“Blaming saturated fat for causing atherosclerosis is akin to blaming your barber because you’re going bald,” said Martin Shannon, President and CEO of T2D Coaching, an online type 2 diabetes mellitus help site. “The enemy of any diet and weight loss program is carbohydrates, not fat, and more so for those with type 2 diabetes mellitus. All carbohydrates are turned to glucose in the human body, and they continue to circulate in the T2 diabetics’ blood stream because of insulin resistance and the decreased action of the beta cells in the pancreas. This eventually leads to the constantly circulating glucose being stored as fat.”
Essentially, that is the mechanism by which those with T2 diabetes mellitus can become overweight and obese. “A diet high in carbohydrates only exacerbates the problem,” said Shannon, “For the person with type 2 diabetes mellitus, removing as many carbs as possible from their daily diet is the key to weight loss and overall heart health.”
Controlling your blood glucose is a fact of life for anyone with type 2 diabetes mellitus, and Shannon is a devotee of the LCHF way of eating (WOE). “Restricting carbs is the quickest and most comprehensive method of losing weight and controlling your blood glucose levels for diabetes mellitus sufferers, and the LCHF WOE will help you feel fuller while attempting to lose that excess weight and control your BG levels,” adding, “High protein diets are just as bad as high carb diets because even if you limit your carb intake, your body will turn excess protein into glucose, raising your BG levels. You can’t substitute protein in your diet for the carbs you restrict, it just doesn’t work.”
“You have to understand, the human body is a carbohydrate processing machine. Our bodies were made to run on carbs, and the human body has created methods of staying on track with respect to carb intake.” If a person with diabetes mellitus restricts their carbohydrate intake as suggested, what will the body run on, and what will take its place? “Dietary fat. By restricting your daily carb intake, along with eating a proper level of protein, your body will go into a state of ketosis, and your body then runs on ketones, not glucose. This lack of glucose in your blood stream will lower your BG levels, and keep your body from storing fat as glycogen. Now the body does need some glucose to function properly, usually in the morning when you wake up. This ‘Dawn Phenomenon’ draws all available glucose to your brain when you wake up, and if you don’t have any available, your liver converts glycogen stores (glucose) to handle the problem.” This WOE is sometimes referred to as a ketogenic diet plan.
Controlling your blood glucose levels with an LCHF diet plan would be a good start for diabetes mellitus sufferers to achieve normal BG readings. Individual weight loss goals are within reach if you follow the LCHF percentage of food intake protocol. Your daily food intake should consist of the following: 70 percent of your calories from dietary fat, 25 percent from dietary protein, and just five percent from carbohydrates. This diet regimen will definitely lower your BG levels, and your HbA1c readings as well.
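As an arithmetic illustration of that 70/25/5 split (not dietary advice), the sketch below converts a daily calorie target into grams, using the standard 9 kcal per gram of fat and 4 kcal per gram of protein or carbohydrate; the 2,000 kcal target is just an assumed example figure:

daily_calories = 2000.0   # assumed example target, not a recommendation

fat_cal     = daily_calories * 0.70
protein_cal = daily_calories * 0.25
carb_cal    = daily_calories * 0.05

print "Fat:     %.0f kcal = about %.0f g" % (fat_cal, fat_cal / 9.0)
print "Protein: %.0f kcal = about %.0f g" % (protein_cal, protein_cal / 4.0)
print "Carbs:   %.0f kcal = about %.0f g" % (carb_cal, carb_cal / 4.0)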
By Jim Donahue
NIH – PubMed
LCHF for Beginners
Eat Low Carb High Fat
T2D Coaching |
MIDDLE GROUND - Binomial Formula Explained
I. Brief Summary of A Binomial Distribution
0. Basic Probability and Counting Formulas
Vocabulary, Facts, Count the Ways to Make An Ordered List Or A Group
The average is the sum of the products of each event value and the probability of that event.
II. Binomial Distribution Explained More Slowly
III. Binomial Formula Explained
Combinations Compute The Number of Each Outcome in A Binomial Distribution
What's the Probability of Obtaining Exactly 3 Heads If A Fair Coin Is Tossed 4 Times?
IV. Sum of the Probabilities and the Binomial Mean
The Sum of The Probabilities Is One.
The Expected Value Is The Mean.
The Mean, Expected Value, Is (n)(p).
Why the mean, expected value, is (n)(p)
V. Examples
VI. Use the Normal to Compute the Binomial on a Calculator
Use the Normal to Compute the Binomial.
Using the normal to compute the binomial is an important mathematical idea but is now an old computational technique. For the most part, calculator functions have replaced statistical tables as sources for values of probability distributions.
It is valuable to know that for large values of n, the discrete binomial distribution approaches the continuous normal distribution.
When n(p) >5 and n(q) >5, the normal distribution may be used to approximate the binomial. This technique is useful when:
• binomial tables are not available for the given n,
• the number n is large, or
• computation is required for more than 1 value of x, as in p(x>2), since repeated use of the formula or even the tables might be a pain.
The normal distribution does not match the binomial distribution, but for larger values of n the shapes match more and more closely.
In each distribution, the probability is the area under the curve. For the binomial distribution (which is discrete), each value of x has a thickness on the x axis. For the normal distribution (which is continuous), each value of x has exactly one point on the x-axis where a vertical line crosses the axis.
Since a line does not have an area (the vertical line x=2 to compute p(x=2), for example), one must use an interval (the interval 1.5 < x < 2.5 to compute p(1.5 < x < 2.5), for example) to compute the probability.
Because of the above, to use this technique, one must rewrite the binomial probability expression to match an expression which can be evaluated by a normal probability expression.
This technique requires taking the statement of equality (one which includes =) and rewriting it as a statement of inequality (one which includes >, <, ≥, or ≤), or multiple statements of inequality. The figures below depict this technique.
Compute Using A Calculator
The Distribution menu, [DISTR], lists probability distribution functions. It is found above the Variables menu, [2nd] [VARS].
Function A, listed as [A:binomcdf(], is the cumulative BINOMIAL distribution function. It is similar to the NORMAL cumulative distribution function, [2:normalcdf(]. It adds "from the left."
Each function requires parameters, stated in specific order and manner, for the function to work.
If p is the probability of success and x is the number of successes on n trials, the cumulative binomial distribution is computed using binomcdf(n,p,x) where , is the comma key, [,].
To compute p(x ≤ 1) where n is 4 and p is .4, use binomcdf(4, .4, 1).
This value is .4752 and is shown below on a calculator home screen and in a written-upon binomial distribution table.
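For readers without the calculator to hand, here is a minimal Python sketch that reproduces binomcdf(4, .4, 1) exactly and then applies the normal approximation with the continuity correction described above; the larger case (n = 50, p = 0.4) is an assumed example where np and nq both exceed 5:

from math import factorial, sqrt, erf

def binom_pmf(n, p, x):
    """Exact binomial probability P(X = x)."""
    combos = factorial(n) / (factorial(x) * factorial(n - x))
    return combos * p**x * (1 - p)**(n - x)

def binom_cdf(n, p, x):
    """Exact cumulative probability P(X <= x), like binomcdf(n,p,x)."""
    return sum(binom_pmf(n, p, k) for k in range(x + 1))

def normal_cdf(z):
    """Standard normal cumulative probability."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print binom_cdf(4, 0.4, 1)                # 0.4752, matching the table value

# Normal approximation with continuity correction for P(X <= 15), n=50, p=0.4:
n, p = 50, 0.4
mean, sd = n * p, sqrt(n * p * (1 - p))
print binom_cdf(n, p, 15)                 # exact value
print normal_cdf((15.5 - mean) / sd)      # approximation uses 15.5, not 15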
© 2010, Agnes Azzolino |
What Kind of Bird is that? Snap a Picture and Find Out
May 09, 2014
Digital technology is about to add big data to the bird enthusiast’s traditional tools of binoculars and a field guide.
Peter Belhumeur, a Columbia computer science professor whose app for recognizing leaves was launched in 2011, has now created Birdsnap, an electronic guide for identifying birds. Birdsnap uses the computer technology that can recognize human faces to identify 500 common birds in North America.
“It’s all part of the same thing, using this technology to recognize the things around you,” says Belhumeur. While state-of-the-art facial recognition algorithms identify similarities between parts of the human face (the nose, chin, or eye, for instance), Birdsnap homes in on parts of a bird (the beak, eye, wing, neck, or feet) and finds visual similarities to other birds. “It’s all automatic,” he says.
An expert in facial recognition, Belhumeur built Birdsnap with David Jacobs, a computer science professor at the University of Maryland, and a computer science Ph.D. candidate at Columbia, Thomas Berg.
|
~ Albert Szent-Gyorgyi (Hungarian Biochemist, 1937 Nobel Prize for Medicine)
Why Qualitative Research
Qualitative research methods are valuable for evaluating behavioural characteristics that influence motivation and individual preferences. Acknowledging that perceptions, culture, and social barriers influence a sport can lead to sensitivity in program development, resiliency and agility in accounting for consumer change, and identification of outlying factors beyond the scope of current business focus. These methods help determine causal factors that contribute to better decisions and condense a wide range of inputs into meaningful analysis for decision-making.
• Canadian Curling Association
• OCAD University
Published Research
Meeting a Preferred Future - A Case Study of Tennis Canada
Abstract
Sport represents a unique socio-cultural form operating within an economic system that supports its activities. Social, cultural and economic structures impose tensions to the nature of sport, its commercial activities, and relationship to society. Nevertheless, sport is transformative. It envisions connections, fun, healthy lifestyles, and national character. Tennis Canada’s future vision imagines tennis representative of Canada’s diverse national character, and yet persistent socio-cultural themes may obstruct reaching that goal. Moreover, fulfilment is not assured as barriers restrict change, and unforeseen conditions disrupt system dynamics. The research examines Tennis Canada’s unique character using a business case study to identify business operations. The benefits of strategic foresight for modeling persistent and emerging conditions as determinants for future business viability are also assessed. Curry & Hodgson’s Three Horizons model is used to 1) analyze socio-cultural themes impacting tennis, 2) identify trends and drivers, and 3) model implications using a future facing SWOT analysis. |
UFO And ET Images Found In Remote Cave In India.
A group of anthropologists working with hill tribes in a remote area of India have made a startling discovery: Intricate prehistoric cave paintings depicting aliens and UFO type craft.
The images were found in the Hoshangabad district of the state of Madhya Pradesh only 70 kilometers from the local administrative centre of Raisen. The caves are hidden deep within dense jungle.
A clear image of what might be an alien or ET in a space suit can be seen in one cave painting along with a classical flying saucer shaped UFO that appears to be either beaming something down or beaming something up, in what might be an ancient UFO abduction scenario. A force-field or trail of some sort is seen at the rear of the UFO.
Also visible is another object that might depict a wormhole, explaining how aliens were able to reach Earth. This image may lead UFO enthusiasts to conclude that the images might have been drawn with the involvement of aliens themselves.
Winston Churchill UFO Request Revealed
New documents released by the Government reveal former prime minister Winston Churchill expressed curiosity in “flying saucers” and requested a briefing from his ministers.
The “Churchill Memorandum” sent from the wartime leader on July 28 1952 to Lord Cherwell, secretary of state for air, is one of hundreds of UFO files released by the Ministry of Defence and National Archives.
In the note, Mr Churchill briefly writes: “What does all this stuff about flying saucers amount to? What can it mean? What is the truth? Let me have a report at your convenience.”
The response to his memo, explained that following an intelligence study conducted in 1951 the “flying saucers” could be explained by “one or other” of the following four causes: Known astronomical or meteorological phenomena; mistaken identification of conventional aircraft, balloons, birds etc; optical illusions and psychological delusions; and deliberate hoaxes. |
Cultured Pearl Industry
Arkansas freshwater mussel shell provided the raw material for cultured pearl farming in the latter half of the twentieth and the early twenty-first centuries. Following World War II, cultured pearls were the quintessential statement of elegance, and this drove the demand for Mississippi River Valley freshwater shell.
The 1960–1980s were the heyday for shell harvesting from northeast Arkansas waterways. Most of the shell was shipped to Japan, where Kokichi Mikimoto had perfected a cultured pearl process in the early 1900s. In this process, a bead, or nucleus, was inserted into a marine oyster and the creature layered its natural nacre around the orb, thus creating a pearl. As is the case with human organ transplants, pearl oysters could potentially reject an inserted nucleus, and Mississippi River Valley shell proved to be the least likely to be expelled.
Prior to use in cultured pearl farming, tons of shell had been harvested for mother-of-pearl button blanks that were made in small factories dotting northeast Arkansas riverbanks, especially during the early to mid-twentieth century. By the time that the mollusk was used for the cultured pearl industry, brailing and diving were the primary gathering methods. Brailing boats fished with a fringe of chain to which non-barbed hooks were attached. The apparatus was lowered into the water and trolled across the mussel beds where the animals lay open for feeding. The method relied on the animal’s tendency to clamp onto anything that triggered its natural closing response. Diving was the other popular means of harvest in Arkansas. Divers in the region used underwater breathing apparatuses rather than free diving, which had been practiced for centuries in pearl regions, including the Gulf of Mexico. Primitive dive equipment appeared in the late 1800s, but the safety of the equipment was little better than that associated with free diving. Dive gear steadily improved, and by mid-century, lightweight equipment was available to the public.
In Arkansas, the same ingenuity that kept old cars and tractors running on the farm was utilized to engineer equipment. Old Model-T car engines were turned into compressors. Garden hoses served as air conduits, and dive helmets were designed from things as disparate as old fire extinguishers, hot water tanks, or, in one instance, an old torpedo casing. The glass faceplate was useless in the underwater darkness but offered a degree of illumination once the diver returned to the surface. Museums such as Jacksonport State Park and Randolph County Heritage Museum exhibit old dive gear once used for gathering mussels.
The Arkansas Game and Fish Commission has required a shell-taking license since the early twentieth century, and after the diver had hauled the mussels out of the water, the shells had to be graded. Small or endangered creatures were tossed back in the immediate vicinity whence they had been harvested. Shell processing also included steaming open over a fire, removal of the flesh, drying and sorting by variety, and transporting to a buyer. The meat from the mussel was generally thrown back into the river for fish feed.
While the Japanese cultured pearl industry thrived during the latter half of the twentieth century, Chinese scientists and entrepreneurs were steadfastly moving from gnarly, so-called “rice-krispie” pearls to gorgeous specimens that would eventually upend the pearl industry. In former rice paddies, freshwater mussels that were not finicky and did not require ocean nurseries were now capable of growing pearls. Of greatest implication for Arkansas, they no longer needed a shell bead to serve as the nucleus for the pearl.
By the first decade of the twenty-first century, Chinese pearl farmers could consistently grow perfect pearls after inoculating the host mollusk with a piece of mussel tissue instead of inserting shell-bead nuclei. This meant that the animal laid layers of nacre over the piece of mantle, and the resulting pearl was solid nacre rather than a few layers over a shell bead. As a result, the pearls exhibited an increased luster and luminosity—important factors in the pearl industry. Chinese pearls flooded the market in every size, shape, and color. The enormous quantity of pearls even allowed for pearls themselves to be used as nuclei to grow even larger pearls.
When the cultured pearl process no longer required a shell nucleus, international demand for Arkansas shell dried up, leaving limited uses for the shell in jewelry manufacture—primarily watch faces—or supplying the only domestic freshwater cultured pearl farm in the United States, in Birdsong, Tennessee.
For additional information:
Harris, John L., and Mark E. Gordon. “Arkansas Mussels.” Little Rock: Arkansas Game and Fish Commission, 1990.
Mazurkewich, Karen. “From Rice to Pearly Riches—Chinese Farmers Switch Crops, Transforming Backwater; But Will the Market Last?” Wall Street Journal, Eastern Edition, December 6, 2000, p. 1B.
Lenore Shoults
Pine Bluff, Arkansas
Last Updated 1/24/2012
|
Studies still looking for link between cell phones and brain tumors
But for most people, it’s still not clear if there’s added risk, the authors say. Plus, the devices and the way people use them keeps evolving so that more research is needed going forward, they add.
This isn’t the first study to point to a tumor risk with heavy cell phone use, said Dr. L. Dade Lunsford, a distinguished professor of neurosurgery specializing in brain tumor management at the University of Pittsburgh.
But these kinds of studies rely on people to recall how much they have used cell phones in the past with no indication of their actual use, said Lunsford, who was not involved in the French research.
The new results found no difference between regular cell users and non-users, which suggests that if there is a link, it is only applicable for people who claim to use their cell phone the most, he noted by email.
Cell phones emit radiofrequency electromagnetic fields in the microwave spectrum, which may be cancer causing, although that’s not yet proven, said Dr. Seung-Kwon Myung of South Korea’s National Cancer Center.
Myung led a large analysis of all the previous studies of cell phone use and brain tumors.
In 2011, the World Health Organization’s International Agency for Research on Cancer classified this radiation as possibly carcinogenic, based on existing studies.
The French team, led by Dr. Gaelle Coureau of the Universite Bordeaux Segalen, used a cancer registry to identify adults with meningiomas or gliomas, the two most common types of adult brain tumors.
Brain tumors are generally rare relative to other types of cancer. Less than eight in 100,000 people in the U.S. each year are diagnosed with meningiomas, and 85 percent of those tumors are benign, according to Johns Hopkins Medicine and the Cleveland Clinic.
Malignant brain tumors represent only two percent of all cancers.
The new analysis included 253 cases of glioma and 194 cases of meningioma in four French regions, and twice as many people from the same areas of France who had never had a brain tumor, for comparison.
Researchers interviewed the participants about their past cell phone use, with questions about the model of phone they had used, how long they had used it, the average number and length of calls made and received each month and whether the phones were used for work.
“Regular users,” who had used a mobile phone at least once a week for at least six months at a time, were no more likely to have a tumor than those who had never used a cell phone, according to the results published in Occupational and Environmental Medicine.
People with the longest cumulative duration of calls, or more than 896 hours on the phone, were about twice as likely to have a glioma or meningioma than people who had talked less.
“Case-control studies use questionnaires to ask people to recall how often they used their phones for periods up to 10 years and more,” said Dr. Michael Repacholi, former coordinator of the World Health Organization Electromagnetic Field (EMF) Project. “Most people could not accurately remember how many calls or for how long they used their phone each call 10 years ago, and so they give best estimates.”
Cancer registries have not shown a significant increase in brain cancers since mobile technology was introduced 20 years ago, which is reassuring, Repacholi said.
Mobile phone users shouldn’t worry too much about this problem until larger, better studies have been done, and those will take at least another 10 years, Myung told Reuters Health by email.
The new French study will not affect the world’s conversion to mobile phone use, which has saved more lives across the world than probably any other technology in the last 100 years, Lunsford said.
“The ability to communicate without land line access to report illness, injury, impending weather disasters, to access 911, fire, police has undoubtedly saved more lives than any conceivable risk of the late and as yet unverified risk of exposure to non ionizing radiation from mobile phones,” he said.
To minimize the risk, if there is any, Myung recommends following five safety tips issued by the Environmental Working Group: use a headset or speakerphone when possible, when in use hold the phone away from your body, text instead of calling, don’t store the phone in your pocket or under your pillow and try only to use it when you have a strong signal.
“Fewer signal bars means the phone must try harder to broadcast its signal,” he said. “Research shows that radiation exposure increases dramatically when cell phone signals are weak.”
SOURCE: Occupational and Environmental Medicine, online May 9, 2014
The carcinogenic effect of radiofrequency electromagnetic fields in humans remains controversial. However, it has been suggested that they could be involved in the aetiology of some types of brain tumours.
Objectives The objective was to analyse the association between mobile phone exposure and primary central nervous system tumours (gliomas and meningiomas) in adults.
Methods CERENAT is a multicenter case-control study carried out in four areas in France in 2004–2006. Data about mobile phone use were collected through a detailed questionnaire delivered in a face-to-face manner. Conditional logistic regression for matched sets was used to estimate adjusted ORs and 95% CIs.
Results A total of 253 gliomas, 194 meningiomas and 892 matched controls selected from the local electoral rolls were analysed. No association with brain tumours was observed when comparing regular mobile phone users with non-users (OR=1.24; 95% CI 0.86 to 1.77 for gliomas, OR=0.90; 95% CI 0.61 to 1.34 for meningiomas). However, the positive association was statistically significant in the heaviest users when considering life-long cumulative duration (≥896 h, OR=2.89; 95% CI 1.41 to 5.93 for gliomas; OR=2.57; 95% CI 1.02 to 6.44 for meningiomas) and number of calls for gliomas (≥18 360 calls, OR=2.10, 95% CI 1.03 to 4.31). Risks were higher for gliomas, temporal tumours, occupational and urban mobile phone use.
Conclusions These additional data support previous findings concerning a possible association between heavy mobile phone use and brain tumours.
Gaëlle Coureau,
Ghislaine Bouvier,
Pierre Lebailly,
Pascale Fabbro-Peray,
Anne Gruber,
Karen Leffondre,
Jean-Sebastien Guillamo,
Hugues Loiseau,
Simone Mathoulin-Pélissier,
Roger Salamon,
Isabelle Baldi
Provided by ArmMed Media |
Effects of Sunlight on the Skin | Pros & Cons
What are Effects of Sunlight on the Skin?
Just like in plants, the sun can be beneficial or harmful to human beings. The effects of the sun on the skin are somewhat of a hot topic that continues to draw much research and debate. The sun is a natural source of vitamin D.
The skin plays a crucial role in this. Once the sunlight comes into contact with the skin, it reacts by producing vitamin D.
The skin will also get some benefits as a result of the sun exposure but too much contact can be very harmful.
Benefits of sunlight on the skin:
1. The sunlight kills harmful bacteria
Niels Ryberg Finsen received the 1903 Nobel Prize in Physiology or Medicine for his discovery that sunlight can have some healing properties. Sunlight can kill harmful bacteria, disinfecting and healing wounds.
2. Sunlight helps heal some skin disorders
Skin diseases such as acne, eczema, and fungal infections can be relieved by exposing the skin to sunlight. Sunlight triggers the skin to produce vitamin D, which helps calm inflammation.
3. Improves skin health
Vitamin D, produced by the skin when it comes into contact with sunlight, boosts the immune system. An improved immune system allows your skin to fight skin disorders, toxins, bacteria, and fungi, improving your skin health.
4. Sunlight can protect you from melanoma
The risk of getting melanoma (which is a form of skin cancer) is reduced with sensible exposure to the sun. Thinner melanoma has also been noted in patients with high blood levels of vitamin D as compared to those with low concentrations.
Negative effects of sunlight on the skin:
1. Sunburns
Overexposure to ultraviolet (UV) light, mainly from the sun, causes skin burns. The skin will become red, sore, tender and itchy as a result. Sunburns usually heal after a week or so but having regular sunburns can cause further damage to your skin.
2. Skin damage
Overexposure to the sunlight can also cause skin damage. The effects of skin damage can vary based on your age, skin type, and level of sun exposure. Overexposure can cause wrinkles, discoloration, and changes in skin texture. All of this causes premature aging of the skin. This is more common in women, as they use more products on their skin than men. Female skin discoloration through skin damage is more common than you might think.
3. Skin cancer
The most disturbing effect of sunlight to the skin is skin cancer. Skin cancer has been linked to sunburns that occur in childhood and long-term exposure to the UV rays. UV rays change the structure of skin cells and too much exposure sustained over an extended period permanently damages the skin.
While many people are now fearful of the sun due to its link with skin cancer, it is important to note that only scorching sun is dangerous to your health. These adverse effects can also be prevented by covering up and using sunscreen.
Moderate sunlight has many benefits not just to the skin but also to your bones, brain, blood pressure, and even sleep quality. Ensure you get at least 15-20 minutes of sunlight every day, but remember to protect yourself if you are going to be outdoors for an extended period.
Posted in Skin Health Tagged with: |
Geothermal Services in Mount Juliet, TN and Surrounding Areas
If you are looking for a reliable, energy-efficient way to heat and cool your home or business, consider a geothermal heat pump. Also called ground-source heat pumps, these systems extract heat from the earth during the winter and carry it indoors for heating. During the summer, heat is extracted from the indoors and carried to the outside. The system is up to 75 percent more efficient than a standard electric heating, ventilation and air conditioning system. Bentley’s Air Conditioning continues its 75-year tradition of excellent service in bringing superior technology for heating and cooling needs to our customers in Mount Juliet and surrounding areas in Middle Tennessee.
How Geothermal Heating and Air Conditioning Works
Although the outside air temperature may be freezing, the earth just a few feet below ground surface stays at a temperature between 45 and 75 degrees F throughout the year. Ground-source heat pumps capture this heat from the earth and transfer it into your home. In the winter, this temperature is warmer than the air. In the summer, it is cooler.
In ground-source heat pumps, a liquid circulates through coils of pipes installed in the ground. In the winter, heat is absorbed from the earth and transferred indoors. The system compresses the liquid to create a higher temperature and then circulates this heat through the building.
In the summer, the process is reversed. Heat is extracted from the indoor air and carried outside, where it is absorbed into the earth. Instead of using energy to generate heat, these systems use energy to move heat. Power is used to run the compressor and fans that circulate heat.
geothermal heating and air conditioning
Benefits of Ground-Source Heat Pumps
Ground-source heat pumps are energy-efficient. HVAC units are rated by their coefficient of performance (COP). The COP expresses how much heat energy a system moves compared to how much energy it uses. For geothermal air conditioning and heating units, the COP is between 3 and 5. This means that for each unit of energy used to operate the system, as many as five units of heat are moved.
To get an idea of the energy efficiency of a ground-source heat pump, we can compare its efficiency to a gas or oil furnace. A highly efficient furnace may have an efficiency of 90 percent. In contrast, geothermal heating systems have an efficiency of approximately 400 percent. In cooling mode, they are approximately 43 percent more efficient than a standard air conditioning system.
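As a back-of-the-envelope illustration of those efficiency figures (a sketch with assumed numbers, not figures from Bentley's), the snippet below compares the electricity a COP-4 ground-source heat pump would use to deliver a given amount of heat with the fuel energy a 90-percent-efficient furnace would need for the same heat:

heat_needed_kwh = 1000.0   # assumed seasonal heating demand, in kWh of heat

cop = 4.0                  # ground-source heat pump, roughly "400 percent" efficiency
furnace_efficiency = 0.90  # high-efficiency gas or oil furnace

heat_pump_input = heat_needed_kwh / cop
furnace_input = heat_needed_kwh / furnace_efficiency

print "Heat pump electricity: %.0f kWh" % heat_pump_input   # 250 kWh
print "Furnace fuel energy:   %.0f kWh" % furnace_input     # about 1111 kWh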
Other benefits include better humidity control, quiet operation and fewer repairs. The subsurface ground loop lasts up to 50 years. The inside components, consisting of a compressor, fan, and pump, last an average of 25 years with proper installation and maintenance.
About Bentley’s Air Conditioning
We install, service and repair all types of heat pumps in the greater Wilson County area. We are a Trane Comfort Specialist because we value the reliability and durability of Trane equipment. In addition to Trane products, we also install, repair and maintain all makes and models of HVAC equipment. Our technicians are NATE-certified and undergo regular factory training.
Feel free to call to discuss your heating and cooling needs. We treat each job as our only job. We believe in honesty, transparent pricing and quality craftsmanship.
|
How Baby Bottles can Cause Tooth Decay in Babies
July 21, 2016, Summer Brook Dental
baby teeth
healthy hygiene begins at an early age so make sure your baby's teeth are protected
Every parent wants to give their baby the best start in life. One of the ways in which parents of babies can do this is to ensure they are preventing tooth decay from an early age. Parents need to be properly informed about the link between baby bottles and tooth decay and how to prevent it. This is a bigger problem than most parents understand. It stems from bottles not being properly sanitized before being given to children. They can spread bacteria in the child's mouth, increase the development of sugars and create chewing habits. By taking your time to not only wash, but sanitize your child's bottles and nipples, you will have the greatest likelihood of eliminating tooth decay in your baby.
Spreading bacteria in your child's mouth
The first thing to understand is how the decay happens in the first place. This is a result of bacteria being harbored in the nipple or the bottle themselves. As the child sucks on the bottle, the bacteria are transferred to the mouth of the baby. Because the average child has a bottle for 5-10 minutes, according to Parenting Magazine, you increase the likelihood of bacteria transfer. This is 10 times longer than the exposure an adult has to even the dirtiest utensil. Worse, the nipple is right against the teeth, which makes it impossible for your child not to pick up bacteria from the bottle if it is present.
Develop sugars
Another concern to parents and someone like a dentist in Aurora, Colorado is the fact that having a nipple in your baby's mouth for an extended period of time can develop sugars in the mouth. This is more common in children that use the bottle like a pacifier. They chew at a nipple that had milk in it. Milk has sugars in it that make it sweet and enjoyable to most people. If these sugars stay on the teeth for extended periods of time they weaken the teeth and make it more likely that the child will have some problems with decay over time.
Create chewing habits
Chewing habits all by themselves can create tooth decay. If you allow your child to have extended periods of time with a bottle, you're increasing the likelihood they will develop chewing habits. This is a problem, because your baby will end up putting more things in its mouth than is normal. This includes items that are loaded with bacteria. You will end up having to see the experts at SummerBrook Dental Group more often and still run the risk of permanent tooth decay and loss.
The time which babies use bottles is a brief, but important time in their young lives. This is why it is important to start with good health practices for bottle feeding as soon as possible. Your little one is relying on you to make the best decisions to ensure their health now and in the future.
© 2018 DentaGama All rights reserved |
conservation of mass
Conservation of mass
The notion that mass, or matter, can be neither created nor destroyed. According to conservation of mass, reactions and interactions which change the properties of substances leave unchanged their total mass; for instance, when charcoal burns, the mass of all of the products of combustion, such as ashes, soot, and gases, equals the original mass of charcoal and the oxygen with which it reacted.
The special theory of relativity of Albert Einstein, which has been verified by experiment, has shown, however, that the mass of a body changes as the energy possessed by the body changes. Such changes in mass are too small to be detected except in subatomic phenomena. Furthermore, matter may be created, for instance, by the materialization of a photon (quantum of electromagnetic energy) into an electron-positron pair; or it may be destroyed, by the annihilation of this pair of elementary particles to produce a pair of photons. See Electron-positron pair production, Relativity
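As a rough worked illustration of how small these changes are (the heat of combustion of charcoal is taken here as roughly 30 MJ per kilogram, an assumed round figure), burning one kilogram of charcoal corresponds to a mass decrease of only about 3 × 10^-10 kg:

c = 3.0e8                  # speed of light, in m/s
energy_released = 30.0e6   # assumed: roughly 30 MJ released burning 1 kg of charcoal

mass_equivalent = energy_released / c**2   # delta-m = E / c^2
print "Mass carried away as energy: %.1e kg" % mass_equivalent   # about 3.3e-10 kg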
conservation of mass
[‚kän·sər′vā·shən əv ′mas]
The notion that mass can neither be created nor destroyed; it is violated by many microscopic phenomena.
References in periodicals archive:
The conservation of mass, momentum and energy equations are applied to the two interacting control volumes.
In comparison, COSMO solves the conservation of mass, momentum, and energy equations applied to a large numbers of differential cells in the vertical spaces.
The theoretical density obtained by conservation of mass is too large by a factor of order 200 at redshift 5.
In a similar manner that conservation of mass links areas and velocities, Bernoulli's equation links velocities and static pressures, when applied to a diffuser or a nozzle.
The interface development and material distribution during the cavity filling can be accurately determined by enforcing the extra local mass conservation at each fluid subdomain, apart from the global conservation of mass at the whole cavity domain.
In reality, it takes much more than one activity for young learners to assimilate such understanding of the conservation of mass.
A "toolkit" for modelers is provided, including general mathematical functions, and cornerstone principles such as feedback, conservation of mass, and units checking.
Lavoisier had advanced the law of conservation of mass (see 1789) and before that there had been the law of conservation of momentum (see 1668).
Based on the conservation of mass, the resulting average thickness of the formed part will be the original thickness multiplied by the stretch ratio.
This third edition of this reference emphasizes the fundamental principles of the conservation of mass and energy, and their consequences as they relate to materials and energy.
3", (2007) is incorporated to solve the differential equations governing the conservation of mass, three momentum and energy in the processing of airflow distribution.
Pretests and posttests were used to measure the change in students' understanding of chemical reactions, conservation of mass, and biological growth.
|
Parliament and King – Part One
By 1640, Charles I was finding that his non-parliamentary attempts to raise money were failing to fund his plans, especially his military struggle with Scotland. After twelve years of personal rule, he called a parliament in April 1640. It was not a success. Parliament only wanted to continue where it had left off and talk about their own privileges and the king’s abuses of power. Exasperated, Charles dissolved it after only three weeks, and it became known, appropriately, as the Short Parliament.
Inevitably, he had to call another parliament in November. This one lasted rather longer and became known as the Long Parliament, because technically it sat until it dissolved itself in 1660. Charles still wanted it to vote him money, but it had other priorities, and one of its first acts was to impeach William Laud, the reforming Archbishop of Canterbury and protégé of the king, for high treason. This was really a way of getting at Charles by the mostly Puritan parliament, who disliked Laud’s reforms and his sometimes draconian methods of enforcing them – by having his opponents branded, for example. Given his age (he was sixty-seven), Laud was imprisoned in the Tower of London rather than being put on trial. Parliament next impeached Charles’s close advisor, the Earl of Strafford, alleging that he had attempted to raise an army in Catholic Ireland to subdue England. Charles was obliged to sign his friend’s death warrant and he was dead within six months of the Long Parliament’s first meeting.
When rumours reached Charles that parliament was also planning to impeach his queen, Henrietta Maria, for alleged involvement in Catholic plots, Charles decided to go on the offensive and arrest five of its leaders for treason. It was not a wise move. No king had ever entered the House of Commons, but on Tuesday, 4 January 1642, in gross violation of Parliamentary privilege, the King entered the House with armed men to arrest the Five Members. They had been warned and fled, but Charles had openly shown his contempt for parliament. He left London on 10 January 1642 and set up his court in Oxford, where he began raising an army, having declared that parliament was in rebellion. The Civil War had started.
In 1640, Hythe had elected John Wandesford and Henry Heyman as M.P.s for the Short Parliament. The two men could not have been more different. Wandesford was a Royalist, who later went with the king to Oxford and managed the king’s artillery train there during the Civil War. His attraction for Hythe corporation seems to have been that he tried to get the Crown to take an interest in building a proper harbour for the town. He was as good as his word, and sent papers to the Secretary of State, who undertook to pass them to His Majesty, but by then Charles’s mind was on other things and the project never got any serious attention. Henry Heyman, on the other hand, was the parliamentarian son of Peter Heyman, the town’s former M.P. The Heymans’ family seat was Somerfield at Sellinge, about four miles from Hythe, and they were well-known to the corporation.
In the election for the Long Parliament the same year, Hythe plumped for two parliamentarians. Henry Heyman was chosen again, and wrote frequently to the corporation, his ‘brethren and loving friends’, keeping them up to date with national developments, especially of the Five Members charged with treason. The town ditched Wandesford, who had failed to deliver the promised harbour, and chose instead John Harvey, brother of the physician William Harvey. He also had local connections, having inherited from his father land at Arpinge and Folkestone. He had broken with the family’s Royalist loyalties (his brother was physician to James I), and sided with parliament until his death in 1645.
The choice of two staunch parliamentarians attracted the attention of the Lord Warden of the Cinque Ports, the Duke of Lennox, who was a Stuart cousin of the king. He was incensed by Hythe’s decision and wrote a threatening letter to the corporation demanding to know the name and standing of every man who had voted for Harvey and Heyman. Votes were not then secret and were given verbally. The list had to be provided to Dover Castle. Refusal would be ‘at your peril’. Unintimidated, the corporation referred the letter to their M.P.s, who passed it on to other interested parties. It formed part of the evidence against Lennox when, in 1643, the House of Commons decided that he was ‘one of the malignant Party, and an evil Counsellor to His Majesty’ and that he should be removed from all his offices. He fled into exile before joining the king at Oxford.
The Duke of Lennox, Lord Warden of the Cinque Ports
For the Love of God – Part Two
Puritans were not the only critics of the Church. Charles I believed that far from being attracted to the Puritans’ endless preaching, self-examination and warnings of damnation, people were being alienated from the Church. He and Buckingham, whom he had inherited from his father as royal favourite, supported and promoted the career of William Laud, an Arminian priest. He, like many English clergy, was a follower of Jacobus Arminius, a Dutch theologian, who taught that salvation was not absolutely predestined and that God might be convinced by the penitent works of a sinner to allow them into Heaven. Therefore, the distinctions between the saved and the damned were not so hard and fast after all.
William Laud, Archbishop of Canterbury
Laud and Charles saw the restoration of Catholic spectacle and mystery as a way of bringing people back to a proper engagement with the Church and with God. Laud was appointed Archbishop of Canterbury in 1633 and set about restoring ceremonial and ritual and what he described as ‘the beauty of holiness’ in the Church. Charles thought he was broadening the church. The godly Puritans thought Laud and his clergy were disguised papists whose real aim was to take England back to Rome.
Hythe is in the Diocese of Canterbury, so was fully exposed to Archbishop Laud’s reforming zeal, particularly as the Rector of Saltwood, William Kingsley, was one of Laud’s acolytes. Hythe was not then a separate parish, and the magnificent church of St Leonard’s was designated as a ‘chapel’ under the control of Saltwood. Kingsley was not only Rector of Saltwood, to which he was appointed in 1614; he added Great Chart (1615), Ickham (1617) and the Archdeaconry of Canterbury Cathedral (1619) to his portfolio of posts. Kingsley was as ardent as his master in his desire to reform the Church. As Archdeacon, he attempted to banish Puritan preachers from Canterbury and began to enforce kneeling at communion, a practice which had not been used since Catholic Queen Mary’s time. Communion tables were removed from the naves of churches, where they had been since the Reformation, and were railed about in the chancel, like an altar, which only the minister could approach. Even though Kingsley’s presence in Hythe could only have been intermittent, given his commitments elsewhere, his curate Thomas Kingsmill no doubt followed orders. There was not much the Puritans of Hythe could do about the situation, but there was a noticeable increase in defaults on tithe payments after his appointment. Perhaps that was one way of showing disapproval.
By the early 1640s, Kingsley had been removed from office by Parliament, and Kingsmill was dead, replaced by the radical Puritan Scot, William Wallace. By that time, the world had been turned upside down. |
From SourceWiki
Python for Scientists
With thanks to Simon Metson and Mike Wallace for much of the following material.
Getting Started on BlueCrystal Phase-2
After you have logged in, type the following at the command line:
module add languages/python-
This should start up an interactive python session:
Python 2.7.2 (default, Aug 25 2011, 10:51:03)
[GCC 4.3.3] on linux2
where we can type commands at the >>> prompt.
Python as a Calculator
To get started, let's just try a few commands out. If you type:
>>> print "Hello!"
you'll get:
If you try:
>>> print 5 + 9
you'll get:
So far so simple! Here is a copy of a session containing a few more commands where we've set the values of some variables and also defined and run our own function:
>>> five = 5
>>> neuf = 9
>>> print five + neuf
14
>>> def say_hello():
...     print "Hello, world!"
... # hit return here
>>> say_hello()
Hello, world!
You can exit an interactive session at any time by typing Ctrl-D.
Getting Help
One of the good things about Python is that it has lots of useful online documentation. (There are good books on the language too.) For example, take a look at the official Python documentation online. You can also type help() at the interpreter prompt:
>>> help()
Welcome to Python 2.7! This is the online help utility.
If this is your first time using Python, you should definitely check out
the tutorial on the Internet at
Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules. To quit this help utility and
return to the interpreter, just type "quit".
help> keywords
Here is a list of the Python keywords. Enter any keyword to get more help.
and elif if print
help> if
The ``if`` statement
The ``if`` statement is used for conditional execution:
if_stmt ::= "if" expression ":" suite
( "elif" expression ":" suite )*
["else" ":" suite]
It selects exactly one of the suites by evaluating the expressions one
by one until one is found to be true...
help> quit
You are now leaving help and returning to the Python interpreter.
Making a Script
An interactive session can be fun and useful for trying things out. However--to save our fingers--we will typically want to execute a series of commands as a script, created using your favourite text editor. Here are the contents of an example script:
#!/bin/env python
print "Hello, from a python script!"
Ensure that your script is executable:
chmod u+x
and now you can run it:
[ggdagw@bigblue4 ~]$ ./
Hello, from a python script!
Python and Whitespace
Love it or hate it, Python incorporates whitespace in its syntax. (It's either that or demarcate blocks with some other syntax, such as the braces and semi-colons used in C. Pick your poison.) Spacing is therefore key in creating a valid python script. For example:
message = "happy days!"
if len(message) > 10:
print "longer.."
print "shorter.."
will work, but:
message = "happy days!"
if len(message) > 10:
print "longer.."
print "shorter.."
will not:
File "./", line 7
print "shorter.."
IndentationError: expected an indented block
It is therefore a great advantage, when writing a python script, to use a text editor which has a dedicated python mode--such as emacs--and which will actively help you to keep your spacing correct. An extensive list of editors with Python support is available online.
Some Suggested Exercises
• Calculate the volume of a sphere. You can experiment with the following (where r needs to be set to some value):
• 4/3 * 3.14159265359 * r ** 3
• 4.0/3.0 * 3.14159265359 * pow(r,3)
• float(4)/float(3) * 3.14159265359 * pow(r,3)
• Concatenate two strings
• Write a recursive function to compute fibonacci numbers (Hint: F(n) = F(n-1) + F(n-2), F(0)=0 and F(1)=1). One possible set of solutions is sketched after this list.
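The following is a minimal sketch of possible answers to the exercises above (one of several reasonable ways to write them):

import math

def sphere_volume(r):
    """Volume of a sphere of radius r: (4/3) * pi * r**3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def concatenate(a, b):
    """Concatenate two strings."""
    return a + b

def fib(n):
    """Recursive Fibonacci: F(0)=0, F(1)=1, F(n)=F(n-1)+F(n-2)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print sphere_volume(2.0)           # about 33.51
print concatenate("spam", "eggs")  # spameggs
print fib(10)                      # 55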
Nuts and Bolts
Python has intrinsic types including integers, floats, booleans and complex numbers. It is dynamically typed (meaning that you don't have to have a block of variable declarations at the top of your script), but it is not weakly typed, for example:
>>> my_complex = 2 + 0.5j
>>> my_complex
(2+0.5j)
>>> my_complex.real
2.0
>>> my_complex.imag
0.5
>>> name = 'fred'
>>> lucky = 7
>>> name + lucky
Traceback (most recent call last):
TypeError: cannot concatenate 'str' and 'int' objects
The eagle-eyed will have spotted in a previous example that we could ask for the length of a character string--straight off the bat. No need to write a counting routine ourselves:
message = "happy days!"
print len(message)
We can also take slices of our character string:
print message[:5]
which prints "happy".
Since a string is an object (in the object oriented programming sense of the word, but more of that another time...) we can call a number of methods that operate on a string. A selected sample includes:
s.find(sub) Finds the first occurrence of the given substring
s.islower() Checks whether all characters are lowercase
s.upper() Returns s converted to uppercase
s.strip() Removes leading and trailing whitespace
s.replace(old,new) Replaces substring old with new
s.split([sep]) Splits s uses (optional) sep as a delimiter. Returns a list
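A quick illustrative session using a few of these methods (outputs shown as comments):

s = "  Happy Days!  "
print s.strip()                  # "Happy Days!"
print s.strip().upper()          # "HAPPY DAYS!"
print s.find("Days")             # 8
print s.replace("Days", "Daze")  # "  Happy Daze!  "
print s.split()                  # ['Happy', 'Days!']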
Lists and Tuples
An example of a list is:
shopping = ['bread', 'marmalade', 'milk', 'tea']
and we can inquire about the length of that using the same function as before:
print len(shopping)
We can also take slices of a list, as we did with a string:
print shopping[0:2]
and even reset a portion of the list that way:
shopping[0:2] = ['bagels', 'jam']
Since a list is also an object, we have more handy methods, including:
s.append(x) Appends a new element x to the end of s
s.count(x) Returns the number of occurrences of x in s
s.reverse() Reverses the items of s in place
s.sort([compfunc]) Sorts items of s in place. compfunc is an optional comparison function
Tuples are very similar to lists and support many of the same operations (indexing, slicing, concatenation etc.) but differ in that they are not mutable after creation:
>>> mytuple = ('fred', 'ginger', 7, 2.5)
>>> mylist = ['fred', 'ginger', 7, 2.5]
>>> mylist[2] = 8
>>> print mylist
['fred', 'ginger', 8, 2.5]
>>> print mytuple[2]
7
>>> mytuple[2] = 8
Traceback (most recent call last):
TypeError: 'tuple' object does not support item assignment
Another handy construct is the list comprehension, which builds a new list from an existing sequence in a single expression:
>>> numbers = [12, 3, 90, 40, 52, 11, 10]
>>> small_numbers_doubled = [number * 2 for number in numbers if number < 20]
>>> small_numbers_doubled
[24, 6, 22, 20]
A dictionary is an associative array or hash table, containing key-value pairs:
mydict = {'thomas':'blue', 'james':'red', 'henry':'green'}
>>> print mydict['james']
red
We can write much more user-friendly and intuitive code using dictionaries, rather than arbitrary indexes into a list.
Some example dictionary methods are:
m.keys() Returns a list of the keys in m
m.items() Returns a list of the (key,value) pairs in m
m[k] = x Sets m[k] to x
m.update(b) Adds objects from dictionary b to m
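For example, a small sketch (note that a plain dictionary does not keep its keys in any particular order, so we sort them for display):
>>> mydict = {'thomas':'blue', 'james':'red', 'henry':'green'}
>>> mydict['gordon'] = 'blue'
>>> sorted(mydict.keys())
['gordon', 'henry', 'james', 'thomas']
>>> for name in sorted(mydict.keys()):
...     print name, "is", mydict[name]
...
gordon is blue
henry is green
james is red
thomas is blue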
Control Structures
Of course, we'll need conditionals and loops etc. to go beyond the simplest of scripts. Here is an if-then-else, python style:
if sky == 'blue':
    print "nice day"
elif sky == 'black':
    pass  # do nothing
and a classic for loop:
for ii in range(1,10):
    print ii
We'll also see a while loop shoehorned into the next example.
For our control statements, we can use comparison operators such as ==, !=, >, <, <= and >=, and logical operators such as and, or and not.
File Input and Output
Here's some code for printing the contents of a text file:
fp = open("foo.txt","r")
line = fp.readline()
while line:
line = line.strip()
print line
line = fp.readline()
We could open a file for writing with:
fp = open("foo.txt","w")
and use, for example:
fp.write("some text\n")
to write to that file, remembering to call fp.close() when you have finished.
Object Oriented Programming in Python
Here is an example of using a class in python:
#!/usr/bin/env python

class Radio:
    "A simple radio"

    def __init__(self, freq=0.0, name=""):
        "Constructor method"
        self.name = name
        self.__frequency = freq

    def tune(self, freq):
        self.__frequency = freq

    def tuned_to(self):
        print self.name, "tuned to:", self.__frequency

if __name__ == "__main__":
    # declare two radio instances
    car = Radio(name="car")
    kitchen = Radio(91.5, "kitchen")
    # call some methods
    car.tuned_to()
    kitchen.tuned_to()
    car.tune(89.3)
    car.tuned_to()
    # Docstrings--double quotes at the top of the class:
    print car.__doc__
    # NB members not private by default:
    # BUT leading double underscores will trigger
    # name mangling and hence the member will be hidden
    print car.__frequency
Running the script gives us:
car tuned to: 0.0
kitchen tuned to: 91.5
car tuned to: 89.3
A simple radio
Traceback (most recent call last):
File "./", line 27, in <module>
print car.__frequency
AttributeError: Radio instance has no attribute '__frequency'
Using Packages
Python packages are great because they provide us with a whole lot of extra functionality--above and beyond the core language--that we didn't have to write and debug ourselves.
Let's walk through a simple example using a package. At an interactive prompt type:
from random import randint
This will give us access to the randint(x,y) function, which returns a randomly chosen integer from the given range [x,y]:
>>> randint(0,10)
>>> randint(0,10)
>>> randint(0,10)
>>> randint(0,10)
OK, so far so good. One thing to note is that the above import statement has drawn the name randint into our current namespace. What if we had already defined a function named randint? That could cause problems. In order to protect ourselves from this kind of problem, there are several import variants.
By default, functions will be added to a namespace with the same name as the package. In order to call the functions we will, in this case, have to prefix them with their namespace:
>>> import random
>>> random.randint(0,10)
Should we desire, we can apply a little more control and specify the namespace for the import ourselves:
>>> import random as rnd
>>> rnd.randint(0,10)
Another--more 'devil-may-care'--approach is to do away with the separate namespace and pull everything from a given package into the current namespace:
>>> from random import *
>>> randint(0,10)
>>> random()
(The random() function returns a randomly selected floating point number in the range [0, 1)--that is, between 0 and 1, including 0.0 but always smaller than 1.0.)
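As an aside, if you need a repeatable sequence of "random" numbers--handy when testing a script--you can seed the generator first. A small sketch:
import random as rnd

rnd.seed(42)   # fix the seed so every run produces the same sequence
print rnd.randint(0, 10)
print rnd.random()
Running this twice will print the same two numbers both times; remove the seed() call to get different numbers on each run.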
Interrogating a Module
To find all the functions that are in a particular module, type dir(<modulename>).
If you have the pip package installed, you can easily see which other packages are installed using pip list on the linux command line.
A Namespace Collision
>>> def randint():
... print "dummy function"
>>> randint()
dummy function
>>> from random import randint
>>> randint()
Traceback (most recent call last):
TypeError: randint() takes exactly 3 arguments (1 given)
>>> randint(0,10)
Python for Shell Scripting
from subprocess import call
call(["ls", "-l"])
Python as a Glue Language
Command Line Parsing
#!/usr/bin/env python
import sys

if __name__ == "__main__":
    # We can test on the length of argv
    if len(sys.argv) < 2:
        print "usage: to use this script..."
        sys.exit(1)
    ii = 0
    for arg in sys.argv:
        # (typically) argv[0] is bound to the script name
        print "arg", ii, "is:", arg
        ii = ii+1
gethin@gethin-desktop:~$ ./
usage: to use this script...
gethin@gethin-desktop:~$ ./ fred ginger
arg 0 is: ./
arg 1 is: fred
arg 2 is: ginger
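For anything beyond a couple of positional arguments, the standard library argparse module (available from Python 2.7) can parse options and build the usage message for you. A minimal sketch:
#!/usr/bin/env python
import argparse

parser = argparse.ArgumentParser(description="Say hello a number of times")
parser.add_argument("name", help="who to greet")
parser.add_argument("--repeat", type=int, default=1, help="how many times")
args = parser.parse_args()

for i in range(args.repeat):
    print "hello,", args.name
Running the script with -h then prints an automatically generated help message.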
Simple Databases
Python provides access to some simple databases through its standard packages. The bsddb module allows you to access the highly popular Berkeley DB database from your python code.
The interface to the database provided by this module is very similar to the way in which we access a dictionary. First, let's populate a database:
import bsddb
d = bsddb.btopen('engines.db')
d['thomas'] = 'blue'
d['james'] = 'red'
d['henry'] = 'green'
Now let's open the database again and query its contents:
>>> d = bsddb.btopen('engines.db')
>>> d.keys()
['henry', 'james', 'thomas']
>>> d.first()
('henry', 'green')
>>> d.last()
('thomas', 'blue')
>>> colour = d['james']
>>> colour
'red'
>>> del d['henry']
>>> d.keys()
['james', 'thomas']
Relational Databases
Relational databases give us more oomph. SQLite is a useful relational database to consider as it is light, in that it requires hardly anything in terms of setup or management, yet still understands queries formulated in SQL. As such it is useful for creating relatively simple examples of SQL access to a database in python and is a stepping stone toward more powerful database packages.
Here is a script which will create a table called planets in the file pytest.db and populate it with details of the planets in our solar system:
#!/usr/bin/env python
# Example python script using sqlite3 package
# to connect to an SQLite database.
import sqlite3
conn = sqlite3.connect('pytest.db') # or use :memory: to put it in RAM
cursor = conn.cursor()
# create a table
cursor.execute("""CREATE TABLE planets
(Id INT, Name TEXT, Diameter REAL,
Mass REAL, Orbital_Period REAL)""")
# insert a single record
cursor.execute("INSERT INTO planets VALUES(1,'Mercury',0.382,0.06,0.24)")
conn.commit() # save data to file
# insert multiple records
other_planets = [(2,'Venus',0.949,0.82,0.72), (3,'Earth',1.0,1.0,1.0),
                 (4,'Mars',0.532,0.11,1.52), (5,'Jupiter',11.209,317.8,5.2),
                 (6,'Saturn',9.449,95.2,9.54), (7,'Uranus',4.007,14.6,19.22),
                 (8,'Neptune',3.883,17.2,30.06),
                 (9,'Pluto',0.186,0.002,248.0)]  # Pluto's figures approximate
cursor.executemany("INSERT INTO planets VALUES (?,?,?,?,?)", other_planets)
conn.commit() # save data to file
# delete a record
sql = """
DELETE FROM planets
WHERE Name = 'Pluto'
"""
cursor.execute(sql) # poor old pluto!
conn.commit()
conn.close()
And here is a short example script showing a couple of ways to interrogate the database:
#!/usr/bin/env python
# Example python script using sqlite3 package
# to connect to an SQLite database.
import sqlite3
conn = sqlite3.connect('pytest.db')
cursor = conn.cursor()
print "All the records in the table, ordered by Name:\n"
for row in cursor.execute("SELECT rowid, * FROM planets ORDER BY Name"):
print row
print "\n"
print "All the planets with a mass greater than or equal to that of Earth:\n"
sql = "SELECT * FROM planets WHERE Mass>=?"
cursor.execute(sql, [("1.0")])
for row in cursor.fetchall(): # or use fetchone()
print row
Where the results of running the script are:
All the records in the table, ordered by Name:
(3, 3, u'Earth', 1.0, 1.0, 1.0)
(5, 5, u'Jupiter', 11.209, 317.80000000000001, 5.2000000000000002)
(4, 4, u'Mars', 0.53200000000000003, 0.11, 1.52)
(1, 1, u'Mercury', 0.38200000000000001, 0.059999999999999998, 0.23999999999999999)
(8, 8, u'Neptune', 3.883, 17.199999999999999, 30.059999999999999)
(6, 6, u'Saturn', 9.4489999999999998, 95.200000000000003, 9.5399999999999991)
(7, 7, u'Uranus', 4.0069999999999997, 14.6, 19.219999999999999)
(2, 2, u'Venus', 0.94899999999999995, 0.81999999999999995, 0.71999999999999997)
All the planets with a mass greater than or equal to that of Earth:

(3, u'Earth', 1.0, 1.0, 1.0)
(5, u'Jupiter', 11.209, 317.80000000000001, 5.2000000000000002)
(6, u'Saturn', 9.4489999999999998, 95.200000000000003, 9.5399999999999991)
(7, u'Uranus', 4.0069999999999997, 14.6, 19.219999999999999)
(8, u'Neptune', 3.883, 17.199999999999999, 30.059999999999999)
For more information on using SQLite with Python, see, e.g., the documentation for the sqlite3 module.
You can also connect to a MySQL database from python using, e.g. the python-mysqldb package. A snippet of python code for connecting to a database is:
#!/usr/bin/env python
import MySQLdb
conn = MySQLdb.connect(host="localhost", # your host, usually localhost
user="gethin", # your username
passwd="changeme", # your password
db="menagerie") # name of the data base
# Create a cursor object, as before with SQLite
cur = conn.cursor()
# and then you can submit your SQL command:
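# For example (a sketch only -- any SQL your server understands will do):
cur.execute("SHOW TABLES")
for row in cur.fetchall():
    print row
# and tidy up when finished
cur.close()
conn.close()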
OK, let's move on to python's numerical processing capabilities. We will start by looking at the numpy package:
from numpy import *
Now that we have access to the functions from numpy, let's create an array. Note that a numpy array is a different type of object to a built-in Python list. A simple approach is to use the array function. For example we might enter:
a = array([[1.0,0.0,0.0],[0.0,1.0,0.0],[0.0,0.0,1.0]])
b = array([[1,2,3],[4,5,6],[7,8,9]])
>>> a
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> b
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> transpose(b)
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
Given an array, we may inquire about its shape:
print a.shape
and we are told that it is a 2-dimensional array (i.e. an array of rank 2) and that the length of both dimensions is 3:
(3, 3)
We can also apply operators to array objects. For example:
a = a * 9
array([[ 9., 0., 0.],
[ 0., 9., 0.],
[ 0., 0., 9.]])
Note, however, that most operations on numpy arrays are done element-wise, which may be different from the linear algebra operation you were expecting. We will return to linear algebra operations when we look at the scipy package.
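To make the distinction concrete, here is a quick sketch comparing the element-wise product with a true matrix product (using numpy's dot function):
>>> from numpy import array, dot
>>> m = array([[1, 2], [3, 4]])
>>> m * m            # element-wise: each element squared
array([[ 1,  4],
       [ 9, 16]])
>>> dot(m, m)        # matrix multiplication
array([[ 7, 10],
       [15, 22]])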
Should we so desire, we could re-shape the array. One way to do this is to set its shape attribute directly:
>>> a.shape = (1,9)
>>> a
array([[ 9., 0., 0., 0., 9., 0., 0., 0., 9.]])
As with the list example, it can be useful to read or change the value of an element (or sub-array) individually. Let's turn the array back to its rank-2 form and try it out:
>>> a.shape = (3,3)
>>> a[1,1] = 777.0
>>> print a
[[ 9. 0. 0.]
[ 0. 777. 0.]
[ 0. 0. 9.]]
>>> a[1:,1:] = [[777.0, 777.0],[777.0, 777.0]]
>>> print a
[[ 9. 0. 0.]
[ 0. 777. 777.]
[ 0. 777. 777.]]
This is all pretty handy so far, but specifying the value of each element explicitly could become a chore. Happily some helper functions exist to give you a head start with some building blocks. For example, you can use:
>>> b = zeros((3,3))
>>> print b
>>> b = ones((3,2))
>>> print b
>>> b = identity(2)
>>> print b
>>> big = resize(b, (6,6))
>>> print big
The use of resize in the last example illustrates a useful replicating feature.
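Two more handy building blocks are arange and linspace, which generate evenly spaced values; combined with reshape they make it easy to build small test arrays. A short sketch:
>>> from numpy import arange, linspace
>>> arange(0, 10, 2)          # start, stop (exclusive), step
array([0, 2, 4, 6, 8])
>>> linspace(0.0, 1.0, 5)     # 5 evenly spaced points from 0 to 1 inclusive
array([ 0.  ,  0.25,  0.5 ,  0.75,  1.  ])
>>> arange(6).reshape(2, 3)
array([[0, 1, 2],
       [3, 4, 5]])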
A list of all the functions and operations contained within numpy can be found in the numpy reference documentation.
Pylab and Matplotlib
The above examples are quite natty, but we have deliberately kept the array sizes small so that we can print the element values easily. In practice, you may find that your array sizes are much larger and printing the values to the screen is impractical. Fear not! Python has many packages which help you plot your data, so that you can explore it.
Using the pylab plotting interface we can create a simple plot of some curves:
import pylab
from numpy import arange, pi, cos, sin, add, sqrt
t = arange(0.0, 3.0, 0.01)
c = cos(2 * pi * t)
s = sin(2 * pi * t)
pylab.ylabel('some numbers')
pylab.xlabel('some more numbers')
pylab.plot(t, c, 'r', lw=2)
pylab.plot(t, s, 'b', lw=2)
pylab.plot(t, c-s, 'gs', lw=2)
pylab.ylim(-1.5, 1.5)
pylab.title('sin and cos functions')
pylab.savefig('curves', dpi=300)
Where curves.png looks like:
Some nice curves
You can open .png images from the linux command line (inc. bluecrystal) using, e.g.: display -resize 1000 curves.png
We can also use Matplotlib directly for more control:
import matplotlib.pyplot as plt
from pylab import meshgrid
from numpy import arange, add, sin, sqrt
x = arange(-5,10)
y = arange(-4,11)
z1 = sqrt(add.outer(x**2,y**2))
Z = sin(z1)/z1
X, Y = meshgrid(x,y)
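The plotting call itself appears to have been lost from this copy of the page; a minimal sketch that will draw a filled contour map from the arrays built above (assuming matplotlib's standard contourf and show functions) is:
plt.contourf(X, Y, Z)   # filled contour plot of the sinc surface
plt.colorbar()
plt.show()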
and you should get a window similar to:
A contour map of the sinc function
Perhaps the best next step for matplotlib is to look at the gallery on the matplotlib website: just click on a figure and you will get the code used to generate it--a really great resource!
Input and Output
The foregoing is all very interesting, but life would be rather dull if you had to re-enter all your data by hand whenever you set to work with Python and numpy. Therefore we need a means to save data to a file and load it again. Happily, we can do this rather easily using a couple of routines from the pylab package:
>>> from numpy import *
>>> from pylab import load
>>> from pylab import save
>>> data = zeros((3,3))
>>> save('myfile.txt', data)
>>> read_data = load("myfile.txt")
Warning: the load() function of numpy will be shadowed in the above example. One way to protect yourself against this is to make use of namespaces: modify your import command to import pylab and then use pylab.load(..).
An example: Differentiation
>>> # derivative of x^2 at x=3
>>> from scipy import derivative
>>> derivative(lambda x: x**2, 3)
6.0
>>> # also works with arrays
>>> from numpy import array
>>> my_array = array([1,2,3])
>>> derivative(lambda x: x**2,my_array)
array([ 2., 4., 6.])
Google for many more examples pertaining to your favourite numerical procedure!
A Repository of Packages You Could Use
Now, we've touched on a couple, but there are thousands of python packages available. Before you start writing your own function for X, check that someone hasn't contributed code for that already on PyPI, the Python Package Index.
pip, the python package manager, will look in PyPI by default when installing a package. You can use the --user option to install python packages in your own user space. See the pip documentation for more information.
Writing Faster Python
As with other scripting languages, such as MATLAB and R, one of the simplest ways in which you can write faster python code is to eliminate loops by vectorising your code.
Consider the following two scripts. First:
#!/usr/bin/env python
import numpy as np

arr = np.random.rand(1000000)

def filter(arr):
    for i, val in enumerate(arr):
        if val < 0.5:
            arr[i] = 0
    return arr

if __name__ == "__main__":
    filter(arr)
and secondly,
#!/usr/bin/env python
import numpy as np

arr = np.random.rand(1000000)

def filter(arr):
    arr[arr < 0.5] = 0
    return arr

if __name__ == "__main__":
    filter(arr)
If we now run these two scripts through the Linux command line time utility, we see that the vectorised code runs a lot faster than the for loop:
gethin@gethin-desktop:~$ time ./
real 0m0.963s
user 0m0.952s
sys 0m0.012s
gethin@gethin-desktop:~$ time ./
real 0m0.116s
user 0m0.096s
sys 0m0.020s
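You can make the same comparison without leaving Python by using the standard timeit module. A small sketch (the statement strings below mirror the two filter functions above):
import timeit

setup = "import numpy as np; arr = np.random.rand(1000000)"

loop_version = """
for i, val in enumerate(arr):
    if val < 0.5:
        arr[i] = 0
"""
vector_version = "arr[arr < 0.5] = 0"

print "loop:      ", timeit.timeit(loop_version, setup=setup, number=1)
print "vectorised:", timeit.timeit(vector_version, setup=setup, number=1)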
For some more tips on writing faster python code, and examples of how to use one of the python profiler modules (such as cProfile), take a look at the Python documentation.
Further Reading
Lydia Miljan: Proportional representation voting systems breed unstable governments
Proportional representation means more coalition governments which mean more unstable governments and more uncertainty about the composition of those governments, says Lydia Miljan.
Some see the upcoming referendum on electoral reform — specifically, whether British Columbia should switch to a proportional representation (PR) voting system — as a blatant attempt by the B.C. Green party to secure more power. While it’s clear that under any form of PR, the Greens would theoretically increase their seat share, there would also be more single-issue parties vying for seats in the legislature.
In B.C.’s current first-past-the-post voting system, it’s difficult for single-issue parties and new parties to garner enough support to get electoral seats. As a result, political actors tend to compromise and form coalitions within existing parties. Big tent parties such as the Liberals and NDP are basically coalitions of various interests.
Because the threshold for securing seats is lower in PR systems (usually about five per cent of the popular vote), they result in more political parties competing for support. In other words, PR systems reduce the need to compromise within parties before an election. While there are 18 registered political parties in B.C., only three (Liberal, NDP and Green) have seats in the legislature.
University of Windsor political scientist Lydia Miljan.
Consider this. As noted in a recent Fraser Institute study, because new and single-issue parties have a greater chance of being elected under PR, an international measure known as the “effective number of parliamentary parties” is higher in PR systems than in majority and plurality systems like we have in Canada. In countries worldwide, the average number of effective parties in first-past-the-post systems (again, like we currently have in B.C.) is 2.5 — under PR systems, that number doubles to 4.5. And there’s a lot of variability depending on the country. For example, while PR systems in Portugal and Greece have a similar effective number of parliamentary parties to Canada, countries such as Israel (7.5) and Belgium (8) have a much higher effective number of parliamentary parties.
Crucially, the consequence of a higher effective number of parliamentary parties is twofold — more coalition governments and more unstable governments.
Of course, coalition governments can occur in any electoral system. Currently, B.C. essentially has a coalition government of the NDP and Greens. However, the likelihood of coalition increases significantly in PR systems. Between 2000 and 2017, 23 per cent of majority/plurality systems (including first-past-the-post) produced coalition governments compared to 87 per cent for PR systems. For mixed systems, which combine aspects of majority/plurality with aspects of PR, it was even higher at 95 per cent. PR systems also have more parties as part of government, averaging 3.3 parties compared to 2.6 for mixed and 2.3 for majority/plurality systems.
Why is this a problem?
Because more coalition governments mean more unstable governments and more uncertainty about the composition of those governments. Remember, coalition building is done after the votes are cast and it often takes a long time to work out deals between coalition partners. How long? Based on the research, it takes 32 days (on average) for a government coalition to emerge in mixed systems and 50 days for PR systems. After its 2010 election, it took Belgium 541 days to form a government — the longest wait on record at the time. More recently, Germany went 161 days before forming a government after its September 2017 election. And most recently, Northern Ireland has gone 590 days and counting without a government since its coalition Catholic-Protestant power-sharing administration collapsed in January 2017.
Finally, some proponents of PR argue that this form of electoral system better represents the diverse views of voters. Their logic is that the negotiation of coalition governments allows for more viewpoints, and that this process produces policy closer to what the median voter wants. But negotiations after the vote are not based on popular will, but on whether a party can prop up a government. More importantly, coalition governments make it hard for voters to hold governments to account, as it becomes difficult for voters to clearly assign blame or credit. At the same time, voters are given limited options during elections and can only use their vote to punish or reward governments for their policy decisions.
These findings suggest that, at the very least, the debate on electoral reform in B.C. must be expanded. And the current government, and the citizens of the province, should consider a broad set of evaluative criteria — much broader than the referendum ballot voters will receive in the fall — when determining whether we should change B.C.’s electoral system.
Coffee Drinkers May Live Longer Thanks To Caffeine's One Special Property
Every couple of years, a study comes along proclaiming that something we often consider a "guilty pleasure" — think red wine or chocolate — might actually be good for you. Now it’s coffee lovers’ turn to rejoice, as a study out of Stanford offers more compelling evidence as to why coffee drinkers live longer than their less-caffeinated counterparts. The new research suggests that caffeine found in regularly consumed beverages such as coffee and tea could have a positive effect on the immune system, countering certain inflammatory processes that have been linked to risk of stroke and heart attack in older adults. As if we needed another reason to drink coffee!
Delicious, delicious coffee's image has been changing for the better since 2015, when a study out of Harvard found that drinking between one and five cups of joe a day was tied to lower risks of all-cause mortality. Positive research continues to pile up, with numerous studies tying a hit of caffeine to improved health and reduced risk of heart disease, chronic respiratory diseases, diabetes, dementia, and pneumonia. While the benefits of everyone’s favorite pick-me-up stand undisputed, the mechanisms behind exactly why coffee seems to work like a miracle drug remained unclear — until now.
The study, published Jan. 16 in the journal Nature Medicine, began as a multi-year examination of the aging process. Researchers from Stanford and the University of Bordeaux examined the genes of young and old participants, isolating two gene clusters that were more active in the older group. High activity in these two clusters was found to be linked to the production of a circulating inflammatory protein known as IL-1-beta. The protein, which is intrinsic to fighting infection, is also tied to inflammation which could be damaging to an older person's health. “More than 90 percent of all non-communicable diseases of aging are associated with chronic inflammation,” the study’s lead author, David Furman, said in a press release.
The 23 older subjects, all between the ages of 60 and 89, were then divided into two groups dependent on the activity in the two gene clusters. The group with lower activity in the clusters were found to live longer, have generally lower blood pressure, and be healthier overall compared to the more inflamed group. The group of individuals displaying high activity in the gene clusters were more likely to have arterial stiffness, which has been linked to heightened risk of heart attack and stroke.
So where does coffee come in? A questionnaire revealed that those in the less inflamed group were more likely to consume caffeine, and the researchers confirmed that their blood was indeed "more likely to contain caffeine metabolites." This gave researchers their first big hint that caffeine could dampen inflammation.
When immune cells were incubated with nucleic acid metabolites, it was found to increase activity in one of the gene clusters, producing more IL-1-beta. When the same experiment was repeated, but caffeine metabolites were added to the mix, the caffeine canceled the effects of the nucleic acid metabolites, hence blocking inflammation. “That something many people drink — and actually like to drink — might have a direct benefit came as a surprise to us,” Davis stated in a press release. “What we’ve shown is a correlation between caffeine consumption and longevity. And we’ve shown more rigorously, in laboratory tests, a very plausible mechanism for why this might be so.”
While more investigation is needed to determine exactly how caffeine blocks the inflammatory process, it may bode well for those who can't start their morning without a hot cup of coffee. |
David Brown's contributions
• It is clear the author is a bit mixed up about C and C++, as well as the standards versions. By "If you are using an older version of C11", he means "an older version of C++". And templates and exceptions are not exactly new features of C++11 ! Regarding templates, it is a common misconception that they necessarily mean larger code in C++. That will depend on how they are defined and used. Inlining, constant propagation, common code merging, and various other compiler techniques can mean code is shorter with templates - especially compared to the alternative of trying to write "generic" functions. And while there are good reasons for choosing not to use exceptions, especially in embedded programming, run-time speed is not one of them. Enabling exceptions usually has very little or no impact on the speed of the code, and indeed is often faster than using error returns or similar alternative techniques. However, it can sometimes involve substantial increase in code size for the stack unwind tables.
• If you are writing floating point code for embedded systems, be /very/ careful about your floating point literals. If you write 5.0, then you have a double-precision "double". If you want a single-precision "float", then write 5.0f. On a microcontroller with software floating point, double-precision operations are typically about 3 or 4 times slower than single-precision. If the microcontroller has 32-bit hardware floating point (like a Cortex M4F), the difference is a factor of several hundred. So don't write "float y; ...; y /= 2.0;". You might find your compiler bumps this up to an operation on doubles and then converts back down to singles. Write "y /= 2.0f", or even better, just write "y /= 2;". That is the natural way to write it. And if your compiler supports it, don't forget warnings to catch mistakes like this: -Wfloat-equal -Wfloat-conversion -Wdouble-promotion
• As Matthew says, use types from stdint.h. Don't make your own typedefs for size-specific types (unless you are stuck with tools from last century). Of course, use typedefs freely for making your own types for other purposes - just don't use them to duplicate standard types. And if you want your code to be portable across a range of target architectures while being optimal, go beyond the basic types like uint32_t. For the example above, comparing 8-bit and 32-bit code, the correct type to use is "int_fast8_t". This says "give me a signed integer, at least 8-bits in size, whatever works fastest on this platform". On an 8-bit AVR, it will be 8-bit in size. On a 32-bit ARM, it will be 32-bit. On both platforms you get the best results. Don't /ever/ use "char" for an arithmetic type! And the article is incorrect in saying that signed types are more expensive than unsigned types - precisely because signed types can have more optimisations than unsigned types. It is possible that IAR's compilers are different here and don't optimise signed types as freely as the C standards allow, but a C compiler can ignore the possibility of overflow for signed types. This gives it more opportunities for optimisation, not less. Sometimes there can be particular operations that are faster with unsigned types than signed types on particular cores, however. And unsigned types should always be used for bitwise operations.
• Your first example will fail in most cases, because you have not declared "flag" to be "volatile" (or alternatively, you have not used volatile accesses on it in the main loop). I expect you will cover "volatile" in detail in later parts, but it should be in your examples here too. The idea that you should have a "default" statement in a switch to "help recover from an undefined state" is laughable. Write your code correctly so that it will not /have/ undefined states. If you have bugs in the code (or perhaps hardware problems), then a default statement is not going to help significantly - it just means you have added more code and therefore more complexity, on a code path that you almost certainly will never test. "Default" switch clauses are there for when you want a general case rather than just specific ones - never as a "just in case something goes wrong".
• A smaller RTOS with fewer cache misses will mean it runs faster and more consistently. But contrary to what some people here have been saying, "real-time" is /not/ about having consistent or predictable times for anything. It is about having guarantees on deadlines. If a task has to be completed within 100 ms, then it doesn't matter if it is done in 1 us or 99.9 ms, or that cycles vary over this whole range - it is correct and real-time. Having predictable and consistent timings means you can match tighter deadlines, and you can use a greater proportion of your processing power for real-time tasks. But on a typical Linux system, you have lots of non-real-time tasks too (if not, you would be better off with a small dedicated system rather than Linux). So you have the real-time tasks with real-time scheduling so that they get the time they need, when they need it - and let everything else run when it gets the chance. Regarding FreeRTOS, you /are/ mistaken. It is totally unrelated to the Linux kernel, and targets a completely different size of system.
• Most "big name" toolchain vendors can provide you with older versions of their tools without too much fuss or extra cost (though typically without much support). But that's a good reason to pick open source toolchains as well as having the source for the application itself.
• A free market only works when there is significant competition and alternative suppliers, and when there is clear knowledge about the products and suppliers. Do you know which scope manufacturers are going to support their models in four years' time? Do you have a choice between different scopes with roughly similar characteristics and price, but differentiated in their expected support time? If not, then a "free market" does not exist for supported scopes.
• The scaling of electronics does not work like that. With smaller feature width on the chips, you get more features and more processing power for the same die space and the same money - but that does not translate into saying you can get the old features for less money. Chips cost money to design, produce, test and distribute - there are minimum prices. I'm guessing the bottom end is something like 20-30 cents. The 4-bit market is close to dead - even the simplest rice cookers will use 8-bit devices these days. As the price (and size and power requirements) of 8-bitters has come down to within a few cents of 4-bit devices, there is no longer a good reason for a "rice cooker" company to use them - they need the 8-bit chips for their "high-end" rice cookers, and dropping the 4-bitters means less inventory and one less development team. The same thing will happen with 8-bitters pushed out by 20-30 cent 32-bit devices, though it will take a bit of time. (There are no 16-bit devices of significance left, except the msp430 - which is best grouped with 8-bit devices.)
• I learned most of the programming languages and paradigms (but not Forth or Python) I mentioned in a 3 year university course, of which the "computing" part was only about 30% - the rest was maths. And most of the "computing" didn't involve real programming languages at all - it was theoretical. The point is, you don't need to be fluent in all these languages, or learn the more advanced features. You need to understand the principles, not the details, and that can be done in a much shorter time. Once you have learned enough about the principles of programming, you can pick up the basics of a new language in a couple of days, and be confidently using it for development well within a week.
• Your principle is right, but you are not going far enough. All these languages are essentially the same - they are all imperative low-level single-thread programming languages, and you can translate easily enough between them. Sure, some things are easier to express in one language than another one, but there are no fundamental differences. It would be far better to teach students a variety of different programming paradigms - different ways of /thinking/ about their programming. Teach them functional programming so that they learn about functions, states, types, result-oriented coding (i.e., say what you want to know, not how to calculate it), and provably correct programming. Teach them Forth for the ultimate divide and conquer - and to learn that sometimes it is best to think backwards. Teach them occam or CSP to learn how multi-threading /really/ works. Teach them a high-level language like Python, so they can learn when high-level design is more important than low-level premature micro-optimisations. Teach them assembly, so that they learn how the cpu actually works and thinks. /Then/ teach them C, C++, Java, etc., as a practical way of getting things done.
• Producing a correct C++ implementation is hard, no doubt about that. And it will likely have more bugs than the C parts - though most such bugs will only show in very obscure code, rather than in common use real-world code. But if you try to divide up the costs involved in making a full toolchain like CW, then the C++ specific support is only one part - and far less than the C support (most C++ boils down to the same internal representation as you already have for C. There are even some C++ compilers where the C++ part is just a pre-processor for the C compiler). Thus the costs to the user for getting C++ compilation are hugely out of synchronisation with the development costs. I believe therefore that C++ support should follow the same pricing model as for C in many commercial toolchains - code limited for cheap licenses, with stepwise increases in code size, support and toolchain features towards the expensive licenses. Note that some toolchain vendors, such as CodeSourcery, do this already.
• I think one of the things holding back C++ in the embedded world is the tools. The two biggest issues are commercial toolchains and debuggers (open source toolchains, with gcc at the head, are fine for C++). Commercial toolchain suppliers seem to view C++ as a hugely advanced and expensive option - despite barely half-hearted support for it. One example I have looked at recently is Freescale's CodeWarrior. For C programming, CW is turning into a very nice tool - good compiler, good IDE, and usable libraries and "wizards". And the price is good - the free code-limited version is sized so that you can use it for a lot of real-world projects. Then you have pay-for options for more code size, support, and advanced features such as kernel-aware debugging, debuggable libraries, etc. And where does C++ fit in? It is only available on the most expensive version. It's absurd, and a pointless limitation. If the commercial toolchain developers actually put some added value here - say, C++ versions of their libraries and device header files - it might be understandable. But as it stands, I get the impression that they like to advertise C++ but would prefer people not to use it. The other big problem with C++ is in debugging - many debuggers are simply not up to the task as you try to deal with things like breakpoints in methods, disassembly of overloaded functions, and comprehending names generated from templates.
• You are right that C is not suitable for programs with millions of lines of code. But what you are missing is that /no/ programming language - real or imagined - is suitable for programs that size. The issue is that /programmers/ are not suitable for working with millions of lines of code. There are two ways to handle very large projects. One is to divide it very clearly into independent projects. And I mean /independent/ projects - not just different groups doing different libraries all designed to work together. The other method is to use programming languages which do far more in less code. Fewer lines of code means a more manageable project. Of course, the best results usually come from combining these. Work with multiple subprojects, and use whatever language makes sense for each particular project.
• 1. No code that is correct can be made incorrect by adding a "volatile". But it can be made bigger and slower, and less clear. 2. Exactly correct. C has no proper support for controlled accesses. Similarly, it does not have support for things like memory barriers, cache or pipeline controls, etc. You have to use toolchain-specific enhancements, or assembly (possibly inline). Explicit volatile accesses using typecasting is just the best that can be done using the limited tools available. 3. C++ doesn't let you do anything that C can't do (in this area), but it may let you do it a little neater and clearer. 4. Using explicit volatile accesses rather than declaring data as volatile does make it easier to forget them. But equally, putting volatile in the declarations can make you think that you've done all you need to do, and give a false sense of security. Especially for bigger and more advanced processors, "volatile" is never enough - so you need to be in the habit of understanding your accesses. Having said that, I fully agree with putting "volatile" in the declarations of data for which accesses always need to be volatile, such as many hardware registers. I hope now you will /think/ about what each access means when you use them, but that doesn't mean it is necessary to write them all out explicitly if it does not add to the clarity and understanding of the code.
• A lockless queue like that also illustrates why "volatile" is often not enough - and simply declaring data as "volatile" quickly leads to mistaken assumptions, while explicit access control may make things more obvious. If your queue writer code puts data into the queue with non-volatile writes, then updates the head with a volatile write, many programmers will assume that the data writes are completed before the volatile write to "head". After all, volatile accesses can't be re-ordered, right? Wrong. Volatile accesses /can/ be reordered with respect to non-volatile accesses, and the compiler can do the data writes after the volatile head write. You need memory barriers to get the right effect, or you need explicit volatile writes on the data too. So why not just make the data volatile? Because you can often make the code much bigger and slower - and often the code using volatiles is in low-level, time-critical code.
• Let me first give you a pointer to one of Linus Torvald's rants against "volatile": Obviously he is talking specifically about the Linux kernel, not embedded programming, but I believe it still mostly applies here. Most people - including the authors of most books about embedded programming - think of "volatile" as applying to the data. And in many cases, it is very convenient to declare the data as "volatile", meaning that all accesses to it are volatile. But I believe that thinking that way limits your understanding of "volatile" and about controlling accesses. It works well enough for simple programs, and simple processors - make all your hardware registers "volatile" and assuming everything works okay. But it fails in complex systems, it fails with advanced processors, it fails when mixing volatile and non-volatile data, and easily leads to inefficient code when using faster processors. So how do you force a volatile access to data that is not declared volatile? You use a typecast: #define volatileAccess32(var) \ (*((volatile uint32_t *) &(var))) extern uint32_t vol; uint32_t foo(uint32_t x) { volatileAccess32(vol) = x; return volatileAccess32(vol); } The typecast is messy, so you wrap it in a macro, static inline function, or C++ template according to taste. There are lots of situations when you want different types of access. Maybe you've got a hardware register that you want to control with volatile writes, but are happy with non-volatile reads to get better code. Perhaps you've got a lockless queue with one context (thread, interrupt code, etc.) controlling "head" and the other controlling "tail". The process controlling "head" will not need volatile reads to head, but it will need volatile writes, and it will need volatile reads from "tail".
• Looking up "volatile" in a dictionary does not help - what is of interest is what the C standards say, and how real-world compilers implement them. C has no concept of volatile /data/. When you make a volatile access, you are telling the compiler that it should do exactly as many reads and writes as the source code says, in exactly the same order (with respect to other volatile accesses). You are /not/ telling the compiler that this data might change suddenly, or might be read by external hardware. You are telling it to access it exactly as stated in the source code. It is very common to have data that needs volatile accesses in one part of the code, and can use non-volatile accesses elsewhere. Or perhaps you need your writes to be volatile, but not reads. You get that control by being explicit in your accesses, not in your declarations. There are also many situations where "volatile" is not enough - processors with cache, re-ordering, multiple cores, etc., make it far more difficult to make sure that the access really happens. Get in the habit of making your accesses explicitly volatile in the code that uses it, and you will write clearer code that will work better when you move to more advanced processors.
• Don't rely on your compiler doing any sort of direct translation just because you turned off optimization - the compiler is free to optimize code regardless of any switches you use.
• The key to getting "volatile" right is to understand that there is no such thing as "volatile data" or a "volatile variable" (and certainly no "volatile const" data). It is /accesses/ that are volatile. Declaring data to be volatile is just a short-hand for saying that all accesses to that data are volatile.
• Some compilers, such as newer gcc versions, will factor out "hot" and "cold" functions automatically and work with the linker to place them separately.
• You misunderstand. First, it is quad-spi - there are four data lines in parallel. So when it is running at 100 MHz, you get approximately 40 MB/s throughput. For comparison, a typical 60ns 16-bit NOR flash gives you about 30 MB/s. Secondly, the key use is for things like bootloaders, and then loading the program into ram. On other processors with instruction caches combined with pre-fetching, SPIFI is much more appropriate.
• The fundamental idiocy here is the legal system in the US that allows decisions like this to be made by untrained and ignorant jury members. I know that the theory is that a jury from the public is unbiased, fair, and immune to corruption - but in reality for cases like this they are highly biased and incapable of reaching a fair, informed and appropriate decision.
• People who still use function-like macros should learn about "static inline" (or inline C++ methods if you prefer) - you can define your functions with proper syntax, type checking, etc., and the compiled code will be as small and fast as you can get with macros.
• Could you give an example of this? Common experience is that assembly needs complicated macros to generate different code depending on the parameters (or features of the parameters, such as whether or not they are constant). Decent C or C++ compilers have no problem generating optimal code for constant values - you just have to make sure the function definition is known at compile time (typically an inline function), and you are not crippling your compiler by not enabling optimisation.
• I don't have any problem with non-standard features where they are useful - you can't do embedded programming at all without using at least some non-standard features, and often they make the code significantly more efficient. But in this case, explicit padding is better than "packed" pragmas or attributes - being standard C is mostly just a bonus as far as I am concerned.
• Don't use non-standard "packed" attributes or pragmas unless you have a very good reason for it. A better technique to control your packing is simply to add dummy bytes (or bits, or words, or whatever) to make your layout explicit. That keeps everything clear, and avoids any mistakes with alignments, etc. I agree entirely about using static assertions to check that you've got the structure right. You don't need checks on the individual offsets - if you've included explicit dummies, then it's enough to check the size or the offset of the last member. If that's correct, then so is everything else. If your compiler supports it, use a "-Wpadded" flag to check that you haven't missed any pad bytes.
• Sometimes it might be nice to have abstract classes for something like timers - perhaps the microcontroller has two different types of timers, and you want to treat them the same way. That can be handled by having a distinction between logical timers (which can be abstract) and physical timers (which have a fixed structure). The logical timer class would have a pointer to a physical timer register structure, as you suggest. But this sort of thing has its costs. The aim here is to be able to write clear and concise C++ code with encapsulated syntax (so that you can write timerA.enable(), etc.), while still generating efficient code. A call to such an "enable" function should be inlined and implemented by a single bit operation - not a call to an external function with several layers of indirection. It's okay to pay the time and space costs for indirection when you really need it - but not when you don't.
• You /can/ use a hacked new operator to let you put the "new" object at a specific place, and call the constructor automatically. But what does that give you, compared to an extern reference declaration and an explicit call to Init()? It gives you uglier code where it is much harder to see the logic, extra overhead in startup, and it means that every action on the object uses an extra pointer and layer of indirection. No thanks - Init() is the sensible way to initialise hardware devices like this.
• It's fairly clear that you are not an electronics expert - you are simply regurgitating terms like "high frequency power supply noise" without an understanding of what that might mean, or what effect it might have on the music. (To give you a hint, from someone who /does/ know, the answer is zero effect, unless you have a very badly designed system.) HiFi equipment is tested with dynamic testing, not just static. And even if there /were/ these mythical "subtle second-order effects" that can only be heard by a human ear - don't you think that HiFi manufacturers include listening tests during development? You can be sure that high-end HiFi manufacturers use panels of /real/ expert listeners, rather than "Which HiFi" addicts, to help tune systems and identify any issues. Manufacturers use test CDs as part of their development and quality control. If there were combinations of sounds that emphasised particular problems, then you can be sure these would be used in testing. It is correct that there are differences in the sound between different CD players (though very little between high-end players). And it is correct that no CD player is absolutely perfect - there /are/ distortions, and there are effects dependent on the type of electronics used, the way it is designed, and some variation due to tolerances in the electronics. No one will argue any differently. But it is total and complete nonsense to say that vinyl has fewer distortions because it is "all analogue" and CD is digital.
• You have a few correct points here, but are missing the main point. Yes, peoples' ears are sensitive to certain types of distortion and noise - but you don't get them with good digital playback. You get them with vinyl. When you listen to a CD, the problem is not some extra noise or distortion - it is that the familiar (to you) noise and distortion is missing. It's like tube amplifiers - fans will tell you they give a "warmer" sound than transistor amplifiers. In fact they give a less precise rendition than most mid or even low-end transistor amplifiers - there is more noise, and more distortion. In particular, they have significant second harmonic distortion that you don't get elsewhere - the "warmth" is that second harmonic. If you like your music to have this added noise and distortion, that's fine. I can appreciate that, and understand it. Just don't make nonsense claims about CDs and digital reproduction adding noise or being less accurate in some way. Oh, and the "minimal electronics" movement is purely about pandering to people who will pay for such "features". In the recording studio, the sound from the musician passes through perhaps 30 or 40 opamps before being digitised. A few more on the playback device would not make the slightest difference (assuming, of course, that the electronics is of good quality and design).
• I think the differences of opinion are actually quite simple to explain. CD gives a more accurate representation of the original recorded sound than vinyl. Very few people have the training and naturally good hearing to be able to notice any digitally-induced noise - SACD or 96kHz/24-bit digital takes that noise below anything that humans could theoretically differentiate. The reason some people prefer vinyl (or tube amplifiers) is because they /like/ the noise. Humans are not comfortable with too stark contrasts - we write on paper that is off-white, we put patterned wallpaper on our walls rather than pure colours, etc. If there is not enough background noise, things seem artificial and cartoonish. There is no point in having a technical argument about the quality of the reproduction of the sound - there are no doubts that CD is more accurate than vinyl. But equally there is no doubt that some people /prefer/ the sound of vinyl. It has nothing to do with CD's sounding "harsh" or "failing to preserve the continuity". It is just that some people /like/ the pops, crackles and hisses from vinyl - it feels familiar and comfortable to them. As another poster says, it's the same reason people sometimes prefer candlelight to a light bulb.
• Exceptions are a way of hiding control flow and adding surprises to your code. If used carefully, so that they are caught and handled appropriately, they can be a useful mechanism. The unfortunate reality is that in most cases, programmers don't handle them properly. So enabling exception checking on array bounds just means that an out-of-bounds access leads to an unhandled exception and the death of your program. In any situation when you would be able to handle an out-of-bounds exception sensibly, you would also be able to check the bounds before the access - doing the check in advance is always better. Thus the only benefit of exceptions on array accesses is as a possible debugging aid. And even then it is almost certainly better to make your own array class that does the bounds-checking explicitly. gcc will warn you if it can spot out-of-bounds accesses at compile time, which is always a good check to enable.
• Arrays in C typically do have dimensions and sizes, when the code is well-written. But C allows you to make a mess of your code and use arrays without specified sizes, or even to use pointers to access array data. You lose a lot of static error-checking when you do that, and typically also generate poorer code, but people still seem to think that C arrays and pointers are interchangeable.
• What happens when the assert statement is triggered? Asserts /do/ alter the program flow and add to the complexity - if they did not, then they would not do anything! And all code must be checked and tested - are you able to force the error condition to check the assert? There are several possible outcomes - some are good, some are bad. If the assert trigger can be determined at compile-time, then a compile-time warning can be given - that's effectively static analysis, and is a good thing. Unfortunately, it is in the wrong place. To be useful, the static_assert should be in an inline function defined in the header, not in the function body. Triggered asserts can cause breakpoints or stops during a debugger session - that is definitely useful as a debugging aid. But if the assert is triggered at run-time in an active system, what can it do? Abort the program with an error message? That is generally either useless, or worse than useless. At best, log files might be useful in a post-mortem. Sometimes you /can/ do something useful with the error check, such as add a log entry and return a default value. But that is a specific reaction to the error condition, not just a brain-dead assert. What you say about Eiffel's pre- and post-conditions is exactly what I described about specifications of the function. The only difference is that Eiffel allows you to specify these in the language (and therefore do static analysis with them), while C does not.
• There is nothing wrong with the function definition: int divide(int value1, int value2) { return value1 / value2; } The only thing that is wrong, is the assumption people are making about its specification - i.e., what it is supposed to do. It is in fact a function that when given two numbers, the second of which is non-zero, will return their quotient. It is incorrect to give a warning about a divide-by-zero error here, and it is wrong to use an assert or some sort of run-time check in the function. The place to do the checking is before the function is used - that is where a potential program error lies. The other major mistake here is to think that an "assert" is a good thing. Sometimes it is - but often it is not. There are many situations when it is fairly harmless if a divide-by-zero returns an undefined value, and many situations when a failed assert causing a program abort would be the worst possible outcome. An important rule is never to check for error conditions unless you can do something useful with that information - otherwise you have just increased your program's complexity for no gain.
• I agree with most of this article, but I think it goes a bit far with the "hide everything" aim. It is a worthy idea, but the unfortunate reality is that C (and C++) will not let you hide data and details without a cost. We are talking about embedded programming here - wasted code space costs money, and wasted run time costs power. And while layers of abstraction and data hiding help code quality and testing in some ways, they detract in other ways - they can make debugging much harder, and it can be difficult to follow the flow of information. It is common to slavishly follow the ideas that "all global data is bad", and that implementation details must be hidden at all times. The reality is that global data is sometimes the clearest and most efficient way to pass data around. And implementation details end up in header files if you want your code to be efficient. The simple answer here is to use comments or sectioning of the header file to make usage clear.
• If one header file needs another header file, it should include it. Generally you should try to keep modules independent when that is possible. But if one header needs a type defined in another header, for example, then it should include it. Never rely on the application code having the extra headers or headers in a particular order.
• The file handle is an abstract type - it is not a struct, but a pointer to a struct. You can handle these without knowing the contents. If all you have is an empty "struct" declaration, then you can only work with pointers to them, but not the type itself.
• I think the purpose of listing 2 is as an example - sometimes a delayed initialisation is appropriate, though this example is too simple to show it. Personally, I find it odd to have "initialise variables before use" as a tip - it's a bit like a driver's handbook recommending you start the car's engine before driving off. Assert() is often a poor choice in embedded programming - you don't have any convenient way to show a message, and killing your program is seldom smart. You are correct that passing incorrect data to a function is a programming error, and the aim should be to avoid it in the first place, or catch it quickly if it happens. But assert() is no better (or worse) a way than returning an error value.
• The compiler should tell you about using uninitialised variables, and the original programmer should have checked all the warnings - the reviewer should not have to deal with such elementary things. And beware of writing too many comments on code that will be reviewed. If you put a lot of comments in some complex code, the reviewer will believe the comments rather than reading and understanding the code.
• Unnecessary globals are a bad idea. But the variables here may be file statics, which is normally perfectly reasonable. You've got to have your data /somewhere/. As to whether it is better to have separate variables, or put them in a structure, that depends on the program, the target processor, and the compiler - you can't make generalisations here. Of course, you are correct about idiotic comments. I don't see a need for any comments in that code sample.
• You should certainly distinguish error codes if it is useful - but /only/ if it is useful. It is also not uncommon to have extra differentiation as part of development and debugging (but remember to test the shipping code, not just the debug builds!). But beware of adding extra features in case a customer later wants it - that means extra work for you doing the initial development, extra work for the testing (all your error cases must be tested), and extra risk due to the extra code. Are you paying for all that, or is the customer?
• I've seen worse program organisation. I once had to maintain a program where there was /only/ a master include file - no other headers at all. Every extern declaration of data or functions was inside this file, in no particular order or relation to the module that defined it. Other joys of this program included filenames in DOS convention (the compiler was DOS-based, so that's fair enough) but with a program name prefix first "PRG_". This left 4 letters unique to each file name. The programmer had the same attitude to variable and function names - none longer than 8 letters. On the very rare occasions when anything was commented, the comments were also abbreviated.
• I agree that spaces in filenames and directory names are a daft idea. But so is limiting names to 8.3 letters, all capitals. Use sensible, meaningful names for files and directories, as long or as short as makes sense - just like for variables. As to all-caps for things like define'd macros or enum constants - yes, it's a convention. And that convention stretches back to K&R's original habits. But that doesn't make it any less a poor convention - writing in all caps is ugly and distracting, and provides no benefits to coding. Some people think it's a good idea because it makes it clear that you are using a macro - but /why/ is that useful? Why should it matter if a "function" is a real function, or a function-like macro? Why should it matter if an identifier is a macro or a variable? If it makes a significant difference to the code you are writing, then you would already know the answer. And if you really want to be able to see at a glance which identifiers are macros, then join the 21st century and get an editor with syntax highlighting. Conventions using all-caps are from the dark ages, when C was created with the single aim of letting K&R write operating system code using fewer keypresses than writing it in assembly (their motivation for developing C was their hatred for the DEC keyboards they had).
• There are some useful tips here. But I don't entirely agree with your tips about errors. The most important thing to consider about errors is how you are going to handle them. The second step is to consider how you are going to test your handlers. There is no point in making error type enums and differentiating between error causes, unless this makes a difference in how you will handle the errors. If every error cause leads to a big red light going on, then it is better to have just a single error indicator - that means less extra code to write, test and maintain. The exception here is if you can make use of the cause to aid debugging or post-mortems. Sometimes there is nothing sensible you can do with an error - the sensible thing is then to do nothing. Obsessively checking for every theoretically possible error or unusual situation can make your code larger, at higher risk of bugs, and impossible to test properly. Prefer to write functions that don't return errors, rather than handle returned errors. It is very debatable whether having a "LAST" entry in an enum is a good idea. It is strange that it's been used here, given that the inspiration of the article is Ada and strong typing. When you have an enumerated type of errors (for example), then "LAST_ERROR" is not a valid error - it should not be included in the type. There is no good way to express the concept in C (nothing like Ada's 'last type attribute), but putting it in the enum is wrong. It is better to make it a #define'd constant (or even its own little enum if you don't want to use #define). This means marginally more work when writing the enum definition, but gives you better type safety. Other than that, it was a very nice article.
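A sketch of keeping the count outside the enumerated type, as suggested above; the error names and ERROR_COUNT are invented:

    typedef enum {
        ERR_NONE,
        ERR_OVERTEMP,
        ERR_COMMS_TIMEOUT
    } error_t;

    /* Kept next to the enum definition, but not itself a valid error_t value. */
    #define ERROR_COUNT 3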
• There are many reasons not to use a single "master" include file. It's okay to have a common include file for basic functionality that should always be available, such as including stdint.h, perhaps microcontroller-specific includes, and system configuration - every file needs these available. But outside that, you include the header for a module if you use that module - you don't pollute your namespace with useless names. It keeps your code clearer, easier to maintain, and more modular, and makes compilation faster.
• I agree with the principle that consistent style is important. However, this particular style seems to have been created some 15 years ago, and left untouched. The world has moved on, and it makes sense to have a style convention that takes advantage of improvements since the days of DOS and K&R C. Consistency is important, especially over long-term projects, but a style guide should be updated as necessary. Here are a few things I particularly disagree with: Drop the DOS-crippled file and directory conventions. Use capital letters in names if you want - but not all-caps. There is no need for abbreviations in file names, unless it's obvious what they mean - you can call a file "displayTables.c" if you want. And C files use ".c", not ".C". Version control software should /not/ modify comment blocks - it should leave files untouched. Most modern VCS software follows that rule. Making your header files work differently depending on when they are used (based on xxx_EXT macros) is such bad style it makes me cringe. I hadn't expected a professional author and developer - who relies on his reputation - to even suggest it. You declare globals as "extern" in a header, and define them in the matching C file - the compiler will tell you if you get something wrong. It's been many years since stdint.h has been available - fixed size data types are called int8_t, etc. There is no need to have your own private convention, and certainly no need to shout about it (fix the broken capslock key). And if your compiler won't accept // comments, get a better compiler.
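A sketch of the plain extern declaration/definition split mentioned above, instead of xxx_EXT macros; the file and variable names are invented, and the compiler or linker will complain if the two ever disagree:

    /* motor.h */
    #include <stdint.h>
    extern volatile uint16_t motor_speed;   /* declaration, visible to all users */

    /* motor.c */
    #include "motor.h"
    volatile uint16_t motor_speed = 0;      /* the single definition */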
• This sort of arrangement is very common as a way to get atomic reads of data. You have to be sure that the looping is bounded, but that's clear enough for such a timer or counter. It is particularly useful when dealing with hardware counters or timers, where you cannot use any sort of locks (not even the simple "global interrupt disable" solution).
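A sketch of the loop-until-consistent read described above, assuming a 16-bit tick count kept in two volatile 8-bit halves by an interrupt handler; the names are invented:

    #include <stdint.h>

    extern volatile uint8_t tick_hi;   /* high byte, updated by an interrupt */
    extern volatile uint8_t tick_lo;   /* low byte, wraps into tick_hi */

    uint16_t read_ticks(void)
    {
        uint8_t hi, lo;
        do {
            hi = tick_hi;
            lo = tick_lo;
        } while (hi != tick_hi);       /* retry if the high byte moved under us */
        return (uint16_t)(((uint16_t)hi << 8) | lo);
    }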
• AVR Studio 5 beta is available. Atmel have supported gcc for the AVR for several years (though most of the work is done outside Atmel). They have had a gcc port for the AVR32 since they first launched that architecture.
• I disagree on much of this article. Intrinsic functions are often a good choice if the compiler happens to define one - typically, they are available as wrappers for single assembly instructions like CLZ. If you need more than one assembly instruction, it is unlikely that there is a matching intrinsic. I have also found cases where intrinsic functions were implemented inefficiently by the compiler - the compiler did a better job when given inline assembly than when using intrinsics. It is rare that a long function is best written in assembly - compilers will often do a better job than an assembly programmer because they can (amongst other things) track register and stack usage for optimisations that would be too time consuming to write by hand. So in most cases, you only need small sections of assembly - perhaps between 1 and 4 instructions. You can't use intrinsics if they don't exist for the code you want. External assembly modules mean a lot of extra effort, and they mean function call overheads - a big waste of time and space. And because the code is a black box as far as the compiler is concerned, it can't use IPA or global optimisation to improve the code. Inline assembly fixes these issues. I don't know about Green Hills compilers, but gcc will happily inline and optimise inline assembly code, and will optimise the C code around it. The "correct" way to write the count_leading_zeros function with gcc is: static inline int count_leading_zeros(uint32_t src) { int ret; asm(" clz %[ret], %[src] " : [ret] "=r" (ret) : [src] "r" (src)); return ret; } It's true that there is a learning curve for the syntax - but there is a learning curve for writing assembly modules too. And while it's also true that the documentation of the syntax in the gcc manuals could be clearer, there are endless examples, tutorials and resources on the web.
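The snippet from the comment above, laid out for readability; it assumes an ARM target and GCC's extended asm syntax, where "clz" counts the leading zero bits of a register:

    #include <stdint.h>

    static inline int count_leading_zeros(uint32_t src)
    {
        int ret;
        asm("clz %[ret], %[src]"
            : [ret] "=r" (ret)      /* output: any general-purpose register */
            : [src] "r"  (src));    /* input:  any general-purpose register */
        return ret;
    }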
• MS have tried working with different architectures before on Windows. The original NT worked on x86, MIPS, PPC and Alpha. But one by one, these architectures were dropped. There were several reasons for this: 1. MS's own software (windows, office, etc.) was written specifically for the x86 - making the code portable was a huge effort. 2. Third-party developers wrote code specifically for the x86, not portable code. 3. MS didn't want to pay the costs of making and supporting Windows on these architectures - they made the cpu manufacturers pay. This was a large cost for the manufacturers, and made competing with Intel's processors even harder. In the end, these companies found it was not worth the cost to pay MS, so MS dropped them. In the Linux world (and also for most embedded OS's), portability is standard. The kernel supports several dozen cpu architectures, and most software is written for portability. The cpu architecture is almost incidental in a Linux system. If MS are going to get Win8 running on ARM, they will have a lot of work to do. And they will have to do it themselves - ARM won't pay them to do it. Much of the work is going to be in getting third-party developers to support it. For developers using dotnet, it will not be too hard - but most important software runs native on the x86. The result will be that the Win8 ARM tablets will look like large Win7 phones, without the phone. They will work for browsing, email, MS office, and a few games. They won't work with any other windows programs the user might have. And if you can't run windows programs, why bother with windows? Do you buy a Win8 ARM tablet so that you can run MS Office, MineSweeper and Solitaire, or do you buy the cheaper one with Linux, OpenOffice, and thousands of other apps that can be installed from a simple dialog box?
• You have certainly quoted correctly from Freescale's website. However, Freescale's website here is wrong. Someone there has got their Coldfire cores badly mixed up. Look at Wikipedia's article on the ColdFire: It is not very detailed, but at least it's got its history correct! It may be that IPextreme has a new version of the V4 core which shares some code with the V1 core. The V1 core was the first ColdFire core that "mere mortals" could license and use in FPGAs or SoCs. Historically, however, the ColdFire cores have always been synthesisable, and have been used inside SoCs from before the term SoC was invented. It's just that you had to be the size of a major American car manufacturer even to hear about them.
• When you are writing time or space critical code, the important thing is to know your compiler well. Write simple test cases, and look at the generated assembly code - timings are often affected by other things. Different compilers are going to produce different results, and the results will depend on the flags used and the target device. Good code for one device is not necessarily good code for another device. For example, if you are compiling this for an AVR, you want to ensure that pointer indirection is eliminated because it is costly in time and space. But if you are compiling for an ARM, pointers are good - access through a pointer plus offset is cheaper than absolute addressing. Ideally, you want to use a good compiler and let the tools pick the best code. Your job as programmer is to give the compiler as much information as possible, with as clear intentions as possible (if you know something is constant, call it "const". If it is static to a file, call it "static"). One thing that will typically make a very big difference with code like this is to ensure that the compiler can inline the functions. It has to see the function definitions through headers or link-time optimisation. Then the compiler will typically eliminate pointer indirection automatically - unless, of course, it feels pointers give better code.
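A small sketch of the kind of information being described: "static", "const" and a definition the compiler can see in a header are what let it inline and fold the access away; gpio_read and its parameters are invented:

    #include <stdint.h>

    /* Placed in a header so every caller can inline it. */
    static inline uint32_t gpio_read(const volatile uint32_t *port, unsigned pin)
    {
        return (*port >> pin) & 1u;
    }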
• I'm not sure you meant to write "The ColdFire V4 core is a simplified version of the ColdFire V1 core". It's the V1 core that is simplified, not the V4 core - especially since the V1 core is newer than the original V2, V3 and V4 cores.
• There is a lot more to the question than just the poll numbers. The number of people who don't know that the Earth goes around the Sun in any given country is not a big issue - there are ignorant people everywhere, and there are plenty of people who choose to believe religious teachings rather than scientific results. That's OK by me. The real problem is when people take religious beliefs and claim them as science, or claim that they are backed up by science. It is this attitude that is very much an American problem (though it has spread a little to Europe). It is only in the USA that you get groups like "Galileo Was Wrong" that claim scientific backing for their nonsense. It is only in the USA that quacks and crackpots call themselves scientists and write stuff like this that cannot remotely be called "science". It is also almost only in the USA that you get so many "fake scientists" with qualifications (sometimes earned, sometimes bought, sometimes totally fabricated) that support this sort of thing. It is almost always about money - it's easier to make money writing and selling "intelligent design" books than doing real science. It is also sometimes about trying to enforce your religious beliefs on others. Here in Europe there are plenty of people who are happy to con others out of their money, and plenty of people who write religious books. But they don't call themselves scientists, and if they do, no one gives them any credit for it. Perhaps it is an American attitude that everyone has a right to an opinion, and that you have the right to express that opinion, and that every opinion is equally valid and deserves equal consideration. The European attitude is also that everyone has a right to their opinion, but not that these are equally valid. Feel free to say what you want, but if it's nonsense then you should not expect people to listen.
• I take exception to the "Is the world going nuts?" comment - I think you meant to say "Is the USA going nuts?". Most people throughout the world simply do not care whether the sun goes around the earth or vice versa - as long as the sun comes up each morning, it makes no difference to their daily lives. Then there are those who choose to believe a strict interpretation of their religion's holy book(s), and thus believe in a geocentric universe. Such people simply believe that when science and their understanding of the Bible/Koran/whatever is in conflict, the book trumps science. But the concept of "scientific" arguments for geocentricity, such as this "Galileo was wrong" group, is almost exclusively an American phenomenon. It's in the same spirit as the "young earth" and "intelligent design" nonsense, and while there has been a limited spreading to other countries, this sort of active disbelief of science, and religious pseudo-science, is from the USA alone. The USA has a big problem here, and it's getting bigger. But please don't say the "world" is going nuts - the USA is going nuts, and the rest of the world is just getting dragged along.
• I am wondering if some of the people commenting here have missed the point that this is a class to access memory-mapped devices. C++ has no concept of classes whose data members are in different parts of memory (except for "static" members). So there are no issues about "adding non-memory mapped members" - you can't add extra arbitrary members to the memory mapped device, so you can't add them to the class. If you want to mix your own members with the timer_registers, create a new "normal" C++ class containing the new data, and a reference to a timer_registers object. Similarly, it doesn't make sense to think of virtual functions for a memory-mapped device - the device is what it is, and it can't change. Again, if you need some virtual functions, make a new class that includes a reference to a timer_registers object. You also cannot create or destroy the peripheral, therefore there is no sense in having a constructor or destructor, and the pointer cast is perfectly reasonable. If you want to have some sort of initialisation procedure during startup, it is almost certainly better to write it explicitly so that you are clear about when it is called, and that it is called exactly once. But if you want to do it automatically, create a new class with timer_registers as a reference. Finally, to those that worry about the cost of the pointer indirection - if your compiler is generating object code with unnecessary indirections from this class, get a better compiler or learn to use your existing compiler properly.
• First off, it should be noted that the code examples used by Microsoft are a totally different kind of code from the Linux kernel mentioned, and from the sort of embedded software written by most of this website's readers. Opinions about the quality of Microsoft's programming aside, there is no reason to assume that the conclusions of this paper are valid for a wider range of software tasks and software development processes. Having said that, asserts can often be useful. They are particularly useful during testing and debugging, and in the interfaces between code modules if they are not well specified and documented. But asserts in general are /not/ free, especially in embedded systems. There is a big question as to where the assert errors should go, and what the software should do when an assert is triggered. In an embedded system, asserts should only be enabled during testing and debugging - if you need run-time checks on finished software, these should be at a higher level. One thing that can be very useful, and is free, is static assertions that are evaluated entirely at compile time. Until a standard static_assert makes its way into the C and C++ standards, it is possible to get a free (though slightly developer-unfriendly) static assertion with macros: #define STATIC_ASSERT_NAME_(line) STATIC_ASSERT_NAME2_(line) #define STATIC_ASSERT_NAME2_(line) assertion_failed_at_line_##line #define static_assert(claim) \ typedef struct { \ char STATIC_ASSERT_NAME_(__LINE__) [(claim) ? 1 : -1]; \ } STATIC_ASSERT_NAME_(__LINE__)
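The macro from the comment above, laid out for readability, with a usage example added; the trick is that a negative array size is a compile-time error, so a false claim fails the build:

    #define STATIC_ASSERT_NAME_(line)  STATIC_ASSERT_NAME2_(line)
    #define STATIC_ASSERT_NAME2_(line) assertion_failed_at_line_##line
    #define static_assert(claim) \
        typedef struct { \
            char STATIC_ASSERT_NAME_(__LINE__) [(claim) ? 1 : -1]; \
        } STATIC_ASSERT_NAME_(__LINE__)

    /* Refuses to compile on a platform where int is not 4 bytes: */
    static_assert(sizeof(int) == 4);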
• For rule #8, your example is bad - integer promotion will ensure that the uint8_t a is promoted to (int) 6, and the int8_t b is promoted to (int) -9. The constant "4" is already an int (whether you like it or not), so the comparison will be done correctly as the programmer expected. The advice is sound, however - be wary of mixing signed and unsigned numbers. Sometimes things won't work as expected, and sometimes you end up with unwanted conversions (mix a uint16_t with an int16_t on an 8-bit processor and you'll not be pleased with the result when the compiler follows the C standard's conversion rules).
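A compilable sketch of the promotions being described, using the values from the comment:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t a = 6;
        int8_t  b = -9;

        /* Both operands are promoted to int before each comparison, so the
           results are what the programmer expected. */
        printf("%d %d\n", a > 4, b > 4);   /* prints "1 0" */
        return 0;
    }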
• The source code page is most certainly *not* a "library of open-source software". It is a collection of commercial software trial versions - at least, that's what most of the items seem to be. While such a collection is certainly a useful resource, there is nothing "open" about it. To be truly useful, a library like that should be very clear on exactly what licenses each item has - that way people can see what they are looking at without reading the small print.
• Good programming style is about readability. "char const" might sound natural to a Frenchman, but English speakers find "const char" to be a more natural and readable phrase. I can see absolutely no benefit from writing your type phrases backwards - it's inconsistent and breaks the flow of the text. It is not unlike silly rules such as writing "if (1 == x)" instead of "if (x == 1)", where logical writing style is sacrificed for mythical error-checking benefits. C has a powerful "typedef" statement - use it to make your types and declarations clear. A very simple rule is the type part of a declaration should not have more than two parts - there is no need to worry about the differences between "const char *p", "char const *p" and "char * const p" because the declaration is never written. Instead, use: typedef char *pchar; const pchar s; or typedef const char cchar; cchar *p; (Note the use of more logical names, rather than the non-obvious "typedef char *ntcs".) Encouraging the correct use of "const" (and "volatile") is a good thing - but please do it by emphasising and encouraging good, clear, *readable* code and good type usage, rather than by inventing your own conventions.
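The typedef suggestion from the comment above, laid out with comments stating what each declaration actually means:

    typedef char *pchar;        /* pchar: pointer to char */
    typedef const char cchar;   /* cchar: char that may not be modified */

    const pchar s = 0;          /* s: const pointer to (modifiable) char */
    cchar *p = "hello";         /* p: (modifiable) pointer to const char */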
• It would be a lot easier to take groups like the RIAA seriously if they did not use such inaccurate and wildly disproportionate terms, which are then propagated by the media - including this article. "Theft" and "stealing" require three things - you must take something to which you do not have any rights, you must do it intentionally and knowingly, and you must deny the rightful owner their rightful use of the stolen item. If you take someone's CD, you are stealing their music. If you download an illegal copy, you are not stealing. It's a civil offence - breaking copyright laws and/or licences or contracts. But it is not theft, and it is not a crime, because the rightful owner has lost nothing. Once you get to the levels of selling illegal copies, or distributing to the level of making a significant impact on the potential sales by the rightful owner, you are committing a crime (though it is still not theft). And "piracy" is something that happens at sea, particularly off some parts of the African coast. The same thing applies to IP in engineering. Illegal use of IP is without doubt a serious problem for many people. It can involve copyright violations, breach of contract, licence breaches, and various other civil offences - but it is not theft, and it is in no way the same as stealing a car. As you said in your article, there are two reasons why people do not steal. One is fear of reprisals (which barely applies in the case of illegal IP usage), and the other is an understanding of the moral issues involved and a desire to "do the right thing". IP misuse can only be realistically tackled by appealing to people's morals - confusing, inaccurate and downright dishonest media terms, adverts, and propaganda cannot help. The first step to dealing with any problem is to properly identify the problem. Until IP rights owners, the various pressure groups, and the media learn that, they will never make any progress.
• An alternative way to handle USB upgrades is to use an external USB interface chip and connect to the microcontroller's serial programming interface (many microcontrollers have a suitable interface). For example, the FTDI2232C USB to serial device has two UARTs, one of which can be used for fast SPI-style communication. If you have a microcontroller with a UART and a serial programming connection, you can add this device to your design and get USB communication that appears as a UART for both the microcontroller and the PC side (no need for new USB drivers on the PC, or any USB-specific knowledge and programming on either side). The second serial port gives you a "back door" for updating the firmware. A big advantage of this is that you don't need any bootloader on the microcontroller, which can save time during production. In fact, if you are using a microprocessor rather than a microcontroller, and are always connected to a PC, then you don't need any flash on the board at all - use the USB device to download the program to ram and run it there.
• These are all good questions to consider before choosing embedded Linux as your OS. But it's worth pointing out that they are good questions for *any* OS, not just Linux. In particular, people often think that you have to consider licensing and legal issues as a special topic for Linux. It's not a Linux issue, or a GPL issue - you have the same sort of issues with any software you use in your system. With the GPL, the legal issues are out in the open, and are in terms that non-lawyers can understand - with commercial licenses, it is normally much harder to figure out what rights and responsibilities you have. I'm not sure I approve of you publicly generalizing "the rest of us can often ignore the legal issues". It's certainly true that most of us can ignore some of the legal issues, such as the example threat of being sued for unauthorized code in the kernel (assuming it is not your fault!). But other legal issues, such as the requirements of the GPL, should most definitely *not* be ignored. You wouldn't expect developers to ignore the legal and licensing requirements of WinCE or QNX - why imply that they can ignore those of Linux and the GPL?
• While I agree with much of your article, I think you are wrong about optimisation. It is certainly true that code is often easier to debug with less optimisation (don't turn it off entirely, as the generated assembly is often unintelligibly poor). But if your code works with compiler optimisations disabled, and fails when optimisations are enabled, it is almost certainly an error in the code - not a problem with the compiler. I have used compilers in the past that have bugs in their optimisers, but it's rare for a decent compiler - the chances are much higher that it is a user error. In listing 4, the problem is not an "optimizer error", and the solution is most certainly *not* to turn off optimisation. The correct solution is to learn to use the "volatile" keyword - if "data_port" is correctly declared as "extern volatile char *data_port", then the optimiser will generate code that works as the programmer intended. I also wonder a little about your programming style - "Old style" C function declarations have not seen much use in new code for fifteen years or so. Your suggestions for better C programming are mostly good advice, but I disagree with the tired old "avoid global variables" mantra. Many people seem to think that it is "better" to hide global data, and refer to it by pointers or by set-and-get functions - leading to programs with exactly the same abuses of global data as direct access, but with harder-to-read code and bigger and slower object code. The problem is when the programmer does not properly control access to shared global resources, and it is in no way limited to global variables (think of a program that updates a screen during its main loop, and has an interrupt routine that prints an error message). I'd also suggest that people take advantage of the warnings and checking that their compilers provide. Some compilers have very good checking, and can even enforce style rules. Most modern compilers would catch most of your "ashitical" bugs if you ask them to.
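A sketch of the volatile fix being described; data_port comes from the comment, while the polling loop around it is invented for illustration:

    /* Declaring the pointed-to location volatile tells the optimiser that
       every access matters, so reads and writes are neither cached nor removed. */
    extern volatile char *data_port;

    void wait_until_idle(void)
    {
        while (*data_port != 0) {
            /* each pass really re-reads the hardware register */
        }
    }
|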
Getting Down To Basics: The Definition Of Sleep Terror Disorder
August 16, 2014 by
Filed under Sleep Terror Disorder
Have you ever heard of Sleep Terror Disorder? Most people haven’t. Unless you’ve been around someone who suffers from it, you probably haven’t ever been exposed to it, and it just sounds like a horrible thing, using the word “terror” like that.
So what is “Sleep Terror Disorder?” Is there a definition the experts agree on? How can you avoid it – is it contagious? Can you catch it from your neighbor? Will your children bring it home from school and the whole family get it? Will it ruin your plans for the weekend?
Sleep Terror Disorder – as awful as it sounds – isn’t something to worry about and be afraid of. You’re not going to catch it from anyone else, or be a silent carrier, either. And it isn’t even all that horrible, usually, except it will interrupt your sleep on the nights when it occurs. What’s a definition of Sleep Terror Disorder? Let’s take a look.
Finding A Definition
If you’re looking for a definition of Sleep Terror Disorder, you can check with your pediatrician or in any comprehensive parenting book or medical dictionary. Though it only affects three percent of all children, the condition is distressing enough that many parenting books at least mention it in passing, if not giving a thorough description and suggestions for it.
What’s In A Name?
Sleep Terror Disorder affects primarily children. It is a condition that results in “night terrors” or “pavor nocturnus.” It occurs during stage 3 or 4 of non-rapid eye movement (NREM) sleep. It is a sleep disorder that is recognizable by extreme terror and a temporary inability to wake up.
The person – usually a child between two years and six or eight years, seldom an adolescent or adult – wakes up (sort of) with a panicky scream or maybe a gasp or moan. They have anxiety, confusion, unresponsiveness, odd motor movements, disorientation, and agitation. They cannot usually be completely woken up, or even comforted, and usually, after a short while of screaming or crying (often up to ten or fifteen minutes), they will fall back into a deeper sleep and wake in the morning with no memory of the episode at all.
Another bit to the definition of sleep terror disorder is that night terrors usually occur one half hour to three and one half hours after the child falls asleep.
The definition of Sleep Terror Disorder is clear – while it is a disturbing situation to have a child who suffers from Sleep Terror Disorder, it is hardly an emergency. You won’t catch it, and you won’t die from it. Neither will your child. Just wait it out, and in a few weeks or so, the symptoms will be gone, never to return.
Why Some Adults Have Sleep Terror Disorder
August 16, 2014 by
Filed under Sleep Terror Disorder
You may be familiar with sleep terror disorder in children, but did you know that adults can also have sleep terror disorder? It’s usually got a very different cause and treatment from the childhood kind, but the symptoms are similar. Let’s take a look.
Symptoms Of Sleep Terror Disorder – Whatever The Age
Whether a child or adult, a person with sleep terror disorder has symptoms that are distressing to anyone seeing them. They will usually awake in the night – generally within a few hours of falling asleep – with a feeling of sheer terror. They are waking abruptly from stage 3 or 4 of the non-rapid eye movement sleep cycle, and it would seem to the onlooker that they are stuck between sleep and waking. When they wake, they’ll usually scream, or gasp, or moan, and they have a very hard time waking up fully. It is much more effective to gently help the person fall back into a deep sleep, which they usually do within fifteen minutes. With a child, this role is usually performed by a parent. For an adult, if their spouse or roommate can help them back to sleep, it is ideal.
Other symptoms are physical ones that are to be expected when the person is feeling terror. They will tend to be sweating, with large pupils. Their pulse will usually be racing, and they are likely to be breathing very fast and have a look of fear or panic on their face. They can also look very confused. Reassurance by a person near them can help them relax and fall back into a deep sleep more easily.
Adults And Sleep Terror Disorder
Sleep Terror Disorder is usually a children’s disease. Usually only children between two and eight get it, though occasionally a bit older. When adults have sleep terror disorder, look for other causes. There are many avenues to check and methods to try to alleviate the symptoms, since (unlike children) they are unlikely to get better within a few weeks’ time.
Things for adults with sleep terror disorder to check include: getting a proper diet and enough sleep, and managing stressful events in life. Sometimes adults with sleep terror disorder have additional triggering factors, like trauma-based situations (post-traumatic stress disorder, for example) and genetic or chronic factors. If this is the case, the adult with sleep terror disorder should be in therapy. Psychotherapy and antidepressant medicine can often help immensely.
The adult with sleep terror disorder should also be checked for other physical factors, as there is some evidence that adults with hypoglycemia can have night terrors, as well as other symptoms.
Is There A Treatment For Sleep Terror Disorder?
August 16, 2014 by
Filed under Sleep Terror Disorder
If your child suffers from sleep terror disorder, you want a treatment to deal with the scary episodes the sleep terror disorder causes. Is there such a thing? What can help?
Who Gets Sleep Terror Disorder?
Sleep Terror Disorder is most common among children from ages two to six. However, the episodes can actually happen at any age. Adults who have sleep terror disorder are in the category of “unusual.” About three percent of children get them. They are most apt to happen during the first few hours of sleep at night, and they can happen again any night over the course of a few weeks. Fortunately, after that time, they generally disappear entirely, and never recur. After age ten, they are unlikely to occur at all.
What About A Treatment?
Sleep Terror Disorder is different from nightmares. It is easy to wake a child from a nightmare, and after they forget about it, they can go back to sleep. Sleep terror disorder treatment varies because with sleep terror disorder, there is no nightmare to wake from or forget. In fact, due to the complexity of what is going on in the brain at the time of the episode, no one should (or, usually, even can) wake someone from an episode. They usually are out of touch with reality, and unresponsive to outside influence. Instead, during a sleep terror episode the best treatment is to hold the child or comfort them in another way, while reassuring the child that you are there with them. Rocking can also help. After a few minutes (usually – but it can be longer) the child will fall back into a deeper sleep and be through the sleep terror disorder episode. Treatment, for the moment, was successful, and Mom can go back to sleep.
Long Term Treatment Options
For almost all children, these episodes subside as the child ages. There is usually no long-term medical treatment needed for Sleep Terror Disorder. If you see a doctor, they will usually suggest that you help the child get more sleep, and lessen the stress that the child is subjected to.
Another option is for the parents to figure out which time period the episodes are most likely to occur and wake the child about 15 minutes prior to that time. After keeping the child fully awake for 4 or 5 minutes, the child can go back to sleep. This usually helps with persistent cases, and within a week can usually be discontinued.
In very severe cases, there are drugs that can be prescribed, but these are usually reserved for adults with ongoing conditions. Also, psychotherapy can be beneficial. |
Judy Kynaston – National Project Manager, KidsMatter, Early Childhood Australia
One way to understand mental health in early childhood is looking at risk and protective factors.
Risk factors for children’s mental health can increase the chance of mental health difficulties developing. These might be things such as poor physical health, family conflict or separation, being affected by a natural disaster, experiencing trauma or abuse, or lacking friends or supportive relationships with adults.
Protective factors for children’s mental health decrease the likelihood of experiencing mental health difficulties. Protective factors are things such as good physical health, a stable and warm home environment, a supportive family and early childhood service, good social and emotional skills, and having support from a wide circle of family, friends and community members.
The KidsMatter framework supports early childhood services to strengthen protective factors in the early years to improve children’s mental health and wellbeing.
Professor Helen Milroy - Consultant Child and Adolescent Psychiatrist
Some of the most important factors though for good mental health and good social and emotional wellbeing are relationships. And there’s an enormous amount of evidence now that looks at attachment relationships, particularly in the early years, but also how those attachment relationships then build into other sorts of secure relationships further on in life.
Dr Nicole Milburn – Clinical Psychologist and Infant Mental Health Consultant
Children’s mental health is supported by stability. Stability is really important for small children in particular, because if you think about a toddler who’s just starting to walk around and explore the world, there are so many new things that that toddler will see in a day. And so if you can create some stability and predictability for the child then they don’t have to use their emotional energy wondering about what the newness is.
Dr Nick Kowalenko - Consultant Infant, Child and Adolescent Psychiatrist
So kids need a very strong sense of security, and in the context of that, they can usually manage a surprising array of stresses, and they get that security based, you know, in their intimate relationships with their mum or their dad or other significant adults who are really a critical part of their lives. That can include extended family members, it can include early childhood educators, and that’s the kind of spring, the source, the kind of wellspring from which kids can manage to negotiate much of the risks and difficulties that they face.
Professor Helen Milroy - Consultant Child and Adolescent Psychiatrist
Given that attachment relationships are so important, then clearly what disturbs attachment relationships is also part of the risk factor. So things like abuse, separation, grief and loss, any of those adverse sorts of experiences that kids can have, can cause significant problems in mental health development, and development in general of course. Other things that can also cause problems are when there’s also physical health problems, and other things that interfere with a child’s general development.
Dr Sarah Mares – Consultant and Infant, Child and Family Psychiatrist
I guess children who have a number of risk factors, and particularly if those are either severe or sustained over time, are much more at risk of poor developmental outcomes than those children who might have exposure to one or two risks, which are either short or intermittent, but there’s not that sense of cumulative risk. So you’re always hoping. So all children have some risk factors, because that’s just what happens in life, but what you’re hoping is that there’s enough protective factors to balance out the impact of that risk, and to give children another kind of experience to draw on as they’re growing up.
Professor Helen Milroy - Consultant Child and Adolescent Psychiatrist
So if we then consider the interplay between risk and protective factors, I think what I tend to see, certainly working clinically, is it’s not just one thing that’s happened, it’s this accumulation or cumulative stress that children experience, which actually then leads on to major problems for the child. On the flipside, if you only have one of those things, then sometimes the protective factors are enough to safeguard the child. And there’s plenty of evidence now that something like a safe secure attachment relationship will ameliorate the effects of poverty on a child. So disadvantage per se is not necessarily going to cause you a mental health problem. But disadvantage with a whole pile of other sorts of negative factors may well contribute to a significant problem.
Dr Sophie Havighurst – Clinical Child Psychologist
So risk and protective factors are very interesting things, because you can never predict what an outcome is going to be for a child. Because you may think “Oh this child has a number of risk factors. They have a very, very reactive personality style, temperament style, and they have a family environment that’s not seeming to be nurturing or helping them to learn the right sort of skills there”. But you can have at the same time some really important protective factors that are going on which might be a setting, an early childhood setting that’s really able to hold and support a child.
If parents are really working, and carers really work in with the early childhood carers and workers, then a partnership there can really foster the child’s needs. And then the early childhood workers will actually see the child and be able to recognise and support those individual needs in the early childhood environment. That will actually continue to foster good emotional development.
Dr Nicole Milburn – Clinical Psychologist and Infant Mental Health Consultant
Having a strong and supportive relationship with a number of adults helps a child manage the world, by giving them a sense that there is a help when they need it, and therefore they will be able to trust that their needs will be met and their problems can be managed.
Dr Nick Kowalenko - Consultant Infant, Child and Adolescent Psychiatrist
One of the things about early childhood is that period of enormous adaptation by kids but if there are too many stresses then they can get overwhelmed. It’s a bit like adults in one sense. So one of the things that certainly this program tries to do is look at the ways in which we can boost protective factors, promote social health and emotional wellbeing so that kids in a sense are a bit more resistant to the kind of stresses or the risk factors that they might experience in the course of their lives. |
Payment Cards
“Payment card” covers a range of different cards that can be presented by a cardholder to make a payment. There are four types of payment card: credit card, debit card, ATM card and prepaid card.
Credit Card is a card entitling its holder to buy goods and services based on the holder's promise to pay for these goods and services. The issuer of the card grants a line of credit that allows the holder (consumer) to make purchases or obtain a cash advance up to the approved credit limit. Several commercial banks provide credit cards to their customers in conjunction with Visa and Mastercard.
Debit Card is a card which provides an alternative payment method to cash when making purchases. Functionally, it can be called an electronic check, as the funds are withdrawn directly from the cardholder’s bank account, within the remaining balance. It can also allow cardholders to withdraw cash from their deposit account through an ATM, essentially acting as an ATM card.
ATM card is a card that can be used at ATMs for transactions like account balance inquiries, cash withdrawals, and so on. It is similar to the debit card since the card is directly connected to a bank account; however, it cannot be used for purchases.
Prepaid Card means a card where money is loaded in advance onto a virtual account related to that card, and that amount of money can be spent at any participating store. In some cases, the card is designed exclusively for use on the Internet, and so there is no physical card. Typical applications of a prepaid card include phone cards, gift cards, and travel cards.
Payment card transactions are increasing in both currencies, but USD-denominated transactions account for more than 70% of volume. Transactions like cash deposits and POS payments are only available in USD. Although the number of credit and debit cards which can support POS transactions is increasing, payment cards are mainly used for cash withdrawals at ATMs rather than purchases at POS.
In Cambodia, ATMs and POS terminals allow cardholders to access services like deposit balance inquiries, cash withdrawals, and purchases. Currently, there is no shared national switch for card-based electronic payments. Instead, there is an interbank ATM switch called ‘Easy Cash’. Easy Cash was established in 2008 by Visa along with some commercial banks. It is not an independent institution but more like a consortium. As of June 2014, Easy Cash connects 285 ATMs of 7 member banks. The member banks are:
- Cathay United Bank
- Canadia Bank
- Union Commercial Bank
- Cambodian Public Bank
- Foreign Trade Bank of Cambodia
- Cambodia Asia Bank
Easy Cash provides interbank balance inquiry and cash withdrawal services, denominated only in USD. Other services like cash deposits, funds transfers and bill payments are not available. Customers who make a balance inquiry or a cash withdrawal at another bank’s ATM have to pay USD 0.3 or USD 0.5 for each transaction. On the other hand, there is no domestic interbank POS network. Card payment transactions in Cambodia are processed within the same issuer’s network or via international networks like Visa. |
Video Retrieved From GoPro Balloon That Soared to Nearly 100,000 Feet
In June 2013, members of the Grand Canyon Stratospheric Balloon Team launched a balloon with a camera into the stratosphere, where it burst. It was found by a hiker two years later. Credit: Grand Canyon Stratospheric Balloon Team
Strapped to a weather balloon bound for space, this GoPro camera soared almost 100,000 feet, into the stratosphere, to capture high-resolution footage of the Grand Canyon in 2013. An hour and a half into its flight the balloon burst and the camera plummeted back to earth.
It was two years before anyone would find the camera, which was lost in the Arizona desert. Fortuitously, the breathtaking footage was retrieved after an AT&T agent hiking in the desert stumbled upon the box about 50 miles from its launch point earlier this year and returned it to its owners. Last week the researchers shared the video of the GoPro’s herculean adventure from the fringes of space.
The footage shows a blend of browns and tans with a few streets and rivers that become more faint as the balloon floats higher. As the craft nears its highest point, the black of space enters the screen. An hour and 27 minutes into the voyage, when the balloon reaches an approximate altitude of 98,660 feet, it pops into what looks like white confetti against the darkness, and sends the camera plunging.
The team that launched the craft consisted of students from Stanford University. Their original goal was to obtain video of the Grand Canyon that they could modify with a special camera technique that one of the team members had developed, called fluid lensing. |
How High Is A Basketball Hoop? A Brief Look At The History Of The Rim
According to official rules, a basketball hoop is 10-feet (305cm) above the ground. Pretty easy to remember, huh?
The rim measures exactly 10-foot (305cm) off the ground
And what about in women’s basketball? Although women are on average shorter than men, the hoop in women’s basketball is also 10-feet high.
For much younger players, however, a regular rim is simply too high. Playing basketball on a 10-foot rim would be incredibly challenging, and therefore youth basketball has the following hoop height guidelines:
• Ages 5-7: 6-foot rim
• Ages 8-10: 8-foot rim
• Age 11: 9-foot rim
• Ages 12+: 10-foot rim
That’s surely a big hassle for parents, right? Luckily, most basketball hoops are adjustable. Phew!
Why 10-feet?
Many people wrongly assume the top of the basketball hoop stands 10 feet above the ground simply because it’s a nice number. But the actual reason lies in the history of basketball.
There was once a Canadian gym instructor at a YMCA school in Springfield, Massachusetts, called James Naismith. Naismith wanted a game that could be played indoors on rainy days and set about creating one that would be both exciting and challenging. By taking inspiration from existing sports like soccer and football, he created what became basketball.
At the time, basketball hoops obviously didn’t exist. Instead, Naismith hung peach baskets on the railing of a running track that circled the outer perimeter of the gym.
And guess how high that railing was? 10-feet!
(In fact, YMCA gyms were actually built to the same specifications, so they all had these 10-foot high railings available!)
Credit: Black Fives Foundation
Could A Higher Rim Benefit The Game?
It’s no secret that being taller is a big advantage in basketball. In fact, professional basketball players have been getting taller and taller.
Back in 1946, when the NBA first started, players were on average 6’2 to 6’3 tall. Nowadays, however, NBA players are on average 6’7 tall.
That has led to some people arguing for the regulation height of the basketball hoop to be raised. For instance, a higher rim would make fundamentals more important, as making shots closer to the rim would be more challenging and less dependent on the height and physicality of players.
Everyone Loves The Slam Dunk
The ability to get above the rim and dunk a basketball might seem unfair to those who can’t reach the rim. But let’s be honest here, I doubt many basketball fans would want to see fewer dunks during games.
Personally, I feel a hoop being higher than 10-foot would be a big slap in the face for those of us who weren’t born giants. It’s high enough as it is!
And besides, it appears there would be no end to raising the height. There will always be a freak who can reach even higher. Let’s not forget about Dwight Howard comfortably dunking on a 12-foot rim during the 2009 NBA Slam Dunk Contest:
|
Tech Jobs Underpay Black Women and Minorities
Unless you’re a white man working in the tech industry, you can forget getting top pay. Hired recently published a study indicating that two out of three women working in the technology industry are paid less than men. That’s an improvement over last year, when 69 percent of women were paid less, compared to 63 percent this year.
But black women appear to be the hardest hit by pay disparities. According to the study, African-American women make only 79 cents for every dollar a white man makes. Black men make only 88 cents for every dollar paid to their white counterparts. This pay gap can cost African-American tech workers as much as $10,000 a year in salary.
Because of the intense interest in increasing diversity in the tech industry, blacks are 50 percent more likely to get hired, but they are likely to be offered less pay. The study revealed Latino candidates are 26 percent less likely to get hired than a white candidate, and Asians are 45 percent less likely. However, they are still paid more than blacks but less than white hires. For example, Latinos received only $5,000 less than white hires, while Asians averaged $2,000 less than whites.
Courtesy USAToday
Hired’s study revealed an interesting situation. The average white software engineer in San Francisco and New York asked for $126,000 in annual salary and usually received an average offer of $125,000. But blacks seem to be asking for less salary and getting it. Blacks in the San Francisco Bay Area/Silicon Valley asked for $115,000 and in New York $113,000.
Why are black technology workers asking for less money? According to the report’s author, Jessica Kirkpatrick, blacks may be asking for less because people base their salary expectations on what they are currently earning. According to Kirkpatrick, blacks’ lower expectations are a reflection of past salary history and of being denied raises and promotions.
This pay disparity is not going unnoticed. Google is currently under scrutiny because of accusations that it is underpaying women. Google recently announced on Equal Pay Day that it had closed the gender pay gap globally. But testimony from a Department of Labor official in federal court stated that Google systematically discriminated against women. The official went on to say that Google’s discriminatory practices were “extreme” even for the tech industry. Google has been under pressure from the federal government to produce pay data to ensure the company is in compliance with anti-discrimination laws. Google has failed to produce the information so far and called the government request a “fishing expedition.”
|
Thursday, February 14, 2008
What is a Logline?
Definition of what a Logline is:
- A 25 words or less description of a screenplay
- one sentence description of a screen or TV play
- Between one to three lines describing the story, focusing on the concept and not giving away the ending
- the storyline of a script described in present tense in essentially 25 words or less
- a one line story plot summary
- Very brief (one or two sentence) synopsis of a screenplay
Examples of Loglines:
E.T. the Extra-Terrestrial:
An alienated boy bonds with an extraterrestrial child who's been stranded on earth; the boy defies the adults to help the alien contact his mothership so he can go home.
Rain Man:
A self-centered hotshot returns home for his father's funeral and learns the family inheritance goes to an autistic brother he never knew he had. The hotshot kidnaps this older brother and drives him cross-country hoping to gain his confidence and get control of the family money. The journey reveals an unusual dimension to the brother's autism that sparks their relationship and unlocks a dramatic childhood secret that changes everything.
My Big Fat Greek Wedding:
Toula's family has exactly three traditional values - "Marry a Greek boy, have Greek babies, and feed everyone." When she falls in love with a sweet but WASPy guy, Toula struggles to get her family to accept her fiancé, while she comes to terms with her own heritage.
Tips on how to construct a great logline for your script:
Reveal the star's SITUATION
Reveal the important COMPLICATIONS
Describe the ACTION the star takes
Describe the star's CRISIS decision
Hint at the CLIMAX - the danger, the 'showdown'
Hint at the star's potential TRANSFORMATION
Identify SIZZLE: sex, greed, humor, danger, thrills, satisfaction
Identify GENRE
Keep it to three sentences
Use present tense
1 comment:
Gary said...
I am an aspiring screenplay writer. If anybody on here can give me some advice on how to write a good logline, please email me; I would really appreciate it |
Error Correcting Code Software
Interleaving allows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. TCP provides a checksum for protecting the payload and addressing information from the TCP and IP headers.
Eventually, it will be overlaid by new data and, assuming the errors were transient, the incorrect bits will "go away." Any error that recurs at the same place in storage after the data has been rewritten points to a permanent hardware fault rather than a transient upset. In this code the only valid words are 000 and 111. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output.
If there is no error in the program and data code, then the corresponding check equation will be satisfied. ECC also reduces the number of crashes, which are particularly unacceptable in multi-user server applications and maximum-availability systems.
The general rule is that a code with minimum Hamming distance d can detect up to d-1 bit errors and correct up to (d-1)/2 bit errors (rounded down); geometrically, every valid code word sits at the center of a "sphere" of invalid words. Detection-only codes cannot repair errors once they are found at the destination; the data must be transmitted again. A software CRC implementation can be too slow for hard real-time work, but since processing power is relatively fast and cheap, software coding is feasible for many applications.
Linear block codes are so named because each code word in the set is a linear combination of a set of generator code words. The most common error-correcting code in memory systems, a single-error-correction, double-error-detection (SECDED) Hamming code, allows a single-bit error to be corrected and, in the usual configuration with an extra parity bit, double-bit errors to be detected. In general, the reconstructed data is whatever is deemed the "most likely" original data.
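The text names SECDED Hamming codes without showing one, so here is a minimal sketch using the classic Hamming(7,4) layout plus one overall parity bit; the function names are illustrative, not from any real library:

```python
def hamming_encode(d):
    """Encode 4 data bits as a Hamming(7,4) word plus an overall even-parity bit (SECDED)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                      # covers bit positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                      # covers bit positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4                      # covers bit positions 4, 5, 6, 7
    word = [p1, p2, d1, p4, d2, d3, d4]    # positions 1..7
    return word + [sum(word) % 2]          # position 8: overall parity

def hamming_decode(w):
    """Return (data, status); status is 'ok', 'corrected' or 'double error detected'."""
    word, overall = list(w[:7]), w[7]
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s4 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s1 + 2 * s2 + 4 * s4        # 0 = clean, otherwise the failing position
    extra_parity_fails = (sum(word) % 2) != overall
    if syndrome and extra_parity_fails:    # exactly one error inside the 7-bit word
        word[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome:                         # two errors: detectable but not correctable
        status = "double error detected"
    elif extra_parity_fails:               # the extra parity bit itself was flipped
        status = "corrected"
    else:
        status = "ok"
    return [word[2], word[4], word[5], word[6]], status

codeword = hamming_encode([1, 0, 1, 1])
codeword[4] ^= 1                           # corrupt one bit
print(hamming_decode(codeword))            # ([1, 0, 1, 1], 'corrected')
```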
Hamming and Reed-Solomon (RS) codes are two special cases of this family of linear block codes. The stakes can be high: during its first 2.5 years of flight, one spacecraft reported a nearly constant single-bit error rate of about 280 errors per day.
In hybrid schemes, a receiver decodes a message using the parity information and requests retransmission via ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check). Such codes are not always adequate for on-chip DRAM applications. Applications that use ARQ must have a return channel; applications with no return channel cannot use ARQ.
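Since ARQ is a protocol loop rather than a code, a toy stop-and-wait sketch makes the return-channel requirement concrete; the lossy channel here is simulated with random drops, and every name is hypothetical:

```python
import random

def lossy_channel(frame, loss_probability=0.3):
    """Simulate a channel that silently drops frames; returns the frame or None."""
    return None if random.random() < loss_probability else frame

def stop_and_wait_send(frames, max_retries=10):
    """Retransmit each frame until the (simulated) receiver acknowledges it."""
    delivered = []
    for seq, frame in enumerate(frames):
        for _attempt in range(max_retries):
            received = lossy_channel((seq, frame))
            if received is not None:        # receiver got the frame and ACKs it
                delivered.append(received[1])
                break
        else:
            raise RuntimeError(f"frame {seq} lost after {max_retries} attempts")
    return delivered

print(stop_and_wait_send(["error", "control", "demo"]))
```

Without the acknowledgements flowing back, the sender has no way to know which frames to resend, which is exactly why ARQ needs a return channel.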
CRC codes on their own are appropriate when the raw bit error rate of the channel is extremely low and the data is not time-critical. CRCs have very high code rates, usually above 0.95. The key to a parity-checking code is that every valid code word is surrounded by invalid codes at one unit's distance, which is why one-bit errors are always detected.
In some processor designs, if an error is detected, data is recovered from the ECC-protected level 2 cache. For applications whose computed results must be ultra-reliable, such schemes are very effective at the cost of an affordable amount of redundancy. The basic decoding algorithm is simply to assign any incorrect word to the closest correct code word.
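A small sketch of closest-code-word decoding over a toy codebook, together with the minimum-distance rule mentioned above; the codebook itself is invented purely for illustration:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codebook):
    """Smallest pairwise Hamming distance over all valid code words."""
    return min(hamming_distance(a, b) for a, b in combinations(codebook, 2))

def nearest_codeword(received, codebook):
    """Decode by choosing the valid code word closest to what was received."""
    return min(codebook, key=lambda c: hamming_distance(received, c))

codebook = ["00000", "01011", "10101", "11110"]
print(minimum_distance(codebook))            # 3: detects 2-bit errors, corrects 1-bit errors
print(nearest_codeword("10100", codebook))   # '10101': the single flipped bit is corrected
```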
Some codes are in particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors; the cyclic redundancy check, for example, is very good at detecting burst errors.
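As a sketch of a software CRC (here CRC-8 with the common polynomial 0x07, chosen only as an example), showing why a burst of flipped bits is caught:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over the message (no reflection, zero initial value)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

message = b"error control"
checksum = crc8(message)                   # sender appends this to the message
assert crc8(message) == checksum           # receiver recomputes and compares
assert crc8(b"errox control") != checksum  # a corrupted copy fails the check
```

A degree-8 CRC whose polynomial has a non-zero constant term detects every burst of eight bits or fewer, which is why the corrupted copy above is always flagged.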
The Viterbi algorithm is a maximum-likelihood decoder, meaning that the output code word from decoding a transmission is always the one with the highest probability of being the correct word.
The advantage of repetition codes is that they are extremely simple; they are in fact used in some transmissions of numbers stations. In memory, as long as a single event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected transparently. In practice, however, noise tends to come in bursts that affect a group of bits. ECC memory usually costs more than non-ECC memory because of the additional hardware required to produce ECC modules and their lower production volumes.
Software-based alternatives also exist, such as SoftECC (a system for software memory integrity checking) and tunable, software-based DRAM error detection and correction libraries aimed at HPC and at detecting silent data corruption in large-scale systems. DRAM memory may also provide increased protection against soft errors by relying on hardware error-correcting codes.
A random-error-correcting code based on minimum-distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack. Most motherboards and processors for less critical applications are not designed to support ECC, so their prices can be kept lower. In ECC memory, when a unit of data is requested for reading, a code for the stored, about-to-be-read word is calculated again using the original algorithm, and the newly generated code is compared with the code generated when the word was stored.
As of 2009, the most common error-correction codes in memory use Hamming or Hsiao codes that provide single-bit error correction and double-bit error detection (SEC-DED). There is also a strict upper limit on how much information any code can push through a noisy channel, and it is expressed in terms of the channel capacity.
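That strict upper limit is Shannon's channel capacity. As a worked illustration, assuming the simplest model (a binary symmetric channel, which is only one of many possible channel models):

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity, in information bits per transmitted bit, of a binary symmetric channel."""
    if p in (0.0, 1.0):
        return 1.0
    binary_entropy = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1.0 - binary_entropy

# With a 1% bit-flip probability, no code can carry more than about 0.92 information
# bits per coded bit, no matter how cleverly it is designed.
print(round(bsc_capacity(0.01), 3))   # 0.919
```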
When data is transmitted using an odd-parity scheme, any bit string with even parity is rejected because it is not a valid code word. Short-duration noise events such as electrical fast transients (EFT), electrostatic discharge (ESD) and electromagnetic pulses (EMP) are typical sources of such errors. A scientific application will often compute erroneous results if it reads bad data from a corrupted memory location.
When comparing a proposed software scheme with conventional ECC, it helps that the various block codes, such as BCH, Hamming and RS codes, have nice mathematical structures. |
Sweden to recognize Palestine as a state
Sweden is set to become the first EU country to recognize Palestine as a state, according to Prime Minister Stefan Lofven.
Sweden's newly elected center-left government is set to recognize Palestine as a state, Prime Minister Stefan Lofven announced in parliament on Friday, a move which will make Sweden the first country to do so while a member of the European Union.
''The conflict between Israel and Palestine can only be solved with a two-state solution, negotiated in accordance with international law,'' Lofven said.
''A two-state solution requires mutual recognition and a will to peaceful coexistence. Sweden will therefore recognize the state of Palestine,'' he added.
The decision comes less than a month after the Social Democrats won the Swedish parliamentary elections of September 14 in alliance with the Greens and the Left Party. However, with only a minority of seats in parliament, the center-left government could be the weakest in decades.
Countries such as Hungary, Poland and Slovakia also recognize Palestine as a state; however they did so before joining the European Union.
Although the UN General Assembly approved the de facto recognition of the sovereign state of Palestine in 2012, the European Union has not given official recognition.
On Thursday, the EU condemned Israeli plans to build 2,610 new homes in the Givat Hamatos settlement in southeast Jerusalem.
''This represents a further highly detrimental step that undermines prospects for a two-state solution and calls into question Israel’s commitment to a peaceful negotiated settlement with the Palestinians,'' the European Union External Action said in a statement released Thursday.
|
“Los que llegaron”, Spanish language videos about Mexico’s immigrant groups
Books and resources, Teaching ideas
Oct 05, 2013
Once TV México (“Eleven TV Mexico”) is an educational TV network owned by the National Polytechnic Institute (Instituto Politecnico Nacional) in Mexico City. Over the years, Once TV México programs have won numerous national and international awards.
Many of its programs are available as webcasts or on YouTube. Once TV México has made hundreds of programs that provide valuable resources for Spanish-language geography classes, for students of Spanish, or for anyone wanting to improve their Spanish-language skills. For example, their long-running program “Aquí nos tocó vivir” (“Here We Live”) has explored all manner of places throughout Mexico over the past 35 years, and has received UNESCO recognition for its excellence.
Of particular interest to us is “Los que llegaron” (“Those Who Arrived”), a series of programs looking at different immigrant groups in Mexico. Each 20-25 minute program focuses on a different group and explores the history of their migration to Mexico, their adaptation to Mexican life, their integration into society, the areas where they chose to settle, and the links between their home countries and Mexico.
Mexico has a long history of welcoming people from other countries, including political refugees. Each of these programs offers some fascinating insights into the challenges faced by migrants arriving in Mexico for the first time.
Sister city of Segusino, Italy
For instance, the program about Italian immigration to Mexico (above), explains why Mexico was seeking colonizers in the middle of the 19th century in order to populate and develop rural areas. One group of Italians settled in Veracruz (in present-day Gutiérrez Zamora); another group, 3,000 strong, and from the Veneto region in northern Italy, settled in Chipilo, near the city of Puebla. (For anyone not familiar with Chipilo, one of our favorite bloggers, Daniel Hernandez, has penned this short but memorable description of a typical Sunday morning there: Cruising in Chipilo, an Italian village in Mexico).
Italian immigration increased dramatically after the 1914-1918 war. Today, according to the program, there are approximately 13,000 Italian citizens residing in Mexico and an estimated 85,000 Mexicans of Italian descent. Note, though, that most sources quote a much higher figure for the latter category, perhaps as high as 450,000.
[Aside: In chapter 4 of “Mexican National Identity, Memory, Innuendo and Popular Culture”, William H. Beezley looks at the role of itinerant puppet theater in molding Mexico’s national identity. The largest and most famous single troupe of all was the Rosete Aranda troupe, formed by two Italian immigrants in 1850. The troupes went from strength to strength in the next half-century. By 1880, the Rosete Aranda company had 1,300 marionettes and by 1900 a staggering 5,104. Their annual tours around the country helped influence national opinions and attitudes.]
Program list for the “Los que llegaron” series:
• Españoles (Spaniards)
• Alemanes (Germans)
• Húngaros (Hungarians)
• Italianos (Italians)
• Argentinos (Argentines)
• Ingleses (English)
• Japoneses (Japanese)
• Estadounidenses (Americans)
• Coreanos (Koreans)
• Franceses (French)
• Chinos (Chinese)
• Libaneses (Lebanese)
• Rusos y Ucranianos (Russians and Ukrainians)
Related posts:
Where do most Hispanics in the USA live?
Other
Sep 23, 2013
A recent study by Pew Research analyzes the geographical distribution of the over 53 million Hispanics who currently live in the USA. The “Hispanic” or “Latino” population is composed of many different segments. It includes families that have lived in the USA for numerous generations as well as recent immigrants from many countries. Mexicans are by far the largest Hispanic origin group. There are 34.7 million Mexicans in the USA accounting for 64% of all Hispanics. A future post will look at the geographic distribution of Mexicans in the USA. Several previous posts, including “Recent trends for Mexicans living in the USA”, have investigated the socio-economic characteristics of Mexicans living in the USA.
Though Hispanics are spreading throughout the country, they still tend to be concentrated in the west, particularly states that border Mexico [see map]. Almost half (46%) of Hispanics live in California (14.4 million) or Texas (9.8). Other states with relatively large Hispanic populations include Florida (3.5m), Illinois (2.1m) and Arizona (1.9m). Almost 47% of New Mexico’s population is Hispanic compared to 38% in both California and Texas.
Map of Hispanic population in the USA
Fully 44% of Hispanics live in only 10 metropolitan areas. Almost half (46%) of the Greater Los Angeles population is Hispanic. The Los Angeles–Long Beach metro area has 5.8 million Hispanics and the neighboring Riverside–San Bernardino metro area has another 2.1 million, giving Greater Los Angeles 7.9 million Hispanics, 15% of the USA total. The New York–Northeastern New Jersey metropolitan area is next with 4.3 million Hispanics. Other metro areas with large Hispanic populations include Houston (2.1m), Chicago (2.0m), Dallas (1.8m), Miami (1.6m), San Francisco–San Jose (1.6m), Phoenix (1.2m), San Antonio (1.1m) and San Diego (1.0m).
Over 80% of the Greater Los Angeles Hispanic population is Mexican. Mexicans also dominate the Hispanic populations in Houston (78%), Chicago (79%), Dallas (85%) as well as most other metro areas in the USA. In metro New York, Puerto Ricans are most numerous among Hispanics (28%) followed by Dominicans (21%) and Mexicans (12%). Puerto Ricans are also most numerous in Orlando (51%), Tampa–St Petersburg (34%), Philadelphia (56%), Boston (29%) and Hartford (69%). Cubans dominate the Hispanic population in Miami (55%), Fort Lauderdale (21%) and West Palm Beach (21%). In metro Washington DC, Salvadorians are most numerous among Hispanics (32%).
Roughly one third (36%) of all Hispanics in the USA are foreign-born; the rest were all born in the USA. Miami has the highest proportion of foreign-born Hispanics with 66%. No other metro area with over a million Hispanics has more than 43% foreign-born. On the other hand, only 17% of Hispanics in the San Antonio area are foreign-born with 83% born in the USA.
Source of data:
Related posts:
Cross-border tribe faces a tough future
Other
Sep 16, 2013
In this post, we consider the unfortunate plight of the Tohono O’odham people, whose ancestral lands now lie on either side of the Mexico-USA border.
How did this happen?
Following Mexico’s War of Independence (1810-1821), the rush was on to draw an accurate map of all of Mexico’s territory. Mexico’s boundaries following independence were very different to today. At that time, the major flows of migrants linking the USA to Mexico were from the USA to Mexico, the reverse of the direction of more recent flows, where millions of Mexicans have migrated north.
As this map of Mexico in 1824 shows, Mexico’s territory extended well to the north of its present-day limits.
Map of Mexico, 1824
Map of Mexico, 1824
At the end of the Mexican-American War (1846-1848) the 1848 Treaty of Guadalupe Hidalgo ceded over half of Mexico’s territory to the USA. A few years later, under the 1853 Gadsden Purchase (Treaty of La Mesilla), northern portions of Sonora and Chihuahua (shaded brown on the map below) were transferred to the USA.
Mexico 1853
Source: National Atlas of the United States (public domain)
With minor exceptions since, to take account of changes in the meanders of the Río Bravo (Grande), this established the current border between the two countries.
Impacts on the Tohono O’odham people
One of the immediate impacts of the Gadsden purchase was to split the lands of the Tohono O’odham people into two parts: one in present-day Arizona and the other in the Mexican state of Sonora, divided by the international border. The O’odham who reside in Mexico are often known as Sonoran O’odham.
There are an estimated 25,000 Tohono O’odham living today. Most are in Arizona, but about 1500 live in northern Sonora. In contrast to First Nations (aboriginal) groups living on the USA-Canada border who were allowed dual citizenship, the Tohono O’odham were not granted this right. For decades, this did not really matter, since the two groups of Tohono O’odham kept in regular contact for work, religious ceremonies and festivals, crossing the border when needed without any problem. Stricter border controls introduced in the 1980s, and much tightened since, have greatly reduced the number of Tohono O’odham able to travel freely. This is a particular problem for the Tohono O’odham in Sonora, most of whom were born in Mexico but lack sufficient documentation to acquire a passport.
Tohono O’odham border protest
Since 2001, several attempts have been made in the USA to solve the “one people-two country” problem by granting U.S. citizenship to all registered members of the Tohono O’odham, regardless of their residence. So far, none has succeeded.
The largest community in the Tohono O’odham Nation (the Arizona section of Tohono O’odham lands) is Sells, which functions as the Nation’s capital. The Sonoran O’odham live in nine villages in Mexico, only five of which are officially recognized as O’odham by the Mexican government.
The border between the two areas is relatively unprotected compared to most other parts of the Mexico-USA border.
The Tohono O’odham Nation is often called upon to provide emergency assistance to undocumented workers (and drug traffickers) from south of the border who have underestimated the severe challenges of crossing this section of the harsh Sonora desert. Tribal officials regularly complain about the failure of the U.S. federal government to reimburse their expenses.
ABC News reports (Tohono O’odham Nation’s Harrowing Mexican-Border War) that the border “has made life a daily hell for a tribe of Native Americans” and that drug seizures on the Tohono O’odham Nation’s lands have increased sharply.
The impact of immigrants on U.S. public budgets
Updates to Geo-Mexico
Sep 05, 2013
As the US Congress debates new immigration reform legislation there is considerable confusion concerning the fiscal impact of immigrants. One side argues that immigrants pay relatively little in taxes and absorb costly benefits in terms of public health, education, welfare, etc. Others note that immigrants often pay significant amounts in taxes and get little back in terms of benefits. Obviously, it depends on the immigrant and perhaps on their legal status.
In June 2013, the OECD published “International Migration Outlook,” a study on the budgetary impacts of immigrants to OECD countries (the OECD’s members include Mexico and 29 other mostly rich, mainly European countries). The study compares native-born with foreign-born residents, some of whom may have already become citizens. The study suggests that immigrants may have a slightly positive impact on fiscal budgets. The average for all OECD countries was 0.3% of GDP; the comparable figure for the USA was 0.03%.
Immigrants tend to have lower incomes, pay a bit less in taxes, but receive less in benefits. They tend to be younger and thus receive less in public health benefits. If they have children, they receive considerable education benefits. Obviously these are gross generalizations as some immigrants are highly paid executives and scientists, who pay significant taxes, while others may work as domestics or laborers, paying far less in taxes. Given that many public costs, including defense and debt service, are very hard to allocate to migrants versus native-born, the study suggests that immigration appears to be neither a drain nor a gain on fiscal budgets.
A big issue in the USA is the specific impact of Mexican immigrants on the fiscal budget, particularly the impacts of undocumented immigrants. Many legal immigrants from Mexico are family members joining their relatives. They may or may not be employed and thus may not pay income taxes. On the other hand, virtually all illegal immigrants seek employment. Furthermore, many obtain formal sector jobs by using fake Social Security cards or “Individual Tax Identification Numbers.” Their employers deduct federal and state income tax from their paychecks and forward these funds to government tax agencies.
Undocumented immigrants rarely file tax returns and thus very rarely receive the tax refunds to which they might otherwise be entitled. All immigrants pay considerable amounts in gasoline and sales taxes as well as property taxes, either directly or indirectly as part of their rent. Given that most illegal immigrants are rather young, relatively healthy and without children, they may have only a small impact on public education and health expenses. Their children are often born in the US, are US citizens, and should not be considered immigrants. It appears that undocumented immigrants might be paying more into the public coffers than they receive in benefits. A closer look at the data may provide some answers.
A 2007 study by the US Congressional Budget Office (CBO) entitled “The Impact of Unauthorized Immigrants on the Budgets of State and Local Governments” directly addressed this issue. The study notes that at the Federal level roughly 50% of illegal immigrants pay income or payroll taxes, which include Medicare taxes. But they generally are excluded from such Federal benefits as Social Security pensions, Medicare and Medicaid (other than emergency services), Food Stamps, and Assistance to Needy Families. The data suggest that in general illegal immigrants usually pay more in federal taxes than they receive in benefits. On the other hand, a number of court cases mandate that state and local governments cannot withhold from illegal immigrants certain services such as education, selected health care, or law enforcement. Many illegal immigrant children do not speak English; therefore their education may be more costly.
In assessing the fiscal impact on state and local government budget, the CBO analyzed 29 reports published since 1990. The study noted that undertaking such an analysis is very challenging and involves many big assumptions. Still the CBO analysis concluded that the relatively small amount spent by state and local governments on services for illegal immigrants is not fully offset by the even smaller amount of tax revenues collected from them including federal revenues they may receive for this purpose.
In conclusion, available research suggests that the impact of immigrants on public budgets is not very clear. With respect to all immigrants, there appears to be a slight positive fiscal impact according to a recent OECD study. The older CBO analysis indicates that undocumented immigrants appear to have a positive impact on the federal budget, but a negative fiscal impact on state and local governments. Of course, the impact varies enormously among migrants depending on their incomes, tax brackets, consumption patterns and needs.
Related posts:
Recent trends for Mexicans living in the USA
Updates to Geo-Mexico
Jul 15, 2013
The population of Mexican origin in the USA now totals more than 33.7 million, including 11.2 million born in Mexico and 22.3 million who identify themselves as being of Mexican origin. Mexicans account for 64% of all Hispanics in the USA and 11% of the country’s total population.
The changing profile of Mexicans living in the USA
A Pew Research Hispanic Center analysis of US Census data shows that the portion of the US population that is of Mexican origin is undergoing a gradual transformation. The average age of residents of Mexican origin is becoming younger and average education levels are on the rise. In 1990, only 25% of the Mexican migrants had a high school diploma, compared to 41% today. Even so, among Hispanics, Mexicans have the lowest rate of university education and the highest percentage of people without any health insurance.
Currently, 71% of the Mexicans who live in the USA have lived in the country for more than 10 years, compared to around 50% in 1990. The proportion of migrants who are male fell slightly from 55% in 1990 to 53% in 2010.
The average household income of households with at least one member of Mexican origin was $38,884, compared to a USA-wide average of $50,502. About 49% of families of Mexican origin own their own homes, compared to a 64.6% rate for the USA as a whole.
In terms of jobs, 26.7% of people of Mexican origin living in the USA work in services, 21.1% in sales or office positions, 18% in transportation, 17.8% in construction, and 16.4% in administration, business, science and the arts.
Related posts:
Jun 08, 2013
This 30-minute video (narrated in Spanish with English subtitles) looks at the vexed situation of Mexican workers who have been deported from the USA back into Mexico. About 200 migrants are deported daily; almost all are male. Many of them have lived for several years in the USA prior to deportation, and some have wives and families still living north of the border.
About 45% of all migrants from Mexico to the USA crossed the border between Tijuana and California. Since 1994 (Operation Gatekeeper) crossing the border has been made progressively more difficult. The border is now heavily protected with border guards given access to technology such as night-vision telescopes and a network of seismic monitors (to detect the minor ground movements that signal people walking or running through the desert). As the US economy ran into problems a few years ago, the flow of migrants north slowed down, even as authorities in the US launched more raids against undocumented workers, leading to an increase in the number of workers deported.
In the video, a range of stakeholders are given the chance to explain how they see the problems faced by deportees. A social anthropologist provides some background and academic insights; activists explain their position and how they seek to help deportees; several individual deportees share their experiences and invite us into their “homes”, precarious one-room shacks, some built partially underground, hobbit-like, in “El Bordo”, a section of the canalized channel of the Tijuana River that runs alongside the international border.
The garbage-strewn El Bordo has sometimes housed as many as 4,000 deportees. Mexican authorities are anxious to clean the area up and periodically bulldoze any shacks they find.
These personal stories of workers from interior states such as Puebla are harrowing. Many still seek “the dream” and openly admit they do not want to return to their families as a “defeated person”.
While parts of this video might have benefited from tighter editing, the accounts are thought-provoking and the video is an outstanding resource to use with classes considering the longer-term impacts of international migration.
There seems little doubt that a majority of the “residents” of El Bordo has a serious drug problem, and the video includes interviews about this issue with municipal police, deportees and aid workers, who discuss the problems and suggest some possible solutions, but ultimately, the city and state authorities have some tough decisions to make if they are to resolve this serious, and growing, humanitarian problem.
Related posts:
Mexico’s population: now over 117 million and expected to peak at about 138 million
Updates to Geo-Mexico
Feb 28, 2013
Mexico’s population in January 2013 was 117.4 million: 57.3 million males (48.8%) and 60.1 million females (51.2%), according to a December 10, 2012 report by CONAPO (Consejo Nacional de Población), “Proyecciones de la población de México 2010-2050”. By January 2014 it will have grown by over a million to 118.6 million. However, demographic trends indicate that population growth in Mexico is slowing significantly.
The birth rate is expected to fall from 19.7 births per 1,000 population in 2010 to 14.0 in 2050. As the Mexican population ages, the death rate is projected to increase from 5.6 per 1,000 in 2010 to 9.2 in 2050. Consequently, the annual rate of natural population growth is expected to decline from 1.41% in 2010 to 0.48% in 2050. Extrapolating the trends in the CONAPO projection suggests that death rates will surpass birth rates sometime in the 2070s and natural population change will become negative. Of course, we must also take emigration into account.
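The natural-growth figures quoted above follow directly from the crude birth and death rates; a quick arithmetic check (a sketch only, not part of CONAPO’s methodology):

```python
def natural_increase_pct(birth_rate_per_1000, death_rate_per_1000):
    """Rate of natural population increase, expressed as a percentage of the population."""
    return (birth_rate_per_1000 - death_rate_per_1000) / 10.0

print(round(natural_increase_pct(19.7, 5.6), 2))   # 1.41 (% per year, 2010)
print(round(natural_increase_pct(14.0, 9.2), 2))   # 0.48 (% per year, projected 2050)
```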
According to the CONAPO report net emigration from Mexico was 321,000 in 2012, though some have noted that due to the Great Recession net emigration to the USA is near zero or less [Pew Research Center’s “Net Migration from Mexico Falls to Zero – and Perhaps Less”]. CONAPO expects net emigration to peak at about 689,000 by 2020 and then gradually decline to 590,000 by 2050. Given the current low levels of emigration to the USA and the rapid growth of the Mexican economy, some feel that these levels are rather high.
As a result of trends in birth rates, death rates and emigration, Mexico’s population growth rate is declining. Annual population growth is expected to fall below a million in 2017, below 500,000 in 2032 and below 100,000 by 2049. Extrapolating the rates in the CONAPO projection, Mexico’s population growth is expected to peak in 2053 at 137.6 million and then start to gradually decline. Of course, it is very difficult to accurately project emigration figures. If emigration is a third less than projected by CONAPO, then Mexico’s population could peak at 145 million.
Related posts:
Women’s Migration Networks in Mexico and Beyond (review)
Other
Nov 17, 2012
As long ago as 1885, Ernst Georg Ravenstein, a German-English cartographer, proposed seven “laws of migration” that arose from his studies of migration in the U.K.
The original seven laws, as expressed by Ravenstein, were:
• 1) Most migrants only proceed a short distance, and toward centers of absorption.
• 2) As migrants move toward absorption centers, they leave “gaps” that are filled up by migrants from more remote districts, creating migration flows that reach to “the most remote corner of the kingdom.”
• 3) The process of dispersion is inverse to that of absorption.
• 4) Each main current of migration produces a compensating counter-current.
• 5) Migrants proceeding long distances generally go by preference to one of the great centers of commerce or industry.
• 6) The natives of towns are less migratory than those of the rural parts of the country.
• 7) Females are more migratory than males.
These laws, though certainly not accepted uncritically, have provided a basic framework for many later studies of migration. Surprisingly, despite the wording of law 7, there has been remarkably little focus on female migration in the literature, with far more attention being paid in most studies to the migration of men.
Recognizing this, anthropologist Tamar Wilson provides a detailed account of several important aspects of female migration in her Women’s Migration Networks in Mexico and Beyond (University of New Mexico Press, 2009).
Wilson’s book focuses on the experiences and thoughts of doña Consuelo [all names are pseudonyms], a woman she met while researching in Colonia Popular, a Mexicali squatter settlement, in 1988, and her daughters Anamaria and Irma.
Over a period of several years, and in small part due to marrying a man from Colonia Popular, the author was able to become an insider, invited to all family functions, helping pay for expenses of other family members for such things as tuition, gaining a unique perspective that extends far beyond that usually available to researchers. Yet, at the same time, she remained an observer, recording conversations and impressions and arranging interviews as she felt necessary in order to tease out the details relating to the migration network involved.
The book is solidly grounded in migration theory and the early chapters call heavily on secondary sources. The first chapter (Herstories) provides a useful summary of the history of Mexican migration to the USA since the mid-nineteenth century, and of the increasing participation of women in international migration from Mexico.
Chapter 2 summarizes the history of female employment in Mexico over the same time period, and changes in gender relations in recent decades, while Chapter 3 provides the theoretical background, emphasizing the key concepts of migration networks, social capital and the peculiarities of transnational migration.
Wilson summarizes the findings of previous studies as suggesting that, “Women migrants within Mexico tend to be disproportionately single and either separated, abandoned, or widowed, and single mothers tend to accompany parents. Young, single women seem to be attracted to the border by the possibilities of finding work in the maquiladoras, underscoring women’s generally ignored status as labor migrants. ” However, based on her fieldwork between 1988 and 1992, she found that “None of the women in Colonia Popular had migrated to Mexicali in order to work in the maquiladoras, but some of their teenage daughters were employed in those assembly plants”.
Six chapters then focus on the personal experiences of doña Consuelo and her family and friends. Extensive quotations (translated into English) from interviews are linked with a clear narrative. These chapters are full of interest as the reader is drawn into the lives of the women and family members involved.
In the final chapter, Wilson draws nine general conclusions from her research:
1. Poor women in Mexico engage in a variety of income-producing activities in both the formal and informal economies that may involve migration.
2. Many if not most women accept the system of male domination but may opt out of an unhappy marriage if men do not live up to certain standards they consider fair. This is especially true in urban areas where women can find work.
3. Although many women migrate under the auspices of husbands or fathers, women’s migration in Mexico can take place independently of males.
4. Extended family migration to a given city often also involves migration to a specific colonia or neighborhood within that city.
5. The social capital provided by networks exists on individual, familial and community levels.
6. Strong ties can be either reinforced or weakened over time and the family life cycle, and weak ties can be converted into strong ties or abandoned. Transnational migration networks multiply in urban centers when siblings or offspring marry.
7. Transnational migration networks can be anchored in a multiplicity of locales, including the place of origin, the place of anterior (internal or transnational) migration, or the place of one’s current residence or that of one’s parents, spouse’s parents or other kin.
8. Adaptation networks for urban-origin migrants at destination may be, to a great extent, composed of work-site acquaintances converted into friends or ritual kin, and both friends and ritual kin may introduce migrants to future friends and ritual kin.
9. Transnationalism involves individuals embedded in households, families and networks who, through their ability to cross borders, provide connecting links between kin in Mexico and kin in the United States.
Related posts:
Nov 15, 2012
In a previous post, we quoted a press release from the Pew Hispanic Center suggesting that the net migration flow from Mexico to the USA had slowed down to a trickle, and possibly even gone into reverse (i.e., with more migrants moving from the USA to Mexico than in the opposite direction):
We also looked at data related to the vexed question of which Mexicans, if any, may still want to move to the USA:
There are some slight signs now that the net migration flow northwards is on the increase again. According to this press article, the National Statistics Institute (INEGI) has reported that out-migration from Mexico started to rise again in the second quarter of this year.
During the second quarter, international immigration into Mexico was estimated (based on survey evidence) at 14.3 / 10,000 total population, and emigration from Mexico to another country at 41.9 / 10,000, meaning a net migration outflow from Mexico of 27.6 / 10,000.
It seems like the average age of migrants is also slowly rising. For instance, INEGI data suggest that 31% of emigrants were between 30 and 49 years of age during the period from 2006 to 2008, compared to 35% for the 2009-2011 period.
It is still far too early to say whether or not the flow of migrants from Mexico to the USA will become as strong, and involve as many people, as in the 1990s and 2000s, but watch this space.
Related posts:
Have Mexicans given up on the dream of moving to the USA?
Updates to Geo-Mexico
Jul 09, 2012
A recent post noted that net migration from Mexico to the USA has dropped to essentially zero. Does this mean that Mexicans no longer have any interest in moving to the USA? The answer to this question is complicated. Obviously, many Mexicans living in Mexico would like to join their family members in the USA if it were legally possible. Others might feel that their career ambitions or the aspirations of their children might be better served by living in the USA. On the other hand, many Mexicans in the USA might feel that their lives would be better if they lived in Mexico.
A face-to-face survey in April 2012 by the Pew Research Center of 1,200 Mexicans in Mexico sheds light on this issue. According to the survey, 56% had a favorable view of the USA, compared to 52% in 2011. Only 34% had an unfavorable view of the USA, down from 41% in 2011. The views varied significantly by age and education. Sixty percent of 18 to 29-year-olds had a positive view compared to only half of those over age 50. Fully 66% of those with a post-secondary education had a favorable view compared to less than half (48%) of those with less education.
Over half (53%) think that Mexicans who move to the USA have a better life, up sharply from 44% in 2011. This suggests that there is still considerable interest in migration. Only 14% indicated they had a worse life, down from 22% a year earlier. However, 61% said they would not move to the USA if they had the means and opportunity. On the other hand, 37% said they would move to the USA and of these 19% indicated they would move even without legal documentation. Not surprisingly, younger Mexicans and those with more education were more interested in moving to the USA.
The survey data indicate that when/if US unemployment declines and there are again ample job opportunities in the USA, many Mexicans may migrate legally or illegally to fill those jobs. Of course, employment opportunities in Mexico will be a very important factor affecting decisions about migration. While the Mexican economy has recovered from the severe recession far better than the USA, still 62% of surveyed Mexicans described the economy as “bad”, down from 75% in 2010 and 68% in 2011. But Mexicans remain optimistic, 51% say the economy will improve in the next year compared to 32% who think it will remain the same, and only 16% who believe it will be worse. The Mexicans more willing to migrate, those with higher educations and incomes, are more optimistic about Mexico’s economic future. If the gap between US and Mexican economic opportunities continues to shrink in the decades ahead, we can expect Mexicans to become less interested in moving to the USA.
Related posts: |
Essay about Athenian Democracy
Thucydides, from his aristocratic and historical viewpoint, reasoned that the common people were often much too credulous about even contemporary facts to rule justly. One commentator notes that "Thucydides cites examples of two popular errors: the beliefs that the two Spartan kings each had two votes in council and that there was a Spartan battalion called the 'Pitanate.' Thucydides sums up: 'Such is the degree of carelessness among the many in the search for truth and their preference for ready-made accounts'." He contrasted his own critical-historical approach to history with the way the demos decided upon the truth. So "Thucydides has established for his reader the existence of a potentially fatal structural flaw in the edifice of democratic ways of knowing and doing. The identification of this 'flaw' is a key to his criticism of Athenian popular rule."
Payment for jurors was introduced around 462 BC and is ascribed to Pericles, a feature described by Aristotle as fundamental to radical democracy (Politics 1294a37). Pay was raised from two to three obols by Cleon early in the Peloponnesian War and there it stayed; the original amount is not known. Notably, this was introduced more than fifty years before payment for attendance at assembly meetings. Running the courts was one of the major expenses of the Athenian state, and there were moments of financial crisis in the 4th century when the courts, at least for private suits, had to be suspended.
Solon, the mediator, reshaped the city "by absorbing the traditional aristocracy in a definition of citizenship which allotted a political function to every free resident of Attica. Athenians were not slaves but citizens, with the right, at the very least, to participate in the meetings of the assembly." Under these reforms, the position of archon "was opened to all with certain property qualifications, and a boule, a rival council of 400, was set up. The Areopagus, nevertheless, retained 'guardianship of the laws'". A major contribution to democracy was Solon's setting up of an Ekklesia or Assembly, which was open to all male citizens. However, "one must bear in mind that its agenda was apparently set entirely by the Council of 400", "consisting of 100 members from each of the four tribes", which had taken "over many of the powers which the Areopagos had previously exercised."
One common essay prompt asks: What were the core features of Athenian democracy? What are its key differences from contemporary western democracies?
"Elite" refers to a person who belongs to a small group of people that enjoys additional benefits in contrast to the masses. Across its various classes of people, Athens had all types of elites, including ruling, educated, wealthy and status elites. The introduction of constitutional democracy in Athens really put the elites on the back foot. The masses were given equal rights in every social field. Though the elites tried to blend in with the masses, their extraordinary characteristics always kept them standing out (Ober 14).
Around 460 B.C., Pericles (c.490-429 B.C.) used the power of the people in the law courts and the Assembly to break up the Council of Five Hundred. Under Pericles, democracy came to mean the equality of justice and the equality of opportunity. The equality of justice was secured by the jury system, which ensured that slaves and resident aliens were represented through their patrons. The equality of opportunity did not mean that every man had the right to everything. What it did mean is that the criterion for choosing citizens for office was merit and efficiency, not wealth. Whereas Solon had used the criterion of birth for his officials and Cleisthenes had used wealth, Pericles now used merit. This was the ideal for Pericles. What happened in practice was quite different. The Greek historian Thucydides (c.460-c.400 B.C.) commented on the reality of democracy under Pericles when he wrote: "It was in theory, a democracy but in fact it became the rule of the first Athenian." And the historian Herodotus (c.485-425 B.C.) added that "nothing could be found better than the one man, the best." This "one man, the best," was the aristos, the word from which we get the expression aristocracy. So, what began as Greek democracy under Cleisthenes around 500 B.C. became an aristocracy under Pericles by 430 B.C.
So Cleisthenes was free to impose his reforms, which he did during the last decade of the 6th century BC. These mark the beginning of classical Athenian democracy, since (with a few brief exceptions) they organized Attica into the political landscape that would last for the next two centuries. His reforms, seen broadly, took two forms: he refined the basic institutions of the Athenian democracy, and he redefined fundamentally how the people of Athens saw themselves in relation to each other and to the state. Since the rest of this discussion is devoted to its various institutions, for the moment we can focus on the new Athenian identity that Cleisthenes imposed.
According to Aristotle, Ephialtes brought about a reform of the Court of the Areopagus by denouncing the Court before the Council and the Assembly. So the reform was not, finally, the work of Ephialtes alone, but an act carried through by two of the more democratic institutions in Athens. Aristotle connects this event to a newfound feeling of power among the common people of Athens following the Persian Wars, when the less wealthy citizens, by serving in the navy, had saved the city. He makes the connection between naval victories and the reform of the Court of the Areopagus explicitly in his Politics, and the Constitution of the Athenians that survives under Aristotle’s name strongly suggests the connection as well.
Though Athens may have built a strong foundation for the prosperity of democracy throughout the world, the Athenian democracy itself was a system saturated with flaws, as it ignored the significant role of a majority of the population: women, metics and slaves. With time, the so-called democratic Athens fell back into the grip of the elites; consequently, corruption prevailed, which led to the eventual downfall of the empire.
Socrates, and especially his relationship to the democratic city during his trial and celebrated execution, was a matter of central concern to all ancient critics who dwelt on democracy. The figure of Socrates continues to loom large in contemporary discussions of the moral and practical value of Greek democracy (Jones 48).
Democracy is necessary for the success of any given nation. People cannot exist and live in harmony if there is no proper procedure or way in which democracy is defined and practiced. All people should understand their rights and should always yearn to experience them. There should be no instances of ignorance, as even the less educated have their say when it comes to democracy. Their voices count as much as those of people with a great deal of knowledge and expertise in different fields. Some contemporary critics regarded the trial and execution of Socrates as clear evidence of the Athenian democracy's moral turpitude (Jones 57). |
The History of the Sikhs and Sikhism
Sikhism is a religion and philosophy founded in the Punjab region of the Indian subcontinent in the late 15th century; its members are known as Sikhs. The history of the Jews and that of the Sikhs bears witness to this fact: true Sikhism is as strong now as before. There are approximately 700,000 Sikh Americans, and Sikhism is an independent religion.
The little-known history of the Sikhs includes their use as pawns in the divide-and-rule policy of the British Empire (Capt. Ajit Vadakayil). Sikhism in the area of present-day Pakistan has an extensive heritage and history, although Sikhs form a small community in Pakistan today; most Sikhs live in India. The history of the Sikhs takes in the founding of a new religion in India and the rise of militant Sikhs, as Sikhism developed into the religion with the most sacred book of all. Sikhism is found predominantly in the Punjab region of India, but Sikhs live in many countries; “Hindus and Sikhs in America” offers a short history.
The National Sikh Campaign’s (NSC) mission is to promote a better understanding of the Sikh community in America and other Western countries, and to project a positive image of that community. Most of the world’s 25 million Sikhs live in India. The essentials of Sikh history include the war strategies of the Sikhs, the Sikh misls, the Bhangi misl, the Khalsa kingdom, the battle of Naushera, Nalwa and the Afghans, and the period after Maharaja Ranjit Singh.
The order of Punjabi Sikhs known as the Nihang, or the Akalis, was formed during the era of Ranjit Singh under its leader, Akali Phula Singh. Comprehensive articles on Sikh beliefs and practices explore facts about Sikhism's history and gurus. The history of Sikhism started with Nanak, the first Guru, in the fifteenth century in the Punjab region of the Indian subcontinent.
A discussion of Sikhs and Sikhism can be a source of information for a deeper understanding of religious subjects.
Who are Sikhs and what is Sikhism? Sikhs have throughout history followed a simple but effective mechanism whereby individual voices are heard and decisions made. A brief history of Sikhism covers the militarisation of the Sikhs, by which time Sikhism was well established. Sikhism is the youngest major world religion, dating back barely five hundred years. The fourth Guru introduced the lavan to give Sikhs their own marriage ceremony, and a later Guru's execution proved a turning point in Sikh history.
Sikhism is a progressive religion that was well ahead of its time when it was founded over 500 years ago; the Sikh religion today has a following of over 20 million. Sikhism's holiest shrine is the Golden Temple, and the Sikhs have had a very traumatic 500-year history, explains John Das. At a glance, the history of Sikhism includes Guru Gobind Singh founding the Khalsa brotherhood of Sikhs.
Common questions include the history of Sikhs fighting in wars, the history behind the Sikh turban, and the history of the kara. Sikhism, a major world religion, arose through the teachings of Guru Nanak (c. 1469–1539), and Sikhs, including those in Canada, have continued to have a turbulent history. The Sikhs have a long history of being warriors.
|
What is Flotation Therapy?
Flotation therapy seems relatively new because it has begun to make a comeback in recent years. However, flotation therapy has a long history: it was invented in the mid-1950s by the neuroscientist Dr. John Lilly in the United States. The idea is based on a scientific approach to deep relaxation called the “Restricted Environmental Stimulation Technique”, also known as R.E.S.T. Inside human-sized tanks lie ten inches of water containing 850 lbs of dissolved salt. This environment triggers a deep relaxation response and a sleep-like brain state known as the theta state. Flotation therapy works by shutting down the brain's normal sensory inputs, such as the sensations of gravity, temperature, touch, sight and sound that you deal with daily.
At Pholat Philadelphia, you can experience all of this first hand. It is a great way to embrace mindfulness, meditate, or simply be alone with your thoughts. In addition, flotation therapy has many physical and mental benefits. For instance, it can help old wounds and injuries heal faster, because the deeply relaxed body engages the mechanisms by which it naturally regenerates itself and maintains chemical and metabolic balance. It also helps decrease anxiety, stress and depression levels.
NASA spacecraft provides new evidence of water plumes on Europa
It's a hugely exciting finding, and not just because it would be only the second known example of a natural "waterpark" in space.
The Galileo spacecraft carried a Plasma Wave Spectrometer (PWS) to measure plasma waves caused by charged particles in gases around Europa's atmosphere.
But we've also learned that life finds a way in the harshest of Earth's environments, like vents in the deepest parts of the ocean floor.
Water plumes support the idea that underneath Europa's ice-crested surface, there's a massive expanse of subterranean ocean.
Europa is our best shot of finding biological life in the solar system, researchers say.
The researchers initially suspected Europa's surface to be releasing water by seeing the images received with the help of the Hubble Space Telescope.
They layered the magnetometry and plasma wave signatures into new 3D modelling developed at the University of Michigan in the U.S., which simulated the interactions of plasma with solar system bodies. The model simulations that included plumes from Europa closely matched the Galileo data, but the model without them did not.
Researchers now want to delve deeper into the ejection of plumes from Europa. If they want to know whether some form of life has indeed taken root inside the moon, studying those plumes may be the easiest way to prove it. They know that the plumes are shorter than those on Enceladus. Plumes from the surface mean that the ice is warm.
The result that emerged, with a simulated plume, was a match to the magnetic field and plasma signatures the team pulled from the Galileo data. SLS can send the probes directly to Jupiter instead of requiring gravity-assists from other planets, significantly shortening the trip time. Jia also is co-investigator for two instruments that will travel aboard Europa Clipper.
"If plumes exist and we can directly sample what's coming from the interior of Europa, then we can more easily get at whether Europa has the ingredients for life", said Robert Pappalardo, Europa Clipper project scientist.
But this will also likely be the hardest question to tackle. However, the picture was so faint that they were regarded as very weak evidence to arrive at any conclusion regarding the existence of the plume. Europa is regularly hit by meteorites that could deliver compounds, and the moon is surrounded by deadly radiation that might kill off anything trying to get by.
Thus, the scientists discovered that the thermal and magnetic anomalies recorded in 1997 by Galileo corresponded to the region where Hubble identified the steam plume.
Galileo came much closer during its 11 flybys of Europa.
With 3-D modeling to tie all these factors together, the data signature lined up perfectly to suggest that during the 1997 flyby, Galileo flew through a plume, Kivelson said.
Jia and his colleagues are now working on the magnetic field and plasma instruments for two future missions aimed at studying Jupiter and its moons.
Now NASA is planning a mission - named Europa Clipper - to fly over the distant world in the 2020s for a closer look.
But it wasn't until a year ago, at a conference for boffins planning the Europa Clipper spacecraft that's due to head out to the mysterious moon in 2022, that Xianzhe Jia, a space physicist at the University of Michigan, put the pieces together and chose to revisit the Galileo data. Still, Jia, an associate professor at the University of Michigan, guessed that there may be clues lurking in the information that was beamed back. "They have a huge potential, of course, for collecting very useful data", he told Newsweek.
|
melanocyte-stimulating hormone (MSH)
Melanocyte-stimulating hormone (MSH) is a peptide hormone produced by cells in the intermediate lobe of the pituitary gland. Melanocyte-stimulating hormone (MSH) stimulates the production and release of melanin (melanogenesis) by melanocytes in skin and hair. MSH is also produced by a subpopulation of neurons in the arcuate nucleus of the hypothalamus. MSH released into the brain by these neurons has effects on appetite and sexual arousal.
In the skin of lower vertebrates, melanocyte-stimulating hormone stimulates the dispersion of melanin granules in chromatophores, darkening the skin. |
Disabilitychat Blog
Main Menu
How are People with a Disability Treated?
It has been over 20 years since the Disability Discrimination Act was introduced and from that moment, those who live with a disability have had the power to make a change and be treated in the same way as everyone else.
However, despite the introduction of this Act in 1995, even to this very day disability does not seem to receive the same attention when it comes to equal rights as other causes such as women's rights or racial equality. Legislation for both of those was introduced decades earlier, and so progress on disability rights has been comparatively slow.
For those who live with a disability there is still a fight to fight and that is exactly what is happening. The act has changed things and it has made those who live with a disability realise that they are not the ones with the problem because the problem lies with society and how they deal with it. The introduction of the Act was just the beginning but there are still things that need to be put in place such as fairness and equality as well as opportunities. Still, there is a pay gap in the work place and not enough people with a disability are appointed to high public positions.
It is all about being represented equally with those who are able-bodied, and there is still an issue there. This applies in public life and in entertainment, because those two arenas are where perceptions and attitudes are cultivated. Many people who live with a disability feel that they are often not represented in the right way, and that they get nothing more than token representation.
Those who live with a disability are not looking for pity; far from it, they want power. While things are slowly changing, there are many physical barriers that they need to overcome, such as access to a whole range of services, including buses, the workplace and retailers. All of these barriers need to be removed.
There is still a belief that more needs to be done to enhance respect and dignity because why should disabled people feel any different to an able-bodied individual? In recent years, the Paralympic Games which took place in London showed the world just how ‘normal’ disabled people are. The good feeling that came with the event has filtered into society but some disabled people still experience verbal and physical abuse.
Sometimes, society can go a little too far and make it feel like those who are disabled are being treated too differently; however, it is important to remember that there is a very fine line here. Disabled people want to feel like they are the same as everyone else, and that means the same respect, equality and the ability to work and live in the same way. While changes have been made throughout the years, there is still some way to go, but progress is being made and that is promising.
|
Braking System
Disc Brakes
Brake Drum
Brake Calipers
Wheel (Slave) Cylinder
Parking (Emergency) Brakes
Brake System
Master Cylinder
The master cylinder delivers hydraulic pressure to the rest of the brake system. It holds THE most important fluid in your car, the brake fluid. It actually controls two separate subsystems which are jointly activated by the brake pedal. This is done so that in case a major leak occurs in one system, the other will still function. The two systems may be supplied by separate fluid reservoirs, or they may be supplied by a common reservoir. Some brake subsystems are divided front/rear and some are diagonally separated. When you press the brake pedal, a push rod connected to the pedal moves the "primary piston" forward inside the master cylinder. The primary piston activates one of the two subsystems. The hydraulic pressure created, and the force of the primary piston spring, moves the secondary piston forward. When the forward movement of the pistons causes their primary cups to cover the bypass holes, hydraulic pressure builds up and is transmitted to the wheel cylinders. When the brake pedal retracts, the pistons allow fluid from the reservoir(s) to refill the chamber if needed.
Electronic sensors within the master cylinder are used to monitor the level of the fluid in the reservoirs, and to alert the driver if a pressure imbalance develops between the two systems. If the brake light comes on, the fluid level in the reservoir(s) should be checked. If the level is low, more fluid should be added, and the leak should be found and repaired as soon as possible. BE SURE TO USE THE RIGHT BRAKE FLUID FOR YOUR VEHICLE. Use of improper brake fluid can "contaminate the system". If this occurs, ALL of the seals in the brake system will need replacement, and that is usually a VERY expensive operation.
Brake Warning System
The brake warning system has been required standard equipment since 1970, and is connected to the master cylinder. It monitors differences in pressure in the brake lines of the two hydraulic sub-systems, and alerts the driver with a light if an imbalance occurs. When you turn the key to the Ignition position, the brake warning light on the dash comes on during a "self-test". You should not drive a car if the warning light does not come on during the startup self test.
The brake system is divided into two sub-systems to increase safety. A pressure differential switch, connected to the warning light, is positioned between the two. If a major leak occurs, and therefore pressure in one of the lines is sharply reduced, pressure from the other side forces a piston to move, activating the pressure differential switch and turns on the dashboard warning light.
There are two types of pressure differential switches; mechanical or hydraulic. Mechanical switches are activated by excessive brake travel. Hydraulic switches are activated by a difference in pressure between the front and rear system. When pressure in one of the lines is sharply reduced, pressure from the other side forces a piston to move. A plunger pin then drops into a groove in the piston, activating a switch that turns on a dashboard warning light.
The brake warning light is also connected to the brake fluid level sensors in the master cylinder reservoir(s). If the brake warning light comes on, the fluid level should be checked. If the level is low, more fluid should be added, and the leak should be found and repaired as soon as possible. BE SURE TO USE THE RIGHT FLUID. NEVER IGNORE THE BRAKE WARNING LAMP, AND ALWAYS NOTE WHETHER IT WORKS DURING THE STARTING SELF-TEST.
Power Brakes
Power brakes (also called "power assisted" brakes) are designed to use the power of the engine and/or battery to enhance braking power. The four most common types of power brakes are: vacuum suspended; air suspended; hydraulic booster, and electro-hydraulic booster. Most cars use vacuum suspended units (vacuum boosters), which employ a vacuum-powered booster device to provide added thrust to the foot pressure applied.
In a vacuum booster type system, pressure on the brake pedal pushes forward a pushrod connected to the pistons within the master cylinder. At the same time, the pushrod opens the vacuum-control valve so that it closes the vacuum port and seals off the forward half of the booster unit. The engine vacuum line then creates a low-pressure vacuum chamber. Atmospheric pressure in the control chamber then pushes against the diaphragm. The pressure on the diaphragm forces it forward, supplying pressure on the master cylinder pistons.
Hydraulic booster systems usually tap into the power steering pump's pressure, and use this power to augment pressure to the master cylinder. Electro-hydraulic booster systems use an electric motor to pressurize a hydraulic system which augments pressure to the master cylinder. This allows the vehicle to have power assisted brakes even if the engine quits.
You may wish to compare the difference between power and non-assisted braking in a safe area; while driving slowly, turn the ignition key off (don't turn it into the locked position, because the steering wheel will lock, which is highly unsafe.) As the car coasts along, press the brakes hard. The force of your foot is now the only thing stopping the car. The safe driver is always ready to apply the total force needed to stop their vehicle, even if the engine quits (thereby removing the power assist).
Filler Cap (Brake Fluid Reservoir Cover)
The cap on the brake fluid reservoir has a hole for air, or is vented, to allow the fluid to expand and contract without creating a vacuum or causing pressure. A rubber diaphragm goes up and down with the fluid level, and keeps out any dust or moisture. If the cap's seal becomes distorted, it usually indicates a brake fluid contamination problem.
Vacuum From The Engine
Engine intake manifold vacuum is used for augmenting the foot's braking power in vacuum assisted power brakes. This vacuum is created by the pistons as they draw downward, sucking air into the cylinders. When you push the brake pedal down, the vacuum control valve lets the engine draw a vacuum in the front section of the booster unit. The atmospheric pressure on the other side of the diaphragm provides significant additional braking force.
Brake Fluid
Brake fluid is a special liquid for use in hydraulic brake systems, which must meet highly exact performance specifications. It is designed to be impervious to wide temperature changes and to not suffer any significant changes in important physical characteristics such as compressibility over the operating temperature range. The fluid is designed to not boil, even when exposed to the extreme temperatures of the brakes.
Different types of brake fluid are used in different systems, and should NEVER be mixed. Most cars use "DOT 3" or "DOT 4" brake fluid. Some newer cars use silicone brake fluids. These should NEVER be mixed together, because the seals in each car are designed to work with only their specific fluid types. For example, the mixing of "Silicone" brake fluid and conventional glycol based DOT 3 or DOT 4 fluids should be avoided, as the two fluid types are not miscible (they will not mix together). DOT 3 brake fluids and DOT 4 brake fluids can be mixed.
One of the WORST things that can happen to your car is if the brake fluid becomes contaminated, because the seals are designed to work with only pure brake fluid. "System contamination" means that all of the piston seals and hoses are deteriorating, and therefore must be replaced, a MAJOR expense. So, be VERY careful what you put in the master cylinder reservoir!
It should be noted that brake fluid is highly corrosive to paint, and care should be used not to get it on your car's finish.
The brake fluid in your car should be changed every (See Owners Manual) to prevent corrosion of the braking system components.
Anti-lock Brake Systems (ABS)
Originally developed for aircraft, ABS basically works by limiting the pressure to any wheel which decelerates too rapidly. This allows maximum stopping force to be applied without brake lockup (skidding). If standard brakes are applied too hard, the wheels "lock" or skid, which prevents them from giving directional control. If directional control (steering) is lost, the vehicle skids in a straight line wherever it is going. ABS allows the driver to steer during hard braking, which allows you to control the car much better. In the old days, drivers had to know how to "pump" the brakes or sense the lockup and release foot pressure in order to prevent skidding. This meant that if only one wheel lost traction and started to skid, the driver would have to reduce braking force to prevent a skid. The advantage of ABS is that the brakes on the wheels with good traction can be used to the fullest possible amount, even if other wheels lose traction.
An Anti-Lock Braking System consists of speed sensors located at each wheel and a central computer. The speed sensors measure how fast each wheel is turning and send that information back to the computer. The computer constantly evaluates the speed of the vehicle and the speed of the wheels. When the brake pedal is depressed and the speed of a wheel reaches or gets close to locking up, the ABS computer will then modulate the amount of brake pressure (or "pump" the brakes), as fast as fifteen times per second, on that wheel. This is usually accomplished by diverting the brake fluid into a small reservoir. The fluid is later pumped out of the reservoir and returned to the main fluid reservoir when the brakes are not being applied. This continuing modulation or pumping will prevent or correct wheel lock-up and allow the driver to brake and steer.
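To make that modulation loop concrete, here is a deliberately simplified sketch in C. It is not any manufacturer's actual ABS firmware; the slip threshold, pressure step, and data structure are illustrative assumptions only.

struct Wheel {
  float speed;    /* wheel speed reported by its sensor, in m/s */
  float pressure; /* fraction of driver-demanded brake pressure, 0.0 to 1.0 */
};

/* Called many times per second while the brake pedal is pressed. */
void modulate_brakes(struct Wheel wheels[4], float vehicle_speed) {
  const float SLIP_LIMIT = 0.20f; /* assumed: act above roughly 20% wheel slip */
  const float STEP       = 0.05f; /* assumed: pressure change per update */
  for (int i = 0; i < 4; i++) {
    /* Slip = how much slower the wheel turns than the vehicle is moving. */
    float slip = (vehicle_speed > 1.0f)
               ? (vehicle_speed - wheels[i].speed) / vehicle_speed
               : 0.0f;
    if (slip > SLIP_LIMIT) {
      /* Wheel is close to locking: divert fluid away, reducing pressure. */
      wheels[i].pressure -= STEP;
      if (wheels[i].pressure < 0.0f) wheels[i].pressure = 0.0f;
    } else {
      /* Wheel has traction: restore pressure toward driver demand. */
      wheels[i].pressure += STEP;
      if (wheels[i].pressure > 1.0f) wheels[i].pressure = 1.0f;
    }
  }
}

Real systems add hydraulic valve control, wheel-deceleration thresholds, and per-axle logic, but this release-and-reapply cycle is the core idea behind the pulsing the driver feels.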
The anti-lock brake system tests itself every time the vehicle is started and every time the brakes are applied. The system evaluates its own signals. If a defect is detected, the system then turns off, leaving normal braking unaffected.
To correctly use the brakes in an ABS equipped car in a panic situation, the driver must apply the brakes 100 percent, using all available force. The ABS computer will prevent brake lockup and the tires sliding on the travel surface. This will allow the driver to steer around the threat. It is important to remember that ABS can increase straight-line stopping distances beyond that of threshold braking in a non-ABS equipped car. ABS offers drivers, in an emergency situation, the ability to maintain steering control so they can steer clear of an obstacle or threat. Current ABS systems give feedback to the driver to let them know it is activated and operating during the current braking maneuver. The most common way that ABS communicates to the driver is a pulsing sensation felt in the braking foot or a rattling noise during braking. This is normal operation and is telling the driver ABS is working. As discussed above, do not attempt to modulate the brake yourself and remember to use all the brake force available. The ABS system will take care of the modulation for you and allow you to steer around a threat. |
Object Lessons
Best Things in Life
Objects: Pockets
Let's take a peek into a boy's pocket, and see what goes into it. A jackknife, marbles, pennies, and what's this?—a nail and three tacks; a pencil, piece of chalk, an eraser, a puzzle; some keys, a golf ball and a tee; here is a small electric bulb, a top of a pencil; matches, rubber band, a whistle and some sand.
Now what about the pockets of a girl's handbag? Here is what I found: Two handkerchiefs, a picture, pencil and a pad of paper, pennies and a nickel, two dimes; colored paper with drawings on it, colored crayon, pins, cloth with a needle in it, a ten-cent watch; a birthday book, an arithmetic paper marked 100, empty powder box and a lipstick holder.
I think the boys might well take out the tacks and nails else holes might be made in the pocket and other more valuable things might be lost. Then too that bulb might break and the matches are rather dangerous. Girls might well watch out for the colored crayon and pins. The dimes had better be put in the bank.
I was thinking what goes into the pockets of your life. Love and kindness are there; there may be some selfishness which you had better hurry and take out. Generosity and obedience are there; perhaps you see disobedience and some grouchiness somewhere that need to be thrown out right away. I hope honesty and truth and helpfulness and happiness are in every one of the pockets of your life. Give it some thought, will you?
|
Battery Life and Cold Weather Riding
Batteries don't get along with cold temperatures.
GPS units, lights, phones, backup batteries, cameras. All of these electronics have trouble as the temperature plummets. We've noticed that about 20 degrees Fahrenheit is when battery life becomes noticeably shorter. At about zero degrees, batteries die impressively quickly; we've had trouble making a 14-hour battery last even one hour. In fact, batteries can die 10 times faster, or more, as the temperature nears zero.
Overall, low temperatures cause three primary battery issues:
• Reduced use time. Anywhere from a 50% to 90% reduction.
• Unreliable battery life reading. Battery life readings become unreliable as the temperature drops. You may well not have a warning when your phone or Garmin dies. And this leads to...
• Random inexplicable shutoff. We've had this happen a few times over the years.
The fundamental question we ask is, "How do I survive an 8-hour ride if my Garmin, light, or whatever electronics are going to die in an hour?"
We've learned some simple tricks that enable a long ride in zero degree weather. Each electronic item gets different treatment:
GPS Unit
We have worked with three methods:
1. Carry two GPS units. This is our preferred method, by far. Why two? While one unit is navigating for you, and slowly freezing, the other unit is in your jersey pocket, staying warm. When the navigating unit reaches its cold-battery tipping point, swap it out with the warm unit. Let the frozen unit warm up in your jersey. Repeat the swap-out at appropriate tipping-point times. The worst, or fastest, swap-out rotation pace that we've seen is about 30 minutes. If you use this method, two fully charged Garmin 1000s will last about 22 hours in zero degree weather. Of course, most people don't want to own a second GPS unit. In that case, we know of two options:
2. Share GPS units. If you're riding with someone that also has a unit, share them and use the swap-out method described above.
3. Start-stop. If you are riding alone, depending on your route and goals, ride with the unit in your jersey whenever possible and remount it on the bars when needed. This is cumbersome but it's a lot better than having to stop because your GPS dies. This is a good exercise for memory improvement. Another side benefit of this method is that you have one continuous file to upload at the end of your ride.
Lights -- Headlights and Taillights
We have two methods for managing cold weather lighting:
1. Carry two or more lights. Follow the protocol we recommend for GPS units: swap them back and forth from warm pocket to cold handlebar or helmet as the cold tipping point approaches. This is one of the many advantages of small modular light systems. We strongly recommend a four-light system for mixed terrain riding -- one helmet mounted and one bar mounted, with a warm spare for each in your pockets. Regarding taillights, see our battery notes below.
2. Use a generator light system. We've not had problems with generator hubs in cold temperatures. The primary challenge with a generator is that it provides limited lumens -- about half of what we normally ride with on the trails. However, half is infinitely better than zero lumens.
Mobile Phone
This is simple: Keep the phone in a jersey pocket. That's all. We also strongly recommend keeping your phone in a waterproof zip-lock bag; you get humid in cold weather and phones don't like this. As an aside, we've read that keeping your phone turned off will help prolong battery life; this is true if it's going to be sitting out in zero degree weather for hours, however, you'll be keeping it warm so having it turned off will not be relevant to cold weather usage. On a related note, on its website, Apple suggests that in addition to shortened battery life, "low-temperature conditions might [...] cause the device to alter its behavior [...]" Be aware.
Backup Batteries
This is simple, keep the battery in a body contacting pocket. If it's warm it'll be ready to use.
An important word of preparation: A battery will not take a charge when it's too cold. To charge, you have to get charger and chargee near your body core heat.
Camera
This would seem easy: Just keep the camera in your jersey pocket and it'll stay warm for when you need it. The only cautionary comment about this is that the environment under your jacket is very humid and cameras do not like this humidity at all. The simple solution is to keep your camera in a waterproof bag. A typical zip-lock bag is not waterproof.
Don't put any of your electronics in a handlebar bag or saddle bag. While this will slightly slow down the cold hand of death, it will not prevent your batteries from dying. This may seem obvious but we've seen numerous people make this mistake. Keep your electronics warm and dry.
Why do batteries die so quickly in cold weather? The battery isn't actually draining faster; the chemical reaction that produces the electricity proceeds more slowly and that means less energy is produced. The colder the temperature, the more difficult for the chemical reaction to occur. This is helpful to understand because it explains why the battery isn't discharging faster -- it's actually discharging more slowly -- so that warming it up will allow the chemical reaction to function properly again.
As a precise detail, the battery is actually discharging slightly faster in very cold weather, but that acceleration is minimal. We've not found any data regarding this but our personal experience suggests that the total charge lost in trying to generate the chemical reaction is somewhere between 5 and 15%.
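If you like to plan around numbers, here is a small illustrative helper that turns the rough reductions described above into a runtime estimate. The temperature breakpoints and derating factors are assumptions based on our own observations, not on any battery datasheet.

/* Rough, illustrative cold-weather runtime estimate. The breakpoints and
   factors below are assumptions drawn from the reductions described above
   (roughly 50% near 20 F, up to about 90% near 0 F). */
float estimate_runtime_hours(float rated_hours, float temp_f) {
  float factor;
  if      (temp_f >= 40.0f) factor = 1.0f;  /* little effect in mild weather */
  else if (temp_f >= 20.0f) factor = 0.5f;  /* roughly half around 20 F */
  else if (temp_f >=  0.0f) factor = 0.5f - 0.4f * (20.0f - temp_f) / 20.0f;
  else                      factor = 0.1f;  /* ~90% reduction at or below 0 F */
  return rated_hours * factor;
}
/* Example: estimate_runtime_hours(14.0f, 0.0f) returns about 1.4 hours, in
   line with the "14-hour battery barely lasting an hour" experience above. */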
Battery Operated Taillights
The type of battery has an influence on battery life. For most electronics, this is irrelevant because the battery is as it comes; you have no choice. However, for a AA or AAA battery operated taillight, you do have some options. Fortunately, the choice is simple:
Lithium Ion: This is the right choice for cold weather riding. Best performance is with lithium-ion -- Li-Ion -- batteries. Performance begins to drop at about 30 degrees, but that's a lot better than any alternative. At about zero degrees is when you see a severe drop in light time. By the way, you may read that lithium ion batteries are not great in the cold; that's true relative to some other battery technologies, but those are not available in AA or AAA sizes. So, lithium ion it is.
Alkaline batteries employ a water-based electrolyte that performs poorly in subfreezing temperatures. This can also lead alkaline batteries to leak or burst.
Rechargeable Nickel Metal Hydride -- NiMH -- batteries are not great either. They have the same basic issue as alkaline batteries.
Ready to Ride!
That covers most of what we've learned from riding in the cold. What are your tricks for keeping your electronics running during the cold season? |
Paper negative of Herschel's telescope, 1839
This image is part of the feature The First Scientific Photographers
Caption: Early scientific photography. Photograph showing the paper calotype negative made by Sir John Herschel of his father's telescope mounting in 1839. This was the year in which W.H.F.Talbot patented the paper-based negative-positive process known as the calotype or talbotype. The photograph was taken around the time when Sir William Herschel's telescope was being dismantled on safety grounds.
|
History of Mobile Phones
From the mid-1990s, mobile phones have progressively become more important in our daily lives. Whereas early phones were able to do no more than make phone calls and send text messages, today’s smartphones are more like small computers rather than phones. I constantly hear people at work talking about the latest apps they have downloaded to their iPhone or Android phone. And although many of the apps are useful, many are not. Because, for many of us at least, our attention span tends to be fairly short, we have this urge to keep downloading new apps to keep ourselves entertained.
The Size of Handsets
We have all seen photos of the handsets of the late ’80s and laughed at how big they were. It’s hard to believe that they were ‘mobile’ phones at all. Martin Cooper, who led the team that developed the first cell phone (the Motorola DynaTAC – 1973) once commented on the fact that although the battery life of the phone was only 20 minutes, that was not a problem because you couldn’t hold the handset for that long due to it weighing so much (2.2 pounds – 1 kg).
During the course of the 1990s, the size of handsets gradually got smaller until by the early 2000s they were so small that the keypads were, for many people, virtually unusable. I myself had such a phone and in the end had to give it to my son because my fingers were too big and I found it very difficult to avoid pressing two characters on the keypad at the same time.
Some of the Early Pioneers
I’ve already mentioned Martin Cooper who led the team that developed the first cell phone in 1973, but there have been plenty of other people who have played a significant role in the history of the mobile/cell phone. Going way back to people such as Samuel Morse and Michael Faraday who made early breakthroughs in fields such as telegraphy and electromagnetism, moving through to Alexander Graham Bell and Guglielmo Marconi who played significant roles in telephone systems and radio transmissions, respectively.
Generations of Mobile Phone
First generation (1G) mobile phones
The first automated mobile phone network for commercial purposes was launched in 1979 in Japan by NTT. Within a few years, the network had been expanded until the whole of Japan was covered.
Second generation (2G) mobile phones
During the 1990s, second-generation (2G) mobile phone systems were introduced. They were primarily based on the GSM standard.
Third generation (3G) mobile phones
In the mid-2000s, the 3G (third generation) mobile telephony communications protocol emerged. There were also 3.5G, 3G+ or turbo 3G protocols.
In 2018, most people cannot imagine life without mobile phones. They are such important devices in our lives. And we expect more to come from manufacturers in terms of shape, size, speed and technology in the future. |
In response to 350.org’s Divestment ideas, explained in their film “Do The Math”, which we showed in 2013, here are some things you can do to prevent your money being used to fund climate change:
Move it
What’s your money doing? Is it just sitting there, minding its own business, or is it being lent by your bank to build an oil pipeline in Nigeria, ‘frack’ Sussex, or drill in the Arctic?
Well, if you’ve got a current account with one of the large UK banks, the chances are it’s being used for this, and worse.
Short of keeping your money under a mattress (which we don’t recommend), there are other places you can put it where it will do less harm, or even some good; the organisation Move Your Money is the place to go to find out where, and for help in moving it!
The good news is that it should now be possible to switch to/from many bank accounts in a matter of days, ‘quickly and hassle free’, through Simpler World, a scheme backed by the banks. It’s not got everybody you might want to switch to in it, but it is easier than falling off a (fossilised) log.
If you’ve got a sum of money to invest, another great resource is Ethex, a not-for-profit ethical investment advice company, which has impartial financial advisors to help you find positive, ethical ways to invest. Their report into ethical investment in the UK “Making money do good” is worth a read.
Retire it
If you’ve ever worked for a medium or large company, the chances are you’ll have started a pension with them, and from November 2013, employees without a pension will have to be automatically enrolled for one by law. For many people this is the largest amount of money they’ll ever invest, and as pension funds in the UK hold £2 trillion of assets, it has a huge effect: it doesn’t just sit there waiting for you to collect your golden watch, it’s out there doing stuff, and not all of it good.
Many pension schemes these days allow you to allot certain proportions of your pension to various schemes, with different ‘risks’ attached to them. But as 350.org explains, whether you ticked the ‘low risk’ box or the ‘put it all on black-7’ box, the pensions industry is heavily invested in fossil fuels: fuels which must stay in the ground if we are to avoid the worst possible outcomes for the climate; this means that either we’re going to have to wreck the planet, or all those investments are going to tank, big time, and probably some time before you retire!
So it’s now time to ask the question of your employer, and your pension provider, “what is my money doing?”, and see if they can find some less damaging investments to make, which will hopefully still be worth something when you retire!
If you’re lucky enough, you may be able to just tick the ‘ethical plan’ box (although you may want to check their exact definition of the word ‘ethical’), otherwise, maybe you could start asking why there isn’t an ethical investment choice, or look to see if you can opt-out of various industry sectors? Your Ethical Money and FairPensions are good starting points, whether you are on a company scheme or manage your own pension.
There’s also ShareAction’s Greenlight campaign, which will help you to email your pension provider, putting pressure on them to divest from fossil fuels and ‘green-up’ their investments. Follow @ShareAction on Twitter for updates.
OK, so pensions are a potential yawn-fest, but choosing where that money goes is one of the most powerful long-term gestures you can make.
Switch to green
Around 40% of UK CO2 emissions are from electrical energy supply (DECC, 2012), and every time you switch on a light/TV/iPad you are producing CO2 and other pollutants via the energy grid.
You don’t have to have the money and space to put a Solar PV array on your roof (although if you can, why not?), and you certainly should try to save electricity by switching off things you are not using, and using energy efficient appliances, but in addition, you can also easily switch supplier to a company that produces energy in less environmentally damaging ways.
All UK suppliers are bound by law to produce a certain percentage of their energy from renewable resources (20.6% for FY 2013), or otherwise buy in the equivalent (ROCs). However, few are achieving anything like this in terms of their own generation, relying on buying in the credits rather than changing their energy mix. I’ve certainly been called by salesmen working on behalf of the big energy suppliers and lied to about their ‘green tariffs’, so it’s useful to know the facts.
In 2013, the UK electrical energy source mix was 52.3% coal, 30.7% gas, 4.7% nuclear, and 8.3% renewables, with 4% other fuels.
Most worrying is that coal figure, because coal is the dirtiest fossil fuel: it produces over twice as much CO2 per MWh as gas, and also black-carbon soot, which, as well as being harmful to our health, can fall many miles away on Arctic ice, so that it reflects less sunlight back into space, accelerating warming. Coal plants also emit more radiation than the nuclear industry, as well as mercury and sulphur dioxide, which produces acid rain. I won’t even go into ‘fracking’ for gas, or nuclear power here: there are complicated arguments for and against both, but neither is sustainable.
If you want to opt out of all this, two UK energy suppliers guarantee to supply 100% of their energy from renewables: one is Good Energy and the other is Ecotricity. Both energy companies generate some of their electricity from their own wind and solar parks, and buy in the rest with REGOs, which guarantee that the energy was produced from renewables somewhere (somewhat like an A.O.C. for energy!). This encourages renewable generation, and the building of new renewable energy facilities. Both companies are either price-competitive or actually cheaper than the ‘Big Six’ suppliers, which may be reason enough to switch in itself, now other suppliers tariffs have received their regular winter price-hike. They also supply ‘Green Gas’ which is gas from bio-digesters, and thus offer a dual-fuel deal.
Ealing Transition has become a partner of Good Energy, which means if you sign up with the link below, we’ll also benefit to the tune of £25, so you’ll be helping us do our work too: http://www.goodenergy.co.uk/affiliates/ealing-transition
Have you any other good ideas for Divesting? Why not let us know about them!
|
throw up
Definition from Wiktionary, the free dictionary
See also: throwup and throw-up
throw up (third-person singular simple present throws up, present participle throwing up, simple past threw up, past participle thrown up)
1. Used other than with a figurative or idiomatic meaning: see throw, up.
• Arthur Conan Doyle
• The servant who had first entered had thrown up the window []
2. (now colloquial) To vomit.
The baby threw up all over my shirt.
That cat is always throwing up hairballs.
3. To produce something new or unexpected.
This system has thrown up a few problems.
4. To cause something such as dust or water to rise into the air.
The car wheels threw up a shower of stones.
5. To erect, particularly hastily.
• 2001, Diane Kennedy, Loop-Holes of the Rich: How the Rich Legally Make More Money & Pay Less Tax, Warner Books, →ISBN, page 70,
In other words, a business can throw up a huge detour sign in the way of the government.
• 2007, Marissa Monteilh, Dr. Feelgood, Kensington Books, →ISBN, page 27,
The deal was that if anyone started catching feelings, he could throw up a stop sign and the other would honor it.
6. (transitive, intransitive) To give up, abandon something.
• 1859, Charles Dickens, A Tale of Two Cities
“No!” returned the spy. “I throw up. I confess that we were so unpopular with the outrageous mob, that I only got away from England at the risk of being ducked to death, and that Cly was so ferreted up and down, that he never would have got away at all but for that sham. Though how this man knows it was a sham, is a wonder of wonders to me.”
• 2011, Alan Bennett, "Baffled at a Bookcase", London Review of Books, XXXIII.15:
In 1944, believing, as people in Leeds tended to do, that flying bombs or no flying bombs, things were better Down South, Dad threw up his job with the Co-op and we migrated to Guildford.
7. To display a gang sign using the hands
• 2005, Brandon Bennett, Moon in Gemini, iUniverse, →ISBN, page 56,
Why don't you go on and throw up ya gang sign. Represent your hood, homey?
throw up (uncountable)
1. (colloquial) Vomit.
We had to scrub the seats for throw up when we left the dog in the car.
|
KANJI: MAN BAN (ten thousand, myriad)
ASCII Art Representation:
,%%%%% ,,%,
%%%%% %%%%
,%%%%% %%%%
%%%%% %%%%%
%%%%% %%%%
,%%%%" %%%%%
%%%%% %%%%
,%%%% ,%%%%
%%%%" %%%%
%%%%" ,%%%%
%%%%" %%%%
%%%%" %%%%%
,%%%% ,%%%%%
,%%%" ,,,,,,,%%%%%%
,%%%" ""%%%%%%"
,%%"" "%%"
Character Etymology:
There are several theories as to the etymology of this character. One suggests that the current character is a heavily simplified version of a pictograph of a scorpion, perhaps adopted because scorpions were extremely numerous at that place and time, though there must certainly also have been a phonetic connection.
This theory seems unlikely, however, as the current character for ten thousand has also been shown to be a variant of a character meaning twisting waterweed, borrowed phonetically. That theory has some credibility, as one direct descendant of the twisting waterweed character means something like number today.
Yet another theory suggests that the current character for ten thousand is indeed a simplification of the ancient swastika symbol, which has connotations of being all encompassing and by association myriad and thus numerous.
on-yomi: MAN BAN
kun-yomi: yorozu kazu ma yuru
Nanori Readings:
Nanori: kazu ma yuru
English Definitions:
1. MAN: ten thousand; myriad.
2. BAN: ten thousand; myriad; fully; if by any chance.
3. yorozu: ten thousand; myriads, all, everything.
Character Index Numbers:
New Nelson: 7
Henshall: 392
Unicode Encoded Version:
Unicode Encoded Compound Examples:
(goman): 50,000.
(banji): everything
(mannenhitsu): fountain pen.
Previous: release | Japanese Kanji | Next: taste |
What kid hasn’t imagined what it’s like having a tail? Let’s make it real!
Animatronics is tricky. In addition to the software and electrical engineering of most of our projects, it requires mechanical engineering. Normally when I talk with cosplayers just getting started with electronics, I point them toward a light or sound project, to limit the number of variables.
This project reduces animatronics to its simplest case. Mechanical tails are complex stuff, often involving armatures for bones and cable mechanisms to simulate muscles and tendons. Rather than fighting against gravity, we'll instead make it our ally. Thinking of the tail as a pendulum, a single small servo and a little math is all it takes.
“The cheapest, fastest, and most reliable components are those that aren’t there.”
— Gordon Bell
It’s not the most realistic, but that’s not the goal. We’re learning…the project is modest in scope, with just a few inexpensive components and minimal soldering.
Parts and tools needed:
• 5V Trinket microcontroller (not the Pro Trinket…just the basic tiny one!)
• Micro servo. (You can use a servo you already have, but our 3D-printed enclosure is designed around this one specifically.)
• Power source. Either:
• or:
• A tail! You can either use a natural raccoon tail (vendors often sell these at anime conventions and Renaissance faires), or sew a simple tube from synthetic craft fur. Depending how flopsy your tail is, you may need to beef it up a little with some plastic aquarium tubing or similar. It can’t be a huge, stuffed plush tail though…the servo we’re using is very small.
• 3D printer (but see below).
• #4-40 and #2-56 machine screws and nuts for assembling the enclosure.
• Cable “zip tie” for joining tail to servo clip.
You will also need basic soldering tools and paraphernalia.
But I don't have a 3D printer!
Fear not! With some crafting ingenuity, you can work around this…perhaps a cheap cell phone holster can provide a belt clip. Mint tins make great electronics enclosures (just make sure there’s no electrical contact). Parts such as servos can be held at different angles using materials like Shapelock plastic (aka Friendly Plastic, Instamorph, etc.).
You could also use a 3D printing service such as Shapeways.
There are a couple of different ways to power the project: either with a 3x AAA battery holder, or with a USB battery pack. Which power source you choose determines how things should be connected…
Wiring for 3X AAA Battery Holder
Advantages of using a 3x AAA battery holder are that it's inexpensive and the connection is robust; it’s not likely to fall out. You can also get replacement batteries just about anywhere. And the power wire is more discreet.
Alkaline batteries are recommended for this configuration; rechargeable NiMH cells have a lower voltage and won’t drive the servo with sufficient force. This is also why we’re not using a 3.7V LiPoly battery.
You’ll need to add a JST connector to the Trinket, and a JST extension cable. The battery pack can then be carried in a pocket.
Start by “tinning” one of the JST pads on the back of the Trinket…heat the pad and apply solder so the whole surface is covered.
Hold the JST socket in place (tweezers recommended) and re-melt the solder, allowing the part to sink into position.
Once this first pin is tacked down, the rest are easy. Remember to heat the parts, then apply solder…do not melt solder on the iron and “wipe” it on the parts…that makes a weak cold solder joint. Properly done, the connections should be shiny and smooth.
Congrats, you’ve done surface-mount soldering!
Clip off the connector from the servo, leaving about 3 inches of cable attached, then strip and tin the ends of the wires. Then solder the following connections:
Servo brown wire → Trinket GND pad
Servo red wire → Trinket BAT+ pad
Servo orange (signal) wire → Trinket Pin #0
Wiring for USB Power
I'm not especially fond of USB batteries for wearable projects — USB plugs lack a latching connector and pull out too easily — but the fact remains that a lot of people already have these battery packs around for charging a cell phone, and may want to put it to other uses.
You don’t need the JST socket or extension when using USB power, just a suitable USB cable. The battery can be carried in a pocket.
Servo brown wire → Trinket GND pad
Servo red wire → Trinket USB+ pad
Servo orange (signal) wire → Trinket Pin #0
Just three connections! Simple and adorable.
Why the different connections for USB vs AAA?
Servos sometimes need a lot of current. Best way to ensure this is to wire directly to the power source. The Trinket’s '5V' pin is only rated for 150 milliamps, but the servo may want as much as 350 mA at times. The 'USB+' pin goes directly to USB, and 'BAT+' directly to the battery pack.
In reality, we probably could get away with using the 5V pin…the servo only operates intermittently…but that would set a bad example. You might decide to adapt this project into new things that make more vigorous use of servos…wired wrong, either the servo response would be anemic or one would risk burning out the voltage regulator. So instead, there’s different wiring for different power sources.
Can I use a 3V Trinket instead?
Sure! In the Arduino IDE, select “Trinket 16 MHz” from the Tools→Board menu regardless.
I’m using a different servo with different-colored wires. Will it work?
Most likely, yes. The +V wire is always red. Ground is usually black or sometimes brown. Signal may be orange, yellow, white or blue.
3D Printing and Assembly
This enclosure is four pieces that are fastened together with machine screws. These parts are optimized to print on FDM-based machines with no support material.
Print Time
2 shells, 15% infill, 0.2 mm layer height, 40 mm/s print speed.
The individual parts take roughly 15, 35, 25, 10, and 5 minutes each to print.
The holes in the parts are sized for #4-40 3/8" flat Phillips machine screws. You'll need six screws for this project and a screwdriver.
Secure Clip to Bot
Insert the clip.stl part into the bot.stl part with the ledge fitting into the slit. They should have a snap-fit tolerance. Insert a single #4-40 screw into the tab so it threads through both parts. Fasten the screw only until the tip of the screw thread peeps through the other side.
Mounting Servo
Position the top part over the micro servo and line up the standoffs on the cover with the holes in the micro servo's tabs. Insert and fasten two #4-40 machine screws into the top part. The screws and micro servo should be flush with the surface of the top enclosure part.
Mounting Trinket
Insert the Trinket into the box.stl part with the USB and JST ports facing the cutout. It should be oriented upright with the mounting holes over the standoffs. Insert and fasten two machine screws into the box.stl part. The screw heads should be flush with the enclosure.
Installing Top
Position the top.stl part over the box.stl part where the lip inserts to the frame. Press it down into place. Insert and fasten a #4-40 machine screw into the mounting hole near the bottom center of the top.stl part.
Install Clip and Bot into Box
Place the bot.stl part over the box.stl part and press it into the part. Fasten the screw all the way through until the screw head is flush with the part.
Secure servo horn to tail
The large hole in the attachment is meant to snap fit onto the nub of the servo. The part with the slit is supposed to clip onto the zip tie. You'll want to get this attachment secured to the zip tie before pressing it onto the servo teeth. It's a tight fit, so you'll need to press firmly -- hard, but not too hard -- to clip it on.
Install attachment to servo
Get the servo teeth centered, with the orientation relative to your clip's position. You'll want to test fit this to see which way you need to install the attachment. Once you figure out which way is up and down, press the attachment onto the servo teeth. Insert and fasten a #2-56 machine screw into the hole of the attachment to secure it to the servo.
Install Ziptie
The slit in the servo horn is sized for a zip tie that's 4.5 mm wide by 1.4 mm thick. Find the center of the zip tie and gently fit it inside the slit of the servo horn. The zip tie is held in place with friction - apply adhesive for a permanent hold.
Tie the Tail
The tail we used in this project has a hoop near the top. It came with a small bead chain that was threaded through the hoop. It's a great place to thread the zip tie. String it through and tie it however tight you'd like.
When you run the software presented on the next page, you might find the tail is off-balance. Easily fixed! Just wait for the tail to come to rest, disconnect power, then unscrew the servo horn (the little piece to which our tail is tied) and reinstall it in a neutral (centered) position.
You’ll need to have the Arduino IDE software configured for use with the Adafruit Trinket. If you’ve not done that before, it’s explained in this guide.
Copy the code below and paste it into a new Arduino sketch.
(Instructions continue below, after the code.)
/*
Simple servo tail wagger for Adafruit 5V Trinket (not Pro) microcontroller.
Uses servo on pin 0. The tail has no 'tendons' -- it's a passive thing,
simply hanging off the servo -- though a weak 'spine' (such as aquarium tube)
adds just enough body to help. Pendulum math is then used to induce a
reasonable wag effect.
To break up the repetition and appear a little more 'alive,' the speed,
magnitude and duration of the tail wag is randomized (within certain ranges),
and it periodically settles down and stops (adds variety and also saves some
battery). There's an optimal period (single-swing time) for a given tail
length, but it may randomly go a little faster or slower than this to add
some 'english' to the wag.
You'll need to calibrate this, editing a few lines below. TAIL_LENGTH is the
length of the tail in meters (e.g. a 40 cm tail is 0.4 meters); for inches,
multiply by 2.54 to get centimeters, then divide by 100 for meters.
SERVO_MIN and SERVO_MAX are the pulse times (in microseconds) for the leftmost
and rightmost servo positions; though nominally these are 1000 and 2000 usec
(1.0 to 2.0 milliseconds), every servo in reality is a little different, and
you'll need to tune these values for your actual desired swing range.
*/
#ifdef __AVR_ATtiny85__
 #include <avr/power.h>
#endif

#define TAIL_LENGTH 0.4  // Nominal tail length (meters)
#define SERVO_PIN   0    // Servo is connected here
#define SERVO_MIN   500  // Servo pulse times
#define SERVO_MAX   1800 // in microseconds
#define SERVO_RANGE (SERVO_MAX - SERVO_MIN) // Usable pulse range (microseconds)

// Tail cycles through four states: off, ramp up, steady wag, ramp down.
// Durations are semi-random; this table sets min & max times for each.
static const uint8_t PROGMEM modeTime[4][2] = {
  {  4,  9 }, // 4 to 9 second off time
  {  3,  6 }, // 3 to 6 second ramp up
  {  4, 12 }, // 4 to 12 sec steady wag
  {  2,  5 }  // 2 to 5 sec ramp down
};
#define MODE_OFF       0
#define MODE_RAMP_UP   1
#define MODE_HOLD      2
#define MODE_RAMP_DOWN 3

uint8_t  tailMode = MODE_OFF;
float    wagnitude = 0.8, // Magnitude of current wag cycle
         period    = M_PI * 2.0 * sqrt(TAIL_LENGTH / 9.8);
uint32_t modeStartTime = 0,
         modeDuration  = 0,
         lastPulseTime = 0;

// SETUP just configures prescaler & enables servo output --------------------

void setup() {
#if defined(__AVR_ATtiny85__) && (F_CPU == 16000000L)
  clock_prescale_set(clock_div_1); // 16 MHz Trinket (not Pro) requires this
#endif
  pinMode(SERVO_PIN, OUTPUT);      // Servo pulses are bit-banged on this pin
}

// LOOP does all the tail-waggling math --------------------------------------

void loop() {
  uint32_t t = millis(); // Elapsed time, milliseconds

  // Compare time in current mode against planned duration
  if((t - modeStartTime) > modeDuration) { // Time's up!
    for(;;) {
      if(++tailMode > MODE_RAMP_DOWN)      // Cycle to next mode,
        tailMode = MODE_OFF;               // wrap if needed
      modeDuration = 1000 * random(        // Randomize mode duration
        pgm_read_byte(&modeTime[tailMode][0]),
        pgm_read_byte(&modeTime[tailMode][1]) + 1);
      if(tailMode != MODE_OFF) break;      // If 'off' mode...
      // Randomize magnitude of next wag cycle (70% to 100%)
      wagnitude = (float)random(7, 10) / 10.0;
      // Randomize tail length for next wag cycle (60% to 120%)
      float len = TAIL_LENGTH * (float)random(6, 12) / 10.0;
      // Solve for period (sec) from length (meters):
      period = M_PI * 2.0 * sqrt(len / 9.8);
      delay(modeDuration);                 // Stop servo,
      t = millis();                        // revise time and
    }                                      // mode-cycle again
    modeStartTime = t;                     // Save mode start time
  }

  // Calc amplitude of wag at current time (ramping up/down/steady)
  float a = wagnitude; // Assume MODE_HOLD
  if(tailMode != MODE_HOLD) {
    a = wagnitude * (float)(t - modeStartTime) / (float)modeDuration;
    if(tailMode == MODE_RAMP_DOWN) a = wagnitude - a;
  }

  uint16_t servoPulseLength = SERVO_MIN + (int)((float)SERVO_RANGE *
    ((sin(((float)t / 1000.0)   // Current time in seconds
          / period * M_PI * 2.0 // Seconds to wag cycles
         ) * a)                 // Sine wave * amplitude (-1.0 to 1.0)
     + 1.0) * 0.5);             // Convert to integer servo usec pulse

  // Handle servo pulse @ 50 Hz...
  while(((t = micros()) - lastPulseTime) < 20000); // Wait for it...
  digitalWrite(SERVO_PIN, HIGH);
  delayMicroseconds(servoPulseLength); // Pulse width sets servo position
  digitalWrite(SERVO_PIN, LOW);
  lastPulseTime = t;
}
Measure the length of your tail and convert to meters. Perfectly fine to use round “ish” numbers…it’s not rocket science. For example, a 12 inch tail is about 30-ish centimeters, or 0.3 meters.
Look for this line in the Arduino code, and replace the number there with your measurement:
#define TAIL_LENGTH 0.4 // Nominal tail length (meters)
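For the 12-inch (roughly 0.3 m) tail in the example above, the edited line would read:
#define TAIL_LENGTH 0.3 // Nominal tail length (meters)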
In the Tools→Board menu, select “Adafruit Trinket 16 MHz.” Connect a USB cable and click the Upload button.
Normally the reset button is used to start the Trinket bootloader and upload code…but inside the enclosure, this can’t be reached. Plugging in the USB cable has the same effect…there’s about a 10 second window to start uploading code to the board.
After uploading…if using the 3xAAA battery pack, unplug the USB cable and switch the battery pack on. The servo will not run off USB in this configuration. This is normal.
If all goes well, the tail will be quiet for a moment, then gradually start to wag. After a few seconds it’ll settle down, then start up again, perhaps with a slightly different speed.
If the swinging is off-center, there’s a fix. Look for these two lines in the code:
#define SERVO_MIN 500 // Servo pulse times
#define SERVO_MAX 1800 // in microseconds
Servos expect a continuous series of pulses (about 50 times per second…50 Hz). The “on” time of each pulse indicates the servo position. Nominally this time is said to be between 1 and 2 milliseconds (1,000 and 2,000 microseconds) to represent the full range of motion…but every servo’s a little different, and the range of values could go higher or lower. So you may need to experiment with different values and upload the revised code to the board.
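As a quick worked example with the values used in this sketch: SERVO_MIN of 500 and SERVO_MAX of 1800 give a usable range of 1,300 microseconds, so the tail's rest position corresponds to a pulse near (500 + 1800) / 2 = 1,150 microseconds. Shifting both numbers up or down together moves that center point, while widening or narrowing the gap between them changes how far the tail swings.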
If the amount of rotation is acceptable, and it’s not running up against the left or right limits of the servo…just off-center…an easier option is just to unscrew the servo horn (the bit to which the tail is tied) and re-attach it at the desired angle.
How does it work?
The code is based on a formula devised by Galileo Galilei in the early 1600s…that the period (swing time) of a pendulum is much more dependent on its length than the amplitude of its swing…in fact, for small swing ranges the amplitude can be ignored.
The formula is T = 2 × π × √(L / g). L is the length of the pendulum (our tail), in meters. g is acceleration due to gravity — about 9.8 meters/second² on Earth. T is the resulting approximate period in seconds. Then we just use a sine wave matching that period.
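As a worked example: for the default 0.4 m tail length used in the code, T = 2π × √(0.4 / 9.8) ≈ 1.3 seconds per full swing; a shorter 0.3 m tail works out to roughly 1.1 seconds.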
The code cycles through four states: resting (no wagging), ramp up, steady wag, and ramp down. During all but the resting phase, the period of the swing remains the same (using the above formula)…only the amplitude changes. Gradually ramping up imparts just a little bit of extra energy on each cycle…a bit like kicking your legs on a swing. This is how we’re able to use such a small and inexpensive servo.
The four states are slightly randomized…it may rest or wag a little more or less on some cycles, and may go a bit faster or swing higher at times. A little unpredictability like this helps make it a little more believable or “alive” — our brains pick up very quickly on obvious repeating patterns!
This guide was first published on May 10, 2015. It was last updated on Sep 20, 2018. |
How has the past shaped present-day development?
History doesn’t just happen; it is made by real people who faced real challenges, who had uncertainty about the future, just as we do today. McCullough (2003) once said that history is not about the past: if you think about it, no one ever lived in the past; they lived in their own present. Additionally, history can be an important instrument that informs our approach to critical issues today. It is important to remember that when history is made it becomes a piece of our world, a factor in our future decisions. As Heilbroner (1974) urged, we should “explore the past to understand the present and shape the future”.
How does the 1929 Colonial Development Act compare to the approach to development of the British government today?
The history of British overseas aid goes back a long way, throughout the nineteenth century. According to George (1929), the Colonial Development Act was introduced for the purpose of promoting colonial development, and it stressed the importance of a duty to humanity to develop the vast economic resources of the great continent. Before the Act, colonial development was a matter primarily for the colonies themselves: they were required to finance their economic development from the proceeds of sales of their export crops and whatever private international capital they could attract. The Act introduced an entirely new concept of colonial development in which the provision of annual grants and loans would prove mutually advantageous to the colonial territories.
However, other studies argue that the Act was passed as a means rather than as an end in itself: for example, that it was passed in order to benefit the economy of the United Kingdom rather than the colonies. The Act was regarded as a sort of multiplier with both forward and backward linkage effects, a twin concept which is now quite familiar to development economists. It also excluded investment in social development in general and in education in particular; the Act seems to have placed too much emphasis on the material and physical aspects of development.
Of course, modern-day development studies is quick to distance itself from the days of colonialism; but below the surface, the concept of modern-day ‘foreign aid’ is largely the same as it was back in 1929; we are just not as honest about it. Development aid nowadays is still very much regarded as having dual objectives. A lot of countries, including the UK, give aid in the hope that the countries they are investing in will soon begin consuming British goods, which would subsequently lead to an increase in the UK’s economic growth and give a boost to the UK’s export market.
What is the significance of the past in the present for development studies?
Sharing knowledge is a vital component in the growth and advancement of our society in a sustainable and responsible way, so the past is significant in the present for development studies. As Alfini and Chambers (2010) observe, the prevailing words and expressions in development discourse keep changing: some become perennials, long-term survivors year after year, like poverty, gender, sustainable, and livelihood. Others have their day and then fade, like scheme and integrated rural development. Yet others mark major shifts in ideology, policy, and reality, as have liberalisation, privatisation, and globalisation.
In conclusion, our language influences both policy and practice in development. Studying how the language of development policy has changed, and how development was practised in the past, can give us a sense of the historical shifts in development thinking and priorities, and help us to reflect on where we are going (or could go) in the future.
1. McCullough, D. (2003). The Course of Human Events. Jefferson Lecture.
2. George. A Re-examination of the 1929 Colonial Development Act. The Economic History Review, p. 68.
3. Alfini, N., & Chambers, R. (2010). Words Count: The Changing Language of British Aid.
4. Heilbroner, R. (1974). An Inquiry into the Human Prospect. New York: W. W. Norton and Company.
5. (22 September 2010); (accessed 9 October 2013).
|
Disappearing Rectangles
Imagine a rectangular piece of paper with several shapes drawn along the midline. Cut the paper in two and then cut the upper portion so that it becomes possible to exchange its two pieces. After the exchange there appears to be one shape fewer than before. This is in the spirit of Sam Loyd's famous puzzle Disappearing Leprechauns.
The applet below is a much simplified version of Sam Loyd's invention. Both the upper and lower portions can be dragged left or right, and the number of shapes can be determined by clicking on the bold number at the bottom left of the applet.
(Tony Fatseas offers a beautiful replica of the original Sam Loyd's "Get Off the Earth" puzzle.)
Copyright © 1996-2018 Alexander Bogomolny
|
Simple and Easy Tips to Control Anger in a Relationship!!
Anger is a basic human emotion that every one of us experiences, and it is triggered when a person is emotionally hurt. It is an unpleasant feeling that occurs when a person is mistreated, opposed, injured, or has her own views dismissed. Here we present the best tips to control anger in a relationship. Thank us later.
According to psychology, Anger is a completely normal and healthy human emotion if it’s under control.
If a person's anger goes out of control, it becomes destructive and causes problems in that person's life. It leads to self-destruction, spoils relationships, causes problems at work, and more.
So what is the cause of Anger?
You might be angry at a person, an event, or other personal problems. Anger is caused by both internal and external events. Even past memories can trigger anger in a person.
These days the most common cause of anger is an event, such as a traffic jam or a cancelled plan.
How does the body react to Anger?
People generally think anger makes your “blood boil” or your eyes “red”, but that is not literally true.
When a person is angry the muscles become tense, blood pressure rises, adrenaline pours into the bloodstream, the heart pumps faster, and blood flow increases. Your body generates more energy to take action.
What if you do not control your anger?
It can cause personal problems, spoil a happy relationship, lead to serious health problems, and fuel violent behavior that can lead to crime and abuse.
In some cases, though, anger helps you achieve your goals by firing up the senses.
Anger is also a valuable signal as it lets us know if something is going wrong.
Every relationship has ups and downs, but anger can make your relationship with a person worse. People often start a blame game and turn negative in conversation when angry.
Jealousy is the biggest source of anger in relationships; it is a mixture of insecurity, fear, and anger. Hate is another source, when you dislike a person or direct your anger towards someone.
Gossip, keeping secrets, competitiveness, teasing and name calling, and balancing time between friends and dating partners all lead to anger in friendships.
Anger in relationships is triggered when your partner doesn't call or doesn't take your call, ignores your text messages, is seen talking to another girl, or is caught with someone in a compromising situation.
In relationships, anger can hurt the other person's feelings, can turn into domestic violence, and can cost you respect.
To address these anger problems, we at GirlsXP will tell How to Control Anger in a Relationship
1. While angry, attempt to see things from other person’s perspective
Before you unleash your anger on a person, make an attempt to see things from the other person's perspective. This will help you think and react accordingly. Yelling or shouting at each other will not help your relationship; it will make it even worse. It is very important to understand your partner's perspective and why he or she is angry.
2. Recognize your body’s reactions to anger
It is very important to recognize your body's reactions to anger. Women tend to speak more in anger than men do, engaging in long conversations in the hope of reaching a conclusion. While angry, it is better not to have a conversation at all.
3. Learn stress management techniques and relaxation skills
If you get angry more often than you would expect, you should learn how to manage your stress and personal life. Physical activity can help you reduce stress; go for a walk instead when you are angry.
4. Express your true feelings to your partner
Anger in no way will improve your relationship. Instead, express your feelings with clarity to your partner without hurting others. Do not confront the other person in anger.
5. The key to anger reduction is “knowing yourself”
Women at work often assume a task can be finished at home and end up bringing work into the bedroom; instead of a romantic night, you spend it completing tasks. Complete all your important jobs before they become urgent.
Most people remember only about 20% of what they hear; understanding this fact can help reduce your anger. Remember, you cannot change others as easily as you can change yourself.
When you are angry and upset try 1-2-3 TURTLE
1- Go inside your shell and think before you act. Take a “Time Out”.
2- Take 3 Deep Breaths, time to relax and calm yourself down.
3- Walk Away and think of a good solution.
Anger is one letter short of D.A.N.G.E.R! |
J Dent Res. 2015 Oct;94(10):1341-7. doi: 10.1177/0022034515590377. Epub 2015 Aug 10.
Diet and Dental Caries: The Pivotal Role of Free Sugars Reemphasized.
Author information
London School of Hygiene and Tropical Medicine, London, UK, and World Obesity, London, UK.
The importance of sugars as a cause of caries is underemphasized and not prominent in preventive strategies. This is despite overwhelming evidence of its unique role in causing a worldwide caries epidemic. Why this neglect? One reason is that researchers mistakenly consider caries to be a multifactorial disease; they also concentrate mainly on mitigating factors, particularly fluoride. However, this is to misunderstand that the only cause of caries is dietary sugars. These provide a substrate for cariogenic oral bacteria to flourish and to generate enamel-demineralizing acids. Modifying factors such as fluoride and dental hygiene would not be needed if we tackled the single cause--sugars. In this article, we demonstrate the sensitivity of cariogenesis to even very low sugars intakes. Quantitative analyses show a log-linear dose-response relationship between the sucrose or its monosaccharide intakes and the progressive lifelong development of caries. This results in a substantial dental health burden throughout life. Processed starches have cariogenic potential when accompanying sucrose, but human studies do not provide unequivocal data of their cariogenicity. The long-standing failure to identify the need for drastic national reductions in sugars intakes reflects scientific confusion partly induced by pressure from major industrial sugar interests.
cariogenic; dose-response; epidemiology; guidelines; multifactorial; review
[Indexed for MEDLINE]
|
What is a panic attack?
A panic attack is an episode of intense anxiety and fear that causes physical symptoms. A panic attack can occur when your flight-or-fight response is triggered, but you aren’t in immediate physical danger. According to the Australian Bureau of Statistics, up to 40% of the population will experience a panic attack at some point in their lifetime.
What does a panic attack feel like?
A panic attack can be very frightening, particularly if you don’t know what is happening to you. You may feel completely overwhelmed and are likely to experience some of the following symptoms:
• Accelerated heart rate
• Excessive sweating
• Tight chest
• Trembling
• Shortness of breath or difficulty breathing
• Feeling light-headed or dizzy
• Stomach in knots or nausea
• Fear of dying or losing control
• Feelings of detachment
When you have a panic attack, the symptoms feel very intense. A panic attack can last from a few minutes up to half an hour. The intensity of an attack usually peaks within ten minutes and then starts to subside. Read our article on how you can deal with a panic attack.
What can cause a panic attack to happen?
Panic attacks can occur at any time. You may be in a calm state or already anxious when it happens. The exact causes of a panic attack aren’t clear, but they may include:
• Ongoing stress
• A traumatic event that has caused acute stress
• A physical illness
• Intensive exercise
Is a panic attack the same as anxiety?
One difference between a panic attack and anxiety is how long your symptoms last. Anxiety tends to be short-lived and is caused by a specific stressful situation (e.g. you are about to give a presentation) and once the stressor goes away so does the anxiety. Panic attacks appear to happen out of the blue and aren’t necessarily triggered by a specific stressor. Panic attacks are also much more intense than feeling anxious.
What is a panic disorder?
A panic disorder is when someone has recurring panic attacks that are disabling. These panic attacks happen at unexpected times and the person worries for at least a month about another one returning. People with a panic disorder may significantly change their behaviour to avoid having a panic attack. Around 5% of Australians will experience a panic disorder. Having a panic attack does not mean you have a panic disorder.
If you are experiencing panic attacks, you can get help. Speak to your doctor or call one of our Suicide Call Back Service counsellors on 1300 659 467.
If it is an emergency dial 000. |
Cortisol, A.M.
The Cortisol, A.M. test contains 1 test with 1 biomarker.
Cortisol is increased in Cushing's Disease and decreased in Addison's Disease (adrenal insufficiency). Patient needs to have the specimen collected between 7 a.m.-9 a.m.
Also known as: Cortisol AM
Cortisol, A.M.
A cortisol level is a blood test that measures the amount of cortisol, a steroid hormone produced by the adrenal gland. The test is done to check for increased or decreased cortisol production. Cortisol is released from the adrenal gland in response to ACTH, a hormone from the pituitary gland in the brain. Cortisol affects many different body systems. It plays a role in bone health, the circulatory system, the immune system, the metabolism of fats, carbohydrates, and protein, and the nervous system and stress responses. |
(redirected from Insurrections)
noun anarchy, defiance, disorder, disturbance, insubordination, insurgence, insurgency, motus, mutiny, noncompliance, outbreak, overthrow, political upheaval, rebellio, rebellion, resistance to government, revolt, revolution, riot, rising, seditio, sedition, uprising
See also: anarchy, commotion, defiance, disloyalty, mutiny, outbreak, outburst, rebellion, resistance, revolt, revolution, riot, sedition, treason
References in periodicals archive?
Federal action to quell insurrections is not only "desirable"--in fact, it is expressly authorized by the Constitution.
The creation of the PCA Court by today's Congress would be consistent with the goals of the 1795 Congress that deleted the judicial certification requirement--namely, removing impediments to the president's ability to deploy troops domestically to suppress insurrections.
The book closely links the need to understand better the importance of unarmed insurrections to what it argues to be a decline in successful armed insurgencies in the late 20th century.
In the process of joining insights from the study of social movements and the literature on unarmed insurrections, the book inevitably touches on a multitude of interrelated issues, obviously without being able to give sufficient attention to each one.
This reference for scholars contains primary documents describing the significant insurrections and rebellions in the American colonies during the period 1675-1690.
The hopes and passions that were aroused by the insurrections of the period led a great many people to want to build something larger and firmer and more powerful than a student movement could hope to be--a revolutionary party for the adult world and not just for the student neighborhoods.
Everyone knew that, given a few chance events, Harlem might erupt in an insurrection of its own, something bigger and more violent than anything taking place up the hill at Columbia--and this was a source of genuine power for our strike.
"Violent insurrection is a very high-risk proposition," he told me.
Among the repertoires cited by Tarrow are urban insurrections, strikes, demonstrations of many types, bread riots and grain seizures, though he does not mention hiring fairs.
and its coalition allies restore Saddam Hussein to power in order to put down the growing insurrection there?
Van Young provides a solid overview of the 1808-1810 agricultural crisis that helped trigger the insurrection. |
Thursday, May 10, 2018
The Suspicious Border Gasoline Boom
Stirring up cocaine in a jungle laboratory. The process uses lots of gasoline. (Photo: Business Insider)
The town of Argelia, in the department of Cauca, has some 27,000 residents and 19 gas stations. However, last year the municipality racked up 3.4 million gallons of gasoline sales and 642,000 gallons of diesel sold - some three times as much as just two years before and higher per-capita than sales in Bogotá - despite Argelia's poverty and few cars.
Argelia, Cauca: booming gasoline market. Photo: RCN Radio
The paradoxical situation, reported by El Tiempo, may have a simple explanation: Argelia and other rural agricultural communities with booming fuel sales also happen to have booming coca leaf and cocaine economies - and gasoline and diesel are basic ingredients for converting leaves into the drug.
The problem intersects with many other dysfunctions. Because cocaine is illegal, those many millions of gallons of fuel are handled without any regulation or safety controls. So, once exhausted, the solvents are often just dumped into rivers or onto the jungle floor.
The popularity of motor fuels for cocaine production is partly the fault of the Colombian government, which spends a fortune every year on fuel subsidies - particularly in border areas. Fuel subsidies make little sense socially or economically, since they go disproportionately to the wealthy. And they make absolutely no sense at all environmentally, since fossil fuel production takes a huge environmental toll all along its life cycle and burning fuels contributes to global warming. But the subsidies make sense politically, since they buy votes. The pressure to subsidize gasoline is particularly strong in areas near Ecuador and Venezuela, which subsidize their fuels much more than Colombia does.
And by subsidizing gasoline for drivers, Colombia also does so for drug producers.
Officials may be considering various strategies to deal with this problem - although none of them likely involves raising the price of gasoline to pay for its social and environmental impacts, which would not only make cocaine production more expensive, but would also reduce traffic jams, clean the air and generally make Colombia's cities more liveable.
Solutions which Colombia is more likely to try, such as rationing fuel in some border areas, will inevitably increase smuggling and contraband, particularly from Venezuela, and enrich criminal organizations. Officials also talk about adding a chemical ingredient to fuels to make them less effective for drug making. But expect drug makers to find a way to neutralize this chemical and to increase smuggling across the border, since Colombia's neighbors certainly won't add the ingredient to their own fuels.
There's only one way to end these absurd attempts to use ineffective policies to repair a failed one - decriminalize drugs and minimize their harm.
By Mike Ceaser, of Bogotá Bike Tours
|
Honey bee
A honey bee (or honeybee) is any member of the genus Apis, primarily distinguished by the production and storage of honey and the construction of perennial, colonial nests from wax. Because of bees, flowering plants grace our planet with beauty and food. Our honey bees are just one of some 20,000 bee species that do this demanding work.
(Image captions: a bumblebee visiting a Zinnia flower, one of about 50 different types of bumblebees, Bombus sp.; distribution of Africanized honeybees in the U.S.) Mission statement: Honey Bee Suite is dedicated to honey bees, beekeeping, wild bees, other pollinators, and pollination ecology. CCD (colony collapse disorder) is unique due to the lack of evidence as to what causes the sudden die-off of adult worker bees, as well as the few to no dead bees found around the hive. In cold climates, some beekeepers have kept colonies alive with varying degrees of success by moving them indoors for winter. The work of thousands of scientists and beekeepers has contributed to my understanding of bees, and to them I am immensely grateful. Biesmeijer demonstrated that most shakers are foragers and that the shaking signal is most often executed by foraging bees on pre-foraging bees, concluding that it is a transfer message for several activities or activity levels. Drones (males) are produced from unfertilized eggs, so they represent only the DNA of the queen that laid the eggs. As the field bees forage for nectar, pollen sticks to the fuzzy hairs that cover their bodies. This complex apparatus, including the barbs on the sting, is thought to have evolved specifically in response to predation by vertebrates, as the barbs do not usually function and the sting apparatus does not detach unless the sting is embedded in fleshy tissue. On average during the year, about one percent of a colony's worker bees die naturally per day. One other possible hypothesis is that the bees are falling victim to a combination of insecticides and parasites. Flower nectar is one of two food sources used by honeybees.
Honey bee Video
Honey Bee, by Blake Shelton (with lyrics).
The worker dies after the sting becomes lodged and is subsequently torn loose from the bee's abdomen. Once mated, queens lay eggs and fertilize them as needed from sperm stored in the spermatheca. Workers born in spring and summer will work hard, living only a few weeks, but those born in autumn will remain inside for several months as the colony clusters. Worker bees secrete the wax used to build the hive; they clean, maintain and guard it, raise the young, and forage for nectar and pollen, and the nature of the worker's role varies with age. Modern hives also enable beekeepers to transport bees, moving from field to field as crops require pollination, a source of income for beekeepers. Insect predators of honeybees include the Asian giant hornet and other wasps, robber flies, dragonflies such as the green darner, the European beewolf, some praying mantises, and the water strider. Their stings are often incapable of penetrating human skin, so the hive and swarms can be handled with minimal protection. They are typically dark in color with bands of whitish hairs running across the abdomen and range in size from 5 to 25 mm. Feral bee populations were greatly reduced during this period; they are slowly recovering, primarily in mild climates, due to natural selection for varroa resistance and repopulation by resistant breeds.
|
Esther Chapter 1
1It happened in the days of Ahasuerus—that Ahasuerus who reigned over a hundred and twenty-seven provinces from India to Nubia. 2In those days, when King Ahasuerus occupied the royal throne in the fortress Shushan, 3in the third year of his reign, he gave a banquet for all the officials and courtiers—the administration of Persia and Media, the nobles and the governors of the provinces in his service. 4For no fewer than a hundred and eighty days he displayed the vast riches of his kingdom and the splendid glory of his majesty. 5At the end of this period, the king gave a banquet for seven days in the court of the king’s palace garden for all the people who lived in the fortress Shushan, high and low alike. 6[There were hangings of] white cotton and blue wool, caught up by cords of fine linen and purple wool to silver rods and alabaster columns; and there were couches of gold and silver on a pavement of marble, alabaster, mother-of-pearl, and mosaics. 7Royal wine was served in abundance, as befits a king, in golden beakers, beakers of varied design. 8And the rule for the drinking was, “No restrictions!” For the king had given orders to every palace steward to comply with each man’s wishes. 9In addition, Queen Vashti gave a banquet for women, in the royal palace of King Ahasuerus.
10On the seventh day, when the king was merry with wine, he ordered Mehuman, Bizzetha, Harbona, Bigtha, Abagtha, Zethar, and Carcas, the seven eunuchs in attendance on King Ahasuerus, 11to bring Queen Vashti before the king wearing a royal diadem, to display her beauty to the peoples and the officials; for she was a beautiful woman. 12But Queen Vashti refused to come at the king’s command conveyed by the eunuchs. The king was greatly incensed, and his fury burned within him.
13Then the king consulted the sages learned in procedure. (For it was the royal practice [to turn] to all who were versed in law and precedent. 14His closest advisers were Carshena, Shethar, Admatha, Tarshish, Meres, Marsena, and Memucan, the seven ministers of Persia and Media who had access to the royal presence and occupied the first place in the kingdom.) 15“What,” [he asked,] “shall be done, according to law, to Queen Vashti for failing to obey the command of King Ahasuerus conveyed by the eunuchs?”
16Thereupon Memucan declared in the presence of the king and the ministers: “Queen Vashti has committed an offense not only against Your Majesty but also against all the officials and against all the peoples in all the provinces of King Ahasuerus. 17For the queen’s behavior will make all wives despise their husbands, as they reflect that King Ahasuerus himself ordered Queen Vashti to be brought before him, but she would not come. 18This very day the ladies of Persia and Media, who have heard of the queen’s behavior, will cite it to all Your Majesty’s officials, and there will be no end of scorn and provocation!
21The proposal was approved by the king and the ministers, and the king did as Memucan proposed. 22Dispatches were sent to all the provinces of the king, to every province in its own script and to every nation in its own language, that every man should wield authority in his home and speak the language of his own people.
|
Sunday, September 13, 2015
Should I install Windows 10 on my computer?
Sunday, February 8, 2015
It's not safe out there
Many people have asked me how to prevent being infected with a virus or spyware. My answer is that you should use a good antivirus program and perhaps a separate anti-malware program (such as Malwarebytes). However, even the best antivirus program is no substitute for common sense and good surfing behavior. Unfortunately, the bad guys come up with all manner of ways to trick inexperienced users into visiting sites or downloading programs that are not in their best interest. It's important to look carefully at everything you click on and, if you don't clearly understand what you're doing, STOP! Because most web sites exist to make money, you've got to be extremely vigilant to not allow them to take unfair advantage of you.
Most web sites in themselves are not dangerous. But they often sell ad space to other companies (this is how a lot of "free" sites make money). When these "banner" ads are sold and placed, the company that owns the web site may not spend a whole lot of time evaluating the content. Maybe they're too busy counting their money? As a result, questionable ads may appear even on what should be considered legitimate web sites. As an example, here is a banner ad that appeared on a well-known web site on April 19, 2013. I know for a fact that this was not the result of another infection, as this appeared on a computer that had just been rebuilt, and this was the very first time it had been connected to the Internet.
Notice the circled ad in the middle of the page. This looks like something you might need from the description and it looks vaguely familiar. In particular, notice the logo:
And see how similar it looks to the logo for Flash Player from Adobe:
It's important to remember that Adobe Flash Player is a legitimate program but Flash Video Downloader is NOT. The small (very small) text just below and to the right of this ad gives a clue with the phrase "GetSavin - About this ad". A Google search for GetSavin will bring up numerous pages as to how to get rid of this. An explanation as to what it ACTUALLY is:
If you are seeing in-text advertisements and pop-up ads from “Ads by GetSavin” within Internet Explorer, Firefox or Chrome, then your computer is infected with an adware program.
GetSavin is an adware program that is commonly bundled with other free programs that you download off of the Internet. Unfortunately, some free downloads do not adequately disclose that other software will also be installed, and you may find that you have installed GetSavin without your knowledge. GetSavin is advertised as a program that will enhance your experience while viewing a video on YouTube. Though this may sound like a useful service, the GetSavin program can be intrusive and will display ads whether you want them to or not.
GetSavin is an ad-supported (users may see additional banner, search, pop-up, pop-under, interstitial and in-text link advertisements) cross-web-browser plugin for Internet Explorer, Firefox and Chrome, distributed through various monetization platforms during installation. GetSavin is typically added when you install other free software (video recording/streaming, download managers or PDF creators) that has bundled this adware program into its installation. When you install these free programs, they will also install GetSavin. Some of the programs that are known to bundle GetSavin include "Youtube Downloader HD", "Fast Free Converter", "Video Media Player 1.1" and "DVDX Player 3.2".
When installed, GetSavin will display advertising banners on the webpages that you are visiting, stating that they are brought to you by “Ads by GetSavin”.
The justification for the GetSavin Ads according to its author, is that it helps recover programming development cost and helps to hold down the cost for the user.
The problem is, this stuff gets installed without your knowledge. You may have given your permission to install it, but it's your permission without your informed consent.
Bottom line: if a well-known web site can carry something like this, is there really any safe web site? There is no substitute for common sense. If you have the tiniest question about something being installed, don't do it. Instead, research the item and make 100% sure it's something you need.
Tuesday, February 3, 2015
Simple steps to troubleshoot your internet connection
Next time the internet is slow or non-working, try the following to isolate the problem:
1. Click on the Start button and in the “Search Programs and Files” box, type CMD
2. Right click on the CMD icon at the top of the menu and left click on “Run as administrator”.
3. Answer Yes to the User Account Control dialog. This will then open a black DOS window.
4. Type "ipconfig /all" and press the Enter key. Look through the information on the screen, find your IPv4 Address, Default Gateway, and DNS Server values, and write them down. (Your numbers will likely be different from those shown in this example.)
5. Click in this window and at the blinking cursor, type "ping" followed by the IPv4 address you wrote down, and press Enter. You should receive 4 replies. This is the numeric address of your computer.
6. Next, type "ping" followed by the Default Gateway address and press Enter. This is the internal address of your router. With each of these tests, you should receive 4 replies.
7. Now type "ping" followed by your DNS server address and press Enter. This server MUST respond for you to be able to get to the internet. This is the computer that translates a name (like a web site address) into numbers.
8. Finally, type "ping" followed by a web site name and press Enter. This should return a series of replies as well.
If you don’t get a reply at any of these steps, that is where the problem most likely is. If you get replies from ALL of these, but still cannot get on the internet, then the problem is with your web browser and that will need to be addressed separately.
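If you run these checks often, the same sequence can be scripted. The sketch below is not from the original article; it assumes a Windows machine (the "-n" ping switch) and uses placeholder addresses, so substitute the IPv4, gateway, and DNS values you wrote down from "ipconfig /all".

```python
"""Minimal sketch automating the four-step connectivity check described above."""
import subprocess

# Hypothetical example values -- replace with your own from "ipconfig /all".
CHECKS = [
    ("your own computer", "127.0.0.1"),
    ("your router (Default Gateway)", "192.168.1.1"),
    ("your DNS server", "8.8.8.8"),
    ("a web site by name (tests DNS)", "example.com"),
]

def ping(host: str) -> bool:
    """Send 4 echo requests (Windows 'ping -n 4') and report success."""
    result = subprocess.run(
        ["ping", "-n", "4", host],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

for label, host in CHECKS:
    ok = ping(host)
    print(f"{label:35s} {host:15s} {'OK' if ok else 'FAILED'}")
    if not ok:
        print(f"The problem is most likely at: {label}")
        break
```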
Here's an interesting article I came across a couple of years ago. Most of my customers are VERY reasonable, but I have had some occasions where I've received inappropriate calls. I once received a call at 6:00 am on a Sunday morning and the caller truly seemed puzzled as to why I wasn't more receptive to their request. Another person typically spends about $100 to $200 a year on support and feels this entitles him to call every couple of weeks "just to ask a quick question". I don't mind being helpful, but I'm a one-man shop and simply can't provide 24-7-365 support.
Ten reasons not to fix computers for free
Do you feel like a heel if you don't want to fix computer problems for friends and family? Here are some of the reasons you shouldn't feel guilty.
Like most IT pros, I have had plenty of friends and family members ask me to fix their PCs. Although I have always tried to help people whenever I can, I have come to the realization that with a few exceptions it is a bad idea to fix people’s PCs for free.
Don’t get the wrong idea. There are some people that I truly don’t mind helping. I would never refuse to help my wife with a computer problem, nor would I cut off my mother. Unfortunately though, the majority of those that I have helped have abused the situation. As such, this article is a list of ten reasons why I don’t recommend fixing PCs for free.
1. Future problems are your fault
When a friend or family member asks you to fix their computer, they do so because they do not know enough to fix the problem themselves. Because the person typically does not understand the cause of or the solution to the problem, they probably also are not going to understand which problems are related and which are not. As a result, anything that happens to the computer after you touch it may be perceived to be your fault. All the computer’s owner knows is that the problem did not occur until after you worked on the computer.
2. People may not respect your time
Before I stopped fixing computers for friends and family, I had a big problem with people not respecting my time. Friends would call me at all hours of the day or night and expect me to drop whatever I was doing, drive to their house, and fix their computer right then.
3. Things sometimes go wrong
The third reason why I don’t recommend fixing people’s computers for free is because if you break it, you bought it. I have never personally run into a problem with this one, but I do know someone who brought a friend’s laptop home to fix, only to have his three year old daughter knock the laptop off the table and break it.
4. People don’t value things that are free
People seem to be conditioned to accept the idea that the best things in life are those that are the most expensive. This can be a problem when it comes to fixing people’s computers for free, because your advice might be perceived as carrying no more weight than anyone else’s.
To give you a more concrete example, there is someone in my family who constantly calls me with computer questions. I try to be nice and answer the questions, but often times this person does not like the answer. In those situations this person will tell me that my brother, my aunt, or somebody else in my family with absolutely no IT experience told them the opposite of what I am telling them. Inevitably, this person ends up ignoring my advice.
5. They expect free tech support for life
When you fix someone’s computer for free and you do a good job, you can become a victim of your own success. The next time that the person needs help, they will remember what a good job you did. In the future you may be asked to assist with everything from malware removal to operating system upgrades.
6. People adopt risky habits because they are getting free tech support
This one might be my biggest pet peeve related to helping friends with their computer problems. If a friend or family member assumes that you will always be there to bail them out when they have computer problems, then they have no incentive to try to prevent problems from happening. As such, they might adopt risky habits or even do some things that just do not make sense.
I will give you a couple of quick examples of this one. I have one friend whose teenage son infected his computer with all sorts of malware while trying to find free adult content on the Internet. The infection was so bad that it took me all weekend to fix. I suggested to my friend that he either keep his son off of his computer, or only allow him to access the Internet through a hardened sandboxed environment. A few days later my friend told me the infection was back. After asking him a few questions, I discovered that he had given his son the admin password so that he could “download something for school.”
The other example was that I once did a hard disk replacement for a family member. I won’t bore you with the details, but the hard disk replacement was anything but smooth. There were issues with everything from BIOS compatibility to the physical case design. After spending all evening working on it, I finally got everything working. By the time that I arrived home I had a message on my voice mail from the person whose computer I had just upgraded. She said that she had let her eight-year-old son disassemble the computer because she wanted him to learn about computers, but he couldn’t figure out how to put it back together.
7. It doesn’t end with computers
Another reason why I don’t recommend doing free computer repairs for friends or family is because the job might not end with computer repairs. Once the person figures out that you are good with electronics they may have you working on other things. For instance, I once helped a neighbor recover some data off of a failed hard disk. Two weeks later he had me on the roof helping to realign his satellite dish.
8. Things can snowball
Sometimes when you fix a friend’s computer for free, the expectations of free technical support can snowball into free support for everyone. I once fixed a computer for someone in my family. When I was done, the person told me that they have a friend who is also having problems and asked if I could look at that too.
9. Your service isn’t just free, it is costing you money
For instance, you are probably spending money on gas to drive to your friend’s house. You might also end up using supplies such as blank media or printer ink. I have even had friends who expect me to supply them with the software licenses.
10. Fixing computers is too much like work
The best reason of all for not fixing friends' computers for free might be that doing so is too much like work. If you spend all day at work fixing computer problems, do you really want to deal with the same thing when you leave the office?
What is your policy on volunteering your tech skills for friends and family? |
Minerva Roman Goddess – Mythology, Symbolism, Meaning and Facts
Roman mythology consists of traditional stories about gods and heroes in ancient Rome. Actually, when the Romans conquered other lands, they also adopted many things from their cultures. It is fascinating that they also adopted the gods of other lands.
For example, when the Romans conquered Greece, they adopted their gods as well. Even though they adopted the Greek gods, they changed their names. Not only did they change the names of the Greek gods, but they also changed some stories and adapted them to Roman culture.
In this text we will talk about Roman goddess Minerva. She was the goddess of wisdom, poetry, commerce and medicine. Also, Minerva was later the goddess of the war. As all other Roman gods, Minerva had also her counterpart in Greek mythology and that was the goddess Athena. There were many temples in Rome that were dedicated to Roman goddess Minerva.
In this article you will see something more about this goddess and her importance for Roman mythology and culture in general. If you are interested in Roman mythology and if you would like to find out more about the Roman goddess Minerva, we recommend you to read this article.
You will have the opportunity to find out more about Minerva’s origin and life, but also to discover many fascinating myths and legends that are surrounding this goddess.
Mythology and Symbolism
When it comes to the origin of the goddess Minerva, she was the Roman goddess of handicrafts and it was believed that she was Jupiter's daughter. Actually, the legend says that she was born from Jupiter's head. You may be wondering why the head of Jupiter, because this story seems a little unusual. Well, there was a prophecy that Jupiter's child would have more power than him. At that time the titaness Metis was pregnant by Jupiter, so he decided to swallow her.
However, Metis was still alive in his stomach and she created weapons for her daughter, which caused Jupiter terrible headaches. He had to have his head opened, and then Minerva came into the world, in armor and a beautiful young woman. Very soon Minerva became the goddess of wisdom. The story of Minerva's origin is one of the most important myths of ancient Rome. Of course, there are many other interesting myths and legends that illustrate the importance Minerva had among the Roman people.
In ancient Rome Minerva was adored and she was one of the most favorite deities, along with Juno and Jupiter. Actually, she was the part of the so called holy Capitoline triad. This group of three gods was named for the famous Capitoline Hill that existed in Rome. The fact that Minerva was a part of this Capitoline triad is telling a lot about her importance in the Roman mythology.
There was a famous legend, which says that Aeneas, a great hero, escaped from Troy and brought a cult statue of Minerva to Rome. This statue was preserved in the temple called Temple of Vesta and it was believed that it will be safe there.
Another interesting fact is that the goddess Minerva was well known for her chastity. She fiercely protected it and was one of the virgin goddesses of ancient Rome. There is a myth that says she once refused the god of war, whose name was Mars. Also, there was a story about the god Vulcan, who was in love with her, but she didn't want to be with him because she didn't like his physical appearance, so she refused him.
There were many different myths, legends and literary works about the Roman goddess Minerva. If you are interested in Roman mythology, then you have probably heard of the Metamorphoses written by Ovid. In this story Minerva won a competition because of her great tapestry. Now you will see more details about this myth. Ovid's story was about a mortal girl named Arachne who had better weaving skills than Minerva. Because of that the goddess Minerva was angry and wanted to compete with her. Minerva made a magnificent piece which represented all the Roman gods. Also, on the edges of her tapestry there were mortals who had challenged their gods.
On the other side, the tapestry of the mortal girl Arachne presented different gods who were taking various forms in order to seduce mortals. Even though Arachne's work was also very interesting and good, Minerva was the winner. Actually, she declared herself the winner and wanted to punish the poor girl. The punishment of Arachne was very harsh and unpleasant: Minerva struck her three times on the head and in the end transformed her into a spider. It was the punishment for all mortals who were brave enough to challenge the Roman gods.
The symbolism of the Roman goddess Minerva was also connected with victory. Pompey was absolutely dedicated to Minerva's temple because it was believed that this goddess would bring him victory.
Also, many other emperors and soldiers respected Minerva and her significance in Roman religion. They believed this goddess would help them win a battle and protect them against an enemy. That's why Minerva was sometimes identified with the Greek goddess Athena Nike, who was the goddess of victory in ancient Greece.
It was believed that Minerva had great strategies in war, so she was able to win even in the most difficult situations. There were many literary works about her warlike character, but there were also many paintings and sculpture on which she was represented wearing a helmet and carrying a spear.
It is also important to mention that very often Minerva is represented with the symbols of the owl and the snake. The owl is a symbol of her wisdom and also of victory, so it is a very frequent motif in artistic works related to the goddess Minerva. Also, Minerva is usually represented with a snake at her feet, which could be a symbol of her wisdom as well. It is interesting that Minerva used her wisdom in order to win battles. She had the best strategies, and when it comes to war she was even more successful than Mars, who was actually the god of war.
When we talk about the symbolism related to Minerva, we have to mention that she was also considered as a goddess of medicine, so she was usually called Minerva Medica. Also, it was believed that Minerva invented musical instruments and numbers.
Meaning and Facts
We have already said that the Roman goddess Minerva was very loved and respected among people. It is important to mention that the shrine dedicated to Minerva was built in 263 BCE on the Aventine. This shrine was a place where craftsmen were meeting and also many actors and poets. Also, Minerva had another shrine that was placed on the popular hill of Rome called Mons Caelius.
Over time this goddess became more and more popular and she gained a better position in the pantheon of Rome. Also, Roman gods usually had their own festivals, so we have to mention the Quinquatrus festival, which was the festival of Minerva. This festival lasted 5 days and marked the beginning of the campaign season for Roman soldiers. It began on March 19 and lasted until March 23. It is important that on the first day of the festival there could be no battles and no blood, but on the other 4 days the gladiators had their contests.
We have already said that many emperors and soldiers had a special respect for Minerva. They admired her and believed in her powers. Not only was Pompey dedicated to her, but so was the emperor Domitian. He believed that Minerva protected him and his army. That's why he commissioned another temple dedicated to Minerva, in the Forum of Nerva in Rome. In this way the worship of this goddess became even stronger.
As you can see, there were many fascinating representations of Minerva in Roman art. However, we can say that the greatest artistic work of that kind is a statue of Minerva that is 3 meters high. This great statue was made in the 2nd century BCE and it represents a figure with a belt and a long woolen tunic of the kind worn in ancient Greece and Rome.
Also, there is a medusa on this sculpture and the goddess has also a shield in her arm. There is a helmet on her head as well. As you can see, on this statue Minerva is represented as a noble warrior. Today this statue is preserved in the Capitoline Museums of Rome.
Even though some people think that the Romans have stolen everything from Greek mythology, it is not really true. Actually, there is no doubt that Greek mythology and religion had a strong influence on the Romans, but we have to mention the influence that Italian sources had as well. It means that the Roman mythology is actually a combination of the Greek and Italian influences. As you could see in this article, Minerva is a great example when we talk about the combination of these two cultures and traditions.
Of course, there is no doubt that Minerva had something to do with the Greek goddess Athena, as we have already said. Also, we can say that another counterpart of Minerva was the so-called Menvra, who was an Etruscan goddess. This is a clear sign that Roman mythology was also influenced by the Etruscans. But you may not know who the Etruscans were. They were an ancient Italian people of the region that is now Tuscany, and they lived on the territory of Rome before the Latin tribes arrived there. The goddess Menvra was very important to the Etruscans, so they created many myths and legends about her.
The Roman goddess Minerva is best described as the combination of Athena and Menvra. Some aspects from both religions and mythologies have been combined and that’s why the Roman goddess Minerva was so respected and famous. All myths and legends in ancient Rome were talking about her powers and importance for the Roman people.
As you have seen in this article, the importance of Minerva among Roman people was enormous. This goddess of wisdom, science, arts and war was usually represented with weapons, which showed her fearlessness and bravery.
There is no doubt that Minerva was one of the most important deities in the whole Roman mythology. You have seen some of the most interesting stories and myths that are surrounding this powerful goddess. We hope you can understand better why Minerva was so important to the Roman people. Of course, we have to mention that her name is used even today.
In honor of this amazing goddess and her powers, there are many popular series and films with the characters called Minerva, Minerva is the logo of a famous German company, Minerva is the name of characters in literary works, etc. This way Minerva exists in the modern world as well.
|
What is the Common Good?
Introductory speech at the Common Good Forum, Paris Aug 25 2013
1. Prof. Claude Rochet, http://claude-rochet.fr. Common Good Forum, Paris, 25 August 2013.
2. The first Resistance against Nazi and Vichy rule was spiritual. A legal regime is illegitimate if it infringes the principles of the common good. Citizens may judge its legitimacy in the light of natural right.
3. “…not the individual good, but the common good is what makes Cities great. And, without doubt, this common good is not observed except in Republics” (Machiavelli, Discourses, II, 2). “…it should be said that the common good of the city and the particular good of one person differ not only according to quantity (much and little), but according to a formal difference. The meaning of the common good and that of the singular good are different, just as the meaning of the whole and the part are different” (Summa Theologica II-II, q. 58, 7 ad 2). More than the sum of the parts, so: where does the common good stand? Who defines it, and how? What is it for?
4. The Sharing of the Good: pooling the good(s), i.e. public services, commons. The Commonality of the Common Good: does everyone have access to the good in common? i.e. equality of access to public services, capacity to use the commons (A. Sen). The Good of the Common Good: the systemic effect of the common good; how the good of the individual improves by living in a society ruled by the common good, and how the interplay between the one and the global improves the common good. (Ref: Gaston Fessard, “Autorité et bien commun”.)
5. What the common good is not: the sum of the private goods, the common goodS, the general interest, a defined moral norm, a convention from contractual law, or a decision of the ruler or of the majority against the minority. What it is: more than the sum, an emerging reality; the use of the goods; a permanent process of deliberation; general lines to be found by human reason according to the principles of natural right. The common good is a deliberative process, not a content!
6. Political: a demarcation line between negative liberty (each individual is naturally free and knows what is good for him: the common good is the sum of private goods, cf. Isaiah Berlin) and positive liberty (man aspires to liberty but is not free: liberty is a constructivist process, cf. Machiavelli, A. Sen). Without the common good there is no political life, since there is no end for the society that is superior to individual ends. The common good requires an active political life, Machiavelli's vivere politico, which allows the many of the people to offset the power of the few rich. The common good cannot exist without deliberation and confrontation: the common good is the source of the legitimacy of the State.
7. Economic: a demarcation line between economy as a man-created activity (increasing-returns economy) and economy as a nature-given activity (decreasing returns). The assumption of a common good points to institutions that favor synergies between economic activities. Synergies reinforce the common good, since global prosperity depends on the interplay between each economic player, and vice versa. Global prosperity is not the sum of selfishness but the product of synergies brought about by the common good.
8. Social: a demarcation line between liberal individualism (each one is responsible for his own fate) and republicanism (capacitation + meritocracy + protection against misfortune). Each man is guided by benevolence toward others and hungers for justice (Cicero's mutual offices, Renaissance civic humanism, Adam Smith's sympathy…). Social life is the source of (informal) institutions at small scale (Ostrom), and the State is the institution of institutions (Hauriou) at large scale. Prosperity relies on citizens' commitment in social life.
9. Emergence: a reality that does not exist in itself but is the abstract product of interactions between tangible realities, allowing the global system of the society to be consistent. (Slide diagram: Polity, Economy, Social life, Common Good.) This system is dynamic and adaptive: it maintains its stability through transition and change.
10. (Slide diagram: the orders of Love/Sense, Moral Law & Politics, and the Techno-economic, with angelism, cynicism, tyranny, and the common good.) Ordering your thinking according to logical categories: a problem in each category has its own logic, but needs to resort to the superior category. Ignoring this leads either to cynicism (rash materialism), angelism (ignoring the contingency) or tyranny (reducing every problem to one category). (Inspired by Blaise Pascal.) |
Amadis Conference Preparation
Elisa Beshero-Bondar edited this page Oct 4, 2015 · 11 revisions
Part 1: First 10 minutes. Stacey begins.
Introduce Amadis: What is this thing?
Textual History: pithy and interesting! NOT necessarily a series of word-for-word translations into other languages or into newer forms of Spanish. Southey and earlier translators all CHANGED this text in producing versions of it in their own languages. But they're also translating.
Typical pejorative terms for this:
• ignorance
• "unfaithful"
Actually it's not so terrible--it might be a GOOD way to translate, to reproduce this romance adapted to a specific culture. IS this a translation or a creative adaptation? It's something of BOTH.
TERMS OF THE DEBATE: "Bad Translation" or "Cultural Adaptation"? Translation theory is limited in its grasp of cultural adaptation!
Issue in translation theory = word-for-word vs. sense-for-sense. Ours is closer to sense for sense WITH interest in omissions and additions.
show page images
Our Editorial Declaration show on screen and discuss our decisions.
Our specific project: Why work on Montalvo and Southey?
--Why is Southey's translation interesting? --compression, omissions, and leveling the narrative. Additions and changes to the speech acts: alterations of direct to indirect discourse, AND vice versa.
feature examples from two or three key, juicy passages: side-by-side views on web
How did we apply TEI markup to study this act of translation and/or adaptation?
<cl> markup of Montalvo: How we applied it, and why we decided to make these sequential sibling elements, and not nested. NOT always literal clauses: <cl> as "clause-like unit". Complexities of Montalvo chunking plus the need to map LOCATIONS in the text, giving each a distinct xml:id so that we can reference that id in our coding of Southey's translation.
**Presence of pseudo-markup** in Montalvo: Show images. Discuss how we used these to signal clause-like breaks and what kinds of things they designate.
visual of code: probably side-by-side of Southey and Montalvo, PLUS the TEI code that does it
**<s> elements** in Southey and <p> elements: Southey's structure is mostly his own. He adds paragraphs and creates English sentences. Speak to Southey's editorial work on punctuation and grammar. [Stacey] We don't use @ref attributes on the <s> elements because we don't expect there to be a one-to-one correspondence between sentences and the "clause-like" groupings of Montalvo.
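As a purely illustrative sketch (not the project's actual code), the snippet below shows how clause-like <cl> units carrying xml:id values could be pulled out of a Montalvo TEI file to anchor one column of a side-by-side view. The tiny inline sample text, the ids, and the element choices are assumptions made up for illustration only.

```python
"""Illustrative only: extract <cl> units and their xml:ids from a TEI fragment."""
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
XML_NS = "http://www.w3.org/XML/1998/namespace"

# Hypothetical fragment: sequential sibling <cl> units, each with an xml:id.
sample = f"""
<div xmlns="{TEI_NS}">
  <p>
    <cl xml:id="M1.1">No muchos años después...</cl>
    <cl xml:id="M1.2">fue un rey cristiano en la pequeña Bretaña...</cl>
  </p>
</div>
"""

root = ET.fromstring(sample)
clauses = {}
for cl in root.iter(f"{{{TEI_NS}}}cl"):
    cl_id = cl.get(f"{{{XML_NS}}}id")       # ElementTree exposes xml:id this way
    clauses[cl_id] = "".join(cl.itertext()).strip()

# The ids can now serve as anchors for one column of a side-by-side display.
for cl_id, text in clauses.items():
    print(cl_id, "->", text)
```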
|
Los Angeles' La Brea Tar Pits: Where Ancient Animals Lived and Died
Scientists have recovered more than one million fossil bones from 600 kinds of animals and plants.
BOB DOUGHTY: I'm Bob Doughty.
FAITH LAPIDUS: And I'm Faith Lapidus with EXPLORATIONS in VOA Special English. Today we tell about a scientific research area in the United States. It is filled with the remains of ancient animals. This unusual place is in the center of Los Angeles, California. Its name is Rancho La Brea. But most people know it as the La Brea Tar Pits.
BOB DOUGHTY: To understand why La Brea is an important scientific research center we must travel back through time almost forty thousand years. Picture an area that is almost desert land. The sun is hot. A pig-like creature searches for food. It uses its short, flat nose to dig near a small tree. It finds nothing. The pig starts to walk away, but it cannot move its feet.
They are covered with a thick, black substance. The more it struggles against the black substance, the deeper it sinks. It now screams in fear and fights wildly to get loose.
Less than a kilometer away, a huge cat-like creature with two long front teeth hears the screams. It, too, is hungry. Traveling across the ground at great speed, the cat nears the area where the pig is fighting for its life.
The cat jumps on the pig and kills it. The pig dies quickly, and the cat begins to eat. When it attempts to leave, the cat finds it cannot move. The more the big cat struggles, the deeper it sinks into the black substance.
Before morning, the cat is dead. Its body, and the bones of the pig, slowly sink into the sticky black hole.
FAITH LAPIDUS: Scientists say the story we have told you happened again and again over a period of many thousands of years. The black substance that trapped the animals came out of the Earth as oil.
The oil dried, leaving behind a partly solid substance called asphalt. In the heat of the sun, the asphalt softened. Whatever touched it would often become trapped forever.
In seventeen sixty-nine, a group of Spanish explorers visited the area. They were led by Gaspar de Portola, governor of Lower California.
The group stopped to examine the sticky black substance that covered the Earth. They called the area “La Brea,” the Spanish words for “the tar.”
Many years later, settlers used the tar, or asphalt, on the tops of their houses to keep water out. They found animal bones in the asphalt, but threw them away. In nineteen-oh-six, scientists began to study the bones found in La Brea. Ten years later, the owner of the land, George Allan Hancock, gave it to the government of Los Angeles. His gift carried one condition. He said La Brea could only be used for scientific work.
BOB DOUGHTY: Today, the La Brea Tar Pits are known to scientists around the world. The area is considered one of the richest areas of fossil bones in the world. It is an extremely valuable place to study ancient animals. Scientists have recovered more than one million fossil bones from the La Brea Tar Pits. They have identified more than six hundred different kinds of animals and plants.
The fossils are from creatures as small as insects to those that were bigger than a modern elephant. These creatures became trapped as long ago as forty thousand years. It is still happening today. Small birds and animals still become trapped in the La Brea Tar Pits.
FAITH LAPIDUS: Rancho La Brea is the home of a modern research center and museum. Visitors can see the ancient fossil bones of creatures like the imperial mammoth and the American mastodon. Both look something like the modern day elephant, but bigger.
The museum also has many fossil remains of the huge cats that once lived in the area. They are called saber-toothed cats because of their long, fierce teeth. Scientists have found more than two thousand examples of the huge cats.
The museum also has thousands of fossil remains of an ancient kind of wolf. Scientists believe large groups of wolves became stuck when they came to feed on animals already trapped in the asphalt.
BOB DOUGHTY: In nineteen sixty-nine, scientists began digging at one area of La Brea called Pit Ninety-One. They have found more than forty thousand fossils in Pit Ninety-One. More than ninety-five percent of the mammal bones are from just seven different animals. Three were plant-eaters. They were the western horse, the ancient bison and a two-meter tall animal called the Harlan’s ground sloth.
Four of the animals were meat-eating hunters. These were the saber-tooth cat, the North American lion, the dire wolf and the coyote. All these animals, except the dog-like coyote, have disappeared from the Earth.
FAITH LAPIDUS: Researchers say ninety percent of the fossils found are those of meat-eating animals. They say this is a surprise because there have always been more plant-eaters in the world. The researchers say each plant-eater that became trapped caused many meat-eaters to come to the place to feed. They, too, became trapped.
Rancho La Brea has also been a trap for many different kinds of insects. Scientists free these dead insects by washing the asphalt away with special chemicals. The La Brea insects give scientists a close look at the history of insects in southern California.
The seeds and pollen, or the lack of them, can show severe weather changes over thousands of years. Scientists say these provide information that has helped them understand the history of the environment. The seeds and pollen have left a forty-thousand-year record of the environment and weather for this area of California.
BOB DOUGHTY: Digging at Pit Ninety-One was recently suspended in order to pay closer attention to a new discovery called Project Twenty-Three. In two thousand six a nearby art museum began an underground building project.
La Brea scientists had a chance to investigate an area that otherwise would have been impossible to study. This area turned out to be very rich in fossils. So, twenty-three huge containers of tar, clay and mud were removed from the area for research. This is why the project is now known as Project Twenty-Three.
Scientists have fully examined only several boxes of earth and tar. It will take years to complete all of the containers. But scientists have so far counted over seven hundred parts from different organisms. One huge discovery was the nearly complete skeleton of a male mammoth. Researchers have named the mammoth Zed. This is the largest mammoth ever found in the area.
Rancho La Brea scientists publish an Internet blog that documents this exciting project. It describes in detail the huge amount of work involved in carefully examining the many layers of tar and earth. For example, you can learn about the degreasing machine. Researchers place a big block of tar into the machine. It removes the oily material, leaving behind hundreds of fossils.
FAITH LAPIDUS: Each year, thousands of visitors come to see the fossils at Rancho La Brea. They visit the George C. Page Museum. Mister Page was a wealthy man who became very interested in the scientific work being done at the tar pits. He gave the money to build the museum and research center.
Visitors to the museum can see the “fish bowl,” a laboratory surrounded by glass. Here, they can watch scientists do their research. Visitors can watch the scientists clean, examine, repair and identify fossils that are still being discovered. Through this process, scientists are able to answer questions and solve puzzles about animals and their environment from thousands of years ago. The objects found in Project Twenty-three could double the size of the research center’s collection.
It is exciting to stand only a few meters away and watch scientists clean the asphalt off a fossil that is thousands of years old. Visitors quickly learn why researchers consider Rancho La Brea a very special place.
FAITH LAPIDUS: And I'm Faith Lapidus. You can find a link to the La Brea Tar Pits blog on our Web site. You can also comment on our programs. Join us again next week for EXPLORATIONS in VOA Special English. |
Aggressive Kids? 5 Ways To Curb His Behavior
May 19, 2016 |
Many youth leagues are getting started for the summer, and the sounds of yelling coaches (and helmets clicking, if you're in a football-heavy part of the country) are wafting through the air at local parks and fields all over the globe. While our parks are separated by country, state and county lines, one commonality is the level of aggression and "toughness" being ingrained into our little sports stars so they will be the best, run the fastest and hit the hardest.
No doubt there are many life lessons to be gleaned from the teamwork, discipline and dedication required to play a sport like football, but my concern is this: are we teaching our children how to turn the aggression off when they aren't on the field or the court, but are instead in the classroom or learning how to build relationships successfully?
Outside of the possible health implications of playing contact sports at such an early age, I find the mental implications just as important. If we are teaching boys from as early as four to be tough and hit hard, and we never bother to go back and put those actions into context, are we building up overly aggressive children who later turn into extremely insensitive, aggressive adults?
Aggression is a quality that can work for or against you, and I would love to see our children taught how to channel their aggression in positive ways on and off the sports field rather than left to their own definitions. Of course the idea of hitting someone while fully padded on a field is fun, but do they know this is not the move on the playground at recess?
Teaching moments abound in parenthood, and are best understood by the child when discussed in context utilizing age appropriate comparisons. Here are 5 tips that may help in discussing with your child the importance of leaving negative aggression on the field.
1. Make correlations between sports and everyday life: Discipline and teamwork are great traits to carry into everyday life, so be sure to explain their benefits. When discussing more aggressive matters, explain how improper aggression can sever relationships and make a person difficult to deal with in personal, school and work environments.
2. Balance aggression with compassion: It may be understandable to get on an unfocused child on the field; when at home, bring in the compassion. Home is a safe place where children should feel loved. Make sure the tough love has an even balance of tough and love.
3. Punish negative aggressive behavior: Negative aggressive behaviors shown in the classroom or at home should be punished. Don’t just punish without explaining why the punishment is being handed down.
4. Offer alternative behaviors: Give examples of other behaviors they could’ve exhibited to help them the next time they are faced with the same dilemma. They won’t get it right every time but remind them often of other more positive behaviors.
5. Don't take the fun away: The more aggressive and forceful we are as parents, the harder our children work to make us happy. If they think we value their hits and hard runs more than their ability to have fun and do their best, they will focus more on the aggression than on the other positive traits that come with teamwork.
|
America’s Most Powerful Nuclear Weapon Almost Killed The People Who Helped Build It
January 10, 2018 | Topic: Security | Blog Brand: The Buzz | Tags: Nuclear Weapons, Military, Technology, World, U.S., Atomic Weapons
An atomic disaster like no other.
Sixty-one years ago on an island in the South Pacific, scientists and military officers, fishermen and Marshall Islands natives observed first-hand what Armageddon would be like.
And it almost killed them all. The Atomic Energy Commission code-named the nuclear test Castle Bravo.
The March 1, 1954 experiment was the first thermonuclear explosion based on practical technology that would lead to a deliverable H-bomb for the Air Force’s Strategic Air Command—part of the Operation Castle series of tests needed to manufacture the high-yield weapons.
Bravo was the worst radiological disaster in American atomic testing history—but the test provided information that led to a lightweight, high-yield megaton bomb that would fit inside a SAC bomber.
Widespread contamination sickened and exiled Pacific Islanders and killed a Japanese citizen. The United States had to admit it possessed the ability to make deliverable H-bombs—an information windfall for the Soviet Union, and the catalyst for serious consideration of a ban on atmospheric nuclear tests.
Bravo's fallout even inspired the creation of a science-fiction screen legend: Godzilla. In the 1954 Japanese movie of the same name, atomic testing resurrects the "King of Monsters"—a symbol of the new terror felt in the only nation ever attacked with nuclear weapons.
Perhaps most importantly, Bravo forced many scientists and military officers to concede how deadly nuclear weapons really were—not just in their immediate effects such as blast and intense heat, but the lingering effects of high-energy radiation.
“I think the most important message we might take away from the Castle Bravo shot is the amount of hubris it represents,” Alex Wellerstein, a historian at the Stevens Institute of Technology and blogger, told War Is Boring.
“The scientists and military assured the politicians and Marshallese people that it was a safe experiment, that they had things under control, that they understood what would happen. And they were very wrong.”
The Bravo shot in 1954 was not the first test at Bikini Atoll, part of the 140,000-square-mile Pacific Proving Grounds. Nor would it be the last—from 1946 to 1958, the U.S. government held 67 atmospheric tests there.
Only two years earlier, the Ivy Mike shot demonstrated the first true thermonuclear reaction. It produced a 10-megaton yield, but the device relied on cryogenic liquid hydrogen isotopes that were bulky, required refrigeration equipment weighing tons, and were almost impossible to store in a weapon.
A prototype “wet fuel” bomb based on the Ivy Mike test was 24 feet long, five feet wide and weighed 30 tons. It was more like a railroad box car than a deliverable weapon. But Bravo used lithium deuteride “dry fuel,” which is solid and lightweight at room temperature.
Scientists estimated the device would have a yield of about five megatons. They based many of their safety precautions — such as the location of various observation posts and ships, a safety “exclusion zone” in the Pacific Ocean surrounding Bikini and estimates of fallout dispersal — on a five-megaton yield.
Zero hour for Bravo was at 6:45 a.m. local time on March 1. From the moment the device detonated, many of the observers knew something had gone spectacularly wrong.
The flash from the nuclear explosion was overwhelming, even by the standards of nuclear explosions. Men saw their bones appear as shadows through their living flesh. Streams of blinding light shone through the smallest cracks and pinholes in secured doors and hatches.
Bravo’s thermal radiation was far more intense than expected. More than 30 miles away from Ground Zero on Bikini Atoll, sailors on board Navy ships said the heat was like having a blowtorch applied to their bodies.
The shock wave destroyed buildings supposedly outside of the calculated damage zone. It nearly knocked observation aircraft out of the sky, and caused some men inadvertently trapped in a forward observation bunker to wonder if the explosion ripped their concrete and steel shelter from its foundations and flung it into the sea.
Then there was the fireball.
It was four miles in diameter and hotter than the surface of the sun. The Bravo fireball rose at the rate of 1,000 feet per second, and created a mushroom cloud that eventually topped 130,000 feet above sea level.
“In mere seconds the sailors sensed that something unspeakably wrong was occurring … Battle-hardened men who had served in World War II went to their knees and prayed,” wrote L. Douglas Keeney in 15 Minutes: General Curtis LeMay and the Countdown to Nuclear Annihilation.
“We soon found ourselves under a large black and orange cloud that seemed to be dropping bright red balls of fire all over the ocean around us,” one sailor recounted. “I think many of us expected that we were witnessing the end of the world.” |
Fight Anorexia nervosa and rediscover life
People around the globe are concerned about the harmful effects of obesity, and with good reason: it is a global killer. Overweight and obesity account for an estimated 35.8 million disability-adjusted life years (DALYs) each year, about 2.3% of the global total; a DALY represents a year of healthy life lost to disability, ill health or early death.
With obesity so prevalent and so widely discussed, thinness has come to be seen as a mark of respect. Men and women around the globe are conscious of their body weight and want to maintain a slim figure. In many parts of the world, people want to be thin to the point of being unhealthily underweight; they do everything they can to remain as thin as possible and fear gaining even a little weight.
Anorexia nervosa, commonly referred to as anorexia, is an eating disorder in which the affected person is extremely thin, fears gaining weight, is excessively preoccupied with maintaining a thin figure, and equates extreme thinness with self-worth. The main preoccupation of people with anorexia nervosa is to prevent weight gain or to keep losing weight, even though their BMI is in most cases well below the normal range. To stay thin, they misuse diuretics, diet aids and laxatives, and they exercise excessively with the intention of losing weight. Some also binge and purge, behavior similar to that seen in bulimia.
Anorexia nervosa is a psychiatric disorder that undermines a person's overall wellbeing. It commonly occurs alongside other mental health problems such as anxiety, depression, mood disorders, obsessive-compulsive disorder, personality disorders and substance misuse.
Anorexia nervosa is dangerous. The drive to remain extremely thin, and the purging of food through the misuse of diuretics and laxatives, causes serious physical symptoms that can be life-threatening. The physical symptoms of anorexia include:
Extremely low body weight
Abnormal blood counts
Low blood pressure
Soft, thin hair
Bluish discoloration of the fingers
Dry or yellowish skin
Irregular heartbeat
Swelling of the arms and legs
Anorexia nervosa causes severe complications in the body, and it strains the mind as well. Food itself is not the main preoccupation of a person suffering from this condition; it is the emotional burden of staying thin and the belief that being thin means being normal and accepted. To understand what goes on in the mind of a person suffering from anorexia nervosa, let us look at the emotional and behavioral symptoms.
Emotional symptoms of a person suffering from Anorexia nervosa:
No interest in eating
Denial of hunger
Intense fear of weight gain
Preoccupation with how much food to consume
Lying about the amount of food eaten
Reclusive behavior and withdrawal from society
Reduced interest in sexual activity
Constant irritability
Depressed mood
Suicidal thoughts
Watch out for these behavioral symptoms in a person with Anorexia nervosa:
The person will restrict the amount of food consumed, and will use weight loss methods to maintain weight or lose weight through exercise, fasting or dieting.
The person is uncomfortable about eating more food than they intend, and will use various methods to get rid of what they regard as excessive food consumption, including diet aids, enemas, laxatives and even bingeing followed by self-induced vomiting.
These are the signs to notice in a person who is suffering or is likely to have Anorexia nervosa:
The person develops a fear of eating, tends to skip meals and makes excuses for not eating.
A person with anorexia worries about weight gain even though they are extremely thin. He/she will stick to foods considered "safe," such as low-calorie or low-fat foods.
Restricted eating is a common behavior; he/she may chew food and then spit it out rather than swallowing it.
The affected person is constantly on the weighing scale ensuring that they maintain their weight or lose weight.
They constantly complain about being fat
They avoid eating in public and cover themselves in many layers of clothing
In those who induce vomiting, you may notice eroded teeth and calluses on the knuckles.
Cause of Anorexia nervosa:
Scientists have not yet been able to determine the cause of anorexia nervosa; the available evidence suggests it arises from a combination of biological, environmental and psychological factors:
It is unclear exactly how anorexia is passed on through genes, but experts say genetics is likely to play a role. Traits such as perseverance, perfectionism and sensitivity are commonly present in people with anorexia.
Both Asian and Western cultures emphasize thinness: being thin is seen as cool and is equated with success and self-assurance. The influence of peers adds to the pressure on individuals to be thin, and this weighs especially heavily on young girls.
Emotions play a major role in anorexia. Affected individuals, particularly young women, suppress the urge to eat even while feeling hungry. They maintain strict diets and stick to low-calorie foods. Although others see them as thin, they are not convinced; they consider themselves fat even though they are not, and the resulting anxiety drives them to eat less and restrict more foods.
Risk factors that increase occurrence of Anorexia:
Media exposure:
When individuals, particularly girls, are exposed to media that showcases a parade of skinny models and actors, the chances of anorexia increase, because being skinny is presented as being beautiful and popular.
Being female:
Girls and women are at higher risk of suffering from anorexia, but there are increasing numbers of boys and men with the condition as well, driven mainly by growing peer pressure.
Sport activities:
Sports are one of the biggest triggers of anorexia: coaches who favor skinny players and advise bigger ones to lose weight put pressure on an individual's mind, increasing susceptibility to the disease.
Family history:
A person whose first-degree relative (a parent or sibling) has anorexia is at higher risk of developing the disease.
Coping with Anorexia nervosa:
A person with anorexia lacks self-worth and is influenced by societal pressure to be thin; even excessive thinness does not feel good enough. They are unable to accept themselves for who they are, and believe that being thin is the same as being accepted, being responsible and looking good.
An individual with anorexia needs help to change the way they think and to start accepting themselves. It is important for close family and friends to support such individuals. The affected person can also gradually start practicing self-help techniques to overcome this disease. The best form of recovery is to start accepting yourself for who you are and to recognize your good qualities. Once you are aware of your good qualities you can build self-confidence, start loving yourself and enjoy the good things life has to offer.
Aim to recognize the emotion you are feeling. If you feel fat and don't feel good about yourself, ask yourself what is bothering you, and write down your thoughts and feelings in a diary. Mingle with people who make you feel good about yourself; this way you will be able to rediscover yourself by recognizing your good qualities. Gradually work on giving less attention to your thoughts about being fat, and try to do fun activities that you enjoy, such as music, reading, writing or movies. A good way to accept yourself is to reward yourself: do something that makes you feel happy, like enjoying the fresh breeze, going somewhere you can relax such as a library or bowling center, or buying small treats such as perfume, fruity lotion, scented candles or home decorations.
It is important to recognize your Anorexia nervosa condition and seek help from a doctor, therapist and nutritionist. Fight this condition and rediscover the good moments of life.
|
Explained: National Congress of China's Communist party
The 19th National Congress begins on Tuesday but what does it mean for the country and the world?
A wider government shake-up by Xi Jinping could also be on the agenda [EPA]
China's ruling Communist Party will begin its 19th National Congress on October 18.
China's president and the party's general secretary, Xi Jinping, is expected to use the twice-a-decade event to consolidate his hold on power in the world's second-largest economy.
A wider government shake-up could also be on the agenda as a majority of the Communist Party's top decision-making body, the seven-member Politburo Standing Committee, are expected to retire during the meeting.
Here are some of the key questions about the Congress, who it involves and why it matters:
What is the National Congress?
Held every five years since the 11th Communist Party of China (CCP) Congress in 1977 - the year after Chairman Mao Zedong's death - the National Congress draws together selected delegates from the CCP's membership base.
Attendees are required to elect candidates to senior party positions, consider the general secretary's report, and decide on amendments to the CCP's constitution.
While the meeting is the highlight of the Chinese political calendar, at which the general line for the CCP is established and celebrated, outcomes have already been decided before the event, according to Roderic Wye, associate fellow of the Asia programme at Chatham House.
"The Congress is a celebration of decisions that have already been taken that we don't know about from the outside [of the party]," Wye told Al Jazeera.
The week-long gathering is held in Beijing.
China's new leaders will be unveiled at its conclusion, which is believed to be on October 25.
How does it work?
The Congress will bring together 2,287 CCP delegates to shape policy and decide on political positioning.
Only those showing "unshakable belief" and the "correct political stance" are invited to attend, according to the government-owned newspaper China Daily.
Xi will begin proceedings with his report at Beijing's Great Hall of the People and will use the address to outline the party's priorities over the next five years.
The report will then be studied during the Congress, though it is widely expected to be accepted and implemented by delegates without challenge.
Delegates, representing an estimated 90 million party members nationwide, will also elect about 200 members to the CCP's Central Committee.
The committee is then tasked with appointing members to the 25-member Politburo, which in turn decides on membership of the seven-member Politburo Standing Committee that sits at the apex of Chinese politics.
CCP attendees are also tasked with deciding on amendments to the party's constitution. Since its creation in 1922, the constitution has been changed at every Congress.
What to look out for?
Chinese politics and the workings of the CCP Congress are shrouded in secrecy.
However, this event is likely to reveal the extent to which Xi has consolidated power since rising to the role of general secretary in November 2012.
Elections and constitutional amendments decided at this summit are being watched closely by analysts for signs that the Chinese leader has tightened his grip at the helm of his party.
Xi will almost certainly have attempted to exert his influence over the renewal process, Hongyi Lai, an associate professor of contemporary Chinese studies at the University of Nottingham, told Al Jazeera.
"It is widely expected that he will try to put in new leaders, ones that have supported him or that he has groomed over the last few years," Lai said. "[And] it's also likely that he will put 'Xi Jinping thought' into the constitution."
Appointments to the powerful Politburo Standing Committee will be of particular significance, potentially signifying Xi's plans for a successor, or his intention to remain in power past China's current constitutional two-term limit.
Up to five of its seven members - of which Xi is one - are expected to retire, having reached the age of 68.
Wang Qishan, 69, is a current committee member and close political ally of Xi. Should he continue in his role as the country's anti-corruption chief, it may signal that China's leader is considering extending his own political life at the helm of the CCP past the next Congress, in 2022.
Xi's expected move to write his own political philosophy in the CCP constitution may also indicate an intention to prolong his period in power. The amendment would elevate Xi to a constitutional status akin to Mao Zedong, founding member of the CCP and the People's Republic of China, and Deng Xiaoping.
"This signifies the extent of his influence in the Chinese political system, neither of his two immediate predecessors would have been able to do this at the end of their first terms," Lai said.
"At least on the surface, Xi has sowed himself more aggressively and obviously into the Chinese political system."
What does this mean for China, and the world?
Xi's first five years at the top of Chinese politics have been characterised by consolidation of power as opposed to delivering his plans for reform revealed at the 18th CCP Congress.
Upon assuming office in 2012, he promised to pursue new social and economic policies, including a crackdown on corruption and a reduction in bureaucracy.
Should he be successful in extending his control over the CCP over the next week, Xi will be under pressure to finally implement such plans during his second term, Lai said.
"The burden is on him [Xi] to pursue new policy, and he may want to introduce reforms regarding the economy, society and governance," he said. "In the second term, the burden is on him to deliver."
On the international stage, however, this Congress is unlikely to alter the country's approach under Xi which has seen China increasingly assert itself in global politics, according to Wye.
"I don't think we are expecting a sudden swerve in Chinese policy, but there will be a lot of attention given to the country's One Belt and One Road initiative," he said.
SOURCE: Al Jazeera News
|
12: Neonatal Jaundice Flashcards Preview
Flashcards in 12: Neonatal Jaundice Deck (36):
What is the description and cause of Jaundice?
The accumulation of bilirubin in the skin -yellow tones
Becomes apparent when the bilirubin level is >5-7 mg/dL
Advances Head to toe
Rise in bilirubin from 1.5 in cord blood to 5-6 on the third day of life, declining to a normal level (<1.3-1.5) by 10-12 days in Caucasian and AA infants
Asian infants reach 8-12 on day 4-5 and decline more slowly
Breastmilk jaundice: inadequate feeding can lead to increased jaundice because of slow moving bowels and inability to clear bili
What is Non-physiologic (pathologic) Jaundice?
Appears at <24 hours and may last longer than 8 days
The rate of increase is >0.5 mg/dL/hr
Total bili >12.5 before 48 hours old or direct bili exceeds 1.5-2
Kernicterus (bilirubin encephalopathy) involves toxicity of the nervous system resulting from high levels of bili
Consider exchange transfusion when bili levels reach 25-30
What are the causes of jaundice?
Increased rate of hemolysis
Decreased rate of conjugation
Abnormalities of excretion or absorption
What increases the rate of hemolysis to cause jaundice?
ABO incompatibility
Rh incompatibility
Abnormal RBC shapes
RBC enzyme abnormalities
What decreases the rate of conjugation and causes jaundice?
Immaturity of bilirubin conjugation (physiologic jaundice)
Congenital familial nonhemolytic jaundice
Breast milk jaundice
What are the abnormalities of excretion and absorption that cause jaundice?
Metabolic abnormalities
Biliary atresia
Choledochal cyst
Obstruction of ampulla of Vater
What family factors increase the risk of jaundice?
Significant hemolytic disease/anemia
Inborn errors of metabolism
Early or severe jaundice
Ethnic or geographic origin associated with hemolytic anemia
Hepatobiliary disease
Sibling received phototherapy
What maternal history factors can increase the risk of jaundice?
ABO or Rh incompatibilities in previous pregnancy
Sepsis risk for the infant - Prolonged ROM
Macrosomic infant of a diabetic mother
What color of skin is an infant suffering from the deposition of indirect bili in the skin?
Bright yellow or orange
What color of skin is an infant suffering from the obstructive type (direct bili)
Greenish or muddy yellow
Physical exam of an infant with jaundice?
Signs of infection
Poor feeding
Loss of Moro reflex - signs of bili toxicity to the brain
Diminished DTR's
Respiratory distress
Failure to suck
Bulging fontanelle
Twitching of face or limbs
Shrill, high pitched cry
Signs of kernicterus
What are diagnostic studies for jaundice?
TcB Noninvasive Transcutaneous bilirubin
TSB (indirect and direct) for infants with TcB >15, for darker skinned infants or for infants under phototherapy
Fractionated bili: provides the concentration of both unconjugated and conjugated bili
Additional Labs:
Blood type
Isoimmune antibodies of mother
Coombs test on infant
Reticulocyte count
Elevated indirect (unconjugated) serum bili with a normal reticulocyte count and negative Coombs test
physiologic jaundice
breast milk jaundice
congenital familial nonhemolytic jaundice
Elevated indirect and direct serum bili with a negative Coombs test and a normal reticulocyte count
metabolic abnormalities
biliary atresia
choledochal cyst in the bile duct
GI or pancreatic obstruction
How is jaundice managed?
Monitoring of TcB and TSB
Frequent breast feeding, supplementation feeding
Phototherapy with overhead light, bili blanket and bilibed
Exchange transfusion
How is jaundice prevented?
Frequent breast feeding
Systematic assessment of the new born for the risk of hyperbilirubinemia
Early and focused follow-up based on risk assessment
Supplement feeding if the infant has inadequate intake, weight loss >10%, or dehydration
How is bilirubin produced?
The breakdown of RBC --> Heme --> bilirubin
1g of Hgb produces 24 mg of bilirubin
Unconjugated bilirubin binds to albumin and is taken to the liver, where it becomes conjugated. Conjugated bilirubin is then excreted from the liver via the bile duct into the intestine, where it is further metabolized and excreted in the stool. Without appropriate gut flora, or with prolonged time in the intestine, it is converted back into unconjugated bili and absorbed from the intestine back into circulation.
What is Unconjugated Bilirubin?
Or indirect bilirubin
Bilirubin not yet conjugated and reversibly bound to albumin.
Unconjugated bilirubin builds up when glucuronyl transferase is deficient and conjugation is slow
What is conjugated bilirubin?
Or direct bilirubin
Bilirubin conjugated with glucuronide
Water soluble so it's easily excretable. It builds up with obstruction of bile flow (cholestasis)
What is free bilirubin?
A small amount of unconjugated bilirubin not bound to albumin so it can cross the BBB and cause damage to neurons
What causes physiologic jaundice?
Increased RBC mass in the newborn
Decreased activity of UDP glucuronyl transferase that correlates with gestational age, leading to decreased conjugation and excretion
Increased enterohepatic circulation due to lack of intestinal flora, decreased gut motility and small enteral intake
What increases the chances of nonphysiological jaundice?
Early jaundice: infants with jaundice in first 24-36 hours of life likely have increased production of bili (hemolysis) or excessive blood cell breakdown from polycythemia or bruising
Significant jaundice in a previous infant
Exclusive breast feeding: due to small intake in first few days, delayed institution of intestinal flora and increased enterohepatic circulation of bilirubin
Gestational age <38 weeks
East Asian race
Cephalhematomas and bruising
Maternal age >25
Male sex
What is hyperbilirubinemia?
An elevated level of bilirubin in the blood (serum bilirubin above the normal range)
What are the leading causes of hyperbilirubinemia?
Excessive production of bilirubin
Decreased bilirubin clearance
Breastfeeding-Associated Jaundice
What causes excessive production of bilirubin?
Hemolysis leading to increased bilirubin production
Nonantibody-mediated hemolysis
Nonhemolytic causes of increased Bilirubin production
What causes hemolysis leading to increased bilirubin production?
Antibody-mediated: Direct Coombs test or indirect Coombs test - Positive
ABO incompatibility: Mother Type O, Baby type A or B
Rh incompatibility: Mom Rh neg, baby Rh pos - Usually prevented with RhoGam
What is nonantibody-mediated hemolysis? Direct Coombs test or direct antibody test - Negative
Red cell membrane defects
Red cell enzyme defects
What are nonhemolytic causes of increased bilirubin production?
Bruising: related to birth trauma
Polycythemia: Can be due to delayed cord clamping, twin-twin or maternal-fetal transfusion, or secondary to chronic intrauterine hypoxia, IUGR, or maternal diabetes
What can cause decrease bilirubin clearance?
Bowel obstruction/delayed passing of meconium
Inborn errors of metabolism:
Glucuronyl transferase deficiency - Crigler-Najjar Syndrome, Gilbert Syndrome
How does phototherapy work?
Light energy is absorbed by the bilirubin molecule and changes its stereochemical shape making it more water soluble. This can be excreted in the bile without conjugation. This makes the bilirubin less toxic as the serum levels fall.
What are the possible complications of phototherapy?
Retinal Effects: Wear an eye patch
Bronze-baby syndrome: transient gray-bronze discoloration of infants with cholestatic jaundice treated with phototherapy. It usually disappears after stopping phototherapy. No known long-lasting effects.
What are guidelines for therapy?
Screening TSB >95th percentile
TSB >8 mg % at 24 hours
TSB >13 mg % at 48 hours
TSB >16 mg % at 72 hours
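For illustration only, here is a minimal sketch of the age-based cut-offs listed above expressed as a simple lookup in code. The function name and structure are invented for this example, the values are just the three cut-offs from the card, and nothing here is clinical guidance.

```python
# Illustrative sketch only: the three age-based TSB cut-offs listed above
# (8, 13 and 16 mg% at 24, 48 and 72 hours). Not clinical guidance.
TSB_CUTOFFS = [
    (24, 8.0),   # (age in hours, TSB cut-off in mg%)
    (48, 13.0),
    (72, 16.0),
]

def exceeds_listed_cutoff(age_hours: float, tsb: float) -> bool:
    """Compare a measured TSB against the cut-off for the smallest
    tabulated age that is at least the infant's age."""
    for cutoff_age, cutoff_tsb in TSB_CUTOFFS:
        if age_hours <= cutoff_age:
            return tsb > cutoff_tsb
    # The card lists no value beyond 72 hours; reuse the last cut-off.
    return tsb > TSB_CUTOFFS[-1][1]

# Example: a 30-hour-old infant with a TSB of 14 mg% exceeds the 48-hour cut-off.
print(exceeds_listed_cutoff(30, 14.0))  # True
```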
What are some future medications to treat hyperbilirubinemia?
Heme oxygenase inhibitors (tin-protoporphyrin and tin-mesoporphyrin): suppress formation of bilirubin
Phenobarbital: When administered before birth, to mother of infant with known severe hemolytic disease, it will induce the infant's hepatic enzymes and increase hepatic uptake of bili and excretion of bili into the bile.
What is the pathology of Kernicterus?
Yellow staining of brain nuclei (kerns); with current management of hyperbilirubinemia, this is a rare complication
What are the clinical manifestations of Kernicterus?
Ranges from a poor suck to severe, largely irreversible disease associated with coma and seizures
What is the root cause of re-emergence of kernicterus?
Early discharge (<48 hrs) without early follow-up
Failure to check the bili level in an infant noted to be jaundiced before 24 hours of age
Failure to recognize the presence of risk factors
Underestimating the severity of jaundice by clinical (visual) assessment
Lack of concern regarding the presence of jaundice
Delay in measuring serum bili despite marked jaundice
Failure to respond to parental concerns regarding jaundice, poor feeding or lethargy. |
There’s Nothing Average About This Year’s Gulf of Mexico 'Dead Zone'
By Andrea Basche
The National Oceanographic and Atmospheric Administration (NOAA) released Thursday its annual forecast for the size of the Gulf of Mexico “dead zone"—an area of coastal water where low oxygen is lethal to marine life. They say we should expect an “average year." That doesn't sound so bad, but as we wrote last year, the dead zone average is approximately 6,000 square miles or the size of the state of Connecticut. Average is not normal.
The hypoxic zone in the Gulf of Mexico is second in size only to the one in the Baltic Sea, which is fed by pollution flushed in from across Eurasia. Hypoxic zones occur naturally wherever major rivers meet the ocean. However, human activity has increased their area, and these persistent dead zones lead to health threats, economic losses and diminished food supplies. Photo credit: NOAA
This is especially troubling when we know that solutions exist for reducing agricultural pollution, which contributes to the dead zone. And for many years, there's been a lot of effort dedicated to reducing the dead zone's massive footprint.
The Dead Zone Starts on the Farm
Dead zones—also known as hypoxic zones—can occur naturally, but human activity perpetuates their presence. Hypoxia in the ocean results from low dissolved oxygen, a state that occurs when excess pollutants, such as nitrogen and phosphorus, enter bodies of water. These pollutants have various natural and man-made sources, but they are critical nutrients for plant growth and thus the active ingredients in fertilizers applied to farm fields.
The movement of water causes nitrogen to “leach" through the soil or “run off" into bodies of water, while phosphorus most commonly escapes from farm fields with sediment and soil erosion. However they get into water, these pollutants make delicious food sources for algae, which “bloom" as a result of the buffet. Dead algae sink and decompose in water, which depletes oxygen, suffocating other marine organisms.
The second-largest dead zone in the world is the one predicted Thursday, in the Gulf of Mexico. The Mississippi River empties into the Gulf, and many other bodies of water that run through the Corn Belt and other major agricultural regions of the U.S. feed the Mississippi.
It has been a wet spring across most of the U.S., including the Midwest and it is true that the amount of rainfall (and thus water moving through and over the soil) impacts the size of the dead zone from year to year. But so do the practices on farms and these are much more within our control than the rain.
Myth Busting: Fertilizer is Only One Part of a Bigger Farming Problem
Every single article I read in news about the dead zone, algal blooms in Lake Erie or polluted drinking water in Des Moines, seems to count the number one evildoer as fertilizer, particularly farmers who are applying too much of it. As someone who researched Midwest agriculture while living in Iowa for several years, this drives me a little crazy because the gross oversimplification misses the bigger farming problem, of which the amount of fertilizer is just a part. A major issue with our farming system today, especially in the Corn Belt, is that the primary crop only grows from April/May until September/October when harvested. The rest of the fall, winter and spring leave the soil bare and susceptible to phosphorus and nitrogen loss.
One proposed solution to the runoff problem is what's known as the “4R" strategy—using fertilizer at the right rate, the right time, the right place and the right source. There is no doubt that such practices can help reduce water pollution and dead zones, but not enough from my perspective, especially given the disproportionate emphasis placed on such approaches as a “silver bullet."
A more ecological approach to farming—mainly, finding ways to protect the soil all year, including perennial crops, agroforestry or cover crops—could be a highly effective strategy to reducing water pollution and ultimately the size of the dead zone. However, we currently discourage farmers from applying their highest management skills, due to a history of farm policies (from crop insurance to other market supports) that incentivize annual cropping patterns focused on short-term results.
This is Not a Problem Disappearing Anytime Soon
Along with scientists and other partners, the U.S. Environmental Protection Agency launched a task force in 1997 to deal with the dead zone issue and coordinate a plan to reduce its effects. Through that task force, the goal of limiting the dead zone size to roughly 3,000 square miles was determined. Again, this year's prediction is for 6,000 square miles (a prediction that comes from several research groups or an ensemble of models, common in weather forecasting). The actual size of the dead zone will be monitored by NOAA and partners in late July and officially released in early August.
The news of a dead zone predicted to be more than double the designated goal is why an “average" forecast should actually be alarming, particularly after two decades of efforts to make the problem better. Certainly this is an issue that has not and will not disappear overnight and there are many farmers trying to improve the situation. However, until we start to have an honest discussion that includes policy change toward perennializing farming and moving beyond fertilizer management, I don't expect to see better than average dead zone forecasts anytime soon.
Andrea Basche is a Kendall Science Fellow in the Union of Concerned Scientist's food and environment program.
|
Superchargers vs. Turbos
What is the difference between a turbocharger and a supercharger? The two terms often seem to be used interchangeably, but doing so would be an error. While both augment an engine's power, their components are completely different. Essentially, when weighing superchargers against turbos, you are looking at how each is driven, and at the price and complexity of the part.
What is a Supercharger?
A supercharger is a part of an automobile which works to increase the air density or pressure supplied to the internal combustion engine of the vehicle. This air compressor offers increased power to the engine by giving more oxygen to it in each intake cycle. Therefore, a supercharger enables the engine to do more work and burn more fuel.
Superchargers can be categorized into two main types: dynamic compressors and positive displacement. Dynamic compressors deliver pressure at high speeds, usually above a threshold speed. Positive displacement compressors and blowers deliver pressure increase at a constant level at all engine speeds.
What is a Turbocharger?
When a turbine provides power for the supercharger, the supercharger is referred to as a turbo or turbocharger. Exhaust gas powers these turbine-driven units. They force air into the combustion chamber to increase the power and efficiency of the internal combustion engine. Because extra air is forced in, the engine proportionally adds more fuel to the combustion chamber, which further improves the engine's output.
How Superchargers and Turbochargers Work
The amount of boost produced by a supercharger depends on the impeller rotational speed and size, and the type of compressor. The maximum operating speed of an automotive supercharger is usually over 30,000 RPM while that of a turbo is over 100,000 RPM. Until the impeller reaches the point of maximum speed, the compressor doesn’t produce its full boost.
Turbochargers use centrifugal compressors and share the same limitations as a supercharger. In a turbo, the speed of the impeller depends on the speed of the exhaust stream and not on the engine's speed. The turbine's speed is not fixed; it changes with the position of the throttle.
The turbine spins below its boost threshold at steady cruising speed. When pressing the gas pedal, there is an increase in speed of the exhaust gasses. This results in the acceleration of the turbine. It takes a bit of time for the turbine to overcome its own inertia and to reach the peak speed. This produces a brief delay, which is termed turbo lag.
Turbo Lag
The amount of boost produced affects the severity of turbo lag. A number of efforts have been made to reduce turbo lag. The list includes reducing the turbine mass, changing their shape, adding movable nozzles, and others.
Some cars use two or more different sized sequential turbochargers, where the smaller turbo responds with good low-speed, and the bigger one provides a maximum boost at peak speeds. A few vehicles such as the 1986-1988 Porsche 959 as well as the 2007 Peugeot 407 2.2 HDi use sequential turbochargers.
It is easier to reduce turbo lag with a turbo featuring low maximum boost. The light-pressure turbos used by some Volvo and Saab engines don't produce much boost, but they have a fairly linear power curve and minimal lag.
Two-stage turbo-superchargers are an even more complex alternative. They use both a turbocharger and an engine-driven supercharger in sequence. This kind of supercharger produces maximum boost even at low speeds. Volkswagen uses this concept with its twincharger engines.
Key Differences between the Turbo and Supercharger
1. Efficiency
2. Space Requirement
3. Mode of Driving Power
4. Complexity and Cost
Now that you understand how a turbocharger and supercharger work, let’s take a moment to explore the key differences between them.
Efficiency
Turbos tend to be less responsive but more efficient than superchargers. Superchargers consume some engine power even when they are not producing useful boost. The back pressure that turbochargers add to the exhaust also costs power, and in a turbo the compressor, even at low speeds, can create some amount of internal drag. The increased power provided by the compressor at maximum boost outweighs these parasitic losses, but the fact remains that they affect the efficiency of the engine.
Space Requirement
Both turbo and superchargers consume more space in the engine compartment and increase the weight of the vehicle. Typically, the bolt-on superchargers fit easily under the hood, and they weigh less in comparison to the weight of Turbo.
Mode of Driving Power
The engine drives a supercharger which connects through a belt to the crankshaft. A turbocharger’s turbine gets its power from the exhaust gas of the engine.
Complexity and Cost
In order to ensure engine efficiency, the moving parts need to be strong and precisely machined. Turbos are made from exotic materials so they can withstand the high temperatures and high operating speeds of the exhaust system. Forced-induction engines demand proper lubrication, as they tend to put a high strain on the engine's oil system. It is important to use good-quality oil to ensure the best results. Frequent oil changes are also important for superchargers in order to eliminate the possibility of sludge build-up.
The best application for superchargers and turbos depends on the type of vehicle you outfit. You will find that the turbochargers are most commonly used in Europe because of the smaller engines in European cars. Overall, superchargers deliver better boost at lower RPMs, and they’re comparatively more reliable. Whichever you choose, you can count on boosted performance for your car.
|
Animal Diversity 7th Edition
Published by McGraw-Hill Education
ISBN 10: 0073524255
ISBN 13: 978-0-07352-425-2
Chapter 1 - Review Questions - For Further Thought - Page 39: 1
The competing theories that explain the origin of life and of its diversity are sometimes contradictory. However, to understand them better, you need to start by looking at the available evidence.
Work Step by Step
Step 1 of 2: There exist many theories and ideas about how life started. Scientifically, however, the problem is not yet worked out, because few observable facts are available. There are some generally agreed-upon ideas, but there are also controversies, such as how metabolism arose or how cells developed. It is difficult to deduce the actual conditions that led to the start of life on Earth, so progress on the question has been minimal despite its importance. It is also hard to recreate the conditions of the early Earth in order to carry out experiments. Answering this question therefore requires the help of chemistry, molecular biology, and geology.
Step 2 of 2: On the other hand, a wealth of information exists about the evolution of life's diversity. For example, using evidence from genetic information and fossils, it has been possible to construct phylogenetic trees that show the evolutionary relationships of various species. Improved techniques have also aided research in this field over time.
|
Remember the hub-bub over thimerosal - the vaccine preservative that many thought was connected to autism? Ten years ago, parents demanded that the preservative be removed from vaccines after a British study concluded that it might in some way be connected to the onset of autism. Over the years, countless large studies have disproved this original study and found no link between the preservative and autism. Experts even found and proved that the original study was flawed and possibly even intentionally biased. And further studies found that removing thimerosal in the U.S. and Europe did nothing to change autism rates in those regions.
Still, despite policy statements from the U.S. Institute of Medicine, the American Academy of Pediatrics, and the World Health Organization endorsing the use of thimerosal in vaccines, there is once again a movement afoot to ban it at the international level; some even claim that it is racist and elitist not to do so. Thus, the U.N. is now considering banning thimerosal from vaccines and expects to make a decision on the issue sometime after a final meeting in January.
In the U.S., thimerosal is not used in any vaccines that are distributed in single-dose vials, with the exception of certain types of flu shots. But in countries with fewer resources - such as electricity and refrigeration - and where many children still die from diseases that could have been prevented with a vaccine, it's cheaper and easier for health care workers to use multi-dose vials of vaccines. Using thimerosal in the vaccine helps to prevent the remainder of a vaccine from becoming contaminated with bacteria or fungi each time a dose is used.
Still, some humanitarian groups, such as the U.S.-based SafeMinds, have argued that it is unethical to allow thimerosal in vaccines headed to developing nations while removing it from vaccines in the U.S. and Europe.
But experts argue that banning thimerosal could increase the cost of vaccines for developing nations by anywhere from two to five times and make transporting and storing vaccines that much more difficult as well. The bottom line: Kids could die.
And that seems pretty unethical to me.
U.N. considers banning thimerosal from vaccines
Health experts argue that banning the preservative could make it difficult for children in developing countries to get life-saving vaccines. |